arXiv:2310.17423 · Max Mendel · 2023-10-26T14:29:38Z · http://arxiv.org/abs/2310.17423v1
# The Newtonian Mechanics of Demand
###### Abstract
Economic engineering is a new field wherein economic systems are modelled in the same manner as traditional mechanical and electrical engineering systems. In this paper, we use Newton's theory of motion as the basis for the theory of demand; thereby establishing a theoretical foundation for economic engineering. We follow Newton's original development, as set forth in the Principia, to determine economic analogs to his three laws of motion. The pivotal result is an operational definition for an economic force, i.e. a want or a desire, in terms of a price adjustment. With this, we model the price effects of scarcity and trade friction in analogy with the models for the spring and damping force. In turn, we define economic benefits and surplus as analogous to the definitions of mechanical work and energy. These are then used to interpret the various types of economic equilibrium considered by economists from a mechanical perspective. The effectiveness of the analogy is illustrated by applying it to modelling the price and inventory dynamics of various economic agents --including consumers, dealers, holders, spot and futures traders-- using linear time-invariant systems theory.
_Keywords:_ Economic Engineering · Newton's Laws of Motion · Law of Demand
## 1 Introduction
Newton's laws of motion in mechanics and the law of demand in economics have several commonalities. Both are considered to be fundamental to their respective fields. Also, like Newtonian mechanics, the theory of demand is based on causation, unlike econometric approaches that are based on correlation. In mechanics, it is a change in momentum that causes a change in velocity, and in the theory of demand it is a price change that causes a change in the quantity demanded.
However, while Newton's theory forms the basis for the dynamics of a mechanical system, the theory of demand is focused on the equilibrium of an economic system. In Newtonian mechanics, the rate of change of momentum is defined to be a force and this definition determines the differential equations that describe motion through a balance of forces. An equivalent operational definition of a force is lacking in economics in general, and in the theory of demand in particular. Instead, economists argue metaphorically that presumed "forces of demand and supply" or an "invisible hand" achieve price equilibrium "in the long run" (see classics such as [1], [2], [3], and [4]).
Our purpose with this paper is to fill that gap by developing a fully dynamic theory of demand based on the format of Newton's formulation of his laws of motion. We are inspired in this by Maxwell, who developed electrodynamics in this manner. In engineering, this method of analogs has been extended to other domains including, amongst others, fluids and thermodynamics. It has also proven useful for modelling mixed systems and for designing controllers in a manner independent of the physical domain. Extending it further would bring engineering design and automatic-control techniques to economic systems or even to mixed econo-electromechanical systems.
In Sections 2 and 3, we follow Newton's initial "Definitions" and "Laws" sections of his Principia. In Section 2, we present the economic analogs to Newton's definitions of inertia (demand), velocity (quantity demanded), momentum (price), and force (want). In Section 3, we use these definitions to formulate the economic analogs to Newton's three laws. The critical step is the analog to the second law. We interpret an economic force as a want or desire and consider it analogous to Newton's motive force. Furthermore, we interpret the force of demand (or supply) as the rate of change of the price, analogous to Newton's definition of the inertial force as the rate of change of momentum. The analog to Newton's second law equates the two, defining a want as a rate of change in price. We recognize this as the law of demand: the more someone wants something, the higher the price they are willing to pay.
Other attempts to use the method of analogs in economics consider price itself, rather than its rate of change, to be analogous to a force (see [5] and the references therein). However, without a well-defined cause for price dynamics, the theory that emerges is more Aristotelian than Newtonian in nature. In particular, a Newtonian force is zero in equilibrium. If we were to accept that price is a force, then the price would be zero in equilibrium, which cannot be considered an accurate description of reality.
To be useful for building economic models, such a Newtonian law of demand requires the economic analogs to force laws or to constitutive relations. We refer to these as price drivers. We show in Section 4 how Hooke's law or the spring law can be considered analogous to the law of scarcity or storage and how the friction laws from mechanics are equally applicable to economics. In Section 5 we show how the theory allows one to consider the economic surplus as the analog to kinetic energy and how the various price drivers allow us to introduce other types of benefits that turn out to be analogous to other forms of energy and work. Section 6 rounds off the theoretical development with a comparison of the various concepts of economic equilibrium with those in mechanics.
A Newtonian theory of demand allows us to model and control economic systems using the same dynamical systems theory that engineers use to model electro-mechanical systems. We refer to this field as economic engineering. In Section 7, we show how even first- and second-order dynamical systems can model sophisticated economic dynamics such as price rigidities, trade cycles, and hyperbolic discounting. It should be noted that the dynamical systems approach is distinct from Forrester's system dynamics approach (see [6]), despite the similarity in names. Although stocks are established by accumulating a flow of goods, system dynamics lacks the concept of a force that may accumulate to establish a price. From a dynamical-systems perspective, such models are kinematic and static, rather than truly dynamic.
Although our primary audience is mechanical engineers, we have attempted to make the paper accessible to anyone with at least a solid high school physics background. The first two sections follow Newton's own development (in the [7] translation), which is eminently readable even today. Feynman [8] gives a modern perspective that is true to Newton's development. In Appendix A, we include the electrodynamic analog familiar to electrical engineers and a hydrodynamic analog that is readily visualized.
Our intent here is to fit economics to the mold of mechanics, rather than the other way around. For each mechanical concept or principle, we search for what appears to be the most appropriate economic analog and consider that the "economic-engineering" analog, even though this might be incompatible with some thinking in economics. We intentionally avoid references to any recent or specialized research in the economic literature and, instead, focus on making a connection with the classics, in particular Smith [1] and Marshall [2]. Writing after Newton's ideas had become generally accepted, these authors show the influence of his concept of a force as the agent of change, especially Marshall. We have relied on the well-known texts by Samuelson [3] and Varian [4] for contemporary treatments.
At the advanced level, the development of economics appears to have moved away from mechanics in favor of a purely mathematical approach. We do not attempt to reconcile these developments with the Newtonian way of thinking, with the possible exception of a short discussion in Section 6 on Debreu's development of general equilibrium, where we rely on the graduate-level text [9].
## 2 Definitions
Newton starts his Principia with a chapter wherein he defines the concepts that are required to formulate his laws. In this section, we define the analogous economic concepts. After introducing the economic analog to a particle and a physical body, we present the analogs to the requisite kinematic and dynamic variables, which are summarized in Table 1. We distinguish between two analogies, an impedance analogy where the flow of goods is determined by price, and a mobility analogy where the flow of value is determined by the quantity demanded. Newton's development of mechanics follows the mobility analogy, whereas the theory of demand predominantly follows the impedance analogy. The latter is consistent with hydrodynamics and Maxwell's development of electrodynamics familiar to electrical engineers. We show this in detail in Appendix A.
### Inertia as Demand
Newtonian mechanics describes the behavior of an object and the theory of demand and supply describes the behavior of an economic agent. In Newtonian mechanics, the object is idealized as a point particle. We refer to an agent whose behavior is limited to either demanding or supplying as a demander (see Table 1(a)). Such a demander can equally well represent a supplier or it can even alternate between these two roles. This is unusual in economics, where these roles are assigned to distinct agents.
A particle has inertia[^2] and, analogously, a demander has demand (or supply). We intuit that an agent maintains a flow of goods due to its demand and, analogously, a particle maintains a velocity due to its inertia. Mass is a measure of the inertia and the price elasticity of demand is used by economists as a measure of the demand. The concept was first defined by Marshall in [2], who referred to it as the elasticity of wants. According to Marshall,
[^2]: We exclude massless particles like photons here for obvious reasons.
_The amount of the commodity demanded may increase much or little according as the demand is elastic or inelastic..._
We see how Marshall's elasticity \(\varepsilon\) is analogous to the inverse of the mass \(m\), i.e., \(\varepsilon=1/m\), and, hence, it is the price inelasticity that is properly analogous to the mass (see Table 1(a)).
Marshall's use of elasticity is somewhat unfortunate in the present context since, in engineering, elasticity refers to the resistance of bodies to deformations rather than the inverse of inertia. However, the terminology is firmly established both in economic theory and in business practice. Another unfortunate circumstance is that contemporary economic theorists define the price elasticity of demand in a slightly different manner, i.e., in terms of percentage increases rather than absolute ones. In business practice, Marshall's definition is still the prevalent one.
The analog allows us to aggregate the demand from single demanders to multi-agent systems in the same manner as is done for point particle systems. In this way, general economic entities such as corporate bodies can be considered analogous to physical bodies. The total mass is analogous to the aggregate price inelasticity, and the reduced mass to the mutual price inelasticity, whose inverse we refer to as the mutual price elasticity (see Table 1(b)).
Table 1: Economic analogs to the kinematic and dynamic variables, both in the electrical impedance and the Newtonian mobility analogy.
Table 2: Particles and bodies locate the presence of inertia in mechanics, while demanders and entities locate the presence of demand in economics. Price elasticity is a measure of demand analogous to the inverse of the mass as the measure of inertia.
### Kinematics as Trade Flow
#### 2.2.1 Physical Space as Commodity Space
We refer to the economic analog for physical space as commodity space.[^3] Physical space has at most three dimensions, whereas commodity space has a dimension for each distinct type of good. We refer to a coordinate dimension in commodity space as an account. Goods of a common type are said to be fungible and economists refer to them as commodities. This condition allows us to summarize the total amount of a commodity changing hands by a single account balance number, analogous to summarizing total distance travelled with a single coordinate value. Alternatively, within the impedance analogy, we can think of a coordinate value as the total acquisitions, analogous to a total amount of fluid or electrical charge. (See Table 1(a).)
[^3]: Apparently, this terminology was first used by Fisher [10] but has not been adopted in economic texts.
Points in physical space are identified using a coordinate system. Further borrowing terminology from accounting, we refer to its analog as a chart of accounts. Figure 1(b) shows a choice of basis for a commodity plane spanned by a chart consisting of two accounts. Unlike the physical space of Newtonian mechanics, which is considered Euclidean, commodity space is merely a space of points with no structure other than enough smoothness to allow us to do calculus. For instance, the account balances need not share the same units and, hence, the Euclidean metric has no a priori economic meaning since one cannot directly compare apples to oranges.
In practice, the judgement of whether goods are sufficiently fungible to qualify as a commodity depends on their specification. Commodity markets such as the Chicago Mercantile Exchange (CME) provide a list of distinct types of commodities (such as corn or pork bellies), each with a set of contract specifications containing the quality requirements for goods to be considered as such for purposes of trading on the exchange. Also, manufactured products, especially those that are mass produced, are highly fungible and should be considered to be commodities in the current sense. Many other manifestations of commodities occur. Notably, money of a particular type, such as cash or demand deposits, can also be thought of as a commodity.
The choice of a particular chart of accounts is a practical matter, depending on the purposes of the model. One can group all consumption goods in a single basket with a single account on one extreme, or differentiate generically between, say, apples and oranges, or specify precisely the type of apple, its size, color, quality and further attributes on the other extreme. To facilitate the analogy with Newtonian mechanics, we assume the account values to be real numbers, even though commodities are not infinitely divisible into smaller and smaller units.
#### 2.2.2 Motion as Trade
The trade activity of a demander over time can be represented by a time-parametrized path in commodity space, analogous to the representation of the motion of a particle in physical space. Such a path is shown in Figure 1(b) for two commodities. The activity starts at some initial time \(t_{i}\) and logs all acquisitions and dispositions. This charts a set of balances \(q=q(t)\) for each of the commodities at any subsequent time \(t\).
#### 2.2.3 Velocity as Quantity Demanded
At any given moment in time, a particle has a specific velocity; equally, a demander demands a specific quantity of goods. The measure of velocity is the distance traveled per unit of time and the measure of quantity demanded is the amount of a commodity acquired per unit of time. Velocity is defined as the tangent vector to the path of motion. We thus define the quantity demanded as the tangent vector to the path of trade activity. This is shown graphically in Figure 1(b) for a flat two-dimensional commodity space.

Figure 1: The analogy between motion and trade.
Analytically, this means that the derivative \(v:=\mathrm{d}q/\mathrm{d}t=\dot{q}\) that represents the quantity demanded is a directional derivative. In this way, the quantity demanded gives the time rate at which the agent is acquiring or disposing of the commodity. Alternatively, and more appropriate to the mobility analogy, the analog to velocity can be considered a covector \(\mathrm{d}q=v\,\mathrm{d}t\), so that \(v\) represents the marginal change in the account level(s) per unit of time (see Table 1 and Appendix A).
Economists distinguish between a quantity demanded and one supplied and both are considered to be non-negative numbers. However, our definition of \(v\) implies that it is a vectorial quantity, taking on both positive and negative component values. This allows us to unify the two economic concepts into a single vectorial quantity.
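To make this kinematic picture concrete, the quantity demanded and the needs can be recovered numerically from a logged trade path. The sketch below is only an illustration; the two-commodity path and the time grid are assumed, not taken from the paper.

```python
import numpy as np

# Assumed, hypothetical trade path q(t) in a two-commodity space:
# account balances logged on a uniform time grid.
t = np.linspace(0.0, 10.0, 101)
q = np.column_stack([np.sin(0.3 * t),   # balance of commodity 1
                     0.5 * t])          # balance of commodity 2

# Quantity demanded: tangent vector to the trade path, v = dq/dt.
v = np.gradient(q, t, axis=0)
# Needs: rate of change of the quantity demanded, a = dv/dt.
a = np.gradient(v, t, axis=0)

print(v[50], a[50])   # one component per account at t = 5
```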
#### 2.2.4 Acceleration as Needs
When a demander needs more (or less) of a commodity, it increases (or decreases) the quantity demanded. In economics, this is also referred to as an extension (or contraction) of the demand, terminology that is consistent with the mobility analogy. It is analogous to the acceleration (or deceleration) of a particle in mechanics. In the flow picture of the impedance analogy, we refer to the analog to acceleration as the needs of the agent. It is also a vector, expressing how much more or less of the mix of commodities an agent needs to demand or supply (see Table 1).
#### 2.2.5 Center of Mass as Center of Demand
Like a particle, the demander is considered to be highly idealized, only representing a point location of demand with inelasticity \(m\) (in Figure 1(b)). For a system of point particles distributed in space, there is a unique point called the center of mass which moves as if it were a point particle with mass \(m_{\mathrm{TOT}}\). The analogous center of demand represents an agent whose demand is governed by the aggregate inelasticity of demand (see Table 1(b)). The kinematics pictured in Figure 1(b) apply equally well to the behavior of the center of demand for multi-agent economic systems. The economic analogs to the account balances and the quantity demanded vector are shown in Table 3.
The concept of a center of demand permits us to treat multi-agent systems as if they were a single agent system --having effective balances, demands, and needs-- analogous to the concept of the center of mass in mechanics.
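As a small numerical illustration of this aggregation (the three agents and their data below are assumed purely for the example), the center-of-demand quantities of Table 3 can be computed directly:

```python
import numpy as np

# Assumed data for three demanders trading in two commodities.
m = np.array([2.0, 0.5, 1.0])                        # price inelasticities (analog of masses)
q = np.array([[10.0, 3.0], [4.0, 8.0], [6.0, 1.0]])  # account balances, one row per agent
v = np.array([[1.0, 0.0], [-0.5, 2.0], [0.0, 1.0]])  # quantities demanded (vectors)

m_tot = m.sum()                                  # aggregate price inelasticity
q_cd = (m[:, None] * q).sum(axis=0) / m_tot      # center of demand
v_cd = (m[:, None] * v).sum(axis=0) / m_tot      # aggregate quantity demanded

print(m_tot, q_cd, v_cd)
```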
### Dynamics and the Price Mechanism
#### 2.3.1 Momentum as Price
Newton defines the quantity of motion to be the product of the mass with the velocity, or \(p=mv\). This is now referred to as momentum. In economics, if we measure such a quantity for trade in terms of economic value, say in $ per unit of the commodity demanded, then the analogous "value of the trade" in that commodity represents the price the demander ascribes to the commodity (see Table 1). Economists also refer to it as the agent's reservation price, borrowing terminology from the auction process. In business it is also referred to as the transfer price. The engineering diagram in Figure 1(b) emphasizes the personal nature of the vector \(p\) by attaching it to the demander.
There is a momentum associated with each dimension of space. Analogously, there is a price associated with each type of commodity. In economics, the analogous relationship to the definition \(p=mv\) of momentum is called a demand schedule (see Table 1(a)), referring to the use of tables rather than a function to match price and quantity demanded. Economists view the price as the independent variable, so the relationship is better expressed as \(v=\varepsilon p\), where the quantity demanded is the dependent variable and \(\varepsilon\) is known as the price elasticity of demand (see also Table 1(b)). In fact, this is also the case in mechanics, since the causality of the mass is such that velocity can only be changed by changing its momentum by applying a force. The graph of a demand schedule, such as the one shown in Figure 1(c), should therefore be read off from the vertical to the horizontal axis.
| | Mechanics | Economics |
|---|---|---|
| \(q_{\mathrm{CM}}=\sum m_{i}q_{i}/m_{\mathrm{TOT}}\) | Center of Mass | Center of Demand |
| \(v_{\mathrm{CM}}=\sum m_{i}v_{i}/m_{\mathrm{TOT}}\) | Center-of-Mass Velocity | Aggregate Quantity Demanded |
| \(a_{\mathrm{CM}}=\sum m_{i}a_{i}/m_{\mathrm{TOT}}\) | Center-of-Mass Acceleration | Aggregate Needs |

Table 3: Position and velocity analogs for multi-agent economic systems. The index \(i\) ranges over agents.

We exploit our vectorial definition of \(v\) to create a unified demand schedule, applicable to an agent acting either as a demander or a supplier, consistent with the definition of momentum. The analogous relationship \(v=\varepsilon p\) for a demand schedule should be read as a single vector equation for all accounts simultaneously, where the price \(p\) and the quantity demanded \(v\) are vectorial quantities and \(\varepsilon\) is a non-negative scalar. This means that \(p\) can take on both positive and negative values. The choice \(m\geq 0\) implies that \(p\) has the same sign as \(v\). Therefore, if supply is considered negative, then so are the asking prices and, consequently, the bids are positive (see Figure 1(c)). When there is no demand, the price is zero.
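A minimal numerical sketch of this unified schedule follows; the elasticity and the price vector are assumptions chosen only to illustrate the sign convention.

```python
import numpy as np

# Unified demand schedule v = eps * p, read as a vector equation over the accounts.
eps = 0.25                    # price elasticity of demand, eps = 1/m >= 0 (assumed value)
p = np.array([8.0, -3.0])     # prices per account: positive = bid, negative = ask (assumed)

v = eps * p                   # quantity demanded; each component has the sign of its price
print(v)                      # -> [ 2.   -0.75]: demanding in account 1, supplying in account 2
```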
In the theory of demand, economists formulate two schedules: one for an agent such as a consumer, who only demands, and one for an agent such as a producer, who only supplies. Figure 1(d) shows Marshall's original version of the familiar picture of the determination of economic equilibrium in terms of crossing demand and supply curves. Contrary to the unified schedule, prices remain positive, elasticities switch signs, and the curves are shifted from the origin. Economists run the quantity supplied in the opposing direction on the ordinate and our vectorial analysis obviates the need for this by using the sign to distinguish between bid and ask prices. As a consequence, the elasticity can be chosen to remain non-negative.[^4] In Section 3.1, we provide the justification for shifting the curves. The economists' picture of economic equilibrium in terms of crossing curves is reformulated in Section 6.3.
[^4]: This choice is predicated on the non-negativity of mass in mechanics. Interestingly, it is possible to develop a consistent theory with non-positive masses (see the discussion in Feynman [8]). However, the choice of non-negative masses is firmly entrenched in mechanics, impelling us to make the analogous choice.
### Force as Want
A force in mechanics formalizes the intuitive idea of a push or a pull. If we think of pushing for something as wanting or desiring to acquire it, then such a push is analogous to a _want_ or a _desire_. The opposing pull then represents the _incentive_ or _effort_ inducing us to dispose of it. As vectors, wants and incentives are the negatives of each other and we use the concept of a want to denote any economic force (see Table 1). When emphasizing its force nature, we refer to a "force of desire" or even a "force of wanting."
The terms want and desire appear frequently in economics texts --albeit informally-- to indicate the presence of an economic force. The early literature (e.g., Smith [1] and Marshall [2]) tends to use want, while modern texts (e.g., Samuelson [3] and Varian [4]) tend to use desire. In spoken English, a desire is considered stronger than a want and this is also a useful distinction to make for economic forces.
Newton distinguishes between several types of forces whose economic analogs are important for the formulation of the laws in Section 3. We summarize these in Table 4 and discuss them below.
Figure 2: Unified demand and supply schedules into a single linear vectorial schedule.

Newton conceives of the inertial force as the tendency for a body to resist a change in its state of motion as measured by momentum. The analogy between inertia and demand (Table 1) immediately suggests that this is analogous to the force of demand, or force of supply for the opposite (see Table 4). Newton defines the inertial force as the rate of change \(\dot{p}=\mathrm{d}p/\mathrm{d}t\) of the momentum and, analogously, we let the force of demand equal the rate of change of the agent's price. The force of demand is thus personal to the agent, analogous to how the inertial force is a property of the body.
Newton defines the motive force[^5] as a force that is exerted on a body to change its state of motion. Its analog as an economic motive is immediate (see Table 4). In [2], Marshall introduces a concept of the "force of an economic motive" as follows:
[^5]: Newton refers to the “impressed force” in the formulation of the definition, while referring later to the motive force. According to Maxwell ([11]), the impressed force corresponds to the impulse of the motive force instead. Indeed, Maxwell refers to the electrodynamic analog of the motive force as the electro-motive force (or EMF).
_It concerns itself chiefly with those desires, aspirations and other affections of human nature, the outward manifestations of which appear as incentives to action in such a form that the force or quantity of the incentives can be estimated and measured with some approach to accuracy;_
Contrary to the inertial force, the motive force is applied externally and is not a property of the body. Analogously, economic motives are exogenous to the agent, rather than personal like the force of demand.
Like their mechanical counterparts, we can distinguish amongst numerous sources for economic motive forces. In Section 4 we consider several that are analogous to common forces in engineering including the economic analog to the gravitational force (constant needs), the spring force (convenience) and the friction force (friction).
## 3 Laws
Newton builds on and justifies his definitions with a chapter --bearing the same title as this section-- wherein he posits his three laws of motion. The following three subsections present the application of his laws to economics, in order of their numbering. Table 5 summarizes the analogs to his laws with laws or basic principles in economics that are both generally known and considered fundamental.
### I: Free Motion as Freedom from Want
Newton's first law stipulates that a particle remains at rest unless a force acts on it. At rest means that the particle maintains a constant velocity. For economics, this means that the agent maintains a constant quantity demanded, i.e., that the vector representing \(v\) is constant in both magnitude and direction. In mechanics, the condition is also referred to as free motion, meaning that the motion is free of a net force. Since we interpret an economic force as a want, we formulate the economic analog of this condition as freedom from want. (see Table 5).
Newton's first law requires that the condition of being force-free can be determined independent of the behavior of any other object. Rest in an airplane going at constant speed can be determined with the window shades closed. Analogously, we assume that the condition of freedom from want can be determined by an agent without considering the quantities demanded of any other agents.
Newton's first law is considered a restatement of Galileo's principle of relativity, which stipulates that only the relative velocity of two particles with respect to each other has physical meaning. The economic analog to the relative velocity is known as the excess demand (Table 6) and the implication of the first law is that only the excess demand between two agents is economically meaningful. It is a vector, with positive, negative, or zero values indicating whether one agent demands from, supplies to, or does not trade with the other. Hence, an agent may represent a middleman, acting simultaneously as a demander and a supplier to two counterparties.
| | Mechanics | Economics |
|---|---|---|
| \(\dot{p}\) | Inertial Force | Force of Demand |
| \(F\) | Motive Force | Economic Motive |

Table 4: Newton's categorization of forces and their economic analogs.
| Law | Mechanics | Economics |
|---|---|---|
| I | Free Motion | Freedom from Want |
| II | Law of Motion | Law of Demand |
| III | Action = Reaction | Demand = Supply |

Table 5: Newton's three laws of motion and their economic analogs.
In the modern interpretation, the first law postulates both the existence and the equivalence of inertial reference frames. An inertial reference frame is a coordinate frame wherein the first law is valid and, hence, it is moving at a constant velocity \(v^{*}\). For economics, we refer to a demand frame and its existence implies that, in principle, a benchmark level of demand \(v^{*}\) exists that can be used to determine excess demand. In mechanics one chooses a relatively immovable object such as the wall in Figure 1(b). We set its velocity \(v^{*}=0\) and all velocities are then assumed to be relative to this. In economics, a close-to-perfectly inelastic entity, such as a market consisting of many agents acting collaboratively, can be used by an agent to mark its demand to the market. We set its quantity demanded \(v^{*}=0\) and the quantities demanded of the individual agents are then assumed to be the excess over the market.
When modelling, it is convenient to mark all quantities demanded to a single market whose \(v^{*}=0\), analogous to the engineering practice of choosing a single inertial reference frame. In this way, all the unified demand curves are guaranteed to pass through the origin as required. Shifts in the demand curve are achieved by switching to a different demand frame. The amount of the shift as measured on the demand axis is determined by the relative demand of one demand frame over the other.
### II: The Law of Motion as the Law of Demand
In the formulation of his second law of motion, Newton equates[^6] the motive force to the change in motion of an object. In the light of the analogs of the previous section, this translates roughly into the statement that the more something is wanted, the higher it is bid up. This coincides with the intent behind the law of demand and, therefore, we consider this law to be the analog to Newton's second law (see Table 5).
[^6]: Newton actually posits proportionality, but the proportionality constant is without exception taken to be 1.
Furthermore, Newton's formulation implies the law concerns vector equations. These are written either as
\[F=\dot{p}\qquad\text{or}\qquad F\,\mathrm{d}t=\mathrm{d}p \tag{1}\]
corresponding to the mobility and the impedance analogy, respectively. Both equations specify a cause-and-effect relationship, with the cause on the left of the equality sign and the effect on the right of it. In mechanics, a force is the _cause_ of a change in momentum. For economics, this implies that the agent's motive \(F\) _causes_ its price change \(\dot{p}\).
In the \(F=\dot{p}\) formulation, the force \(F\) is interpreted as the time rate of a flow \(\dot{p}\) of momentum.[^7] A want \(F\) is interpreted as a flow of economic value that causes the rate of additions \(\dot{p}\) to the value \(p\) of the commodity. A similar point of view was taken by Adam Smith when formulating his value-added theory of price (see [1]). This is still the basis for pricing in the theory of production and in national accounting, and is familiar to many from the manner in which value-added taxes are formulated.
[^7]: Newton, in fact, referred to the overdot notation as a fluxion.
In the \(F\,\mathrm{d}t=\mathrm{d}p\) formulation, the impulse of the force \(F\,\mathrm{d}t\) drives the change in momentum \(\mathrm{d}p\). Here, we interpret the marginal change \(\mathrm{d}p\) in price to be caused by the inducement \(F\,\mathrm{d}t\) (see Table 7). In economic theory, this approach to price formation is the one taken by the marginalists. We also recognize it in the price discovery process in auctions and the equation can be interpreted as stating that an agent's wants induce it to bid up the price.
Using a demand schedule, we determine the dynamics of the quantities demanded from the price dynamics. We substitute the unified demand schedule \(p=mv\) (see Table 1(a)) into the statement of the second law \(F=\dot{p}\) (Equation 1) to find:
\[F=ma+v\dot{m}. \tag{2}\]
It follows that a want can have two effects (see Figure 3):
| | Mechanics | Economics |
|---|---|---|
| \(v=v_{2}-v_{1}\) | Relative Velocity | Excess Demand |

Table 6: The excess demand of one demander over another is a vector quantity, analogous to the relative velocity of particles.
| | Mechanics | Economics |
|---|---|---|
| \(F\,\mathrm{d}t\) | Impulse of the Force | Inducement |

Table 7: The analogy between the impulse of the force and an economic inducement.
1. \(F=ma\): a movement along the demand line, i.e., an extension \(a=\dot{v}\) of the demand caused by the agent bidding up the price, and
2. \(F=v\dot{m}\): a rotation of the demand line, i.e., an adjustment \(\dot{m}\) of the price elasticity of demand that quantifies a change in the agent's personal values.
Another effect should be added to these if we allow for an extension in the demand of the reference market:
3. \(F=-m\dot{v}^{*}\): a shift of the demand line, i.e., a contraction \(\dot{v}^{*}\) of the reference demand leading to a perceived increase in its price (a fictitious economic force).
If we ascertain that the price elasticity and the ambient conditions are relatively constant compared to the changes in demand, then the second law can be restated in the \(F=ma\) form and only movements along the demand curve need be considered. This is the typical usage in engineering. In the impedance analogy, this version of the second law states that desirability and needs are proportional to each other. Depending on the elasticity, the same needs may give rise to widely varying desires.
Although economists rarely explicitly consider rotations of the demand curve, they frequently consider convex demand curves rather than the linear ones that are consistent with the definition of momentum (see, e.g., Marshall's curves in Figure 6(c)). Particles whose mass explicitly depends on the velocity cannot be entertained in classical mechanics and, hence, we exclude convex demand curves from consideration. However, the effect may nevertheless be ascribed to an implicit rotation that occurs due to the timing of changes in the elasticity. For instance, the mass of a rocket ship decreases while it picks up speed because it emits exhaust fumes. The demand curve for the analogous economic process would indeed appear as a convex curve if we did not account for the effects of \(\dot{m}\) on the price movements in accordance with Equation 2.
Economists commonly shift the demand curve when the ambient economic conditions change. Such a shift is analogous to an analysis in a non-inertial reference frame. For example, from the perspective of an accelerating train, one is pushed back into one's chair by what appears as a fictitious force. Analogously, if the reference market changes its quantity demanded at a rate \(\dot{v}^{*}\), the agent will resist changing its own quantity demanded as it desires to preserve its initial demand \(v\). To force it to keep up with the changing market conditions, it has to be provided with an incentive \(F=-m\dot{v}^{*}\). Once the market has completed its adjustment, the agent will be left with a price change \(\Delta p=\int m\dot{v}^{*}\,\mathrm{d}t\) that serves to shift its demand curve. If we wish, we may avoid having to shift the demand curve by agreeing on a single reference frame to mark any elastic agent demand to market for the entire analysis. Indeed, from the perspective of the ground, the fictitious force appears as the force of the train on the rider and can be seen as the analog to a movement along the demand curve, rather than a shift.
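The sketch below illustrates the law of demand in its simplest dynamic form (constant inelasticity, so only movements along the demand line are involved); the numbers and the time profile of the want are assumptions for illustration.

```python
import numpy as np

m = 2.0                              # price inelasticity of demand (assumed)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
F = np.where(t < 2.0, 1.0, 0.0)      # a constant want, switched off at t = 2

p = np.zeros_like(t)                 # reservation price
for k in range(1, len(t)):
    p[k] = p[k-1] + F[k-1] * dt      # law of demand: the want bids the price up, dp/dt = F

v = p / m                            # quantity demanded read off the demand schedule
# After the want vanishes, price and quantity demanded stay constant (the first law).
print(p[-1], v[-1])
```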
### III: Action=Reaction as Demand=Supply
Newton's third law relates to an interaction between two particles and, consequently, its economic analog must relate to trade between two agents, a demander and a supplier. Newton postulated that this interaction consisted of a pair of equal but opposing forces, an action and an opposing reaction. Analogously, we assume that trade involves two equal but opposing wants, a force of demand and a force of supply. The third law is commonly summarized by the statement that action=reaction and we summarize its economic analog as demand=supply, with the understanding that this concerns the forces of demand and supply (see Table 5).
Figure 3: The analogy between Newton’s second law and the law of demand. The want \(F\) relates to a movement along (red), or a rotation (orange) of the demand curve. A shift in the demand curve (purple) corresponds to a fictitious force or envious desire that arises due to a comparison with an elastic benchmark.
The idea that price is determined in the context of a transaction between agents is already present in the writing of Adam Smith. In his famous diamond-water paradox [12],
_A diamond, on the contrary, has scarcely any use-value; but a very great quantity of other goods may frequently be had in exchange for it._
Smith illustrates the distinction he makes between what he calls value in exchange vs value in use. The former is established through trade between agents and the latter can be determined by the agent itself. The third law implies, therefore, that \(p\) corresponds to Smith's value in exchange. This excludes an interpretation such as present value, which qualifies as a value in use. Indeed, a present value calculation can be performed without a counterparty to the transaction and cannot be a price \(p\) in this context.
Like the expressions for the second law, the third law is also written in one of two ways:
\[F_{s}=-F_{d}\qquad\text{or}\qquad F_{s}\,\mathrm{d}t=-F_{d}\,\mathrm{d}t \tag{3}\]
Here \(F_{d}\) and \(F_{s}\) are the motive forces of the demander and the supplier, respectively. However, the equalities should now be interpreted as a simultaneity, rather than a cause and effect relationship.
In the \(F_{s}=-F_{d}\) formulation, the third law states that the economic value flows from the demander (the action) to the supplier (the reaction) (see Figure 3(a)). This implies that no value is lost during trade and, hence, a law of conservation of value in exchange is shown to apply, analogous to the law of conservation of momentum. If we agree that positive flows are said to be debited and negative flows credited (see also Appendix A), then the law can also be interpreted to imply that the total value of debits equals the total value of credits. Therefore, the third law can be seen to enforce a form of double-entry bookkeeping for economic value: the account of the supplier is debited at the same rate that the account of the demander is credited.
In the \(F_{s}\,\mathrm{d}t=-F_{d}\,\mathrm{d}t\) formulation, the third law describes how the increasing marginal cost of the supplier is balanced by the diminishing marginal utility of the demander. In Figure 3(b) we show how economists' picture of demand and supply implies such a balance consistent with the third law. Because the demand line is downward sloping, the sign change is automatically accounted for.
Newton's third law shows how price can be used as a common unit of account, one of the roles of money. Although price itself is personal to each agent, the agents do agree on the marginal price adjustments during an exchange. To wit, a credit entry \(\mathrm{d}p_{a}\) in the commodity account of one party to a transaction must equal the debit entry \(\mathrm{d}p_{b}\) in the corresponding account of the counterparty to comply with the third law. Over time, these entries integrate, thus describing the creation and destruction of credit money that represents the same amount of value to all agents.
In the modern formulation, the third law is extended to systems consisting of multiple particles and is reformulated as the law of conservation of momentum. Such an extension is carried over to economics in a straightforward manner. This leads to an analogous economic law of conservation of value for systems closed to the entry or exit of agents. Alternatively, it leads to an overall balancing of credits and debits within an economic system, consistent with the principles of accounting. Finally, when reasoning with price rather than value, it is perhaps more appealing to consider the average per-capita price level in the system. The law then implies the stability of the average price level in a closed economic system.
Figure 4: Newton's third law of motion in the mobility analogy (left) and the impedance analogy (right).

In this interpretation, the third law can be seen to apply to aggregate demand. In a closed economic system, endogenous demand is met by endogenous supply, appearing in canceling debit-and-credit pairs of economic value between two parties. Conservation of value thus implies that the demand of the system as a whole depends only on those wants that are exogenous to the system. In fact, the aggregate demand follows the schedule \(p_{\mathrm{TOT}}=m_{\mathrm{TOT}}v_{\mathrm{CM}}\) and the second law now applies to the total value \(p_{\mathrm{TOT}}\) or, alternatively, to the average price level. The center of demand (see Table 2b) thus behaves as if it were a single agent, and this justifies extending the analysis to systems of particles.
## 4 Force Laws as Price Drivers
To make Newton's laws useful in engineering practice, one has to specify the motive force \(F\) that drives the change in the momentum \(\dot{p}\). Analogously, to make the law of demand useful for economic engineering, one has to specify the economic motive force \(F\) that drives the rate of change in price \(\dot{p}\). In mechanics, such a force specification is known as a force law or a constitutive relationship. In economics, force laws are also referred to as price- or value drivers.
The simplest, non-trivial price driver is a constant one, driven by the basic needs, say \(g\), of an agent. Adam Smith argued [12] that although the needs of all agents are limited and essentially identical, their desires may differ substantially:
_The capacity of [the landlord's] stomach bears no proportion to the immensity of his desires, and will receive no more than that of the meanest peasant._
We translate this into a familiar mechanical setting by considering these needs as analogous to a gravitational acceleration \(g\). Then the corresponding desire \(F_{g}=mg\) depends on the elasticity of the particular agent. With constant needs, desires are proportional to price inelasticity and the desires of highly inelastic agents (Smith's landlords) will appear immense (see Equation 2).
#### Storage Laws
Scarcity is traditionally seen as an important price driver in economics. In the words of Adam Smith [1]
_If the commodity be scarce, the price is raised._
This can be modelled mechanically by a potential force, such as is provided by a spring. The force law for a linear spring is given in Table 5. In the theory of storage, the balance \(q\) is thought of as the inventory stock of the commodity. If the inventory level is low and the commodity is scarce, the position is said to be short and, when high and abundant, it is said to be long. Notice how suggestive this type of language is of the physical shortening and lengthening of a spring.
The stock quantity \(q\) drives the price with a force \(F=kq\) that expresses the convenience (or inconvenience) of holding that quantity of the commodity. It is also known as the convenience yield [13] or the own yield [14] of the commodity.
Figure 5: Force laws and their economic analogs and the naming of the corresponding forces, driving variables, and parameters.
Like the spring force, the force of convenience is a restoring force, so \(F\) incentivizes the agent to reduce stocks when inventories are high and increase stocks when they are low. At the desired level, \(q=0\) and \(F=0\) (see Figure 4(a)). The analog to the spring stiffness is the stock elasticity (of the convenience), not to be confused with the price inelasticity (of demand), the analog to the inertia.
Non-linear storage laws are more common than not, both in economics and in mechanics. For example, an inventory stockout is analogous to the spring in Figure 4(a) hitting the wall. The arbitrarily large inconvenience force from the limits of the storage is analogous to the arbitrarily large motive force from the wall. In addition, if the spring is stretched beyond its elastic region into the plastic region, its stiffness \(k\) decreases and eventually vanishes entirely upon fracture. An economic instance of this phenomenon, when the commodity concerns money, is a liquidity trap, where additional money stocks fail to provide any additional convenience. For a final example, we extend the gravitational analog for constant needs given above to arbitrarily large distances or stock amounts. The analog to Newton's law of universal gravitation then implies that the convenience \(F\propto m/q\) and, hence, that Smith's universal needs \(g\propto 1/q\) are inversely related to the stock level, vanishing entirely in the limit when \(q\to\infty\).
#### Friction Laws
Trade friction is also a ubiquitous price driver in economics. Economists also refer to any process that impedes the flow of goods as friction. This is entirely analogous to usage in mechanics, where one thinks of friction as resisting the motion. In practice, one imagines a handler such as a broker, a shipper, or other such intermediary who charges a cut \(F=F(v)\) depending on the flow \(v\) of goods being handled.
The constitutive law for a mechanical damper can serve as a linear model for trade friction (see Table 5 and Figure 4(b)). Its specification implies that the impeding force \(F=bv\) is always in the direction of the quantity demanded, thus guaranteeing that the handler has to be incentivized to provide its services. Therefore, any law that complies with this requirement, not just a linear one, may serve as a friction law. For example, handling costs can take the form of a fixed fee \(F=b\operatorname{sgn}v\), analogous to static friction, perhaps maintained as a constant fee, which is analogous to kinetic friction. Alternatively, at high velocities, the linear viscous friction becomes quadratic in velocity in accordance with the drag equation.
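As a sketch of how these price drivers might be combined into a single linear time-invariant model (the parameter values and sign conventions below are assumptions chosen only for illustration), consider an agent subject to a constant exogenous want, a linear restoring convenience force, and linear trade friction; this is the economic analog of the mass-spring-damper system.

```python
import numpy as np
from scipy.integrate import odeint

# Assumed parameters: price inelasticity, stock elasticity, friction coefficient, constant want.
m, k, b = 2.0, 1.0, 0.5
F_ext = 1.0

def rhs(x, t):
    q, v = x                              # inventory balance and quantity demanded
    a = (F_ext - k * q - b * v) / m       # law of demand: m*dv/dt = exogenous want + price drivers
    return [v, a]

t = np.linspace(0.0, 40.0, 801)
q, v = odeint(rhs, [0.0, 0.0], t).T
p = m * v                                 # reservation price from the demand schedule

print(q[-1], p[-1])   # inventory settles near F_ext / k while the price decays to zero
```

Under these assumptions the state \((q, v)\) carries the inventory balance and the quantity demanded, and the price follows from the demand schedule \(p=mv\).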
## 5 Mechanical Energy as the Economic Surplus
In mechanics, Newton's definition of a force is used to define the concepts of work and energy. We proceed in an analogous fashion, using the Newtonian economic force to develop the concepts of economic benefit and surplus. These are presented in the following two subsections and summarized in Table 5(a).
Figure 6: Economic surplus and its flow as analogs to energy and power.
### Work as Benefit
In mechanics, a force does work when an object is moved over some distance. Analogously, we say in economics that a want allocates benefits when an agent acquires some quantity of goods. The rate at which this is done,
\[\dot{W}=Fv \tag{4}\]
is called the delivered power in mechanics. In economics, we refer to it as the benefit allocation rate.
The product should be read as an inner product between the wants and the quantity vectors, thus yielding a scalar. In mechanics, the positive direction is customarily taken from the supply of power to the load. In economics, following the same custom, we take the allocation rate to be positive from the benefactor to the beneficiary.
Economically, the inner product operator represents the process of attaining satisfaction. Due to its symmetry, the operator can be thought of in two ways: either the want \(F\) is _satisfied_ by the quantity demanded \(v\), or the want is _met_ by the flow \(v\).[^8] For the special case when \(F\) is the force of the needs \(g\) (see Section 4), the power is given by \(\dot{W}=gmv\) and we say that the needs \(g\) are fulfilled.
[^8]: The language of differential geometry gives us the most compelling formulation of the allocation rate. The product in Equation 4 represents the natural action of a 1-form on a vector (see Appendix A). With \(F\) a vector, we say that the want is _satisfied_ by the quantity \(v\). When \(F\) is a covector, we say rather that it is _met_ by the flow \(v\).
The total benefits \(\Delta W\) received over a period of time are determined by integrating the allocation rate \(\dot{W}\) over the actual trade path in the commodity space, such as the one pictured in Figure 1. The integral in Table 5(a) should be interpreted as a line integral along the trade path. In general, there is no guarantee that a different path connecting the same initial and final account balances accrues the same benefits. In Section 5.2.2 we categorize the cases where accrued benefits are path independent and only depend on the account balance.
### Energy as Surplus
#### 5.2.1 Kinetic Energy and the Direct Surplus
Marshall defined economic surplus (which he referred to as economic rent [Figure 6(c)]) for consumers and producers in terms of areas in a supply and demand graph (see Figure 6(b)). Using the concept of a unified demand schedule, a single notion of surplus --applicable to demanders and suppliers alike-- emerges and we immediately recognize the mechanical analog to be kinetic energy (see Figure 6(a)).
Economists also refer to the surplus as the gross benefit associated with a good (see [4]). Using the analog to the work-energy principle --or, analogously, the benefit-surplus principle-- we can turn this into an equality between benefits and surplus. Starting with the benefit allocation rate \(\dot{W}\) we arrive at the surplus allocation rate \(\dot{T}\) by reasoning as follows:
\[\dot{W}=Fv=v\dot{p}=\dot{T}. \tag{5}\]
The critical step is formed by the third equality where we use the Newtonian law of demand to express the want in terms of a price movement. The inner product \(v\dot{p}\) describes the process by which the agent's price adjustments \(\dot{p}\) serve to satisfy its demand \(v\) for the commodity. To emphasize this link to trade, we consider \(T\) to be the _direct_ economic surplus. It is analogous to kinetic energy, so named to emphasize the link with motion.

Figure 7: Economic surplus as analog to kinetic energy.
We accrue the surplus allocation rate by integrating it over the applicable time period. Because \(v\dot{p}\,\mathrm{d}t=v\,\mathrm{d}p\) can be recognized as the marginal increase in surplus due to a price margin \(\mathrm{d}p\) over the reservation price, the integration depends only on the final price and we find the expression in Table 5(a). The integration is shown graphically in Figure 6(a). The area representing the surplus is swept out by the reallocation rate \(\dot{T}\) as it moves along the price axis. We see that the surplus is naturally a function of price \(p\), analogous to kinetic energy as a function of momentum.[^9]
[^9]: Economists typically integrate along the quantity axis instead (see, e.g., the distance PQ in Marshall's graphic in Figure 6(c)) and consider the surplus a number rather than a function. Although the value of the surplus will be the same, the functional dependence on the price is lost and, with it, the dynamics of the surplus allocation.
#### Kinetic Coenergy as Economic Costs
Marshall referred to the area underneath the supply line as expenses (see Figure 6(c)). We refer to this as economic costs, emphasizing that this is a flow of economic value rather than an accounting term for a money flow. The unified picture in Figure 6(a) shows that it is analogous to kinetic coenergy. (See Table 8.)
The economic costs to the agent for extending its demand at a rate \(\dot{v}\) are obtained after multiplication by the value \(p\) to obtain the rate \(p\dot{v}\). Economists refer to \(p\dot{v}\,\mathrm{d}t=p\,\mathrm{d}v\) as an increase in marginal costs. Integrating the former rate over a time period, or the latter marginal up to the current level of demand, gives the agent's economic costs (see Table 8). This is shown graphically in Figure 6(a). The economic costs are naturally a function of the quantity demanded \(v\).
The nature of economic surplus is further understood by contrasting it with costs. Consider the economic revenue \(pv\) to the agent. In Figure 6(a) it is evident that the area representing revenue is the sum of the areas of surplus (green) and costs (gray). Economically, this means that surplus equals the revenue net of cost (see Table 8) and, hence, can be thought of as a form of economic profit. The surplus becomes a measure of the value an agent receives over and above the intrinsic utility of a commodity to said agent. In mechanics, it is the Legendre transform that gives the analogous relationship between energy and coenergy. The Legendre transform formally establishes a relationship between energy and coenergy as functions (of \(p\) and \(v\), respectively) rather than as numbers.
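For the linear schedule \(p=mv\), the Legendre transform of Table 8 can be carried out explicitly as a quick check:

\[T^{*}(v)=\tfrac{1}{2}mv^{2},\qquad T(p)=pv-T^{*}\big|_{v=p/m}=\frac{p^{2}}{m}-\frac{p^{2}}{2m}=\frac{p^{2}}{2m}.\]

For a linear schedule the surplus and the costs are thus numerically equal, even though the former is naturally a function of the price \(p\) and the latter of the quantity demanded \(v\).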
#### 5.2.2 Potential Energy as Indirect Surplus
In mechanics, when the work done by a force only depends on the change in position and not on the path of the motion connecting these, the force is said to be conservative. Analogously, we speak of a convenience force when benefits depend only on the change in the amount in storage and not on the trade path (see Section 4 for the linear case). In mechanics, the work received is then referred to as potential energy. We refer to its economic analog as indirect surplus.
The direct surplus \(T\) can be reallocated to indirect surplus \(V\) and back again. This is analogous to kinetic energy being transformed into potential energy and vice versa. This is also known as reactive power in electrodynamics or reversible work in thermodynamics. According to the benefit-surplus principle, the reallocation rate is:
\[\dot{T}=v\dot{p}=-F\dot{q}=-\dot{V}. \tag{6}\]
The critical equality is the third one. Using Newton's second law in reverse, we set the price adjustment \(\dot{p}\) equal to the negative of the convenience force \(F=F(q)\). The minus sign indicates that the convenience force, like the spring force, is assumed to be a restoring force and acts in the direction opposite to the stock \(q\) (see Section 4). Simultaneously, the quantity demanded \(v\) draws from the inventory at a rate \(\dot{q}\). As a result, indirect surplus grows at the expense of direct surplus and vice versa. Hence, if the demander is the beneficiary, the storage becomes the benefactor and vice versa.
To determine indirect surplus, we accrue its allocation rate \(\dot{V}\). We recognize \(\mathrm{d}V=F\dot{q}\,\mathrm{d}t=F\,\mathrm{d}q\) as the marginal benefit of holding an additional marginal amount \(\mathrm{d}q\) at convenience \(F\). A storage law determines the convenience \(F=F(q)\) as a function of the amount \(q\) in storage and, hence, the \(\mathrm{d}V\) can be integrated to yield the indirect surplus \(V=V(q)\) as a function of the stock \(q\). The most elementary storage law is a constant one, in which we find that \(V=mgq\), or that indirect surplus increases linearly with inventory level \(q\), analogous to potential energy in a gravitational field. The

| | **Mechanics** | **Economics** |
| --- | --- | --- |
| \(T^{*}=\frac{1}{2}mv^{2}\) | Kinetic Coenergy | Direct Costs |
| \(T=pv-T^{*}\) | Legendre Transform | Surplus = Revenue - Costs |

Table 8: Kinetic coenergy as the direct economic costs and its relationship with the economic surplus.
linear storage law from Section 4 gives an expression that depends quadratically on \(q\) (see Table 6a), recognizable as the expression of the potential energy of a spring.
### 5.3 Dissipation as Consumption
In mechanics, friction impedes motion and in economics friction impedes the flow of trade. In both cases, this implies that --as vectorial quantities-- the friction force \(F=F(v)\) is in the same direction as the movement \(v\). Applying the benefit-surplus principle, we find that:
\[\dot{T}=v\dot{p}=-Fv:=-P\leq 0 \tag{7}\]
The minus sign is required to assure that the friction does impede the trade flow. The quantity \(P=Fv\) is known as active power or the rate of energy dissipation. The alignment of \(F\) and \(v\) implies that it is non-negative and, hence, Equation 7 implies that surplus, like energy, exclusively decreases in the presence of friction. In economics, we refer to this process as consumption and characterize an allocation of surplus to friction as consumptive (see Table 6c).
The linear law \(F=bv\) leads to quadratic consumptive allocation rates \(P=bv^{2}\) that are, in fact, positive for any non-zero coefficient \(b\). Many non-linear laws are applicable, depending on the circumstances. Fixed fee structures of the form \(F=b\operatorname{sgn}(v)\) lead to rates of the form \(P=b|v|\), also non-negative. In Section 6, we analyze quadratic friction analogous to drag forces.
Integrating consumptive allocation over time yields the consumption \(\Delta Q\) (see Table 6a). The \(\Delta\) notation emphasizes that, like benefits and unlike surplus reallocations, the integral cannot be evaluated in a path-independent manner and, hence, consumption is not a bona fide function of the state.
### 5.4 Power Balance as Efficient Allocation
The analogy between energy and surplus implies that, like energy, economic surplus is a conserved property of a closed system. In mechanics, the manifestation of this is that energy can be neither created nor destroyed, but must flow from one body to another. The flow of energy is also called power and a power balance guarantees that it is conservative. For economics, we refer instead to an allocation of surplus and say that the allocation is efficient to indicate it is conservative (see Table 6c).
#### Reallocative Efficiency
When energy is transferred between kinetic and potential forms, the transfer is reversible and the power is called reactive to emphasize that it can flow in either direction. We refer to the analogous allocation between direct and indirect surplus as reallocative to emphasize that the roles of beneficiary and benefactor may switch. Although in Figure 6b, the storage is the beneficiary, this is simply because \(\dot{V}=F\dot{q}\) happens to be positive because the convenience and the inventory fill rate vectors are aligned. The reallocative efficiency \(\dot{T}+\dot{V}=0\) that is analogous to the reactive power balance (see Table 6c) follows from Equation 6. We can see this graphically since the rate of reduction of the area corresponding to the direct surplus in Figure 8a equals the rate of increase of the area corresponding to the indirect surplus in Figure 8b.
Integrating the reallocative balance (Equation 6), we find that \(H=T+V\) is a constant. In mechanics, the analogous constant is known as the total mechanical energy. We refer to its economic analog as the disposable surplus because it can be reallocated at will, analogous to how the mechanical energy can be converted to any other form of energy.
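A small numerical check of this constancy, assuming the linear storage law \(F=kq\) and the linear demand schedule \(v=\varepsilon p\); the parameter values are illustrative only.

```python
eps, k = 0.5, 2.0                   # price elasticity and storage (convenience) stiffness
p, q = 4.0, 0.0                     # initial price and inventory deviation
dt = 1e-4

def disposable_surplus(p, q):
    return 0.5 * eps * p ** 2 + 0.5 * k * q ** 2   # H = T + V

H0 = disposable_surplus(p, q)
for _ in range(200_000):            # frictionless reallocation between T and V
    p += -k * q * dt                # price adjustment equals minus the convenience force
    q += eps * p * dt               # law of demand: the quantity demanded fills the stock

print(f"H initially {H0:.4f}, after 20 time units {disposable_surplus(p, q):.4f}")
```

The semi-implicit update keeps \(H\) constant up to the integration error, mirroring the frictionless exchange between direct and indirect surplus.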
Figure 8: Allocative efficiency and the conservation of total economic surplus. The consumptive allocation is necessarily positive.
#### Consumptive Efficiency
When kinetic energy is dissipated by a damper, the transfer is irreversible and the power is called active to emphasize that the energy flow is in one direction only. We refer to the analogous allocation as consumptive to emphasize that the frictional entity is at all times the beneficiary. In Figure 5(b), this is suggested by the hydraulic check valve. It is seen graphically by realizing that the area in Figure 7(c) is at all times positive and, hence, the direct surplus is necessarily deallocated in Figure 7(a). The consumptive balance \(\dot{T}+P=0\) follows from Equation 7 and is the analog to the active power balance (see Table 5(c)). Integrating the consumptive balance, we find that \(T+\Delta Q\) is a constant.
In mechanics, the lost energy \(\Delta Q\) is thought of as being _dissipated_ and the rate of dissipation \(P\) is referred to as the active power (see Figure 5(b)). The picture of dissipation from statistical mechanics is that the available energy is distributed over a multitude of particles and degrees of freedom. In economics, we picture the surplus as being _distributed_ over agents and _diversified_ over the different types of commodities.10 Adam Smith [12] famously imagined an invisible hand that governs economic processes. He writes:
Footnote 10: For the damper in Figure 5(b) that represents the handling services, the analogy can be made explicit as follows. The work done on the damper serves to increase the average kinetic energy of the particles that make up the viscous fluid it contains. The energy content of the work is thus dissipated over numerous particles, each travelling at its own speed in various directions. For economics, we think of the fluid particles as the, presumably numerous, agents involved in the intermediation of the trade flow, and the dissipation process describes how the surplus is distributed among the agents who diversify their portfolios over the available commodity types.
_They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants..._
The analogy with statistical mechanics becomes apparent if we think of "they" as agents and "necessaries" as the economic surplus that is diversified over various products and commodity types.
A fundamental law of physics, the second law of thermodynamics, postulates that dissipation is an irreversible process. Extending this to economics implies that consumption is also irreversible. This further implies that consumptive allocation, like active power, cannot be negative, as witnessed by the expression \(P=bv^{2}\) for linear friction. Smith's invisible hand can thus be visualized as the thermodynamic arrow of time, irrevocably pushing the system to an equilibrium where surplus vanishes and trade stops.
#### General Allocative Efficiency
The overall power balance requires that the energy flows add up to zero. This corresponds to the general allocative efficiency listed in the top row of Table 5(c). This can be accrued over time by integrating to obtain the constancy of total surplus:
\[H+\Delta Q+\Delta W=E.\]
In mechanics, the constant \(E\) is known as the total energy and the equality as the law of conservation of energy. An analogous equality in economics occurs for the calculation of GDP as the sum of consumption, investment and government expenditures, customarily written as \(Y=C+I+G\). If we think in terms of value rather than money, then if consumption is \(\Delta Q\), investment is the indirect surplus \(V\), and government expenditures the benefits \(\Delta W\), the calculation of GDP is consistent with the law of conservation of surplus.11
Footnote 11: Exports net of imports also makes up part of \(\Delta W\). Our analysis does suggest adding the value of the direct economic surplus to GDP as well.
Smith argued that the invisible hand irrevocably leads markets to attain equilibrium as long as there are no government interventions. Therefore, if we let \(\Delta W=0\) to exclude any interventions, it follows that when the consumption \(\Delta Q\) grows in time with \(P\geq 0\), the disposable surplus \(H\) must decrease and ultimately vanish altogether. Then, any trade activity ceases and the system thus attains equilibrium.
## 6 Equilibrium
### 6.1 Economic Equilibrium as Mechanical Equilibrium
In [2], Marshall introduces the concept of economic equilibrium as follows:
_The simplest case of balance or equilibrium between desire and effort is found when a person satisfies one of his wants._
Our definition of an economic force puts us in a position to make the analogy with mechanical equilibrium. In engineering practice, equilibrium is determined with the aid of a free-body diagram. The agent's desires (or wants) and efforts (or incentives) are balanced by adding them vectorially to determine a net want \(F_{\textsc{net}}\) (see Figure 9). Using our Newtonian law of demand (Equations 1), any net want results in a price adjustment \(\dot{p}=F_{\textsc{net}}\). Hence, only when the net
want \(F_{\textsc{Net}}=0\) do we obtain a constant equilibrium price. This condition is known as price equilibrium in economics and dynamical equilibrium in mechanics (see Table (a)a).
Price equilibrium alone does not imply that stocks remain constant since these grow linearly with the quantity demanded. In the special case where \(\dot{q}=0\), i.e. the level of demand equals that of the benchmark market level, inventory stocks also remain constant. In mechanics, this is referred to as static equilibrium and economists refer to competitive equilibrium in this context (see Table (a)a).
### 6.2 Stable Equilibrium and the Invisible Hand
An economic equilibrium need not be stable. Adam Smith [1] referred to an equilibrium price value that is stable as the "natural price":
_The natural price is as it were the central price to which the prices of all commodities are continually gravitating. Different accidents may sometimes keep them suspended a good deal above it, and sometimes force them down even somewhat below it. But whatever may be the obstacles which hinder them from settling in this center of repose and continuance, they are constantly tending towards it._
In [2], Marshall makes an explicit analogy with the manner in which stability is described in mechanics:
_When demand and supply are in stable equilibrium, if any accident should move..., from its equilibrium position, there will be instantly brought into play forces tending to push it back to that position; just as, if a stone hanging by a string is displaced from its equilibrium position, the force of gravity will at once tend to bring it back to its equilibrium position._
We formalize these ideas using the analysis of equilibrium in mechanics. There, a distinction is made between stable and asymptotically stable systems. For economics, this implies that an equilibrium is _stable_ when neither the price nor the stock diverges indefinitely from its equilibrium value. The equilibrium is _asymptotically stable_ if these values actually converge in the _long run_ to their equilibrium values. For instance, analogous to the momentum of a stone hanging from a string, the price may oscillate indefinitely around its equilibrium value when there is no trade friction. In the presence of friction, however, the price fluctuations around the equilibrium value diminish and price and stock will move arbitrarily close to their equilibrium values. In Section 7, we work this out for several applications.
The conditions for asymptotic stability are precisely those of the invisible hand (see Section 5.4). A friction force implies a consumptive allocation \(P>0\) and, as long as no benefits are being allocated and \(\Delta W=0\), this implies that the disposable surplus shrinks until it vanishes entirely. In the language of dynamical-systems theory, this means that the disposable surplus can be used as a Lyapunov function to prove asymptotic stability, thus formalizing Smith's picture of the invisible hand as the driver of long-term economic equilibrium at the "natural price."
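As a sketch of this argument, assume the linear laws of Section 4: a convenience force \(kq\), a friction force \(bv\), and the demand schedule \(v=\varepsilon p\), so that the price adjusts as \(\dot{p}=-kq-bv\) while the stock fills as \(\dot{q}=v\). The disposable surplus then serves as a Lyapunov function,

\[H=\tfrac{1}{2}\varepsilon p^{2}+\tfrac{1}{2}kq^{2},\qquad\dot{H}=v\dot{p}+kq\,\dot{q}=v\left(-kq-bv\right)+kqv=-bv^{2}=-P\leq 0,\]

so that the disposable surplus shrinks monotonically and all trading activity dies out as \(v\to 0\).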
### 6.3 Mutual Equilibrium as the Two-Body Problem
In [2], Marshall describes the _mutual_ equilibrium between two agents, a demander and a supplier, as follows:
_When the demand price is equal to the supply price, the amount produced has no tendency either to be increased or to be diminished; it is in equilibrium._
Mutual equilibrium is traditionally pictured as the point where the demand and supply lines intersect. In Marshall's original picture in Figure 10c, this is the point labeled A and in Figure 10b, it is the point labeled as the origin \((0,0)\).
The analogous analysis in mechanics is known as the two-body problem, and we show how the economists' picture can be constructed using its solution (Figure 10). In the two-body problem, there is no body that is massive enough to serve as an inertial reference frame. Instead, the inertial frame is attached to the center of mass. Analogously, if there are no perfectly inelastic agents to mark the demand to market, we can use the pair's center of demand as a benchmark. Since there are no third parties involved, the third law (Section 3.3) implies that the midpoint price \(p_{\textsc{CM}}/2\) is a conserved quantity and we choose this and its corresponding aggregate demand level as the origin of the unified demand lines. The
Figure 9: Mechanical analysis of economic equilibrium.
individual agents' prices then represent the spread over the midpoint price and their quantities demanded or supplied the excess over the aggregate level of demand.
Using the center of demand as the benchmark, the two-body problem is reduced to that of a single agent in a force field. In mechanics, the reduced mass is used to do this, but for economics the arguments become particularly intuitive and straightforward by using the aggregate elasticity \(\varepsilon=\varepsilon_{d}+\varepsilon_{s}\) instead (see Table (b)b). It is readily verified that the agents follow a single mutual demand line \(v=\varepsilon p\) that relates the mutual excess demand \(v=v_{d}-v_{s}\) to their mutual price spread \(p=p_{d}-p_{s}\). In economics, this relationship is known as the Marshallian demand function. It generalizes the single-agent case, which is recovered when one of the agents is perfectly inelastic. For an inelastic supplier, we substitute \(\varepsilon_{s}=0\) and mutual elasticity \(\varepsilon=\varepsilon_{d}\) reduces to that of the demander alone. The center of demand then coincides with the supplier whose supply line becomes vertical and we recover the situation pictured in Figure 2.
To investigate the forces that push the agents to an equilibrium, we assume the agents' mutual trade is brokered by a handler with friction coefficient \(b>0\). Consistent with the third law, the force of demand must equal the force of supply. In the diagram in Figure 10a this is shown as a flow \(F\) of value from the demander, through the handler, to the supplier. In the economists' picture in Figure 10b, we show this as equal but opposite inducements \(F\,\mathrm{d}t\) for the agents to adjust their prices. The price cut \(F\) the handler charges to mediate the trades acts to reduce the price spread over time until it effectively vanishes.12 In Figure 10a, this manifests itself in time, as the masses representing the agents ultimately move with the same velocity (the quantity demanded) and all trading stops. In Figure 10c it gives rise to movements along the demand and supply curves leading to an equilibrium at the crossing of the two.
Footnote 12: In fact, since the cut \(F=\varepsilon bp\), we have that \(\dot{p}=-\varepsilon bp\) and the price development consists of an exponential decay at rate \(\varepsilon b\) (see also Section 7).
The passage to equilibrium can also be argued using the invisible hand alone. In Figure 10, we picture the total surplus \(T=T_{d}+T_{s}\) by lightly colored areas. The consumptive allocation \(P=\dot{T}_{d}+\dot{T}_{s}\) corresponds to the portion that is lost over time and we shade this in the darker tones. As long as the trade is not friction free, the consumptive allocation is always positive. Therefore, the areas representing the surplus are reduced until the lack of available surplus causes all trading activity to stop.
Without consumptive allocation, agents would eternally remain out of equilibrium. For instance, if the damper in Figure 10a is replaced by a spring that represents some mutual storage facility, the agents would continually stock up or draw from inventory to provide the demand and supply to the counterparty, who would be doing the same thing, albeit 180° out of phase. As a result, trading activity would sweep out the areas corresponding to the surplus in both directions and on both sides of the equilibrium point in Figure 10b, never reaching the equilibrium state (see, further, Section 7.2). In
Figure 10: Economic equilibrium and the two-body problem.
practice, this is rare since some form of surplus loss, either through handling, custody, or other types of consumption, is inevitable. The appearance of the invisible hand thus guarantees that equilibrium is reached in the long run.
### 6.4 General Equilibrium and the Inertia Tensor
Marshall assumed that the demand for a particular commodity is independent of the demand for any other of the available commodities in the analysis. This assumption is known in economics as partial or Marshallian equilibrium. It is to be contrasted with general or Walrasian equilibrium, which considers all possible types of commodities and their interdependencies simultaneously. In this section, we extend the Newtonian approach to general equilibrium analysis.
The vectorial nature of quantity demanded \(v\) and price \(p\) allows us to interpret the demand schedule \(v=\varepsilon p\) as a linear map \(\varepsilon:p\mapsto v\) between vector spaces. In economics, this map is known as the Walrasian demand function. The linear operator \(\varepsilon\) is a tensor that we refer to as the elasticity tensor. In the one-dimensional case, this tensor reverts to the scalar used in the previous sections.
Using a chart of accounts, we can represent the elasticity tensor as a matrix. It right-multiplies a column vector of prices to give a column vector of quantities demanded. We refer to the diagonal entries of the matrix as principal elasticities and the off-diagonal entries as cross elasticities. In an engineering diagram, the choice of a chart is visualized as a choice of coordinate frame of account directions. In Figure 11, this is shown for a chart consisting of commodities, \(a\) (apples) and \(b\) (bananas), and a chart for baskets (fruit salads), \(\alpha\) and \(\beta\).
In the commodity chart, the cross elasticities \(\varepsilon_{ab}\) are non-zero (Figure 11a), indicating the presence of the substitution effect. In the particular basket chart we selected, the cross elasticities vanish and the matrix is diagonal. In the engineering diagram, the basket accounts are orthogonal, whereas the commodity accounts are not (Figure 11b). Algebraically, the basket accounts form an eigenvector basis for the elasticity tensor, with eigenvalues equal to the principal elasticities of the diagonal matrix. In such an eigen-chart, the Walrasian demand function reduces to the Marshallian demand function and the general equilibrium analysis is converted to a set of independent partial equilibrium problems.
The economic surplus is a scalar, independent of the choice of coordinate chart. This is analogous to kinetic energy in higher dimensions. To determine it, we generalize the expression in Table (a)a by thinking of the elasticity tensor as a bilinear map \(\varepsilon:(p,p)\mapsto T\), taking two copies of the price vector and yielding a scalar value of the surplus. In the matrix representation, this can be written and evaluated explicitly, with \(p^{T}\) the transpose of \(p\), as follows:
\[T=\frac{1}{2}p^{T}\varepsilon p=\frac{1}{2}\varepsilon_{aa}p_{a}^{2}+\varepsilon_{ab}p_{a}p_{b}+\frac{1}{2}\varepsilon_{bb}p_{b}^{2}=\frac{1}{2}\varepsilon_{\alpha\alpha}p_{\alpha}^{2}+\frac{1}{2}\varepsilon_{\beta\beta}p_{\beta}^{2}.\]
The first equality gives the general expression in the matrix representation for the surplus and the remaining equalities evaluate this for the two-account example. A comparison with the expression in Table (a)a shows that an additional cross term is required to account for the addition to the surplus that arises due to the substitution effect. For the eigen-baskets, this term is absent and the surplus reduces to a sum of Marshallian surpluses.
It follows that any general equilibrium problem can be reformulated in terms of a set of partial equilibrium problems. The direct surplus is non-negative, as is the kinetic energy. Therefore, the elasticity tensor must be positive-definite and, hence, an eigen-basket decomposition necessarily exists for any choice of commodities. This corroborates the usual assumption for the matrix in economics (see, e.g., [4]).
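The eigen-basket construction can be carried out numerically. The sketch below uses a hypothetical two-commodity elasticity matrix (the values are illustrative, not taken from the text) and verifies that the surplus is the same scalar in both charts.

```python
import numpy as np

# Hypothetical symmetric, positive-definite elasticity tensor in the commodity
# chart (a = apples, b = bananas); the off-diagonal entry is the cross elasticity
# responsible for the substitution effect.
eps = np.array([[2.0, 0.5],
                [0.5, 1.0]])
p = np.array([3.0, 4.0])              # price vector in the commodity chart

T_commodity = 0.5 * p @ eps @ p       # direct surplus T = 1/2 p^T eps p

# The eigenvectors define the basket chart in which the elasticity tensor is
# diagonal, i.e., in which there is no substitution effect.
principal_eps, baskets = np.linalg.eigh(eps)
p_basket = baskets.T @ p              # prices expressed in the basket chart
T_basket = 0.5 * np.sum(principal_eps * p_basket ** 2)   # sum of Marshallian surpluses

print(f"T in the commodity chart: {T_commodity:.4f}")
print(f"T in the basket chart   : {T_basket:.4f}")       # identical: T is a scalar
```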
Although in principle the tensor methods allow us to address any problem in general equilibrium, this becomes impractical when a large number of commodity types is involved. An industrialized economy, in particular, trades in an enormous number of distinct goods, making the elasticity tensor unwieldy for calculations and infeasible to assess. The same is true for mechanical systems that have a large number of degrees of freedom. In this case, other theories and methods such as
Figure 11: The demand tensor of general equilibrium as an inertia tensor.
thermodynamics that investigate the behavior of averages have proven to be effective in physical systems. We postpone this analogy to a follow-up paper.
## 7 Economic Engineering
In systems and control engineering, the method of analogs is exploited to model systems in a uniform manner, irrespective of their physical domain. Economic engineering extends the method to the economic domain.
Especially for linear systems, powerful techniques for analysis and control have been developed and analogies allow us to extend these to economic systems. The law of demand is a linear law. Therefore, a system consisting of demanders and linear price drivers, such as the storage and friction laws in Table 5, is a linear dynamical system. If, in addition, we assume the price elasticity and driver parameters to be constant, we obtain a class of systems called linear time-invariant, or LTI systems. These are described by linear differential equations with constant coefficients, which are widely studied in systems-and-control engineering. In this section, we apply these techniques to the description of several simple economic systems to illustrate their effectiveness.
Dynamical systems are defined by their behavior, i.e., how they respond to an external input, which is known as an exogenous variable in economics. The response or output is known as an endogenous variable. Dynamical systems are analyzed by investigating their response to standard test inputs. Impulses or steps are known as shocks in economics and the sinusoidal AC signals are referred to as seasonal or cyclical. The transient response is known as the short run and the steady state as the long run. The relationship between the input and output is given by a transfer function. In the following subsections, we describe the economic concepts of price and inventory rigidity using transfer functions. (See Figure 12.)
### 7.1 First-Order Systems
In reality, there are few agents who are pure demanders, buying a constant quantity \(v\) of a commodity. Instead, some degree of trade friction is always present due to handling costs. These serve to impede the flow of trade and we represent them by a damper \(b\) that links the demander \(m\) to the perfectly inelastic market represented by the inertial wall (see Figure 13(a)). Such a system is a trader, capable of both acquiring and disposing of the commodity. We subject the trader to exogenous pressure \(F\) and investigate its price response. A free-body diagram for the demander gives the first-order differential equation for the trader's reservation price (see Table 13(b)).
We first consider the free response, i.e., the price evolution of the buyer from an initial price \(p_{0}\) onward without any exogenous price pressures. The differential equation governing the price is \(\dot{p}=-\gamma p\), where the rate \(\gamma=b/m\) is known as the damping rate in mechanics and the discount rate in economics (see Table 14(a)). In this form, the notion of \(\gamma\) as a discount rate is made explicit by formulating it as the percentage decline the price suffers over time. The solution \(p=p_{0}e^{-\gamma t}\) is known in economics as the exponential discount function.
In economics, the factor \(\mathrm{DF}=e^{-\gamma t}\) is known as the discount factor. It quantifies the agent's time preference by measuring the degree to which an agent prefers receiving the goods sooner rather than later. These are used in models for intertemporal choice that complement the usual models of choice among the available types of goods. The exponential form of the discount factor implies that the price discount depends only on the interval of time and economists therefore consider it a time-consistent choice function. Economists typically determine the value of the discount rate \(\gamma\) from data. By contrast, in economic engineering it is fully determined by the model parameters (see Table 14(a)).
Figure 12: Block diagram specifying the effect of an exogenous input on an endogenous output.
Figure 13: First order system illustrating price rigidity.
Next, we consider the response of the trader to a step shock of size \(F\) in price pressure. If there were no friction, the price would increase indefinitely as the agent continues to bid up the commodity to satisfy its insatiable needs. In practice, however, the handling costs serve to proportionally check the agent's wants. Ultimately, the price converges to a new equilibrium price \(F/\gamma\) when the force of friction \(\gamma p\) precisely balances the exogenously imposed \(F\). In the short run, the price is obtained by multiplying the price pressure by a factor \(\frac{1}{\gamma}(1-\mathrm{DF})\) that econometricians call the speed-of-adjustment (see Table 13(a)). These effects are illustrated in Figure 14, where we graph the trader's price response for various values of the discount rate.
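A minimal numerical sketch of this step response (the parameter values are illustrative):

```python
import numpy as np

m, b = 2.0, 0.5                     # demand (inertia) and handling-friction coefficient
gamma = b / m                       # discount rate
F = 1.0                             # size of the step shock in price pressure

t = np.linspace(0.0, 20.0, 201)
DF = np.exp(-gamma * t)             # exponential discount factor
p = (F / gamma) * (1.0 - DF)        # step response in speed-of-adjustment form

print(f"long-run price F/gamma : {F / gamma:.3f}")
print(f"price at t = 5         : {np.interp(5.0, t, p):.3f}")
```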
The transfer function that determines the price action can be used as a model for price stickiness or nominal rigidity in economics. Keynes postulated that prices are resistant to change under economic shocks and do not change immediately, contrary to the neoclassical economists who argue that prices should adjust instantaneously. The economic-engineering analysis demonstrates that price stickiness is the expected behavior from a trader because it is balancing its force of demand with that of the trade friction. Because the friction force \(\gamma p=bv\) increases with the level \(v\) of the demand, the agent delays some of its acquisitions to avoid the higher frictional costs and finds it rational not to adjust instantaneously to the new equilibrium price.
### 7.2 Second-Order Systems
In addition to exercising demand for the commodity, second-order traders maintain an inventory. Such traders display both price and inventory dynamics simultaneously. In the following two subsections, we analyze two cases: a trader who bears transaction costs while trading, and one who bears custody costs on its inventory stock.
#### 7.2.1 Inventory Dynamics
If an agent is capable of storing the commodity in addition to brokering its trades with the market, we obtain a trader whose mechanical analog is depicted in Figure 14(a). We expose the agent to exogenous price pressure \(F\) and investigate its inventory response. The transfer function represents the inventory rigidity of the agent. The differential equation specifying this is given in Table 14(b). The inventory rigidity is now strongly dependent on the values of the parameters, and a second-order trader can act either as a broker-dealer or a buy-and-holder.
We first consider the long run. At equilibrium, the force of demand and that of friction must vanish and, hence, any exogenous wants must be met by the convenience of the stock level. This implies that in the long run \(q_{\infty}=F/k\). To determine the short-run transients, we notice that the dependence of the response on the discount factor \(\mathrm{DF}\) is identical in form to that of a first-order trader. However, the discount factor itself of a second-order trader is quite distinct (see Table 14(c)). It depends on two parameters: the natural frequency of the trade or inventory cycle \(\omega_{n}=\sqrt{k/m}\) and, more critically, on a parameter \(\zeta=\gamma/(2\omega_{n})\) we refer to as the discount propensity (see Table 14(a)).
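The inventory responses discussed below can be generated from a small sketch of this model. We assume the mass-spring-damper form \(m\ddot{q}+b\dot{q}+kq=F\), which is consistent with the long-run level \(q_{\infty}=F/k\) and the definitions of \(\omega_{n}\) and \(\zeta\); the parameter values are illustrative.

```python
import numpy as np
from scipy import signal

m, k, F = 1.0, 4.0, 1.0                    # demand, storage stiffness, size of the step shock
omega_n = np.sqrt(k / m)                   # natural frequency of the inventory cycle

for zeta in (0.0, 0.5, 0.9, 1.0, 2.0):     # dealer, broker-dealers, critical, hyperbolic
    b = 2.0 * zeta * omega_n * m           # implied trade-friction coefficient
    trader = signal.lti([F], [m, b, k])    # m*q'' + b*q' + k*q = F (unit-step input)
    t, q = signal.step(trader, T=np.linspace(0.0, 15.0, 600))
    print(f"zeta = {zeta:3.1f}: q(15) = {q[-1]:6.3f}   (long-run level F/k = {F / k:.3f})")
```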
Figure 14: Time response of a first-order agent.
#### Cyclical Discounting
When the propensity to discount \(\zeta=0\), there is no trade friction and an equilibrium state may never be reached. The discount factor becomes a pure sinusoid with frequency \(\omega_{n}\), implying that the trader alternates between discounting and placing a premium on the commodity. Such a trader is known as a dealer or market maker, alternately overstocking and understocking the commodity. When trade friction is introduced while keeping \(\zeta<1\), the frequency component of the discount factor is lowered to \(\omega_{d}\) and modulated with an exponential discount at half the discount rate. Such a trader is known as a broker-dealer. This agent modulates the swings in its inventory stock to approach the equilibrium stock level in the long run.
We see that traders add storage for the same reason that a heavy body such as a car is suspended by shock absorbers, which contain springs: The storage absorbs any exogenous price shocks by selling from inventory and the springs absorb any momentum shocks from the road. The dealer controls the response characteristics by adding trade friction and in a car suspension this is done by adding damping to the shock absorbers. In control engineering, \(\zeta=0.9\) is typically considered the best choice to obtain a rapid inventory adjustment, while keeping the maximum overshoot and rise time within acceptable levels (the orange curve in Figure 15(b)). Transient analysis --a subdiscipline of control theory (see, e.g., [15])-- offers methods for tuning the parameters of a system to other desiderata, including rise time, maximum overshoot, and settling time.
#### Critical Discounting
When the propensity \(\zeta=1\), the trader ceases to act as a dealer and its behavior is the closest to that of an exponential discounter. It no longer over- and under-stocks but, rather, approaches the equilibrium stock level in an exponential manner at half the discount rate. The reason it does not do so at the full rate is because the price also moves at half the discount rate, so that the surplus moves at the sum of the two, which amounts to the full rate \(\gamma\). In the very short term, however, the response is dominated by a linear term \(\frac{\gamma}{2}t\). The reason for this is that initially, when \(v\) is still small, the trader's transaction costs are not substantial enough yet to induce the trader to deviate from the behavior of an ideal dealer.
Critical discounting is appropriate for storage facilities, such as a car gas tank, which should be filled to capacity as rapidly as possible, but which do not tolerate overstocking. Figure 15(b) shows that, among the curves having no overshoot, the curve for \(\zeta=1\) (green in the figure) does approach the equilibrium stock level the fastest. A prototypical control-engineering example would be door closers; it is desirable to have no overshoot at all to avoid the door slamming against the post, while simultaneously closing the door as rapidly as possible.
#### Hyperbolic Discounting
When the propensity to discount moves beyond unity and \(\zeta>1\), the agent behaves as what is known as a hyperbolic discounter. This leads to correction terms that appear as hyperbolic functions. The graph of the time response in Figure 15(b) suggests that the agent has a higher discount rate in the very near future and lower discount rate in the more distant future.15
Footnote 15: If we express the hyperbolic functions in terms of their definitions as a sum and difference of exponential functions, the discount factor can be rewritten as a sum of a fast and a slow exponential, explicitly showing the use of two rates of discount.
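The double-exponential structure mentioned in the footnote can be made explicit. As a sketch, assume that the discount factor enters the step response as \(q(t)=q_{\infty}(1-\mathrm{DF})\), in analogy with the first-order case; the standard overdamped form for \(\zeta>1\) then reads

\[\mathrm{DF}=e^{-\zeta\omega_{n}t}\left(\cosh\omega_{h}t+\frac{\zeta\omega_{n}}{\omega_{h}}\sinh\omega_{h}t\right)=\frac{\lambda_{+}e^{-\lambda_{-}t}-\lambda_{-}e^{-\lambda_{+}t}}{\lambda_{+}-\lambda_{-}},\qquad\lambda_{\pm}=\omega_{n}\left(\zeta\pm\sqrt{\zeta^{2}-1}\right),\quad\omega_{h}=\omega_{n}\sqrt{\zeta^{2}-1}.\]

The slow rate \(\lambda_{-}\) governs the distant future and the fast rate \(\lambda_{+}\) the near future, which is the two-rate picture of hyperbolic discounting described above.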
The rationale for hyperbolic discounting, from the trader's perspective, is that when trading commences and the trading volume is still low, it can take advantage of the corresponding low handling costs to stock up rapidly in the short run and then slow down when these costs become significant compared to the value of the items. Although hyperbolic discounting has traditionally been
Figure 15: Second-order system.
dismissed as irrational, economists have more recently argued that our hunter-gatherer ancestors were incentivized to consume relatively large amounts of food immediately upon finding it in order to mitigate the risks of losing it when consumption is postponed. This behavior is analogous to that of automatic vehicle braking systems, which initially brake relatively strongly, to then lighten up and come to a slow cruising halt, thus reducing the risk of sudden last-minute movements and shocks.
Hyperbolic discounting is appropriate to dampen out large shocks and to avoid the risk of overstocking due to unforeseen circumstances, while maintaining the ability to respond rapidly to the regular in- and outflow of orders. This is analogous to a typical car suspension system, where shock absorbers are actually designed to be overdamped so that the driver maintains a feel for the road from small rapid shocks, while large shocks are damped out.
#### 7.2.2 Cost of Carry Model
Traders on the futures market profit by arbitraging the spot price of a commodity against a forward price. This can be done by storing (or shorting) the commodity at the spot price in order to dispose of (or acquire) it at some later date. When storing the commodity, traders bear what is called the cost of carry, i.e., the warehousing, insurance, and other costs involved with holding the commodity. Such a trader can be represented by the mechanical system shown in Figure 16(a). By placing the damper in series with the spring rather than in parallel to it, the damper represents friction due to custody rather than to handling.
We subject the trader to an exogenous change in the desirability of the commodities and investigate its price response. The engineering diagram in Figure 16(a) shows that part of the total quantity acquired is lost to the custodian and does
Figure 16: Dynamics of a second-order system.
not appear in the storage. The convenience and friction forces are equal. In futures trading, this condition is referred to as "full carry" (see also [16]). The differential equation describing price rigidity is given in Table 16(b).
The analysis given in Figure 15 for the second-order spot trader may be applied to the futures trader with the following modifications: the discount rate is replaced by what is known as the carry rate, setting \(\gamma=k/b\) in Table 15(a). We refer to the corresponding \(\zeta\) as the propensity to carry. In economics, this is also called the propensity to consume. The factor \(\mathrm{DF}\) is known as the carry and the response is known as the cost-of-carry model. The graphs are known as futures term-structure graphs.
For the damping regimes, we distinguish between speculators for whom \(\zeta<1\), and hedgers for whom \(\zeta>1\). Traders on futures markets describe market behavior consistent with the results in Figure 15(b).
When the market is dominated by hedgers, one speaks of a normal market. Hedgers are willing to bear the cost of storage in order to take later delivery. As a consequence, the price advances steadily towards the deferred contracts, consistent with the price responses in Figure 15(b) for any \(\zeta\geq 1\). This is known as normal backwardation.
When speculators take over the market, however, the price overshoots. When it moves back, the market is said to be inverted. The explanation is that, when inventories are low and a shortage is felt, those speculators in need bid up nearby contracts to a momentary premium over deferred ones. The analogous mechanical statement is that "a spring (inventory) in compression (shortage) generates the acceleration (need) that forces (bids up) the momentum (price) to overshoot (a premium)." This is borne out by the price responses in Figure 15(b) for carry propensities \(\zeta<1\). In theory, speculative markets switch cyclically between being in normal backwardation and inverted at the frequency \(\omega_{d}\). Because persistent fluctuations are rarely seen, we can conclude that, in practice, inverted markets have propensities close to \(\zeta=0.9\) of the orange graph, as it settles rapidly enough to dampen out all but the first of the inversions.
### 7.3 Dynamical Systems
In this section, we briefly overview several ways the analysis for the first and second-order systems of the previous subsections can be generalized.
#### General Second-Order Systems
The spot and futures traders can be consolidated into one general second-order trader who incurs both handling and carrying costs. Only in the very short run, i.e., within a cycle, is the behavior influenced by the relative strength of the handling costs with respect to the carrying costs as the agent attempts to optimally balance these. In the medium or long term, i.e., over at least several cycles, the behavior of such a trader does not deviate substantially from that of a spot or futures trader.
#### Higher-Order Linear Systems
In general, an economic system may display several natural trade cycles. For instance, economists identify at least four different cycles in the economy, with periods ranging from roughly four years for the inventory cycle to approximately 50 years for the technology cycle. This means that the economy is at least an eighth-order system, consisting of four interacting second-order systems.
#### Nonlinear Systems
As in mechanics, nonlinear force laws occur more often than not in economics. It is easy to imagine a storage facility overflowing and it is equally imaginable that a spring will break when stretched enough. Although general solution methods are lacking, sophisticated methods have been developed for nonlinear system analysis in control engineering, and these are equally applicable to economic systems.
Figure 17: Futures trader as a second-order system.
Economic texts typically picture demand and supply curves as convex curves, suggesting the presence of a nonlinear demand. However, an analogous nonlinear inertial element cannot be entertained within the confines of Newtonian mechanics, since mass is a constant, independent of velocity. Therefore, if price elasticity appears to change, its dependence on quantity demanded is implicit via time. A well-known example in mechanics is that of a rocket ship whose mass decreases over time while it picks up speed due to the emission of exhaust fumes. An analogous economic situation would involve a gradual rotation of the demand curve as quantity demanded increases. When measuring price elasticity, it is critical that the demander is sufficiently isolated from any exogenous effects that may affect the reading.
### 7.4 Control Theory
An important advantage of dynamical-systems models is that their behavior can be tuned using the methods and tools of control theory. In economic applications, we think of the controller as a manager --such as a policy maker or a financial regulator-- of the economic system. Its policy and executive decisions constitute the controller actions. Its objectives or desired price (or stock) level represent the controller setpoint. In Figure 18 we show how feedback control can be used to design a price (or inventory) management system. The observed price \(p\) (or stock \(q\)) is compared to its objective \(\bar{p}\) (or \(\bar{q}\)), producing the error signal \(e\), which quantifies the deviation from the objective. Based on that, management intervenes by exercising corrective price actions \(F\). The price (or stock) rigidity serves as the transfer function relating these actions to adjustments in the actual level, which is then fed back for further corrective actions.
The design of a suitable control law depends on the specific policy targets that are in place. A Proportional-Integral-Derivative (PID) controller, for instance, can be tuned to meet response time targets while eliminating any steady-state errors. Originally modelled after the behavior of helmsmen on ships, PID control actions are both effective and intuitive and they are widely used in industry. We expect these features to carry over to economic applications. Many other control schemes exist for dynamical systems, each with their own features, which may be relevant for managers, financial regulators, or policy makers.
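A minimal sketch of such a feedback loop, using the first-order trader of Section 7.1 as the plant and a discrete PID law; the gains and parameters are illustrative choices, not prescriptions from the text.

```python
# Plant: first-order trader, p_dot = -gamma*p + F, with F the manager's price action
gamma = 0.3
dt, steps = 0.05, 1200

# Hypothetical PID gains and price objective (illustrative values)
Kp, Ki, Kd = 2.0, 1.0, 0.2
p_target = 5.0

p, integral, prev_error = 0.0, 0.0, p_target
for _ in range(steps):
    error = p_target - p                                # deviation of price from objective
    integral += error * dt
    derivative = (error - prev_error) / dt
    F = Kp * error + Ki * integral + Kd * derivative    # corrective price action
    prev_error = error
    p += (-gamma * p + F) * dt                          # forward-Euler step of the plant

print(f"price after {steps * dt:.0f} time units: {p:.3f} (objective {p_target})")
```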
## 8 Conclusions
This paper is one in a series of publications that will provide a theoretical foundation for economic engineering. Our purpose herein is to develop a theory based on an economic analog to Newtonian mechanics. By way of conclusion, we evaluate our development by comparing it to existing treatments in both economic and engineering literature. Finally, we identify several limitations of the theory and indicate how we intend to address these in our forthcoming publications of the series.
The crucial element in an engineering system is an _inertial_ element, i.e., a mass in mechanical systems (or an inductor in electrical systems). In the paper, we recognize _demand_ in economics as analogous to inertia. We consider this our critical insight, from which all else follows. It complements existing approaches for modelling economic systems in engineering literature, where inertial elements are systematically missing.14
Footnote 14: To wit, the hydraulic diagrams in Forrester’s [6] system dynamics lack an inertial paddle wheel (see Figure 20).
In engineering, the force concept is central to conceptualization and modelling efforts. Building on our concept of demand, we define the _force of demand_ as analogous to Newton's definition of the _force of inertia_. Any economic force is then calibrated by comparing it to this force of demand, analogous to Newton's development of a mechanical force. In the following two paragraphs, we compare the relative advantages of the use of force in economic modelling over econometric methods and the theory of demand.
Econometric models rely on _correlations_ uncovered using _statistical_ theories. In contrast, forces lead to _causal_ models based on economic _laws_. 15 Such models have several important advantages. First, they are more reliably predictive since correlations may break down. Second, the models are more readily interpretable since the laws give economic meaning to any parameters. Third, less data is required to identify the model since only a limited set of parameters
Figure 18: Feedback control for price or inventory management.
needs to be determined to generate an entire time series of price and stock adjustments. These advantages are especially important for modelling and the design of highly complex systems.
In both classical and neoclassical theories of demand, prices are assumed to adjust nearly instantaneously after a shock. The role of force is thus restricted to one that is _static_. This contrasts with the _dynamic_ nature of the Newtonian economic force, which details how prices and stocks evolve over time. Although Keynesians do distinguish between a short and a long run, a dynamic economic force specifies behavior over any time span, no matter how short it is and no matter when it occurs. Dynamic forces are particularly valuable when modeling volatile economic conditions, where predictions of short-term transient price movements are of the essence. In addition, it allows us to model systems that never attain equilibrium, such as those with recurring business cycles or persistent economic growth paths.
Although the Newtonian theory is particularly useful in economic engineering practice, it has two important limitations; one theoretical and one practical. Our forthcoming publications in the foundational series address these by applying analytical mechanics --i.e., Lagrangian and Hamiltonian mechanics-- to economics. Analytical mechanics provide a definitive set of theoretical foundations for economic engineering: Lagrangian mechanics from the perspective of the individual in terms of utility maximization and Hamiltonian mechanics from the perspective of a business in terms of flow of surplus. For engineering practice, they provide a unified framework wherein we can extend economic engineering beyond trade in commodities to areas such as production, economic growth, uncertainty, etc. With the completed series, we thus aim to establish comprehensive theoretical foundations for economic engineering on a par with those for mechanical engineering.
---

arXiv:2301.01500v1 (F. A. Ivanyuk, S. V. Radionov, C. Ishizuka, S. Chiba, 2023-01-04): http://arxiv.org/abs/2301.01500v1
# The Langevin approach for fission of heavy and super-heavy nuclei
###### Abstract
In this contribution, we present the main relations of the Langevin approach to the description of fission or fusion-fission reactions. The results of Langevin calculations are shown for the mass distributions of fission fragments of super-heavy elements and used for the investigation of memory effects in nuclear fission.
## 1 Introduction
We describe the nuclear fission process by the four-dimensional set of the Langevin equations for the shape degrees of freedom with the shape given by the two-center shell model (TCSM) shape parametrization. The potential energy is calculated within the macroscopic-microscopic method. The collective mass, \(M\), and friction, \(\gamma\), tensors are defined in macroscopic (Werner-Wheller and wall-and-window formula) or microscopic (linear response theory) approaches.
We start calculations from the ground state shape with zero collective velocities and solve equations until the neck radius of the nucleus turns zero (scission point). At the scission point, the solutions of Langevin equations supply complete information about the system, its shape, excitation energy, and collective velocities. This information makes it possible to calculate the mass distributions, the total kinetic energy, and the excitation energies of fission fragments. The results of numerous previous calculations are in reasonable agreement with the available experimental data.
Below in this contribution, we present the calculated results for the mass distributions of super-heavy nuclei and clarify the impact of memory effects on the fission width of heavy nuclei.
The physics of super-heavy elements (SHE) has a long history. The existence of the "island of stability" was predicted at the end of the 1960s [1]. Nevertheless, it took almost 30 years until the alpha-decay of the element with Z=114 was observed experimentally at Flerov Nuclear Reactions Laboratory in Dubna [2].
With the development of experimental facilities, it became possible not only to establish the formation of SHEs, but also to examine their properties. One of the first properties of interest is the fission of SHEs. For the successful planning and carrying out of experiments, it is crucial to understand what kind of fission fragment mass distribution (FFMD) one should expect as the result of the fission of SHEs. The two double magic nuclei \({}^{132}\)Sn and \({}^{208}\)Pb may contribute. Both have ground-state shell corrections of the same magnitude.
In order to clarify what kind of FFMD one could expect in the fission of SHEs, we have carried out the calculations of FFMD for a number of SHEs. The results are given in Section 3.
Another problem we address in this contribution is the influence of memory effects on the probability of the fission process. Commonly, one uses the Markovian approximation to the Langevin approach, in which all quantities are defined at the same moment. This approximation provides reasonable results, but its accuracy is not well established. In the literature, one can find both statements that memory effects have a significant influence on fusion or fission processes and statements that memory effects are very small.
To clarify this uncertainty, we have calculated the fission width using the Langevin approach with memory effects included in a wide range of important parameters: the excitation energy \(E^{*}\) of the system, the damping parameter \(\eta\), the relaxation time \(\tau\). The details and results of the calculations are given in Section 4.
## 2 The Langevin approach for the fission process
Within the Langevin approach, the fission process is described by solving the equations for the time evolution of the shape of nuclear surface of the fissioning system. For the shape parametrization, we use that of the two-center shell model (TCSM) [3] with 4 deformation parameters \(q_{\mu}=z_{0}/R_{0},\delta_{1},\delta_{2},\alpha\). Here \(\mathrm{z}_{0}/R_{0}\) refers to the distance between the centers of left and right oscillator potentials, \(R_{0}\) being the radius of spherical nucleus with the mass number A. The parameters \(\delta_{i}\) describe the deformation of the right and left fragment tips. The fourth parameter \(\alpha\) is the mass asymmetry and the fifth
parameter of the TCSM shape parametrization \(\epsilon\) was kept constant, \(\epsilon\)=0.35, in all our calculations.
The first-order differential equations (Langevin equations) for the time dependence of collective variables \(q_{\mu}\) and the conjugated momenta \(p_{\mu}\) are:
\[\frac{dq_{\mu}}{dt} = \left(m^{-1}\right)_{\mu\nu}p_{\nu}, \tag{1}\] \[\frac{dp_{\mu}}{dt} = -\frac{\partial F(q,T)}{\partial q_{\mu}}-\frac{1}{2}\frac{ \partial m_{\nu\sigma}^{-1}}{\partial q_{\mu}}p_{\nu}p_{\sigma}-\gamma_{\mu \nu}m_{\nu\sigma}^{-1}p_{\sigma}+R_{\mu}(t).\]
In Eqs. (1) the \(F(q,T)\) is the temperature-dependent free energy of the system, and \(\gamma_{\mu\nu}\) and \((\rm m^{-1})_{\mu\nu}\) are the friction and inverse of mass tensors.
The free energy \(F(q,T)\) is calculated within the shell correction method. The single particle energies are calculated with the deformed Woods-Saxon potential fitted to the mentioned above TCSM shapes.
The collective inertia tensor \(m_{\mu\nu}\) is calculated by the Werner-Wheeler approximation and for the friction tensor \(\gamma_{\mu\nu}\) we used the wall-and-window formula. The random force \(R_{\mu}(t)\) is the product of the temperature-dependent strength factors \(\rm g_{\mu\nu}\) and the white noise \(\xi_{\nu}(t)\), \(R_{\mu}(t)=g_{\mu\nu}\xi_{\nu}(t)\). The factors \(\rm g_{\mu\nu}\) are related to the temperature and friction tensor via the Einstein relation,
\[g_{\mu\sigma}g_{\sigma\nu}=T\gamma_{\mu\nu} \tag{2}\]
The temperature T is kept constant, \(aT^{2}=E^{*}\), or adjusted to the local excitation energy on each step of integration by the relation,
\[aT^{2}=E^{*}-p^{2}(t)/2M-[E_{pot}(q)-E_{pot}(q_{gs})]. \tag{3}\]
Here \(q_{gs}\) is the ground state deformation. More details are given in our earlier publications [4, 5, 6, 7].
Initially, the momenta \(p_{\mu}\) are set to zero, and calculations are started from the ground state deformation. Such calculations are continued until the trajectories reach the "scission point", defined as the point in deformation space where the neck radius turns zero.
## 3 Fission fragments mass distributions of super-heavy nuclei
In order to understand what kind of mass distributions one can expect from the solution of Langevin equations for super-heavy nuclei, we looked first at the potential energy of fissioning nuclei. Fig. 1 shows the potential energy E\({}_{def}\) of nuclei \({}^{296}\)Lv and \({}^{302}\)120 at zero temperature as a function of elongation (the distance R\({}_{12}\) between the centers of mass of left and right parts of a nucleus) and the mass asymmetry (fragment mass number).
In the top part of Fig. 1 the energy was minimized with respect to the deformation parameters \(\delta_{1}\) and \(\delta_{2}\). One sees the bottom of the potential energy leading to an almost symmetric mass splitting. There is also a hint of a mass-asymmetric valley at \(A_{F}\) close to 208.
If the trajectories followed the bottom of potential energy, the mass distributions would be symmetric. However, it is well known that the trajectories may deviate substantially from the bottom of the potential valley due to dynamic effects. We calculate the trajectories in four-dimensional deformation space. In this space, the local minima could lead away from the bottom of the potential valley. An example is shown in the bottom part of Fig. 1. Here we show the potential energy for fixed \(\delta_{1}\)= - 0.2 and \(\delta_{2}\)=0.2. One clearly sees another valley, leading to strongly mass asymmetric splitting.
In Fig. 2, we show the fission fragment mass distributions of super-heavy nuclei from \({}^{276}\)Hs to \({}^{308}\)122 as a function of fragment mass number \(A_{F}\). The FFMDs of nuclei from \({}^{276}\)Cn to \({}^{308}\)122 have three or four peak structures. The main component is the symmetric peak, split into two components in some isotopes. The peaks of lighter fragments are located around \(A_{F}\)=140.
One can also see the strongly asymmetric peak at the mass number close to \(A_{F}\)=208. The strength of the (almost) symmetric and asymmetric components in FFMD of SHEs depends on the proton and neutron numbers of the compound nucleus. For \({}^{276}\)Cn, the contribution of a strongly asymmetric peak is tiny. This contribution becomes larger for heavier SHEs. In some elements of SHEs with \(Z=\)116-122, the symmetric and mass-asymmetric peaks are of the same magnitude. More details can be found in [8].
Similar strongly mass-asymmetric peaks in the FFMD of SHEs were also found recently in [9] within the Langevin approach with the so-called Fourier shape parametrization.
Figure 2: The fission fragment mass distributions of super-heavy nuclei from \({}^{276}\)Hs to \({}^{308}\)122 calculated for the excitation energies \(E^{*}\)=10, 20 and 30 MeV as a function of the fragment mass number
## 4 The memory effects in nuclear fission
In order to investigate the role of memory effects in nuclear fission, we exploit a simple one-dimensional model with the potential energy given by the two-parabolic potential (Kramers potential), see Fig. 3.
\[{\rm E}_{pot}(q)=\begin{cases}2V_{b}\,q(q-q_{0})/q_{0}^{2},&0<q<q_{0},\\ 2V_{b}\,(q-q_{0})(2q_{0}-q)/q_{0}^{2},&q_{0}<q<2q_{0}.\end{cases}\tag{4}\]
The potential (4) depends on two parameters, the barrier height \(V_{b}\) and the barrier width \(q_{0}\). We have fixed the barrier height \(V_{b}=6\) MeV, which is close to the value of the fission barrier of actinide nuclei. The width of the barrier is somewhat uncertain. It depends on the definition of the collective coordinate \(q\) and the model for the potential energy. For simplicity, we have put here \(q_{0}=1.0\).
For the potential (4) one can define the stiffness \(C=d^{2}E_{pot}/dq^{2}\) and the frequency of harmonic vibrations \(\omega_{0}=\sqrt{C/M}\). In the present work, we fix \(\hbar\omega_{0}=\)1.0 MeV, which is close to the frequency of collective vibrations calculated for \({}^{224}\)Th in [10] within the microscopic linear response theory. Then, for the mass parameter we will have the deformation and temperature-independent value,
\[M=C/\omega_{0}^{2}=4V_{b}/(\omega_{0}^{2}q_{0}^{2}). \tag{5}\]
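For orientation, a minimal Python sketch of the two-parabola potential (4) and the derived mass parameter (5) is given below. The numerical values of \(V_b\), \(q_0\) and \(\hbar\omega_0\) follow the choices stated above, while the unit bookkeeping (energies in MeV, \(\hbar=6.582\times 10^{-22}\) MeV s) is our own illustrative convention.

```python
import numpy as np

HBAR = 6.582e-22   # MeV*s, reduced Planck constant (assumed unit convention)

V_B = 6.0          # MeV, barrier height fixed in the text
Q_0 = 1.0          # barrier width fixed in the text
HW0 = 1.0          # MeV, hbar*omega_0 fixed in the text

def e_pot(q, v_b=V_B, q0=Q_0):
    """Two-parabola (Kramers) potential of Eq. (4)."""
    q = np.asarray(q, dtype=float)
    well = 2.0 * v_b * q * (q - q0) / q0**2                   # 0 < q < q0
    barrier = 2.0 * v_b * (q - q0) * (2.0 * q0 - q) / q0**2   # q0 < q < 2*q0
    return np.where(q < q0, well, barrier)

# Stiffness C = d^2 E_pot/dq^2 and the mass parameter of Eq. (5)
C = 4.0 * V_B / Q_0**2            # MeV per unit q^2
omega0 = HW0 / HBAR               # 1/s
M = C / omega0**2                 # MeV*s^2

print(e_pot([0.0, 0.5, 1.0, 1.5, 2.0]))   # [0, -3, 0, 3, 0] MeV: well depth and barrier top
print(C, M)
```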
For the friction coefficient \(\bar{\gamma}\) we use a slightly modified approximation of [10],
\[\bar{\gamma}/M=0.6\,(T^{2}+\hbar^{2}\omega_{0}^{2}/\pi^{2})/(1+T^{2}/40). \tag{6}\]
For the temperature, we consider two options: a constant-temperature regime and a constant-energy regime. In the constant-temperature regime, the temperature is time-independent and related to the initial excitation energy \(E^{*}\) by the Fermi-gas relation \(aT^{2}=E^{*}\), where \(a\) is the level density parameter of Toke and Swiatecki [11]. The fission width calculated in the constant-temperature regime will be denoted as \(\Gamma_{f}(T)\).
At small excitations, the temperature varies with deformation and time, and there is no reason to consider it constant. It should therefore be adjusted to the local excitation energy at each integration step via relation (3). Correspondingly, the fission width calculated in the constant-energy regime is denoted as \(\Gamma_{f}(E)\).
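As a minimal numerical illustration of the constant-temperature regime and of the friction parametrization (6), consider the sketch below. The level-density parameter is approximated here by the simple estimate \(a\approx A/10\) MeV\(^{-1}\) purely for illustration, in place of the Toke and Swiatecki parametrization [11] used in the actual calculations.

```python
import numpy as np

def temperature_from_estar(e_star, mass_number):
    """Fermi-gas relation a*T^2 = E*; a ~ A/10 MeV^-1 is an illustrative stand-in only."""
    a = mass_number / 10.0          # MeV^-1 (assumed, not the Toke-Swiatecki value)
    return np.sqrt(e_star / a)      # MeV

def gamma_over_m(t, hw0=1.0):
    """Reduced friction of Eq. (6) with hbar*omega_0 = 1 MeV, in the units of the text."""
    return 0.6 * (t**2 + hw0**2 / np.pi**2) / (1.0 + t**2 / 40.0)

t = temperature_from_estar(10.0, 224)    # roughly 0.67 MeV for E* = 10 MeV
print(t, gamma_over_m(t))
```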
The fission width, \(\Gamma_{f}\), is defined assuming the exponential decay of the number of "particles" in the potential well,
\[P(t)=e^{-\Gamma_{f}t/\hbar}\rightarrow\Gamma_{f}=-\hbar\ln[P(t)]/t. \tag{7}\]
By solving the Langevin equations one obtains the set of times \(t_{b}\) at which individual trajectories cross the barrier. From this information, one can determine the probability \(P(t)\) and the fission width \(\Gamma_{f}\), see [12].
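A minimal sketch of how Eq. (7) is applied to a set of barrier-crossing times is shown below; the toy ensemble of exponentially distributed crossing times and the choice of evaluation time are illustrative assumptions, not the authors' actual simulation output.

```python
import numpy as np

HBAR = 6.582e-22   # MeV*s (assumed unit convention)

def fission_width(t_cross, n_total, t_eval):
    """Gamma_f of Eq. (7); P(t_eval) is the fraction of trajectories still inside the well."""
    n_crossed = np.searchsorted(np.sort(np.asarray(t_cross)), t_eval, side="right")
    p = 1.0 - n_crossed / float(n_total)
    return -HBAR * np.log(p) / t_eval            # MeV

# toy check: an exponential decay with known width should reproduce itself
rng = np.random.default_rng(0)
gamma_true = 1.0e-4                              # MeV
t_b = rng.exponential(HBAR / gamma_true, size=200_000)
print(fission_width(t_b, t_b.size, t_eval=2.0 * HBAR / gamma_true))   # ~1e-4 MeV
```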
The Markovian fission width \(\Gamma_{f}(T)\) calculated by Eqs. (1, 4, 7) is plotted as a function of the damping parameter \(\eta\) in the right part of Fig. 3. To present the results over a broader range of parameters, the damping parameter \(\eta\equiv\bar{\gamma}/2M\omega_{0}\) was treated in these calculations as a free parameter.
For comparison, in Fig. 3 we also show the Kramers decay widths \(\Gamma_{HV},\Gamma_{LV}\) in the limits of high and low viscosity (friction) [13],
\[\Gamma_{HV}=\frac{\hbar\omega_{0}}{2\pi}e^{-V_{b}/T}(\sqrt{1+\eta^{2}}-\eta) \,,\quad\Gamma_{LV}=\frac{\hbar\bar{\gamma}}{M}\frac{V_{b}}{T}e^{-V_{b}/T}. \tag{8}\]
As one can see, the dependence of \(\Gamma_{f}(T)\) on \(\eta\) is rather complicated. The fission width \(\Gamma_{f}(T)\)_grows_ as a function of \(\eta\) in the low damping region (\(\eta<0.1\)). For \(\eta>0.2\), the fission width \(\Gamma_{f}(T)\)_decreases_ as a function of \(\eta\).
In nuclear systems, the Markovian assumption is often too restrictive. We thus have to generalize the above Langevin equations to allow for finite memory effects. They read as [14],
\[dq/dt = p(t)/M, \tag{9}\] \[\frac{dp}{dt} = -\frac{\partial E_{pot}}{\partial q}-\int_{0}^{t}dt^{\prime} \gamma(t-t^{\prime})p(t^{\prime})/M+\zeta\,,\quad\gamma(t-t^{\prime})\equiv \bar{\gamma}e^{-\frac{t-t^{\prime}}{\tau}}/\tau\,,\]
where \(\tau\) is the memory (or relaxation) time. The extension consists in allowing the friction to have a memory time, i.e., the friction reacts to past states of the system, which is called retarded friction.
The random numbers \(\zeta\) in (9) are normally distributed with the properties \(<\zeta(t)>=0\), \(<\zeta(t)\zeta(t^{\prime})>=T\gamma(t-t^{\prime})\). In the limit \(\omega_{0}\tau\ll 1\), one recovers the Markovian limit of nuclear fission dynamics, i.e.,
Figure 3: (left) The two-parabolic potential (4) and few examples of the dynamical trajectories. (right) The fission width as the solution of Eqs. (1, 4, 7) calculated at constant temperature (open dots), and the Kramers approximations (8) for high and low damping limits.
when the friction force is simply given by \(\gamma\dot{q}(t)\). The random numbers \(\zeta(t)\) in (9) satisfy the equation
\[d\zeta(t)/dt=-\zeta(t)/\tau+R(t)/\tau\,, \tag{10}\]
and are used in the description of the so-called Ornstein-Uhlenbeck processes.
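To make the structure of Eqs. (9)-(10) concrete, a minimal Euler-Maruyama sketch is given below. The retarded friction is propagated through an auxiliary variable, and \(\zeta\) is generated as an Ornstein-Uhlenbeck process whose stationary autocorrelation reproduces \(<\zeta(t)\zeta(t^{\prime})>=T\gamma(t-t^{\prime})\). The time step, initial conditions and unit handling are illustrative assumptions, not the production integrator used for the results above.

```python
import numpy as np

def langevin_with_memory(grad_e_pot, M, gamma_bar, tau, T, q_init, dt, n_steps, rng):
    """Euler-Maruyama integration of the non-Markovian Langevin equations (9)-(10)."""
    q, p = q_init, 0.0
    w = 0.0                                       # w(t) = -int_0^t gamma(t-t') p(t')/M dt'
    zeta = 0.0                                    # Ornstein-Uhlenbeck noise of Eq. (10)
    sigma = np.sqrt(2.0 * T * gamma_bar)          # gives <zeta(t) zeta(t')> = T*gamma(t-t')
    traj = np.empty(n_steps)
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        zeta += (-zeta / tau) * dt + (sigma / tau) * dW
        w += (-gamma_bar * p / (M * tau) - w / tau) * dt   # retarded friction force
        p += (-grad_e_pot(q) + w + zeta) * dt
        q += (p / M) * dt
        traj[i] = q
    return traj
```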
In the top part of Fig. 4 the calculated fission width \(\Gamma_{f}(E)\) is shown as a function of the damping parameter \(\eta\) for both small and large excitation energies, \(E^{*}\)=10, 25 and 60 MeV, and for a few values of the relaxation time. Besides \(\tau=0\), we use in the calculations below the two values \(\tau=5\cdot 10^{-22}\) sec and \(\tau=10^{-21}\) sec.
The results of Langevin calculations satisfying the energy conservation condition are shown in Fig. 4 by solid lines. The fission width \(\Gamma_{f}(E)\)_grows
Figure 4: (top) The dependence of the fission width \(\Gamma_{f}(E)\) (solid) and the approximation (11) (dashed) on the damping parameter \(\eta\) for few values of the relaxation time \(\tau\), \(\tau\)=0, \(\tau=5\cdot 10^{-22}\) sec, \(\tau=10^{-21}\) sec and the initial excitation energies \(E^{*}_{in}\)=10, 25 and 60 MeV. (bottom) The dependence of the fission width \(\Gamma_{f}(E)\) (solid) and the approximation (11) (dashed) on the relaxation time \(\tau\) for a few values of the damping parameter \(\eta\), \(\eta\)=0.1, 0.5 and 1.0.
as a function of \(\eta\) and _decreases_ as a function of \(\tau\) in the low damping region. The tendency is the opposite in the high damping region; there, the fission width \(\Gamma_{f}\)_falls_ as a function of \(\eta\) and _increases_ as a function of \(\tau\). This behaviour is common to both small and large excitation energies.
In the bottom part of Fig. 4, the fission width \(\Gamma_{f}(E)\) (solid lines) is shown as a function of the relaxation time \(\tau\) for a few fixed values of the damping parameter \(\eta\). The bottom part of Fig. 4 confirms the above conclusion: the dependence of fission width \(\Gamma_{f}\) on \(\eta\) and \(\tau\) is opposite in low and high damping regions.
For comparison, the dashed lines in Fig. 4 show the available analytical approximation for \(\Gamma_{f}(T,\tau)\)[14, 15, 16],
\[\frac{1}{\Gamma_{eff}}=\frac{1}{\Gamma_{LV}}+\frac{1}{\Gamma_{HV}}\,,\quad \Gamma_{LV}(\tau)=\frac{\Gamma_{LV}(0)}{1+\omega_{0}^{2}\tau^{2}},\quad\Gamma _{HV}(\tau)=\frac{\hbar\lambda}{2\pi}e^{-V_{b}/T}\,, \tag{11}\]
where \(\lambda\) is the largest positive solution of the secular equation,
\[\lambda^{3}+\lambda^{2}/\tau+(\bar{\gamma}/M\tau-\omega_{0}^{2})\lambda- \omega_{0}^{2}/\tau=0\,. \tag{12}\]
As can be seen, the results of the Langevin calculations for \(\Gamma_{f}(E)\) are smaller than the analytical estimate (11) in both the low and high damping limits. The ratio \(\Gamma_{f}(E)/\Gamma_{eff}\) is close to 1 at \(E^{*}\)=60 MeV and close to 0.1 at \(E^{*}\)=10 MeV.
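A short numerical sketch of the analytical estimate (11)-(12) used for the dashed curves is given below; the unit conventions (rates in MeV, frequencies in 1/s) and the example parameters are our own illustrative assumptions.

```python
import numpy as np

HBAR = 6.582e-22   # MeV*s (assumed convention)

def gamma_eff(v_b, T, omega0, gamma_over_m, tau):
    """Approximation (11): harmonic mean of the memory-corrected low- and high-viscosity widths."""
    g_lv0 = HBAR * gamma_over_m * (v_b / T) * np.exp(-v_b / T)   # Kramers low-viscosity width, Eq. (8)
    g_lv = g_lv0 / (1.0 + (omega0 * tau) ** 2)
    # largest positive real root of the secular equation (12)
    roots = np.roots([1.0, 1.0 / tau, gamma_over_m / tau - omega0**2, -omega0**2 / tau])
    lam = max(r.real for r in roots if abs(r.imag) < 1e-6 * max(1.0, abs(r)) and r.real > 0.0)
    g_hv = HBAR * lam / (2.0 * np.pi) * np.exp(-v_b / T)
    return 1.0 / (1.0 / g_lv + 1.0 / g_hv)

w0 = 1.0 / HBAR                                          # hbar*omega_0 = 1 MeV
print(gamma_eff(6.0, 1.0, w0, 2.0 * 0.5 * w0, 5e-22))    # eta = 0.5, tau = 5e-22 s (example values)
```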
## 5 Summary
The calculated mass distributions of fission fragments of super-heavy nuclei from \({}^{268}\)Hs to \({}^{308}\)122 demonstrate a three- to four-peak structure. In the lighter super-heavies, we see a dominant mass-symmetric peak at \(A_{F}\approx 140\). With increasing mass and charge numbers of the fissioning nuclei, a highly asymmetric peak at \(A_{H}\approx 208\) appears. In \({}^{290-296}\)Lv and \({}^{290-296}\)Og, the three peaks in the FFMD are approximately of the same magnitude at \(E^{*}\)=10 MeV.
We have also investigated memory effects in nuclear fission. The calculations presented here provide complete information on the dependence of the fission probability on all essential parameters: the relaxation time \(\tau\), the damping parameter \(\eta\), and the excitation energy \(E^{*}\).
It turned out that the fission width \(\Gamma_{f}(E)\) calculated under the constant energy requirement is generally smaller than that calculated in the constant temperature regime, \(\Gamma_{f}(T)\), or the Bohr-Wheeler approximation.
The dependence of the fission width \(\Gamma_{f}(E)\) on the relaxation time \(\tau\) is very sensitive to the damping parameter \(\eta\). In the low-viscosity region, the fission width \(\Gamma_{f}(E)\) grows as a function of \(\eta\) and decreases as a function of \(\tau\).
In the high-viscosity region, the tendency is the opposite. Such dependence is common both for small and large excitation energies.
**Acknowledgements.** The authors are grateful to Prof. K. Pomorski for the valuable discussions and for presenting our results at the Zakopane Conference.
|
2307.12194
|
LIST: Learning Implicitly from Spatial Transformers for Single-View 3D
Reconstruction
|
Accurate reconstruction of both the geometric and topological details of a 3D
object from a single 2D image embodies a fundamental challenge in computer
vision. Existing explicit/implicit solutions to this problem struggle to
recover self-occluded geometry and/or faithfully reconstruct topological shape
structures. To resolve this dilemma, we introduce LIST, a novel neural
architecture that leverages local and global image features to accurately
reconstruct the geometric and topological structure of a 3D object from a
single image. We utilize global 2D features to predict a coarse shape of the
target object and then use it as a base for higher-resolution reconstruction.
By leveraging both local 2D features from the image and 3D features from the
coarse prediction, we can predict the signed distance between an arbitrary
point and the target surface via an implicit predictor with great accuracy.
Furthermore, our model does not require camera estimation or pixel alignment.
It provides an uninfluenced reconstruction from the input-view direction.
Through qualitative and quantitative analysis, we show the superiority of our
model in reconstructing 3D objects from both synthetic and real-world images
against the state of the art.
|
Mohammad Samiul Arshad, William J. Beksi
|
2023-07-23T01:01:27Z
|
http://arxiv.org/abs/2307.12194v1
|
# LIST: Learning Implicitly from Spatial Transformers
###### Abstract
Accurate reconstruction of both the geometric and topological details of a 3D object from a single 2D image embodies a fundamental challenge in computer vision. Existing explicit/implicit solutions to this problem struggle to recover self-occluded geometry and/or faithfully reconstruct topological shape structures. To resolve this dilemma, we introduce LIST, a novel neural architecture that leverages local and global image features to accurately reconstruct the geometric and topological structure of a 3D object from a single image. We utilize global 2D features to predict a coarse shape of the target object and then use it as a base for higher-resolution reconstruction. By leveraging both local 2D features from the image and 3D features from the coarse prediction, we can predict the signed distance between an arbitrary point and the target surface via an implicit predictor with great accuracy. Furthermore, our model does not require camera estimation or pixel alignment. It provides an uninfluenced reconstruction from the input-view direction. Through qualitative and quantitative analysis, we show the superiority of our model in reconstructing 3D objects from both synthetic and real-world images against the state of the art. Our source code is publicly available to the research community [15].
## 1 Introduction
Constructing a truthful portrayal of the 3D world from a single 2D image is a basic problem for many applications including robot manipulation and navigation, scene understanding, view synthesis, virtual reality, and more. Following the work of Erwin Kruppa [13] in camera motion estimation and the recovery of 3D points, researchers have attempted to solve the 3D reconstruction issue using structure from motion [36, 18, 31], and visual simultaneous localization and mapping [8, 30]. However, the main limitation of such approaches is that they require multiple observations of the desired object or scene from distinct viewpoints with shared features. Such a multi-view formulation allows for integrating information from numerous images to compensate for occluded geometry.
Reconstructing a 3D object from a single image is a more difficult task since a sole image does not contain the whole topology of the target shape due to self-occlusions. Researchers have tried both explicit and implicit techniques to reconstruct a target object with self-occluded parts. Explicit methods attempt to infer the target shape directly from the input image. Nevertheless, a major drawback of such approaches is that the output resolution needs to be defined in advance, which constrains these techniques from achieving high-quality results. Recent advances in implicit learning offer a solution to reconstruct the target shape in an arbitrary resolution by indirectly inferring the desired surface through a distance/occupancy field. Then, the target surface is reconstructed by extracting a zero level set from the
Fig. 1: Five unique views of objects reconstructed by LIST from a single RGB image. Not only does our model accurately recover occluded geometry, but also the reconstructed surfaces are _not influenced_ by the input-view direction.
distance/occupancy field.
Implicit 3D reconstruction from a single view is an active area of research where one faction of techniques [20, 3] encodes global image features into a latent representation and learns an implicit function to reconstruct the target. Yet, these approaches can be easily outperformed by simple retrieval baselines [35]. Therefore, global features alone are not sufficient for a faithful reconstruction. Another faction leverages both local and global features to learn the target implicit field from pixel-aligned query points. However, such methods rely on ground-truth/estimated camera parameters for training/inference [38, 14], or they assume weak perspective projection [28, 10].
To address these shortcomings we propose LIST, a novel deep learning framework that can reliably reconstruct the topological and geometric structure of a 3D object from a single RGB image. Our method _does not depend on weak perspective projection, nor does it require any camera parameters during training or inference_. Moreover, we leverage both local and global image features to generate highly-accurate topological and geometric details. To recover self-occluded geometry and aid the implicit learning process, we first predict a coarse shape of the target object from the global image features. Then, we utilize the local image features and the predicted coarse shape to learn a signed distance function (SDF).
Due to the scarcity of real-world 2D-3D pairs, we train our model on synthetic data. However, we use both synthetic and-real world images to test the reconstruction ability of LIST. Through qualitative analysis we highlight our model's _superiority in reconstructing high-fidelity geometric and topological structure_. Via a quantitative analysis using traditional evaluation metrics, _we show that the reconstruction quality of LIST surpasses existing works_. Furthermore, _we design a new metric to investigate the reconstruction quality of self-occluded geometry_. Finally, we provide an ablation study to validate the design choices of LIST in achieving high-quality single-view 3D reconstruction.
## 2 Related Work
In this section we summarize pertinent work on the reconstruction of 3D objects from a single RGB image via implicit learning. Interested readers are encouraged to consult [7] for a comprehensive survey on 3D reconstruction from 2D images. Contrary to explicit representations, implicit ones allow for the recovery of the target shape at an arbitrary resolution. This benefit has attracted interest among researchers to develop novel implicit techniques for different applications. Dai _et al_. [5] used a voxel-based implicit representation for shape completion. DeepSDF, introduced by Park _et al_. [25], is an auto-decoder that learns to estimate signed distance fields. However, DeepSDF requires test-time optimization, which limits its efficiency and capability.
To further improve 3D object reconstruction quality, Littwin and Wolf [16] utilized encoded image features as the network weights of a multilayer perceptron. Wu _et al_. [37] explored sequential part assembly by predicting the SDFs for structural parts separately and then combining them together. For self-supervised learning, Liu _et al_. [17] proposed a ray-based field probing technique to render the implicit surfaces as 2D silhouettes. Niemeyer _et al_. [23] used supervision from RGB, depth, and normal images to reconstruct rich geometry and texture. Chen and Zhang [3] proposed generative models for implicit representations and leveraged global image features for single-view reconstruction. For multiple 3D vision tasks, Mescheder _et al_. [20] developed OccNet, a network that learns to predict the probability of a volumetric grid cell being occupied.
Pixel-aligned approaches [28, 29, 10, 1] have employed local query feature extraction from image pixels to improve 3D human reconstruction. Xu _et al_. [38] incorporated similar ideas for 3D object reconstruction. To enhance the reconstruction quality of surface details, Li and Zhang [14] utilized normal images and a Laplacian loss in addition to aligned features. Zhao _et al_. [40] exploited coarse prediction and unsigned distance fields to reconstruct garments from a single view. Duggal and Pathak [6] proposed category specific reconstruction by learning a topology aware deformation field. Mittal _et al_. introduced AutoSDF [21], a model that encodes local shape regions separately via patch-wise encoding. However, these prior works rely on weak perspective projection and the rendering of metadata to align query points to image pixels. In contrast, LIST does not require any alignment or rendering data, and it recovers more accurate topological structure and geometric details.
## 3 Implicit Function Learning from Unaligned Pixel Features
Given a single RGB image of an object, our goal is to reconstruct the object in 3D with highly-accurate topological structure and self-occluded geometry. We model the target shape as an SDF and extract the underlying surface from the zero level set of the SDF during inference. To train our model we employ an image and query point pair (\(x_{i},Q_{i}\)), where \(Q_{i}\) is a set of 3D coordinates (query points) in close vicinity to the surface of the object with a measured signed distance and \(x_{i}\) is a rendering of the object from a random viewpoint. An overview of our framework is presented in Fig. 2. The details of each component are provided in the following subsections.
### Query Features From Coarse Predictions
Consider an RGB image \(x_{i}\subset X\in\mathbb{R}^{H\times W\times 3}\) of height \(H\) and width \(W\). We propose a convolutional neural
encoder-decoder \(\Omega_{\omega}\), parameterized by weights \(\omega\), to extract latent features from the image and predict a coarse estimation \(\hat{y}_{i}^{x_{i}}\) of the target object. Concretely,
\[\Omega_{\omega}(x_{i})\coloneqq\hat{y}_{i}^{x_{i}}\mid\mathbb{R}^{H\times W \times 3}\rightarrow\mathbb{R}^{N\times 3}, \tag{1}\]
where \(\hat{y}_{i}^{x_{i}}\) is a point cloud representation of the target and \(N\) is the resolution of the point cloud. Note that the subscript \(i\) indicates \(i\)-th sample and the superscript \(x_{i}\) designates the source variable. For high-performance point cloud generation, we utilize tree structured graph convolutions (TreeGCN) [32] to decode the image features.
We use the coarse prediction \(\hat{y}_{i}\) as a guideline for the topological structure of the target shape in a canonical space. To extract query features from this coarse prediction, first we discretize the point cloud in an occupancy grid \(\hat{u}_{i}^{\hat{y}_{i}}\in 1^{M\times M\times M}\) of resolution \(M\). However, the coarse prediction may contain gaps and noisy points that may impair the reconstruction quality. To resolve this, we employ a shallow convolutional network \(\Gamma_{\tilde{o}}\) parameterized by weights \(\tilde{o}\) to generate a probabilistic occupancy grid from \(\hat{u}_{i}^{\hat{y}_{i}}\),
\[\hat{v}_{i}^{\hat{u}_{i}}\coloneqq\Gamma_{\tilde{o}}(\hat{u}_{i}^{\hat{y}_{i} })\colon 1^{M\times M\times M}\rightarrow[0,1]^{M\times M\times M}. \tag{2}\]
Specifically, our aim is to find the neighboring points of \(\hat{y}_{i}\) with a high chance of being a surface point of the target shape.
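As a minimal illustration of this discretization step, the sketch below voxelizes a coarse point cloud into a binary occupancy grid. The assumption that the points live in the unit cube \([-0.5,0.5]^{3}\) and the grid resolution are illustrative choices, not necessarily the exact preprocessing used in our implementation.

```python
import torch

def voxelize(points: torch.Tensor, m: int = 64) -> torch.Tensor:
    """Map an (N, 3) point cloud in [-0.5, 0.5]^3 to a binary m x m x m occupancy grid."""
    idx = ((points + 0.5) * m).long().clamp_(0, m - 1)   # voxel index per point
    grid = torch.zeros(m, m, m)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0          # mark occupied cells
    return grid
```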
Although it is possible to regress the voxel representation directly from the global image features [4, 33, 10], learning a high-resolution voxel occupancy prediction requires a _significant_ amount of computational resources [10]. Moreover, we empirically found that point cloud prediction followed by voxel discretization achieves better accuracy on diverse shapes rather than predicting the voxels directly.
Next, a neural network \(\Xi_{\xi}\), parameterized by weights \(\xi\), maps the probabilistic occupancy grid (2) to a high-dimensional latent matrix through convolutional operations. Then, our multi-scale trilinear interpolation scheme \(I\) extracts relevant query features \(f_{C}\) at each query location \(q_{i}\) from the mapped features. More formally,
\[f_{C}\coloneqq I(\Xi_{\xi}(\hat{v}_{i}^{\hat{u}_{i}}),Q_{i}). \tag{3}\]
In addition to \(q_{i}\), we also consider the neighboring points at a distance \(d\) from \(q_{i}\) along the Cartesian axes to capture rich 3D features, i.e.,
\[q_{j}=q_{i}+k\cdot\hat{n}_{j}\cdot d, \tag{4}\]
where \(k\in\{1,0,-1\}\), \(j\in\{1,2,3\}\), and \(\hat{n}_{j}\in\mathbb{R}^{3}\) is the \(j\)-th Cartesian axis unit vector.
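The query expansion of Eq. (4) and the trilinear feature lookup of Eq. (3) can be sketched as follows. The coordinate convention (queries in the unit cube, feature-volume axes ordered to match `grid_sample`) and the offset distance \(d\) are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def expand_queries(q: torch.Tensor, d: float = 0.02) -> torch.Tensor:
    """Eq. (4): each query point plus its six axis-aligned neighbours at distance d; (N,3) -> (N,7,3)."""
    offs = torch.tensor([[0., 0., 0.], [d, 0., 0.], [-d, 0., 0.],
                         [0., d, 0.], [0., -d, 0.], [0., 0., d], [0., 0., -d]])
    return q[:, None, :] + offs[None, :, :]

def sample_coarse_features(feat_vol: torch.Tensor, q: torch.Tensor, d: float = 0.02) -> torch.Tensor:
    """Trilinearly interpolate a (C, M, M, M) feature volume at the expanded query points."""
    pts = expand_queries(q, d)                            # (N, 7, 3), coordinates in [-0.5, 0.5]^3
    grid = (2.0 * pts).view(1, -1, 1, 1, 3)               # normalized to [-1, 1] for grid_sample
    samp = F.grid_sample(feat_vol.unsqueeze(0), grid,
                         mode="bilinear", align_corners=True)   # trilinear for 5-D input
    c, n = feat_vol.shape[0], q.shape[0]
    return samp.view(c, n, 7).permute(1, 0, 2).reshape(n, c * 7)  # query feature f_C, (N, 7*C)
```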
### Localized Query Features
The coarse prediction and query features \(f_{C}\) can aid the recovery of the topological structure of the target shape. Nevertheless, relevant local features are also required to recover fine geometric details. To achieve this, prior arts assume weak perspective projection [28, 10] or align the query points to the image pixel locations through the ground-truth/estimated camera parameters [38, 14]. Predicting the camera parameters is analogous to predicting the object pose from a single image, which is itself a hard problem in computer vision. It involves a high chance of error and a computationally expensive training procedure. Furthermore, the error in the pose/camera estimation may lead to the loss of geometric details in the reconstruction.
Fig. 2: To reconstruct the target object from a single RGB image, LIST first predicts the coarse topology from the global image features. Simultaneously, local image features are used to extract local geometry at the given query locations. Finally, an SDF predictor (\(\Psi\)) estimates the signed distance field (\(\sigma\)) to reconstruct the target shape. Note that images and colors are for visualization purposes only.
To overcome these limitations, we obtain insight from spatial transformers [11] and leverage the spatial relationship between the input image and the coarse prediction. Via the coarse prediction, which portrays an object from a standard viewpoint and the query points that delineate the coarse predictions, it is possible to localize the query points to the local image features. This is done by predicting a spatial transformation with the aid of global features from the input image and the coarse prediction as follows.
First, we define a convolutional neural encoder \(\Pi_{\pi}\), parameterized by weights \(\pi\), to encode the input image into local \((l_{\pi}^{x_{i}})\) and global \((z_{\pi}^{x_{i}})\) features. Concretely,
\[\Pi_{\pi}(x_{i})\coloneqq\{l_{\pi}^{x_{i}},z_{\pi}^{x_{i}}\}. \tag{5}\]
Concurrently, a neural module \(K_{\kappa}\) encodes the coarse prediction \(\hat{y}_{i}^{x_{i}}\) into global point features. Using global features from both the image and the coarse prediction, the spatial transformer \(\Theta\) estimates a transformation to localize the query points in the image feature space. Then, localized query points \(\tilde{Q}_{i}\) are generated by applying the predicted transformation to \(Q_{i}\),
\[\Theta_{\theta}(z_{\pi}^{x_{i}},K_{\kappa}(\hat{y}_{i}^{x_{i}}),Q_{i}) \coloneqq\tilde{Q}_{i}\mid\mathbb{R}^{N\times 3}\rightarrow\mathbb{R}^{N \times 2}. \tag{6}\]
Finally, a bi-linear interpolation scheme \(\mathcal{B}\) extracts the local query features \(f_{L}\) from the local image features \(l_{\pi}^{x_{i}}\),
\[f_{L}\coloneqq\mathcal{B}(l_{\pi}^{x_{i}},\tilde{Q}_{i}). \tag{7}\]
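The localization of Eq. (6) and the bilinear lookup of Eq. (7) can be sketched as below. The regressed 3x4 transform, the feature dimensions, and the assumption that the output 2D locations lie in normalized \([-1,1]\) image coordinates are all illustrative stand-ins for the actual architecture, which is not spelled out at this level of detail here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryLocalizer(nn.Module):
    """Minimal stand-in for the localization module Theta of Eq. (6); dimensions are assumed."""
    def __init__(self, z_dim: int = 256, p_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(z_dim + p_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 12))

    def forward(self, z_img, z_pts, queries):                 # (B,z), (B,p), (B,N,3)
        t = self.mlp(torch.cat([z_img, z_pts], dim=-1)).view(-1, 3, 4)
        q_h = F.pad(queries, (0, 1), value=1.0)               # homogeneous coordinates (B,N,4)
        proj = torch.einsum("bij,bnj->bni", t, q_h)           # projected points (B,N,3)
        return proj[..., :2] / proj[..., 2:].clamp(min=1e-6)  # 2D locations, assumed in [-1,1]

def sample_local_features(l_img, q2d):
    """Eq. (7): bilinear sampling of local image features (B,C,H,W) at (B,N,2) locations."""
    feats = F.grid_sample(l_img, q2d.unsqueeze(2), mode="bilinear", align_corners=True)  # (B,C,N,1)
    return feats.squeeze(-1).permute(0, 2, 1)                 # local query features f_L, (B,N,C)
```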
Note that the point encoder \(K_{\kappa}\) and the localization network \(\Theta\) are designed to support an accurate SDF prediction. Therefore, we do not use any camera parameters during training, and we optimize these neural modules directly with the SDF prediction objective. This has the following benefits: (i) _additional modules or training to predict the projection matrix and object pose from a single image are not required; (ii) reconstructions are free from any pose estimation error, which boosts reconstruction accuracy_.
### Signed Distance Function Prediction
To estimate the final signed distance \(\Delta_{i}\), we combine the coarse features \(f_{C}\) with the localized query features \(f_{L}\) and utilize a multilayer neural function defined as
\[\Psi_{\psi}(f_{C},f_{L})\coloneqq\begin{cases}\mathbb{R}^{-},& \text{if $q_{i}$ is inside the target surface}\\ \mathbb{R}^{+},&\text{otherwise}.\end{cases} \tag{8}\]
### Loss Functions
We incorporate the chamfer distance (CD) loss and optimize the weights \(\omega\) to accurately estimate the coarse shape of the target. More specifically,
\[\mathcal{L}_{\text{CD}}(y_{i},\hat{y}_{i})=\sum_{a\in\hat{y}_{i}}\min_{b\in y_{i}}||a-b||^{2}+\sum_{b\in y_{i}}\min_{a\in\hat{y}_{i}}||b-a||^{2}, \tag{9}\]
where \(y_{i}\in\mathbb{R}^{N\times 3}\) is a set of 3D coordinates collected from the surface of the object and \(\hat{y}_{i}\in\mathbb{R}^{N\times 3}\) is the estimated coarse shape. To supervise the probabilistic occupancy grid prediction, we discretize \(y_{i}\) to generate the ground-truth occupancy \(v_{i}^{y_{i}}\in 1^{M\times M\times M}\). The neural weight \(\tilde{o}\) is then optimized by the binary cross-entropy loss,
\[\mathcal{L}_{V}(v_{i},\hat{v}_{i})=-\frac{1}{|v_{i}|}\Sigma(\gamma v_{i}\log \hat{v}_{i}+(1-\gamma)(1-v_{i})\log(1-\hat{v}_{i})), \tag{10}\]
where \(\gamma\) is a hyperparameter to control the influence of the occupied/non-occupied grid points. To optimize the SDF prediction, we collect a set of query points \(Q_{i}\) within distance \(\delta\) of the target surface and measure their signed distance \(\sigma_{i}\). The estimated signed distance is then guided by optimizing the neural weights \(\xi\), \(\pi\), \(\theta\), and \(\psi\) through
\[\mathcal{L}_{\text{SDF}}=\frac{1}{|Q_{i}|}\Sigma(\sigma_{i}-\Delta_{i})^{2}. \tag{11}\]
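A compact sketch of the three training losses (9)-(11) is shown below; the class-balance weight \(\gamma=0.8\) and the use of `torch.cdist` for the pairwise distances are implementation assumptions rather than the exact values and kernels used in our code.

```python
import torch

def chamfer_distance(y: torch.Tensor, y_hat: torch.Tensor) -> torch.Tensor:
    """Symmetric chamfer distance of Eq. (9) between (N,3) ground-truth and predicted point sets."""
    d = torch.cdist(y_hat, y)                                   # pairwise Euclidean distances
    return (d.min(dim=1).values ** 2).sum() + (d.min(dim=0).values ** 2).sum()

def occupancy_bce(v: torch.Tensor, v_hat: torch.Tensor, gamma: float = 0.8) -> torch.Tensor:
    """Weighted binary cross-entropy of Eq. (10); gamma = 0.8 is an assumed balance weight."""
    v_hat = v_hat.clamp(1e-7, 1.0 - 1e-7)
    return -(gamma * v * v_hat.log() + (1.0 - gamma) * (1.0 - v) * (1.0 - v_hat).log()).mean()

def sdf_loss(sigma: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Mean squared error of Eq. (11) between measured and predicted signed distances."""
    return ((sigma - delta) ** 2).mean()
```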
### Training Details
We incorporate a two-stage procedure to train LIST. In the first stage, we only focus on the coarse prediction from the input image \(x_{i}\) and optimize the weights \(\omega\) through \(\mathcal{L}_{\text{CD}}\). Then, we freeze \(\omega\) after convergence to a minimum validation accuracy and start the second stage for the SDF prediction. During the second stage, we jointly optimize \(\tilde{o}\), \(\xi\), \(\pi\), \(\kappa\), \(\theta\), and \(\psi\) through the combined loss \(\mathcal{L}=\mathcal{L}_{V}+\mathcal{L}_{\text{SDF}}\). LIST can also be trained end-to-end by jointly minimizing \(\mathcal{L}_{CD}\) with \(\mathcal{L}_{V}\) and \(\mathcal{L}_{\text{SDF}}\). However, we found the two-stage training procedure easier to evaluate and quicker to converge during experimental evaluation. To reconstruct an object at test time, we first densely sample a fixed 3D grid of query points and predict the signed distance for each point. Then, we use the marching cubes [19] algorithm to extract the target surface from the grid.
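The inference procedure described above (a dense grid of query points, SDF prediction, and marching cubes) can be sketched as follows; the grid resolution, batching, and the unit-cube bounds are illustrative assumptions.

```python
import torch
from skimage.measure import marching_cubes

@torch.no_grad()
def reconstruct(predict_sdf, res: int = 128, batch: int = 65536):
    """Evaluate the SDF on a res^3 grid and extract the zero level set with marching cubes."""
    lin = torch.linspace(-0.5, 0.5, res)
    grid = torch.stack(torch.meshgrid(lin, lin, lin, indexing="ij"), dim=-1).reshape(-1, 3)
    sdf = torch.cat([predict_sdf(grid[i:i + batch]) for i in range(0, grid.shape[0], batch)])
    volume = sdf.reshape(res, res, res).cpu().numpy()
    verts, faces, _, _ = marching_cubes(volume, level=0.0, spacing=(1.0 / res,) * 3)
    return verts - 0.5, faces          # shift back into the canonical unit cube
```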
## 4 Experimental Evaluation
In this section, we describe the details of our experimental setup and results. Additional information, including implementation details, can be found in the supplementary material.
### Datasets
Similar to [14] and [21], we utilized the 13-class subset of the ShapeNet [2] dataset to train LIST. The renderings and processed meshes from [38] were used as the input view and target shape. We trained a single model on all 13 categories. Additionally, we employed the Pix3D [34] dataset to test LIST on real-world scenarios. The train/test split from [39] was used to evaluate on all 9 categories of Pix3D. Following [39], we preprocessed the Pix3D target shapes to be watertight for training.
To prepare the ground-truth data, we first normalized the meshes to a unit cube and then sampled 50 k points from the surface of each object. Next, we displaced the sampled points with a Normal distribution of zero mean and varying standard deviation. Lastly, we calculated the signed distance for every point. To supervise the coarse prediction and probabilistic occupancy grid estimation, we sub-sampled 4 k points from the surface via farthest point sampling. Further details regarding the data preparation strategy can be found in the supplementary material.
### Baseline Models
For single-view reconstruction from synthetic images, we compared against the following prior arts: IMNET [3] and D\({}^{2}\)IM-Net [14]. IMNET does not require pose estimation. However, its reconstruction only utilizes global features from an image. D\({}^{2}\)IM-Net extracts local features by aligning the query points to image pixels through rendering metadata, and it uses a pose estimation module during inference.
For single-view reconstruction from real-world images, we evaluated against TMN [24], MGN [22], and IM3D [39]. TMN deforms a template mesh to reconstruct the target object. MGN and IM3D perform reconstruction through the following steps: (i) identify objects in a scene, (ii) estimate their poses, and (iii) reconstruct each object separately.
### Metrics
We computed commonly used metrics (e.g., CD, intersection over union (IoU), and F-score) to evaluate the performance of LIST. The definitions of these metrics can be found in the supplementary material. Nonetheless, these traditional metrics _do not_ differentiate between visible/occluded surfaces since they evaluate the reconstruction as a whole. To investigate the reconstruction quality of occluded surfaces, we propose to isolate visible/occluded surfaces based on the viewpoint of the camera and evaluate them separately using the traditional metrics. A visual depiction of this new strategy is presented in Fig. 3.
To measure the reconstruction quality of occluded surfaces, we first align the predicted/ground-truth meshes to their projection in the input image using the rendering metadata. Then, we assume the camera location as a single source of light and cast rays onto the mesh surface by ray casting [27]. Next, we identify the visible/occluded faces through the ray-mesh intersection and subdivide the identified faces to separate them. Note that the rendering metadata is only used to evaluate the predictions. Finally, we sample 100 k points from the separated occluded faces to compute the CD\({}_{\text{os}}\), and voxelize the sampled points to compute the IoU\({}_{\text{os}}\) and F-Score\({}_{\text{os}}\).
In our implementation, we set the canvas resolution to \(4096\times 4096\) pixels and generated one ray per pixel from the camera location. It is important to note that ray casting and computing ray-mesh intersections are computationally demanding tasks. Therefore, to manage time and resources, we chose five sub-classes (chair, car, plane, sofa, table) to evaluate occluded surface reconstruction.
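A simplified sketch of this visible/occluded separation using `trimesh` is given below. It aims rays at uniformly sampled surface points rather than one ray per canvas pixel, so it illustrates the idea rather than reproducing the exact evaluation code; the number of rays is an assumed parameter.

```python
import numpy as np
import trimesh

def split_visible_occluded(mesh: trimesh.Trimesh, cam_pos: np.ndarray, n_rays: int = 65536):
    """Separate faces hit by rays from the camera (visible) from the remaining (occluded) faces."""
    targets, _ = trimesh.sample.sample_surface(mesh, n_rays)     # surface points the rays are aimed at
    dirs = targets - cam_pos
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    origins = np.tile(cam_pos, (len(dirs), 1))
    hit = mesh.ray.intersects_first(ray_origins=origins, ray_directions=dirs)
    visible = np.unique(hit[hit >= 0])
    occluded = np.setdiff1d(np.arange(len(mesh.faces)), visible)
    return mesh.submesh([visible], append=True), mesh.submesh([occluded], append=True)
```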
### Single-View 3D Reconstruction Evaluation
#### 4.4.1 Single-View 3D Reconstruction from Renderings of Synthetic Objects
In this experiment we performed single-view 3D reconstruction on the test set of the ShapeNet dataset. The qualitative and quantitative results are displayed in Fig. 4 and Table 1, respectively. In comparison to the baselines, the topological structure and occluded geometry recovered by LIST are considerably better. For example, in row 3 all of the baselines struggle to reconstruct the tail of the airplane and they fail to estimate the full length of the wings. In row 5, none of the baselines were able to recover the occluded part of the table. In contrast, LIST not only recovers the structure, but it also maintains the gap in between. Moreover, notice that in row 2 D\({}^{2}\)IM-Net fails to resolve the directional view ambiguity and imprints an arm shaped silhouette on the seat rather than reconstructing the arm. This indicates a strong influence of the input-view direction in the reconstructed surface. Conversely, LIST can resolve view-directional ambiguity and provide a reconstruction that is uninfluenced by the input-view direction. As shown in Table 1, LIST outperforms all the other baseline models.
We also evaluated LIST against the baselines on occluded surface recovery by partitioning the reconstructions using our proposed metric. The results are recorded in Table 2. LIST outperformed all the baselines, showcasing the superiority of our approach in reconstructing occluded geometry. Furthermore, LIST provides a stable reconstruction across different views of the same object, as shown in Fig. 5. However, using ground-truth rendering data instead of estimated data improved the reconstruction quality. This indicates that the source of the problem is the sub-optimal prediction of the camera pose. Nonetheless,
Figure 3: To evaluate the reconstruction quality of occluded surfaces, we first align the reconstructed shape (b) with the input image (a) and cast rays onto the surface (c). Next, we identify the (red) faces that intersect with the rays via ray-mesh intersection and separate the reconstructed mesh into (d) visible and (e) occluded areas.
LIST is free from any such complication as our framework does not require any explicit pose estimation.
#### 4.4.2 Single-View 3D Reconstruction from Real Images
In this experiment we evaluated single-view 3D reconstruction on the test set of the Pix3D dataset. The qualitative and quantitative results are provided in Fig. 6 and Table 3, respectively. The baseline results were obtained from the re
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c c c c|c} \hline \hline & & plane & bench & cabinet & car & chair & display & lamp & speaker & rifle & sofa & table & phone & boat & Mean \\ \hline \multirow{4}{*}{CD\(\downarrow\)} & IMNET & 18.95 & 17.34 & 15.17 & 10.86 & 14.72 & 16.77 & 83.64 & 33.41 & 10.33 & 13.35 & 19.32 & 9.16 & 15.24 & 21.40 \\ & D\({}^{2}\)IM-Net & 13.25 & **12.51** & 9.47 & 7.83 & 11.31 & 15.33 & **34.08** & 17.62 & 8.55 & 12.34 & 14.26 & 8.11 & 15.73 & 13.87 \\ & LIST & **12.13** & 13.49 & **7.45** & **1.04** & **9.20** & **13.65** & 47.31 & **16.75** & **7.32** & **9.92** & **11.14** & **7.91** & **15.78** & **13.31** \\ \hline \multirow{4}{*}{IoU\(\uparrow\)} & IMNET & 39.43 & 44.65 & 49.25 & 55.75 & 51.22 & 53.34 & 29.26 & 50.66 & 46.43 & 51.12 & 41.63 & 52.79 & 49.61 & 47.31 \\ & D\({}^{2}\)IM-Net & 45.44 & 48.45 & 48.60 & 53.58 & 53.13 & 52.72 & **32.45** & 51.75 & 50.76 & 53.35 & 45.17 & 53.06 & 52.89 & 49.33 \\ & LIST & **49.03** & 47.57 & **56.29** & **65.57** & **52.70** & **57.34** & 24.80 & **55.34** & **52.42** & **56.79** & **47.90** & **58.98** & **54.35** & **52.23** \\ \hline \multirow{4}{*}{F-score\(\uparrow\)} & IMNET & 48.87 & 31.78 & **44.34** & 48.78 & 41.45 & 48.32 & 21.23 & 48.29 & 52.92 & 44.12 & 45.21 & 51.52 & 52.31 & 44.54 \\ & D\({}^{2}\)IM-Net & 51.37 & **36.76** & 43.49 & 51.77 & 45.56 & 50.82 & **29.57** & 51.93 & 56.25 & 48.34 & 47.23 & 54.84 & 52.73 & 47.74 \\ \cline{1-1} & LIST & **52.46** & 36.39 & 42.51 & **53.12** & **46.62** & **51.78** & 22.88 & **52.67** & **58.24** & **50.52** & **49.62** & **56.89** & **53.58** & **48.25** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results using the ShapeNet [2] dataset for various models. The metrics reported are the following: chamfer distance (CD), intersection over union (IoU), and F-score. The CD values are scaled by \(10^{-3}\).
Figure 4: A qualitative comparison between LIST and the baseline models using the ShapeNet [2] dataset. Our model recovers _significantly better_ topological and geometric structure, and the reconstruction is not tainted by the input-view direction. GT denotes the ground-truth objects.
spective papers. Compared to other methods, our approach generates the most precise 3D shapes, which results in the best average CD and F-score. Notice that in Fig. 6, rows 3 and 4, only LIST can accurately recover the back and legs of the chair. Additionally, LIST reconstructions provide a smooth surface, precise topology, and fine geometric details.
### Ablation Study
#### 4.5.1 Setup
To investigate the impact of each individual component in our single-view 3D reconstruction model, we performed an ablation study with the following network options.
* _Base:_ A version of LIST that predicts the signed distance utilizing only global image features and coarse predictions.
* _OL:_ An improved _Base_ version that uses the probabilistic occupancy from the coarse prediction and occupancy loss.
* _1E:_ A version of LIST where local and global image features from the same encoder are used for both coarse prediction and localized query feature extraction.
* _2D:_ LIST with two separate decoders to estimate the signed distance from local and global query features. The final prediction is obtained by adding both estimations.
* _EC:_ We train LIST without the localization module and use a separate pose estimation module similar to [14] to predict the camera parameters. The estimated
\begin{table}
\begin{tabular}{c|c|c c c c c|c} \hline & & & plane & car & chair & sofa & table & Mean \\ \hline \multirow{3}{*}{\(\text{CD}_{\text{os}}\downarrow\)} & IMNET & 24.11 & 13.34 & 15.47 & 24.34 & 26.86 & 20.82 \\ & D\({}^{2}\)IM-Net & 26.23 & 13.44 & 13.59 & 20.45 & 23.45 & 19.43 \\ & LIST & **18.93** & **6.57** & **12.66** & **18.44** & **21.76** & **15.67** \\ \hline \multirow{3}{*}{\(\text{IoU}_{\text{os}}\uparrow\)} & IMNET & 45.63 & 46.87 & 38.32 & 45.87 & 39.02 & 43.14 \\ & D\({}^{2}\)IM-Net & 48.44 & 50.33 & 49.43 & 50.32 & 42.22 & 48.14 \\ & LIST & **53.15** & **55.37** & **51.25** & **55.22** & **43.17** & **51.63** \\ \hline \multirow{3}{*}{\(\text{F}_{\text{os}}\text{-score}\uparrow\)} & IMNET & 40.93 & 46.94 & 44.43 & 46.84 & 45.64 & 44.95 \\ & D\({}^{2}\)IM-Net & 47.21 & 50.73 & 48.89 & 49.15 & 47.72 & 48.73 \\ \cline{1-1} & LIST & **50.33** & **52.55** & **49.34** & **51.02** & **48.11** & **50.27** \\ \hline \end{tabular}
\end{table}
Table 2: A quantitative evaluation of the occluded surfaces of reconstructed synthetic objects via our evaluation strategy. The metrics reported are the following: chamfer distance (\(\text{CD}_{\text{os}}\)), intersection over union (\(\text{IoU}_{\text{os}}\)), and \(\text{F}_{\text{os}}\)-score. The \(\text{CD}_{\text{os}}\) values are scaled by \(10^{-3}\).
Fig. 5: A qualitative comparison between LIST and the baseline models using distinct views of the same object. Not only can our model both maintain better topological structure and geometric details, but it also provides a reconstruction that is stable across different views of the object.
Fig. 6: Single-view reconstruction using real-world images from the Pix3D [34] test set (best viewed zoomed in).
Fig. 7: Qualitative results obtained from the ablation study using different network settings.
camera parameters were used to transform the query points during inference.
To make the best use of limited computational resources, we focused on the five most diverse sub-classes (chair, car, plane, sofa, table) of the ShapeNet dataset for this ablation study. The qualitative and quantitative results of the experiments are recorded in Fig. 7 and Table 4, respectively.
#### 4.5.2 Discussion
In the ablation experiments, the _Base_ version was able to recover global topology, but it lacked local geometry. As shown in Fig. 7, the probabilistic occupancy and optimization loss helped recover some details in the _OL_ version. Conversely, the performance decreased slightly after the inclusion of local details in the single-encoder version (_1E_). We hypothesize that the task of query point localization, while estimating the coarse prediction, overloads the encoder and hinders meaningful feature extraction for the signed distance prediction. To overcome this issue, we used a separate encoder for the coarse prediction and query point localization. The dual-decoder version (_2D_) performed similarly to the final model. Nonetheless, during qualitative evaluation we found that the geometric details were reconstructed thicker than in the target. This motivated the fusion of features rather than predictions in the final version.
We also ablated the localization module using estimated camera parameters during training and inference. As shown in Table 4, the final version of LIST outscores the version employing estimated camera (_EC_) parameters. This indicates that our localization module with an SDF prediction objective is more suitable for single-view reconstruction compared to a camera pose estimation sub-module. More importantly, this removes the requirement for pixel-wise alignment through camera parameters for local feature extraction. Note that the _EC_ reconstruction appears qualitatively similar to the others and was therefore omitted in Fig. 7.
### Limitations and Future Directions
Although LIST achieves state-of-the-art performance on single-view 3D reconstruction, there are some limitations. For example, the model may struggle with very small structures. We speculate that this is due to the coarse predictor failing to provide a good estimation of such structures. Please see the supplementary material for examples of failed reconstruction results. Another shortcoming is the need for a clear image background. LIST can reconstruct targets from real-world images, yet it requires an uncluttered background to do this. In the future, we will work towards resolving these issues.
## 5 Conclusion
In this paper we introduced LIST, a network that implicitly learns how to reconstruct a 3D object from a single image. Our approach does not assume weak perspective projection, nor does it require pose estimation or rendering data. We achieved state-of-the-art performance on single-view reconstruction from renderings of synthetic objects. Furthermore, we demonstrated domain transferability of our model by recovering 3D surfaces from images of real-world objects. We believe our approach could be beneficial for other problems such as object pose estimation and novel view synthesis.
## Acknowledgments
The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing software, computational, and storage resources that have contributed to the research results reported within this paper.
\begin{table}
\begin{tabular}{c|c c c c c c} \hline & Base & OL & 1E & 2D & EC & Final \\ \hline CD\(\downarrow\) & 11.35 & 9.64 & 10.72 & 8.48 & 7.89 & **7.32** \\ IoU\(\uparrow\) & 51.34 & 53.95 & 51.40 & 55.23 & 55.10 & **56.83** \\ F-score\(\uparrow\) & 43.11 & 48.06 & 45.92 & 51.37 & 51.33 & **52.75** \\ \hline \end{tabular}
\end{table}
Table 4: Quantitative results obtained from the ablation study using different network settings.
\begin{table}
\begin{tabular}{c|c c c c c c c c c|c} \hline & & bed & bookcase & chair & desk & sofa & table & tool & wardrobe & misc & Mean \\ \hline \multirow{4}{*}{CD\(\downarrow\)} & TMN & 7.78 & 5.93 & 6.86 & 7.08 & 4.25 & 17.42 & 4.13 & 4.09 & 23.68 & 9.03 \\ & MGN & 5.99 & 6.56 & **5.32** & 5.93 & 3.36 & 14.19 & 3.12 & 3.83 & 26.93 & 8.36 \\ & IM3D & **4.11** & 3.96 & 5.45 & 7.85 & 5.61 & 11.73 & 2.39 & 4.31 & 24.65 & 6.72 \\ & LIST & 5.81 & **1.74** & 6.11 & **3.87** & **2.08** & **1.68** & **1.99** & **0.80** & **5.16** & **4.36** \\ \hline IoU\(\uparrow\) & LIST & 45.61 & 39.54 & 41.15 & 59.68 & 67.34 & 49.12 & 27.82 & 43.87 & 34.72 & 46.77 \\ \hline F-score\(\uparrow\) & LIST & 58.18 & 67.22 & 60.01 & 78.34 & 70.14 & 69.19 & 46.48 & 75.70 & 39.14 & 65.66 \\ \end{tabular}
\end{table}
Table 3: Quantitative results using the Pix3D [34] dataset. The metrics reported are the following: chamfer distance (CD), intersection over union (IoU), and F-score. The CD values are scaled by \(10^{-3}\).
|
2306.14814
|
Probabilistic Risk Assessment of an Obstacle Detection System for GoA 4
Freight Trains
|
In this paper, a quantitative risk assessment approach is discussed for the
design of an obstacle detection function for low-speed freight trains with
grade of automation (GoA)~4. In this 5-step approach, starting with single
detection channels and ending with a three-out-of-three (3oo3) model
constructed of three independent dual-channel modules and a voter, a
probabilistic assessment is exemplified, using a combination of statistical
methods and parametric stochastic model checking. It is illustrated that, under
certain not unreasonable assumptions, the resulting hazard rate becomes
acceptable for specific application settings. The statistical approach for
assessing the residual risk of misclassifications in convolutional neural
networks and conventional image processing software suggests that high
confidence can be placed into the safety-critical obstacle detection function,
even though its implementation involves realistic machine learning
uncertainties.
|
Mario Gleirscher, Anne E. Haxthausen, Jan Peleska
|
2023-06-26T16:18:20Z
|
http://arxiv.org/abs/2306.14814v1
|
# Probabilistic Risk Assessment of an Obstacle Detection System for GoA 4 Freight Trains
###### Abstract
In this paper, a quantitative risk assessment approach is discussed for the design of an obstacle detection function for low-speed freight trains with grade of automation (GoA) 4. In this 5-step approach, starting with single detection channels and ending with a three-out-of-three (3oo3) model constructed of three independent dual-channel modules and a voter, a probabilistic assessment is exemplified, using a combination of statistical methods and parametric stochastic model checking. It is illustrated that, under certain not unreasonable assumptions, the resulting hazard rate becomes acceptable for specific application settings. The statistical approach for assessing the residual risk of misclassifications in convolutional neural networks and conventional image processing software suggests that high confidence can be placed into the safety-critical obstacle detection function, even though its implementation involves realistic machine learning uncertainties.
Keywords:Autonomous train control, Safety certification, Neural network-based object detection, Probabilistic risk assessment, Fault tree analysis
## 1 Introduction
#### Motivation and Background
Autonomous transportation systems, their technical feasibility, safety and security are currently in the main focus of both academic research and industrial developments. This has been caused by both the significant progress made in academia regarding the enabling technologies - in particular, artificial intelligence (AI) - and the attractive business cases enabled by driverless road vehicles, trains, and aircrafts.
A major obstacle preventing the immediate deployment of autonomous transportation systems in their designated operational environments is their safety
assessment. The latter poses several technical challenges [19; 18; 2], in particular, the trustworthiness of AI-based methods involving machine learning (such as obstacle recognition on roads and railway tracks), as well as standardisation challenges: in the railway and aviation domains, approved standards for the certification of safety-critical autonomous trains or aircrafts are still unavailable.
The standardisation situation is more advanced in the automotive domain, where a stack of standards involving ISO 26262 [17], ISO 21448 [16], and ANSI/UL 4600 [27] have been approved by the US-American Department of Transportation for the certification of autonomous road vehicles.4
Footnote 4: See [https://www.youtube.com/watch?v=xCIjxiV048Q](https://www.youtube.com/watch?v=xCIjxiV048Q).
The standard ANSI/UL 4600 is of particular interest, since its authors emphasise that it is applicable to operational safety assurance on system level for _all_ types of autonomous products [27, Section 1.2.1], potentially with a preceding system-specific revision of the checklists proposed in the standard. In a previous publication, Peleska et al. [21] have suggested a particular control system architecture for autonomous trains and performed a qualitative evaluation according to ANSI/UL 4600. This work resulted in the assessment that a system-level certification based on this standard is feasible for the class of autonomous metro trains and freight trains, running in an open operational environment, as can be expected in European railway networks today.
It should be noted that autonomous trains in closed environments (platform screen doors, secured tracks, as provided by underground metro trains or airport people movers, where the problem of unexpected obstacles can be neglected) already exist since decades [14]. The current challenge consists in integrating autonomous train operation safely into the "normal" open operational environments of today's European railway networks.
#### 1.0.1 Objectives and Contributions
This paper complements our previous work [21] with respect to probabilistic risk assessment and associated verification methods for an autonomous train control system architecture with the highest grade of automation GoA 4 (no train engine driver or other personnel present): for a real-world certification, it is necessary to add a probabilistic risk analysis to the qualitative evaluation. We specialise here on autonomous low-speed freight trains travelling across railway networks, the latter providing movement authorities via interlocking systems and radio block centres. For this type of train, the most important safety-relevant AI-based sub-system is the _obstacle detection (OD) module_, consisting of sensors and perceptors. As explained in the previous work [21], the reaction to obstacle indications from OD and the state transitions between fully autonomous GoA4 mode and degraded modes due to sensor and perceptor failures can be specified, designed, implemented, verified, validated, and certified with conventional methods, typically according to standards like EN 50128, EN 50129 [8; 11]. The OD function can be evaluated according to ANSI/UL 4600, as explained in [21].
Note that while OD has been extensively researched in the automotive domain [7], the results obtained there cannot be directly transferred to railways
considered in this paper: the two domains require OD for different distances, and the detection criteria differ, because trains require the obstacle association with their tracks. We consider the following methodological aspects and risk analysis results to be the main contributions in this paper.
1. We propose a new verification method for (deep) neural networks (NN) performing classification tasks such as obstacle detection. This method allows to determine the residual probability \(p_{\ell}\) for a systematic classification error in the trained NN. Increasing the training effort, this method allows us to reduce \(p_{\ell}\) to an acceptable value.
2. We use a redundant design for the obstacle detection (OD) function introduced in our previous work [21] that allows to reduce the probability of a detection failure, due to stochastic independence between redundant channels. To show stochastic independence, a new statistical method is proposed.
3. We use parametric stochastic model checking to obtain probabilistic results for the hazard rate of the OD function. The parametric approach allows us to leave some values undefined, so that their influence on the hazard rate becomes visible, and the concrete risk values can be calculated at a later point, when reliable concrete parameter values are available.
4. The probabilistic analysis shows that, using a redundant 3oo3 design where each of the three sub-systems consists of a dual-channel module, the OD function is already certifiable today with an acceptable hazard rate of less than \(10^{-7}/h\) for low-speed autonomous freight trains, even if only camera-based sensors and perceptors are used.5 Further reduction of the hazard rate can be achieved by using additional fail-stop sensors/perceptors based on different technologies (such as radar and LiDAR), and apply sensor/perceptor fusion over the results of the non-failing units. Footnote 5: The requirement for low speed (less or equal 120km/h) is based on the fact that no reliable failure probabilities for camera-based obstacle detection modules have been published for trains with higher velocities [23].
To the best of our knowledge, our contribution is the first to apply this combination of statistical tests and stochastic model checking to the field of risk analysis for concrete designs of autonomous train control systems.
#### 1.0.1 Overview
The redundant design for the OD sensor/perceptor function proposed in our previous work [21] is presented again in Section 2. In Section 3, the risk modelling objectives and the applicable tolerable hazard rate are discussed. In Section 4, the statistical test strategies and the concept of risk modelling and probabilistic analysis by means of parametric stochastic model checking are described. The results of the probabilistic analysis are presented. Section 5 contains a conclusion. Below, we give references to related work where appropriate.
## 2 Fail-Safe Design of Obstacle Detection Modules
As described in our previous work [21], the OD function cannot be validated according to the existing EN 5012x standards, since the latter do not consider
AI-based functions whose behaviour depends on machine learning techniques, such as (deep) neural networks (NN). Consequently, the safety of the intended functionality not only depends on the software implementation (of neural networks), but also on the choice of training data and validation data sets [16].
The standard ANSI/UL 4600 provides guidance on how the safety of the intended functionality should be demonstrated in a certification undertaking. From the performance data available (see discussion in Section 4.6), however, we conclude that non-redundant sensor/perceptor components relying on machine learning and neural networks alone cannot achieve the tolerable hazard rates for safety integrity level SIL-3 discussed in Section 3 below.
Therefore, we suggest a redundant channel design according to the 2oo2 pattern6 (see Fig. 1), where two stochastically independent sensor/perceptor implementations provide data to a voter, and the voter decides "to the safe side": for obstacle detection, for example, the voter would decide 'obstacle present' as soon as one channel indicates this. For distance estimates delivered by both channels in an 'obstacle present' situation without disagreement, the voter selects the shorter distance. Moreover, the voter signals a perceptor error to the kernel if the channels disagree over a longer time period. To obtain stochastic independence between channels, we advocate that one channel should be implemented by conventional image processing methods, without the use of AI. Alternatively, two differently trained NNs can be used. In any case, the stochastic independence needs to be verified by a statistical test, as described in Section 4.5 below.
Footnote 6: In the literature, the term “N-out-of-M” is used with different meanings. In this paper, NooM means that N consistent results produced by \(M\geq N\) channels are needed to be accepted by the voter. Otherwise, the system falls to the safe side.
In the remainder of this paper, the channel using conventional image evaluation techniques is denoted by Channel-c, and the channel using an NN-based perceptor by Channel-n, as indicated in Fig. 1.
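As an illustration of the voter behaviour just described, a minimal sketch is given below. The tick-based disagreement timeout and the data layout are assumptions made for the example; this is not the certified implementation.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ChannelOutput:
    obstacle: bool
    distance: Optional[float] = None       # metres to the obstacle, if one is reported

def vote_2oo2(ch_c: ChannelOutput, ch_n: ChannelOutput,
              disagree_ticks: int, max_disagree_ticks: int = 10
              ) -> Tuple[bool, Optional[float], bool, int]:
    """Safe-side voter of the 2oo2 OD module: an obstacle indication wins, the shorter distance
    wins, and persistent channel disagreement is reported to the kernel as a perceptor error."""
    obstacle = ch_c.obstacle or ch_n.obstacle                     # decide to the safe side
    distances = [c.distance for c in (ch_c, ch_n) if c.obstacle and c.distance is not None]
    distance = min(distances) if distances else None
    disagree_ticks = disagree_ticks + 1 if ch_c.obstacle != ch_n.obstacle else 0
    perceptor_error = disagree_ticks > max_disagree_ticks         # disagreement over a longer period
    return obstacle, distance, perceptor_error, disagree_ticks
```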
## 3 Risk Assessment Objective and Tolerable Hazard Rate
The top-level hazard to be analysed for OD is
Figure 1: 2oo2 design pattern for OD module or similar sense/perceive functions
\(\mathbf{H_{OD}}\) OD signals 'NO OBSTACLE' to the kernel though an obstacle is present.
We call this situation specified by \(\mathbf{H_{OD}}\) a _false negative_ produced by OD. The objective of the risk modelling approach and the associated model evaluation by stochastic model checking is to answer the following risk analysis question.
Taking into account the OD design described above: is the resulting hazard rate of \(\mathbf{H_{OD}}\) less than the tolerable hazard rate for collisions between trains and obstacles?
The _tolerable hazard rate (THR)_ for the obstacle detection (OD) module in a freight train to produce a false negative (i.e. fail to the unsafe side) is
\[\mathbf{THR_{OD}}=10^{-7}/h\quad\text{- the tolerable hazard rate for obstacle detection} \tag{1}\]
according to the discussion by Rangra et al. [22]. This is the THR associated with SIL-3, and it is justified by the fact that a collision between a freight train and an obstacle does not endanger as many humans as would be the case for a passenger train. This assessment has been confirmed by the research project ATO-RISK [5], where a more detailed investigation of an adequate SIL classification for OD has been made. The project also advocates SIL-3 as the strictest safety integrity level, but additionally elaborates technical and operational boundary conditions under which an even weaker SIL might be acceptable. These THR-related investigations have not yet been introduced into the current EN 5012x standards [9, 10, 8, 11], since the latter do not yet consider automation grades GoA 3 or GoA 4. Also, the new standard ANSI/UL 4600 does not provide quantitative SIL-related requirements. It can be expected from these analyses [22, 5], however, that the "official" THRs, when published in new revisions of the railway standards, will not be stricter than SIL-3 with \(\mathbf{THR_{OD}}\) as specified in Equation (1).
## 4 Probabilistic Risk Assessment Approach
The objective of the risk assessment approach described in this section is to determine a trustworthy _hazard rate_ (\(\mathbf{HR_{OD}}\)) for the OD module and to discuss the boundary conditions ensuring that the hazard rate is less than or equal to the tolerable hazard rate: \(\mathbf{HR_{OD}}\leq\mathbf{THR_{OD}}\).
### Strategy Overview
The assurance strategy for the OD function comprises the following steps (Fig. 2). (1) An initial functional hazard analysis is performed for the 2oo2 OD module by means of a fault tree analysis. This fault tree serves to check the completeness of the following bottom-up steps for risk assessment for one 2oo2 module. (2) The NN-based OD Channel-n (see Fig. 1) is verified by means of statistical tests to estimate the residual probability \(p_{\ell}{}^{n}\) for systematic misclassifications that may be produced by this channel (Step 2n below). For OD Channel-c based on conventional image processing, a similar, but simpler procedure can be applied; this
is described in Step 2c. (3) The stochastic independence between the two channels is demonstrated by means of another statistical test. (4) A continuous-time Markov model is created for the 2oo2 OD module, and a probabilistic risk analysis is performed by means of parametric stochastic model checking, taking the 2oo2 design into account. From this Markov model, the failure rate of the 2oo2 OD module is determined by means of stochastic model checking. (5) With three stochastically independent OD modules and another voter, a sensor/perceptor fusion can be achieved, resulting in a failure rate that conforms to the tolerable hazard rate \(\mathbf{THR_{OD}}\). These steps are now described in detail.
### Step 1. Functional Hazard Analysis for OD Module
The fault tree (FT) in Fig. 3 serves as the basis for constructing the failure-related aspects and the associated mitigations in the model of the 2oo2 OD module. We explain the most important aspects of the FT here. The remaining elements of Fig. 3 should be clear from the context and the comments displayed in the figure. The top-level hazard event \(\mathbf{H_{OD}}\) is the occurrence of a false negative (OD signals 'no obstacle' to the kernel, though an obstacle is present).
In all sub-components of the OD module (voters, sensors, perceptors, communication links, power supplies), we can assume that no systematic HW, SW or firmware failures are still present, because we require that the software is developed according to SIL-4. Therefore, the remaining failure possibilities to consider are (a) transient HW faults, (b) terminal HW failures, (c) systematic residual perceptor failures to detect obstacles.
The left-hand side of the FT considers cases where the two channels deliver contradictory results, but the voter fails to handle the contradiction appropriately, due to a transient fault. Undetected sensor faults (transient or terminal) in one channel can arise from HW faults or environmental conditions (fog, snow, sandstorms). Undetected perceptor faults can arise from HW faults or residual failures to detect certain types of obstacles.
A simultaneous channel fault (middle box on level 2) leading to \(\mathbf{H_{OD}}\) could be caused by simultaneous sensor failures or by simultaneous perceptor faults. The former hazard is mitigated by the sensor capabilities to detect its own degradations, the stochastic independence of HW failures (due to the redundant HW design), and by the stochastic independence of the redundant perceptors, as described in Step 3 below. The latter hazard is mitigated by reducing the probability for _systematic_ perceptor faults through the tests performed in Step 2
Figure 2: Overview of the probabilistic risk assessment and assurance approach
and by the stochastic independence of both perceptors demonstrated in Step 3, reducing the probability of a simultaneous _random_ false negative.
### Step 2n. Testing for Systematic Classification Errors: Channel-n
**Equivalence Classes and Their Identification - Channel n** In the real operational environment, an infinite variety of concrete obstacles could occur. Therefore, it is desirable to partition their (finite, but still very large number of) pixel image representations into _input equivalence classes_. For convolutional neural networks (CNN) typically used for image classification, it was assumed until recently that such classes could not be determined by exact calculation or at least by numerical approximation. This has changed during the last years [12, 3, 4], and we follow the approach of Benfenati and Marta [3, 4] for this purpose: the authors explain how to approximate the classification function of a deep NN by
Figure 3: Fault tree of the 2oo2-OD module for the top-level event \(\mathbf{H_{OD}}\)
means of differentiable mappings \(\Lambda_{i}\) between differentiable manifolds \(M_{i}\):
\[M_{0}\stackrel{{\Lambda_{1}}}{{\longrightarrow}}M_{1}\stackrel{{ \Lambda_{2}}}{{\longrightarrow}}M_{2}\ldots M_{n-1}\stackrel{{ \Lambda_{n}}}{{\longrightarrow}}M_{n}\]
Manifold \(M_{0}\) represents the set of possible input images, and \(M_{1},\ldots,M_{n-1}\) the intermediate hidden layers of the CNN. For our purposes, \(M_{n}\) is a one-dimensional output manifold that can be mapped to \([0,1]\), such that all \(z\in[0,1)\) represent the classification result "no obstacle", while \(z=1\) represents "obstacle present". The maps \(\Lambda_{1},\ldots,\Lambda_{n-1}\) represent the inter-layer mappings of the CNN. Some of these are smooth (e.g. the filter applications), others can be approximated by smooth alternatives. The map \(\Lambda_{n}:M_{n-1}\longrightarrow M_{n}\) is a smooth approximation7 of the ReLU (rectified linear unit) activation function, typically used in a CNN.
Footnote 7: For example, by means of the Gaussian-error-linear unit (GELU).
Using the Euclidean metric \(g_{n}\) on \(M_{n}\), repetitive pullbacks of \(g_{n}\) through \(\Lambda_{n},\ldots,\Lambda_{1}\) introduce a _degenerate_ Riemannian metric \(g_{0}\) on \(M_{0}\): using \(\Lambda\) to denote the composite map \(\Lambda_{n}\circ\cdots\circ\Lambda_{1}:M_{0}\longrightarrow M_{n}\), the distance from \(p\) to \(p^{\prime}\) in \(M_{0}\) is simply \(|\Lambda(p)-\Lambda(p^{\prime})|\).
Given an image \(p\in M_{0}\) that is classified by the CNN as "obstacle", so that \(\Lambda(p)=1\), all images \(p^{\prime}\) that can be reached from \(p\) on a _null curve_, that is, a piecewise smooth curve of length null in the degenerate metric \(g_{0}\), are also classified by the CNN as obstacles.8 The _obstacle image space_\(\mathcal{O}=\Lambda^{-1}(\{1\})\subseteq M_{0}\) of all images classified by the CNN as obstacles, however, is not null-connected: for some image points \(p,p^{\prime\prime}\) that are both classified as obstacles, every piecewise smooth curve connecting \(p\) and \(p^{\prime\prime}\) traverses one or more regions of points mapped to "no obstacle". Each sub-manifold of \(\mathcal{O}\) consisting of pairwise null-connectible image points represents an equivalence class of the CNN.
Footnote 8: The length of a differentiable curve \(\gamma\) in \(M_{0}\) is obtained by integrating over the length of its tangent vectors in some curve parametrisation, say, \(\gamma(t),t\in[0,1]\)[6]. The length of a tangent vector \(v=\dot{\gamma}(t)\) is obtained by calculating \(\sqrt{g_{0}(v,v)}\): the metric \(g_{0}\) on \(M_{0}\) induces a bilinear form (also denoted by \(g_{0}\)) on the tangent space at \(\gamma(t)\). For degenerate metrics \(g_{0}\), non-zero tangent vectors can have zero length, since \(g_{0}(v,v)=0\).
**Statistical Test Based on Coupon Collector's Problem** Consider the \(\ell\) equivalence classes \(c_{1},\ldots,c_{\ell}\subseteq\mathcal{O}\) representing null-connected image sets to be classified as obstacles by the trained NN implementing the perceptor of Channel \(\mathsf{n}\). In an ideal perceptor, every real-world obstacle would produce an image \(p\in M_{0}\) fitting into some class \(c_{i}\), that is, \(p\in c_{i}\). We are interested in an estimate for the residual error probability \({p_{\ell}}^{\mathsf{n}}\) for the existence of a further subset of "undetected" images \(u_{\ell+1}\subseteq M_{0}\setminus\mathcal{O}\) representing obstacles in the real world, but classified as "no obstacle" by the NN, because they are not contained in \(\bigcup_{i=1}^{\ell}c_{i}=\mathcal{O}\).
Recall the naive statistical approach to estimate \({p_{\ell}}^{\mathsf{n}}\): one could apply the Monte Carlo method by taking \(n\) independent image samples \(\{p_{1},\ldots,p_{n}\}\) representing obstacles and determine \(\hat{P}_{\ell,n}=\frac{n_{\ell}}{n}\), where \(n_{\ell}\) denotes the number of
false negatives obtained by the NN on the sample \(\{p_{1},\ldots,p_{n}\}\). Then \(\hat{P}_{\ell,n}\) converges to \(p_{\ell}\) for \(n\to\infty\) with probability \(1\). This approach is unsuitable for our purposes, since the sample size \(n\) must be very large for trustworthy estimation of small residual error probabilities \(p_{\ell}\) (see Footnote 10).
Footnote 10: Weijing Shi et al. [24] state that a misclassification probability of \(p_{\ell}\approx 10^{-12}\) would require a sample size of \(n\approx 10^{13}\).
As a more promising alternative approach, we therefore suggest obtaining an estimate for \(p_{\ell}\) by means of statistical tests based on a generalised variant of the _Coupon Collector's Problem (CCP)_[13]. This CCP variant considers \(\ell\) different types of coupons in an urn, such that drawing a coupon of type \(i\in\{1,\ldots,\ell\}\) from the urn with replacement has probability \(p_{i}\). The CCP considers the random variable \(X\) denoting the number of draws necessary to obtain a coupon of each type at least once. The expected value of \(X\) is calculated by [13]
\[E(X)=\int_{0}^{\infty}\big{(}1-\prod_{i=1}^{\ell}(1-e^{-p_{i}x})\big{)}\mathrm{ d}x\;. \tag{2}\]
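As a quick numerical illustration of Equation (2), the following sketch evaluates the integral by quadrature; the class probabilities are illustrative only.

```python
# Sketch: numerical evaluation of Equation (2) for given class probabilities.
import numpy as np
from scipy.integrate import quad

def expected_draws(p):
    """E(X) for the generalised coupon collector with coupon probabilities p."""
    p = np.asarray(p, dtype=float)
    integrand = lambda x: 1.0 - np.prod(1.0 - np.exp(-p * x))
    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return value

# Sanity check: for l equally likely classes, E(X) = l * (1 + 1/2 + ... + 1/l).
l = 10
print(expected_draws([1.0 / l] * l))                # approx. 29.29
print(l * sum(1.0 / k for k in range(1, l + 1)))    # 29.2897...
```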
For the application of the CCP in the context of this article, we assume the availability of a large database \(D\) of 'obstacle-on-track' sample images representing the urn in the CCP. We assume that there exists a random selection mechanism for \(D\), such that the selected samples are stochastically independent from each other, concerning their ontology classification. The obstacle images from \(D\) take the role of the CCP-samples to be drawn from the urn. If the image sample fits into equivalence class \(c_{i},\ i\in\{1,\ldots,\ell\}\), this corresponds to the CCP coupon being of type \(i\).
For a verification run \(V_{k}\), we draw sample images from \(D\) until every known equivalence class \(c_{1},\ldots,c_{\ell}\) has been covered by at least one sample. If a run \(V_{k}\) fails because a substantial number of samples did not fit into any class, the training of the neural network is repeated with an extended set of samples, and a new collection of equivalence classes \(c^{\prime}_{1},\ldots c^{\prime}_{\ell^{\prime}}\) is determined, as described above. Then the verification runs are repeated.
While \(E(X)\) gives us an idea of the number of samples needed to cover every equivalence class \(c_{i}\) at least once, we need a (higher) number of samples \(\overline{S}\) required to cover all classes _with sufficiently high probability_. Hence we are looking for an \(\overline{S}\in\mathbb{N}\) such that the probability \(\tau=\mathsf{P}(X<\overline{S})\) is close to \(1\). Adapting the estimation approach suggested by Hauer et al. [15] to our problem, the probability \(\mathsf{P}(X<S)\) for any \(S\in\mathbb{N}\) can be calculated by using a large number of verification runs \(V_{k},\ k=1,\ldots,m\) and counting the occurrences \(occ(i),\ i\in\mathbb{N}\) of verification runs in \(\{V_{1},\ldots,V_{m}\}\) where all equivalence classes \(c_{1},\ldots,c_{\ell}\) have been covered after \(i\) samples. Note that \(occ(i)=0\) for \(i<\ell\), because we need at least \(\ell\) images to cover that many classes. Then \(\mathsf{P}(X<S)\) can be estimated by
\[\mathsf{P}(X<S)=\frac{1}{m}\sum_{i=1}^{S}occ(i)\;, \tag{3}\]
so we select \(\overline{S}\) as the smallest \(S\) such that the value of \(\mathsf{P}(X<S)\) calculated by Equation (3) is greater than or equal to \(\tau\). The number \(m\) of verification sets \(V_{k}\) to be used in Equation (3) determines the confidence that we can have in the estimate for \(\mathsf{P}(X<\overline{S})\); minimal values for \(m\) achieving a given confidence level can be calculated as described by Hauer et al. [15].
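For illustration, the selection of \(\overline{S}\) from the verification-run statistics can be sketched as follows; the counts \(occ(i)\), \(m\), and \(\tau\) are hypothetical values.

```python
# Sketch: select the smallest S with P(X < S) >= tau, following Equation (3).
def smallest_sample_bound(occ, m, tau=0.99):
    """occ[i] = number of runs in which all classes were covered after i samples."""
    cumulative = 0
    for s in sorted(occ):              # s = number of samples drawn in a run
        cumulative += occ[s]
        if cumulative / m >= tau:      # estimate of P(X < S) by Equation (3)
            return s
    return None                        # tau not reached with the available runs

# Hypothetical counts from m = 1000 verification runs:
occ = {12: 40, 15: 300, 20: 450, 30: 205, 55: 5}
print(smallest_sample_bound(occ, m=1000, tau=0.99))   # -> 30
```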
Assume that the NN implementing the perceptor of Channel \(\mathsf{n}\) has \(\ell\) equivalence classes, as described above. Assume further that the verification runs \(V_{k},\ k=1,\ldots,m\) have been performed successfully and resulted in probability estimates \(p_{1},\ldots,p_{\ell}\) for an obstacle to fall into class \(c_{1},\ldots,c_{\ell}\). Now we test the hypothesis that the successful verification runs have overlooked an obstacle type that does not fit into any of the identified classes \(c_{1},\ldots,c_{\ell}\), but is associated with a subset \(u\subseteq M_{0}\setminus\mathcal{O}\) of obstacle images leading to false negatives. Assume further that the occurrence probability for such an obstacle is \(p_{u}\). Then we have to re-scale the probability estimates \(p_{1},\ldots,p_{\ell}\) to \(p_{i}^{\prime}=(1-p_{u})p_{i}\), so that \(\big{(}\sum_{i=1}^{\ell}p_{i}^{\prime}\big{)}+p_{u}=1\). According to Equation (2), the expected number of samples needed for covering the \((\ell+1)\) classes is
\[E(X)=\int_{0}^{\infty}\Big{(}1-\big{(}\prod_{i=1}^{\ell}(1-e^{-p_{i}x})\big{)} \cdot(1-e^{-p_{u}x})\Big{)}\mathrm{d}x\;.\]
Now we apply Equation (3) for this extended hypothetical partition \(\{c_{1},\ldots,c_{\ell},u\}\), to estimate the number \(m_{\text{new}}\) of verification runs \(V_{k}\) to be performed in order to get at least one sample for each partition element, _including_\(u\), again with a high confidence level and the same probability \(\mathsf{P}(X_{\text{new}}<\overline{S}_{\text{new}})=\tau\). Then the verification runs are extended to \(V_{1},\ldots,V_{m_{\text{new}}}\). If this extended suite of verification runs does _not_ reveal the existence of such a partition element \(u\), we can conclude with the given confidence level that the original set of classes \(c_{1},\ldots,c_{\ell}\) implemented by the NN is complete.
### Step 2c. Testing for Systematic Classification Errors: Channel-c
**Equivalence Classes and Their Identification** For the perceptor of Channel c, an input equivalence class consists of a set of images covering the same path in the perceptor software control flow graph, so that they all end up with the same classification result.
**Statistical Tests** The statistical tests regarding the probability \(p_{\ell}{}^{\mathsf{c}}\) of systematic residual classification errors in Channel-c can be performed in analogy to Step 2\(\mathsf{n}\), but now the equivalence classes are identified by software control flow paths instead of null-connected sub-manifolds of the obstacle image space \(\mathcal{O}\).
### Step 3. Stochastic Independence Between the two Channels
On hardware-level, stochastic independence between the two OD channels is justified by redundancy and segregation arguments: the channels use different
cameras, and the perceptors are deployed on different processor boards with separate power supplies and wiring, both for electrical current and communication links between sensors, perceptors, and voter. There are no communication or synchronisation links between the channels.
The remaining common cause failure of the two channels that cannot be avoided is given by adverse weather conditions (like fog, sand storms, or snow) corrupting the camera images. This can be detected by the sensors themselves by identifying consecutive images as identical without discernible shapes (fog) or as white noise (sand storm, snow). We can expect that at least one of the two channels detects this condition and raises a fault that will cause the voter to signal 'OD failure' to the kernel. This will lead to an emergency stop of the train. Consequently, we are only interested in stochastic independence of the two perceptors _in absence_ of this detectable common cause failure.
As discussed for the fault tree model of Step 1, the only remaining potential cause for stochastic dependence would be that the two perceptors evaluate images "in a similar way". To demonstrate the absence of such a dependency, we apply the method of Sun et al. [25] for explaining the _reasons_ for classification results: the method provides an algorithm for identifying a subset of image pixels that were essential for obtaining the classification result. For the demonstration of stochastic independence, we define two bit matrix-valued random variables \(R_{i},\ i=\mathsf{c},\mathsf{n}\). Variable \(R_{i}\) encodes these explanations obtained by Channel \(\mathsf{c}\) and Channel \(\mathsf{n}\), respectively, as a pixel matrix, where only the essential pixels are represented by non-zero values.
While performing the verification runs \(V_{k}\) of Step 2c and Step 2n, the sequence of matrix values \(R_{\mathsf{c}},R_{\mathsf{n}}\) obtained from the images of \(V_{k}\) are determined (both channels need to run the same verifications \(V_{k}\) in the same order, so that the same sequence of images is used over all runs \(V_{k}\)). Then the stochastic independence between \(R_{\mathsf{c}}\) and \(R_{\mathsf{n}}\) can be tested by means of the widely-used \(\chi^{2}\)-test. If this test indicates a stochastic _dependence_ between perceptors \(\mathsf{c}\) and \(\mathsf{n}\), then the NN has to be retrained with a different data set, or another structure of the NN (for example, another layering) needs to be chosen.
The main result obtained from the stochastic independence of \(R_{\mathsf{c}}\) and \(R_{\mathsf{n}}\) is that false negative misclassifications occur at the two channels in a stochastically independent way. More formally, let \(X_{i},\ i=\mathsf{c},\mathsf{n}\) be two Boolean random variables with interpretation "\(X_{i}=\text{true}\) if and only if a false negative misclassification occurs in the perceptor of Channel \(i\)". Then, with \(a,b\in\{\text{true},\text{false}\}\), stochastic independence allows us to calculate
\[\mathsf{P}(X_{\mathsf{c}}=a\wedge X_{\mathsf{n}}=b)=\mathsf{P}(X_{\mathsf{c}} =a)\cdot\mathsf{P}(X_{\mathsf{n}}=b)\;.\]
In particular, the probability of a simultaneous misclassification in both channels (case \(a=\text{true}\ \wedge\ b=\text{true}\)) can be calculated as the product of the misclassification probabilities of each channel.
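As a simplified illustration of such an independence test, the following sketch applies a \(\chi^{2}\)-test to per-image channel outcomes; the contingency counts are hypothetical, and the actual test in this step is performed on the explanation matrices \(R_{\mathsf{c}}\) and \(R_{\mathsf{n}}\).

```python
# Simplified sketch: chi-square independence test on per-image outcomes of the
# two channels (hypothetical counts; Step 3 tests the explanation matrices).
from scipy.stats import chi2_contingency

# rows: Channel-c correct / false negative; columns: Channel-n correct / false negative
table = [[9644, 186],
         [ 167,   3]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A large p-value gives no evidence against independence; a small p-value would
# require retraining the NN or changing its structure, as described above.
```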
### Step 4. Determining \(\mathbf{HR_{OD}}\) for the 2oo2-OD Module
We now quantify the probability of an \(\mathbf{H_{OD}}\) event for a single module demand. Recall that \(\mathbf{H_{OD}}\) means an obstacle is present within OD range or the OD module is provided with degraded data, but neither is detected by the module (i.e., \(r=\text{no}\)) and the module's voter component fails to raise an error flag (i.e., \(f=\text{false}\)) that could be considered by the automatic train protection.
We model the two channels (\(i\in\{\mathsf{c},\mathsf{n}\}\)) and the voter of the OD module (Fig. 1) using a SysML activity chart with a Petri net interpretation as shown in Fig. 4. For each processing cycle (i.e., when new tokens are placed at the beginning of each line), both channels perform a _sense_ and a _perceive_ action with the data \(d\in D\) flowing (i.e., carried with the tokens) from the environment into both channels and from the top to the bottom. For illustration, we use \(D=0..2\), with \(d=0\) for "obstacle absent", \(d=1\) for "obstacle present", and \(d=2\) for "degraded inputs" (e.g., dense fog, covered sensors). The _environment_ part enables a conditional risk assessment of the OD module based on the stochastic _generation_ of inputs from \(D\). In our assessment, the environment only generates \(d\in\{1,2\}\).
We use a continuous-time Markov chain (CTMC) as stochastic model for the OD module. Given variables \(V\), a CTMC is a tuple \(\mathcal{M}=(S,s_{0},\mathbf{R},L)\) with state space \(S\in 2^{V\to\mathbb{N}}\), initial state \(s_{0}\in S\), transition rate matrix \(\mathbf{R}\colon S\times S\to\mathbb{R}_{\geq 0}\), and labelling function \(L\colon S\to 2^{AP}\) for a set \(\mathit{AP}\) of atomic propositions. Properties to be checked of \(\mathcal{M}\) can be specified in continuous stochastic logic (CSL). For example, the expression \(\mathcal{M},s\models\mathsf{P}_{>p}[\mathsf{F}\phi]\) is satisfied if and only if the CSL formula \(\mathsf{P}_{>p}[\mathsf{F}\phi]\) is true in \(\mathcal{M}\) and \(s\in S\), that is, if the probability (\(\mathsf{P}\)) of eventually (\(\mathsf{F}\)) reaching some state \(s^{\prime}\) satisfying \(\phi\) from \(s\) in \(\mathcal{M}\) is greater than \(p\). If \(\phi\) is a propositional formula, its satisfaction in \(s\in S\) (\(s\models\phi\)) is checked using the atomic propositions from \(L(s)\). More details about CSL model checking, for example, with the Prism model checker, can be obtained from [20].
To work with CTMCs, we translate the activity chart from Fig. 4 into a probabilistic guarded command program (Listing 5). From this program, a probabilistic model checker can derive a CTMC \(\mathcal{M}\) that formalises the semantics
Figure 4: SysML activity chart describing the data processing in the OD module
of the activity chart, allowing the processing in the two channels to be non-deterministically concurrent,11 finally synchronising on the _vote_ action. This type of concurrency enables us to make assumptions about the processing speed in the two channels independently and flexibly.
Footnote 11: Each of the timed synchronised interleavings of the four sequential components in Fig. 4 carries information about the _expected time of occurrence_ of events and, thus, the accumulated expected duration of a particular interleaving. This allows one to derive timed termination probabilities and rates of the processing cycle.
Listing 5 shows fragments of the program describing one channel, its processing stages, and the voter component. The state space \(S\) of the associated \(\mathcal{M}\) is defined via a stage counter (\(s_{i}\in 0..5\)), data flow variables (\(\mathit{sen}_{i}\), \(\mathit{per}_{i}\), \(\mathit{com}_{i}:D\)) for each channel, variables for the input data \(d:D\), the result \(r:D\), and a Boolean failure flag \(f\). We use the initial state \(s_{0}(v)=0\) for \(v\neq f\) and \(s_{0}(f)=\) false. The transition rate matrix \(\mathbf{R}\) is defined indirectly via probabilistic updates: For each update \(u\) (e.g., a fault) of an action \(a\) (e.g., \(\mathsf{Perceive}^{o}_{n}\)), we provide a rate \(\lambda_{a,u}=p_{u}\cdot\lambda_{a}\), where \(p_{u}\) is the probability of seeing update \(u(s)\) if an action \(a\) is performed in state \(s\) and \(\lambda_{a}\) is the average speed, frequency, or rate at which action \(a\) in \(s\) is completed. We can either provide a compound rate \(\lambda_{a,u}\) or separate parameters \(p_{u}\) and \(\lambda_{a}\). For example, for the \(\mathsf{Perceive}^{o}_{n}\) action (i.e., NN-based perception, given the sensor forwards a picture with an obstacle, line 4), we consider a single failure mode (line 5) with probability \(p_{f}^{\mathsf{n}}\) (estimated in Sect. 4.3) multiplied with a perception speed estimate \(\lambda_{\mathsf{pn}}\).
Footnote 12: Speed estimates can be set to 1 for a CTMC where estimates are unavailable and relative speed and performance does not play a role in the risk assessment.
As described in Sect. 2, the output at the end of each processing cycle is a tuple \((r,f)\) with the voting result \(r\) and the status of the failure flag \(f\). Under normal operation, \(r\) contains either the concurring result of both channels or an error to the _safe side_ (i.e., \(\max_{i\in\{\mathsf{c},n\}}\{\mathit{com}_{i}\}\)) in case of contradictory channel results. For example, if one channel reports an obstacle and the other does not, the nominal voter would forward "obstacle present" and raise a flag.
For the model, we need to provide probability and speed estimates of the channel- and stage-specific faults. For example, we use \(p_{f}^{\mathsf{n}}\) and \(\lambda_{\mathsf{pn}}\) for the probability of an NN-perceptor fault \(\mathit{SP}_{n}\) and the speed (see Footnote 12) of the associated fault-prone action \(\mathsf{Perceive}^{o}_{n}\). Analogously, we provide \(p_{f}^{\mathsf{c}}\) and \(\lambda_{\mathsf{pe}}\) for the conventional perceptor, \(p_{f}^{\mathsf{v}}\) and \(\lambda_{\mathsf{v}}\) for \(V_{\!r}\), and, similarly, for the other events defined in the fault tree (e.g., \(\mathit{SP}_{r}\), \(\mathit{SP}_{s}\), \(\mathit{SC}\), \(\mathit{SS}_{\mathsf{di}}\), \(\mathit{SS}_{\mathsf{sh}}\); Fig. 3). Based on these parameters, the CTMC allows us to quantify time-independent probabilities of intermediate and top-level events in the fault tree, for example, \(\mathit{UC}\), \(S\), and, in particular, the probability \(\mathsf{P}[\mathit{FN}]\) of the top-level event \(\mathbf{H}_{\mathbf{OD}}\), that is, a false negative under the condition that either an obstacle or degraded data is present.
To make our assessment independent of particular values of \(p_{f}^{\mathsf{n}}\) and \(p_{f}^{\mathsf{c}}\), we perform a parametric CTMC analysis that yields a function \(\mathsf{P}[\mathit{FN}](p_{f}^{\mathsf{n}},p_{f}^{\mathsf{c}})\). Consider the parametric CTMC \(\mathcal{M}(p_{f}^{\mathsf{n}},p_{f}^{\mathsf{c}})\) derived from Listing 5. By \(S_{\mathsf{od}}=\{s\in S\mid s_{\mathsf{c}}=s_{\mathsf{n}}\wedge s_{\mathsf{c}}=1\wedge(d=1\lor d=2)\}\), we select only those _intermediate states_ where the OD module is provided with either a present
obstacle (\(d=1\)) or degraded data (\(d=2\)) at its sensing stage (\(s_{\mathsf{c}}=1\)). According to the fault tree (Fig. 3), we select _final states_ with the predicate
\[\begin{array}{lll} \text{fin}\ \equiv & \big((s_{\mathsf{c}}=s_{\mathsf{n}}\wedge s_{\mathsf{c}}=5\wedge\neg f) & \text{at final stage $s_{i}=5$ with muted flag $(V_{\mathsf{r}})$,}\\ & \wedge\ \big((\mathit{com}_{\mathsf{c}}\neq \mathit{com}_{\mathsf{n}}) & \text{observe contradictory results $(\mathit{UC})$, or a}\\ & \ \ \vee\ (\mathit{com}_{\mathsf{c}}=\mathit{com}_{\mathsf{n}}\wedge r\neq d)\big)\big) & \text{simultaneous channel or voter fault $(S,V_{\mathsf{r}})$.}\end{array}\]
These are all states at the final processing stage (\(s_{i}=5\)) that correspond to either _UC_ or \(S\) in the fault tree and, hence, \(\mathbf{H_{OD}}\). Then, we compute \(\mathsf{P}[\mathit{FN}]({p_{\mathsf{f}}}^{\mathsf{n}},{p_{\mathsf{f}}}^{ \mathsf{c}})\) by quantifying (\(\mathsf{P}_{=?}[\cdot]\)) and accumulating (\(\sum_{S_{0}}\cdot\)) the conditional probabilities of the unbounded reachability (\(\mathsf{F}\) fin) of a final state in \(S_{\mathsf{f}}=\{s\in S\mid s\models\mbox{fin}\}\)
Figure 5: Probabilistic program fragment showing parts of the NN channel and the voter. The influence on some of the FT events from Fig. 3 is indicated.
from some intermediate state \(s\in S_{\mathsf{od}}\). The corresponding formula is
\[\mathbf{H_{OD}}({p_{\mathit{f}}}^{n},{p_{\mathit{f}}}^{\mathsf{c}})= \mathsf{P}[FN]({p_{\mathit{f}}}^{n},{p_{\mathit{f}}}^{\mathsf{c}}) \tag{4}\] \[=\sum_{s\in S_{\mathsf{od}}}\big{(}\underbrace{\mathcal{M}({p_{ \mathit{f}}}^{n},{p_{\mathit{f}}}^{\mathsf{c}}),s_{0}\models\mathsf{P}_{=?}[ \mathsf{F}\,s]}_{\text{probability of reaching $s$ from $s_{0}$}}\big{)}\cdot\big{(}\underbrace{\mathcal{M}({p_{ \mathit{f}}}^{n},{p_{\mathit{f}}}^{\mathsf{c}}),s\models\mathsf{P}_{=?}[ \mathsf{F}\,\text{fin}]}_{\text{probability of reaching fin from $s$}}\big{)}\.\]
Note that the CSL quantification operator \(\mathsf{P}_{=?}\) used inside the sum operator transforms the satisfaction relation \(\models\) into a real-valued function.
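To illustrate how the two reachability factors in Equation (4) can be computed, the following sketch evaluates unbounded reachability probabilities on a small embedded discrete-time chain; the four-state chain and its transition probabilities are purely illustrative and not the CTMC derived from Listing 5.

```python
# Sketch: unbounded reachability Pr[F targets] in a finite Markov chain, and the
# product structure of Equation (4), on a purely illustrative 4-state chain.
import numpy as np

def reach_prob(P, targets):
    """Probability of eventually reaching 'targets' from each state of a DTMC."""
    n = P.shape[0]
    targets = set(targets)
    # backward search: states from which some target state is reachable at all
    can_reach, frontier = set(targets), list(targets)
    while frontier:
        t = frontier.pop()
        for s in range(n):
            if s not in can_reach and P[s, t] > 0:
                can_reach.add(s)
                frontier.append(s)
    x = np.zeros(n)
    x[list(targets)] = 1.0
    unknown = sorted(can_reach - targets)          # states that need a linear solve
    if unknown:
        Q = P[np.ix_(unknown, unknown)]
        b = np.array([P[s, sorted(targets)].sum() for s in unknown])
        x[unknown] = np.linalg.solve(np.eye(len(unknown)) - Q, b)
    return x

# States: 0 = initial, 1 = intermediate (obstacle present at the sensing stage),
# 2 = final false negative ('fin'), 3 = safe outcome; states 2 and 3 are absorbing.
P = np.array([[0.0, 1.0, 0.0,    0.0],
              [0.0, 0.0, 0.0016, 0.9984],
              [0.0, 0.0, 1.0,    0.0],
              [0.0, 0.0, 0.0,    1.0]])
p_reach_intermediate = reach_prob(P, [1])[0]    # Pr[F s] from the initial state
p_fin_from_there     = reach_prob(P, [2])[1]    # Pr[F fin] from the intermediate state
print(p_reach_intermediate * p_fin_from_there)  # 0.0016, one summand of Eq. (4)
```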
Shown in Fig. 6a, one OD module has a residual probability for an undetected false negative in the range \(\mathsf{P}[\mathit{FN}]({p_{\mathit{f}}}^{\mathsf{n}},{p_{\mathit{f}}}^{\mathsf{c}})\in[0.0016,0.005]\), depending on the residual misclassification probabilities \({p_{\mathit{f}}}^{\mathsf{n}},{p_{\mathit{f}}}^{\mathsf{c}}\in[0.02,0.1]\). Reports on failure probabilities of image classification based on both conventional image evaluation and trained neural networks indicate that, as of today, neither \({p_{\mathit{f}}}^{\mathsf{n}}\) nor \({p_{\mathit{f}}}^{\mathsf{c}}\) will be below 0.02 [23, 1]. For example, assuming \({p_{\mathit{f}}}^{\mathsf{n}}={p_{\mathit{f}}}^{\mathsf{c}}=0.04\), one OD module alone will have a hazard rate of approximately \(\lambda_{\mathsf{od}}\cdot 0.0016\approx 1.3\cdot 10^{-4}\,h^{-1}\), with \(\lambda_{\mathsf{od}}=2/24\,h^{-1}\) denoting the frequency of obstacle occurrences or degraded data. While this does not yet conform to \(\mathbf{THR_{OD}}\) specified in Equation (1) (see also the parameter-dependent hazard rates in Fig. 6a), it allows us to apply sensor fusion to create a composite OD system respecting \(\mathbf{THR_{OD}}\).
### Step 5. Determining \(\mathbf{HR_{OD}}\) for the Fused 3oo3 OD System
We create a 3oo3 sensor fusion system, using three stochastically independent (that is, differently trained and with different image recognition software) OD modules with 2-channel structure as described above: a 3oo3 voter raises an error leading immediately to braking the train, as soon as an "obstacle/no obstacle" indication is no longer given unanimously by the three OD modules. This means that single and double faults are immediately detected and result in immediate fault negation by going into a safe state. As explained in the previous paragraph, each module has a failure rate that is smaller than \(2\cdot 10^{-4}\,h^{-1}\). Therefore, applying the rule [11, B.3.5.2, 5] of EN 50129, the detection of triple faults for such a system is not required.
Figure 6: The functions in (a) and (b) result from computing the symbolic solution of the right-hand side of Eq. (4) using the parametric CTMC \(\mathcal{M}({p_{\mathit{f}}}^{n},{p_{\mathit{f}}}^{\mathsf{c}})\).
Assuming that all three OD modules have a probability for producing a false negative that is less than or equal to \(\mathsf{P}[\mathit{FN}](p_{\mathit{f}}{}^{\mathsf{n}},p_{\mathit{f}}{}^{\mathsf{c}})\), the hazard rate for a safety-critical false negative produced by this 3oo3 OD system (Fig. 6b) is
\[\mathbf{HR_{OD}}(p_{\mathit{f}}{}^{\mathsf{n}},p_{\mathit{f}}{}^{\mathsf{c}})= \lambda_{\mathsf{od}}\cdot\left(\mathsf{P}[\mathit{FN}](p_{\mathit{f}}{}^{ \mathsf{n}},p_{\mathit{f}}{}^{\mathsf{c}})\right)^{3}\,. \tag{5}\]
With \(\mathsf{P}[\mathit{FN}](0.04,0.04)=0.0016\) as discussed above, this ensures
\[\mathbf{HR_{OD}}(0.04,0.04)=\frac{2}{24}\cdot 0.0016^{3}=3.413\cdot 10^{-10} <\mathbf{THR_{OD}}=10^{-7}\;.\]
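The corresponding check can be reproduced with a few lines of arithmetic; the values below are the ones used in the text.

```python
# Sketch of the fused 3oo3 hazard-rate check of Equations (1) and (5).
lambda_od = 2 / 24          # demands per hour (obstacle present or degraded data)
p_fn_module = 0.0016        # P[FN] of one 2oo2 OD module (from Step 4)
thr_od = 1e-7               # tolerable hazard rate THR_OD, Equation (1)

hr_od = lambda_od * p_fn_module ** 3            # Equation (5)
print(f"HR_OD = {hr_od:.3e} /h, conforms: {hr_od <= thr_od}")
# HR_OD = 3.413e-10 /h, conforms: True
```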
## 5 Conclusion
We have presented a 5-step approach to probabilistic risk assessment for autonomous freight trains in open environments with automated obstacle detection. This approach is based on a preceding qualitative evaluation of the assurance steps required to enable a certification according to the standard ANSI/UL 4600. The risk figures obtained indicate that autonomous freight trains based on the train control system architecture advocated here can achieve adequate safety with obstacle detection based on camera images alone, provided that at least three independent 2oo2 OD modules are fused into an integrated 3oo3 OD detector. The costs to achieve this can be expressed in terms of the number of statistical tests to be performed in order to guarantee these upper risk bounds. Moreover, our example illustrates that, under realistic assumptions, the failure probability of an OD module corresponds to the product of the classification error probabilities; sensor and voter faults play no significant role in the overall assessment.
The statistical testing strategy described here requires considerable effort, since several verification runs \(\{V_{1},\ldots,V_{m_{\text{new}}}\}\) are involved and have to be repeated if too many false negatives require a new training phase. To avoid the latter, it is advisable to verify first that the trained NN is _free of adversarial examples_: in our case, these are images \(p,p^{\prime}\) that are close to each other in some metric conforming to the human understanding of image similarity (e.g. two similar vehicles standing on the track at a level crossing), where \(p\) is correctly classified as an obstacle, but \(p^{\prime}\) is not. A highly effective testing method for detecting adversarial examples has been suggested by Sun et al. [26]. It is based on a novel structural coverage metric for CNN, that is analogous to the MC/DC coverage in software testing. A detailed verification cost evaluation will be considered in a future contribution.
It is important to note that the introduction of redundancy (e.g. 2oo2) to achieve fail-safe designs, as described in EN 50129 [11, B.3.1], is only admissible for random HW faults according to this standard. The occurrence of residual HW design failures, SW failures, or failures due to imperfect machine learning processes is not taken into account. For HW design and SW (including the implementation of NN software) developed and verified according to SIL-4, the assumption that safety-critical residual failures exist can also be neglected in the context of this paper. The probability of a residual systematic failure in a
trained NN, however, needs to be taken into account. Therefore, a certification of the OD module in an autonomous freight train cannot be performed on the basis of the current EN 5012x standards alone. Instead, ANSI/UL 4600 needs to be used: according to this new standard for autonomous control systems, the failure model is allowed to take systematic residual failures caused by imperfect machine learning into account.
The identification of garbage dumps in the rural areas of Cyprus through the application of deep learning to satellite imagery
###### Abstract
Garbage disposal is a challenging problem throughout the developed world. In Cyprus, as elsewhere, illegal "fly-tipping" is a significant issue, especially in rural areas where few legal garbage disposal options exist. However, there is a lack of studies that attempt to measure the scale of this problem, and few resources available to address it. A method of automating the process of identifying garbage dumps would help counter this and provide information to the relevant authorities.
The aim of this study was to investigate the degree to which artificial intelligence techniques, together with satellite imagery, can be used to identify illegal garbage dumps in the rural areas of Cyprus. This involved collecting a novel dataset of images that could be categorised as either containing, or not containing, garbage. The collection of such datasets in sufficient raw quantities is time consuming and costly. Therefore a relatively modest baseline set of images was collected, then data augmentation techniques used to increase the size of this dataset to a point where useful machine learning could occur.
From this set of images an artificial neural network was trained to recognise the presence or absence of garbage in new images. A type of neural network especially suited to this task known as "convolutional neural networks" was used. The efficacy of the resulting model was evaluated using an independently collected dataset of test images.
The result was a deep learning model that could correctly identify images containing garbage in approximately 90% of cases. It is envisaged that this model could form the basis of a future system that could systematically analyse the entire landscape of Cyprus to build a comprehensive "garbage" map of the island.
Keywords: garbage disposal, fly-tipping, artificial intelligence, satellite imagery, image augmentation, convolutional neural networks, deep learning, Cyprus.
## 1 Introduction
### Background and Motivations
Illegal waste dumping is a social and environmental issue throughout the developed and developing world [1]. The consequences include environmental pollution, health hazards, and negative impacts on local ecosystems and aesthetics. The island of Cyprus has particular issues with waste disposal in rural areas [2], where little formal recycling exists. As a result, citizens tend to make impromptu garbage dumps. Assessing the scale of this issue is difficult [3]. However, if there was a way of identifying how many such garbage dumps have accumulated, together with their location, then this could be used to inform and exert pressure on local, regional and national government bodies to address the issue proactively [4].
Performing this sort of task by hand would be laborious and error prone. If, for example, the land was divided up into \(100m\times 100m\) patches, then even for a relatively modest sized country such as Cyprus, covering around \(9250km^{2}\), this would require at least 925,000 such image patches to be analysed. If a human could retrieve an image, visually analyse it, and store the classification result as an atomic operation in around one minute, then to classify all image patches would require more than 15,400 person hours or 640 days.
Some previous work has employed AI techniques to automatically detect the presence of garbage dumps, such as [5] and [6]. These projects have focused on large landfills occupying an area of several square kilometres. Few such projects exist focusing on small scale sites covering just tens or hundreds of square metres. However, it is such small areas that Cyprus garbage dumps tend to occupy. With the existence of publicly accessible high resolution remote sensing imagery, such as satellite imagery [7][8], it is possible to focus on techniques to recognise these smaller scale features. This study utilised a type of Artificial Neural Network (ANN) shown to be effective in the classification of satellite imagery [9] and applied it to these small scale features.
### Aims and Objectives
The aim of this study was to develop and evaluate an AI based approach for identifying garbage heaps in Cyprus. This involved the collection and labeling of satellite imagery. However, collection of such image sets is a laborious task and it is unfeasible to collect such "raw" data in sufficient quantities to perform useful machine learning [10]. To counter this, data augmentation techniques such as sharpening, cropping, rotating, and flipping were used to significantly increase the dataset sizes, whilst retaining the important features.
The questions the study sought answers to were:
* What are the most effective machine learning algorithms and techniques for developing a model that can accurately identify and classify small scale garbage dumps in the rural areas of Cyprus from satellite imagery?
* To what extent can data augmentation techniques improve a machine learning model for such small scale satellite image classification?
### Approach
To provide answers to the questions above the following steps were completed. Data was collected in the form of satellite images of known garbage locations in Cyprus, forming a baseline dataset. This dataset was expanded through various data augmentation techniques [11]: sharpening, rotating, cropping, and flipping. A suitable machine learning model was selected for the study by training and evaluating various candidates against the baseline. The chosen model was then trained on the augmented datasets. Finally, the accuracy of the model was evaluated by applying various validation methods and statistics in order to arrive at conclusions with regards the efficacy of the model.
The study focused on a type of deep learning model referred to as convolutional neural networks (CNNs). These work by applying various filters to the images which learn to pick out low level then progressively higher level features from the images [12], culminating in, for this study, the identification of garbage. Dimensionality of the input data is reduced to a manageable level by use of such techniques as pooling [13]. CNNs have proven effective at image recognition, and have been applied successfully to the area of remote sensing [14].
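For illustration, a minimal CNN of the kind described above can be sketched with Keras as follows; the layer sizes and hyperparameters are illustrative assumptions and not those of the models evaluated in the study.

```python
# Minimal CNN sketch for binary garbage / not-garbage classification
# (illustrative hyperparameters, not the tuned model used in the study).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(200, 200, 3)),
    layers.Rescaling(1.0 / 255),                # scale RGB values to [0, 1]
    layers.Conv2D(16, 3, activation="relu"),    # low-level features (edges, texture)
    layers.MaxPooling2D(),                      # pooling reduces dimensionality
    layers.Conv2D(32, 3, activation="relu"),    # progressively higher-level features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # probability of 'garbage'
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```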
### Results and Contributions
The results and outputs of the study are as follows:
* A novel baseline set of satellite image patches labelled as containing/not containing garbage, which might find use outside of this study.
* Several augmented datasets of similarly labelled images which may also find wider use within the machine learning community.
* A set of CNN models which provide an accurate basis for garbage heap identification with the potential to inform waste management policies in Cyprus.
### Outline
Section 2 develops a theoretical framework to the study, mostly through a review of pertinent literature. Section 3 establishes the research methodology used to answer the research questions. This is followed by a discussion of the results in section 4 and finally a conclusion and recommendations for future work in section 5.
## 2 Theoretical Background
### Motivations
Webb et al., 2006 [3] provide a comprehensive study into the incidence of fly-tipping. They describe in detail the who, how, where, and why of such practices and their environmental and social impact. One issue highlighted is the difficulty authorities have in mapping the location of fly-tipping sites - a main driver behind this research and one of the main objectives it seeks to address by producing models which can recognize garbage dumps from satellite images.
### Deep Learning
The original deep learning research into image recognition was undertaken thirty years ago when LeCun used the Multilayer Perceptron with back-propagation to recognize handwritten digits [15]. Back-propagation is an iterative algorithm used to adjust the weights of the connections between nodes to minimize the difference between the predicted output and the actual output. It forms the basis by which Artificial Neural Networks (ANNs) are trained on input data. Since then deep learning has become the dominant method for all image recognition tasks, including those involving remote sensing sources such as satellite imagery. One of the main reasons for this is that an ANN can model complex non-linear relationships, as described by LeCun in the later 2015 paper [16].
Standard ANNs can be effective for classifying small image patches, but for complex large satellite images this is an impractical method. Each pixel of an image is a separate input to the network. Once multiple layers are added to the network then the number of parameters in the training process grows exponentially such that the training time and memory requirements become prohibitively high [17]. To address this a type of ANN called the Convolutional Neural Network (CNN) has become very popular.
### Convolutional Neural Networks
The first CNN was proposed by Fukushima in the foundational 1980 paper [18] in which the neocognitron is described as an "improved neural network". This is essentially a hierarchy of networks composed of layers of nodes with "variable connections between adjoining layers". LeCun took this idea further in 1998 [19], adding back-propagation and gradient descent and showing how these strategies could solve pattern recognition problems involving high dimensional images better than existing hand-designed algorithms. This laid the groundwork for future CNN algorithms and variations such as LeNet, AlexNet, GoogLeNet, and ResNet [20]. Much of the earlier work focused on recognition of small scale images, particularly that of handwritten characters [21]. Subsequently, this was adapted for larger scale satellite imagery.
### Machine Learning and Satellite Imagery
Satellite imagery poses unique challenges for machine learning. Forkuor et al. (2018) extensively discuss these challenges in the context of land-use and land-cover mapping in Burkina Faso. Satellite images are larger and multi-spectral, with Sentinel-2 satellites offering 13 spectral bands, including near-infrared channels. This additional spectral information enables analysis of thermal characteristics, which can be relevant for certain types of garbage dumps containing biological matter. However, utilizing more channels in training a model increases the number of parameters and learning time. Consequently, this study opted to use three-channel RGB images.
Vaishnnave et al. (2019) provide a survey of different methods for classifying satellite datasets, including popular CNN architectures such as AlexNet, ResNet-50, GoogLeNet, and CaffeNet. The CNN methods exhibited the highest accuracy, ranging from 93% to 99%. It is important to note that these accuracy figures are drawn from various studies with different datasets and objectives, making direct comparisons challenging. Nonetheless, they lend support to the approach adopted in the current study.
A particularly relevant study by Rajkumar et al. (2022) applies various ML algorithms to landfills using satellite missions such as WorldView and Geo-Eye. The dataset comprised 245 images, each with dimensions of 512x512 pixels. Although the covered area is not specified, the landfills were significantly larger than those addressed in the current study. The results demonstrated accuracy ranging from 76% to 83%.
### Residual Networks
Residual Networks (ResNets) are a successful subset of CNNs, particularly in remote sensing. He et al. introduced ResNets in 2016 to address the vanishing gradient problem, which hinders learning in deep networks. The influential paper proposes an architecture with shortcut connections between layers, allowing nodes to be treated as blocks during the learning process. This approach facilitates the smooth flow of gradients through deep networks. The paper demonstrates the capability of training networks with over 1000 layers using this method.
### Pretrained Zoo Models
Pretrained models are specific implementations of ML algorithms that have been trained on large amounts of data. They can be used to "shortcut" the training time of similar problems and result in better models. The paper by Yosinski et al. (2014) [22] details early work showing that, even when the pretraining data is significantly different from that of the target problem, models initialised from pretrained weights perform better than models that are not pretrained. It is not fully established why this is, but it may be because the general structures that make up typical objects in 2D images, such as edges, are learned irrespective of what the target image is.
### Data Collection
There are several potential sources of high resolution satellite data. The US Geological Survey provides Earth Explorer - a user interface into the Landsat satellite system [23]. Sentinel data is available from the ESA open access hub [24]. Forkuor (2018) [25] describes a study of these two sources for land use and land cover, in which three ML algorithms were applied to compare their relative performance. Of the two, the Sentinel data proved slightly more accurate than the Landsat, although the algorithms were not ANN based, which tempers the applicability of the results.
An additional potential source of images is the Google Earth engine [26]. This provides a browser front-end and API into Landsat and Sentinel imagery, which are freely available for third-party use. Zhao et al. 2021 [8] summarises the Google Earth engine and categorises the types of use to which the engine has been applied, including land classification.
### Data Augmentation
Obtaining labelled satellite imagery for most ML is difficult and time consuming [10]. It can be especially challenging to source images in sufficient numbers to obtain meaningful results. One way to address this is to use image augmentation techniques to enlarge the base set of images. Shorten et al. 2019 [11] offers a comprehensive overview of the various approaches, including geometric and photometric. Geometric techniques focus on changing an image by rotating, cropping, and scaling. Photometric focuses on changing an image by altering the contrast, sharpening, and brightness.
These two approaches are studied in more detail by Taylor et al. (2018) [27] in which the results of performing ML experiments on both are described. The DeepLearning4J framework [28] and CNNs were used together with a variety of augmentation techniques and a comparison made. The study does not record the results of applying all techniques together at once as "pipelines", but does suggest that geometric cropping provides the biggest improvement from 48.13% to 61.95%.
## 3 Research Methodology
### Research Design
In order to achieve the aims and objectives introduced in section 1.2 the following approach was adopted:
* To collect a set of satellite images from a publicly available source.
* To appropriately label each image as either "garbage" or "not garbage".
* To apply several promising supervised deep learning approaches to this baseline set of images and to choose the best performing for the remainder of the study.
* To apply various data augmentation techniques to the baseline dataset in order to derive enlarged datasets.
* To evaluate each enlarged dataset against the best performing machine learning model and make appropriate performance comparisons.
The sections that follow describe the above process in detail together with limitations of the study.
### Data Collection
The data collected for this study was RGB colour satellite images centred on latitude/longitude coordinates known to contain garbage dumps. These points were collected over the last year and are taken from a variety of rural locations. They provide a representative set of images of the sort of garbage dumps found throughout Cyprus, taken from a variety of land types, such as forest, scrubland, mountain, and arable.
For every image collected of a garbage dump site, a corresponding image of a nearby location that did not contain garbage was collected; this ensured a balanced dataset of both classes of interest (garbage, not-garbage). A balanced dataset is important to prevent biases in the learned model, as discussed in [29]. A baseline set of 100 images was collected - 50 labelled as "garbage" and 50 labelled as "not garbage".
Various possible sources of satellite imagery exist, as detailed in 2.7. It was decided for this study that the best source would be Google Earth [26]. This decision was made due to its ease of use and availability, and because it covers the areas of interest with little cloud cover which would otherwise obscure the features of interest. Google earth has also been used successfully on various other similar land classification projects, such as [6] and [30].
The images collected were 200x200 pixel RGB files (200x200x3), upscaled as necessary for the model. These were a convenient size to work with and allowed the features of interest to be placed within a single image per occurrence.
Figure 1 illustrates a collected garbage/not garbage image pair.
Each 200x200 image maps onto a land area of around 20x20m, with one pixel representing around 10x10cm.
### Data Augmentation
Labelled satellite imagery for ML projects of this sort is laborious to collect in large enough quantities to produce accurate and robust models on its own [31]. Studies have frequently relied on image augmentation to enlarge the dataset whilst preserving the labelling characteristics of the images. Section 2.8 reviewed methods of increasing an image set via image augmentation.
Figure 1: Garbage/Not garbage
The geometric augmentations most likely to offer improvement according to [27] were applied as follows (inclusive of the original image):
* cropping: each suitable garbage-labelled image and its non-garbage counterpart were subjected to the cropping method described in the previous section.
* rotating: each image was rotated by 90, 180, and 270 degrees.
* flipping: each image was flipped horizontally and vertically.
The most promising photometric technique seemed likely to be sharpening, which can improve and enhance edge-detection capabilities, as outlined in [10]. The images were all subjected to a sharpening "kernel" filter.
By combining these augmentation approaches in various ways several datasets were created, ranging from 100 images for the baseline set up to 2,400 images for a pipelined dataset combining all of the techniques above.
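The following sketch shows how such an augmentation pipeline might look using the Pillow library; the file name is a placeholder, and the exact combination of operations used for each dataset is described in section 3.4.

```python
# Sketch of the geometric (rotate, flip) and photometric (sharpen) augmentations
# described above, using Pillow; the file name is a placeholder.
from PIL import Image, ImageFilter

def augment(path):
    img = Image.open(path).convert("RGB")
    variants = [img]
    variants += [img.rotate(a, expand=True) for a in (90, 180, 270)]    # rotations
    variants += [img.transpose(Image.Transpose.FLIP_LEFT_RIGHT),        # horizontal flip
                 img.transpose(Image.Transpose.FLIP_TOP_BOTTOM)]        # vertical flip
    variants += [v.filter(ImageFilter.SHARPEN) for v in list(variants)] # sharpened copies
    return variants

samples = augment("garbage_patch_001.png")   # 12 images derived from one original
```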
### Machine Learning
Once suitably augmented sets of data were obtained, the ML process was carried out as follows:
* choose model instance to go forward with
* tune hyperparameters
* run model against each augmented dataset
* run model against pipelined datasets
* evaluate the results
This iterative process of tuning hyperparameters, training on augmented datasets, combining them in pipelined datasets, and evaluating the results helped in improving the model's performance and ability to handle novel scenarios.
#### 3.4.1 Choosing the model
Only convolutional neural network (CNN) models were considered for the actual machine learning, as these have been found to be most effective for the sort of problem addressed by this study [32], as explored in section 2.3.
There are many possible CNN models to choose from. It was decided to evaluate a range of those that had been successfully applied to image classification tasks, including several winners of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [33]. The process was to apply each candidate model to the baseline (unaugmented) dataset and compare the relative performance of the models, using the metrics described in section 3.5. [5] and [32] suggest that a Residual Network model might be most effective when applied to this dataset, which proved to be the case. The results section 4.1 summarises the outcome of this model selection process, including a list of the candidate models evaluated.
All the models evaluated were pretrained. These have already been trained against an image dataset, in this case the publicly available ImageNet dataset [34]. Whilst not consisting exclusively of satellite imagery (in fact only a small proportion is) it has been found that even models pretrained on non-satellite images have an advantage over those not pretrained at all [22].
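Purely as an illustration of this transfer-learning setup (the study itself used the Weka/DeepLearning4J toolchain described in section 3.6), the following Keras sketch loads an ImageNet-pretrained ResNet-50 and attaches a single garbage/not-garbage output head. The directory layout, input size and training settings are assumptions.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # ResNet-50's usual input size; the 200x200 patches are upscaled

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=IMG_SIZE + (3,))
base.trainable = False  # keep the ImageNet features frozen initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # garbage / not garbage
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical layout: data/train/{garbage,not_garbage}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=8, label_mode="binary")
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.resnet50.preprocess_input(x), y))
model.fit(train_ds, epochs=10)
```

Fine-tuning, by unfreezing some of the later ResNet blocks with a low learning rate, is a common follow-up step but is omitted from this sketch.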
#### 3.4.2 Modelling Individual Datasets
The following datasets were modelled to establish the relative effectiveness of each data augmentation technique via the selected model, as determined in the results, section 4.1:
* baseline dataset of 100 samples
* geocropped dataset of 200 samples
* sharpened dataset of 200 samples
* flipped dataset of 300 samples
* rotated dataset of 400 samples
For each such dataset performance metrics were collected, as described in section 3.5. The aim of
this was to establish a set of baselines for comparison of the various data augmentation techniques and for later evaluation of improvements (or otherwise) as they were pipelined together.
#### 3.4.3 Modelling Pipelined Datasets
There are numerous ways of combining the individual datasets, and it is only practicable to model a subset of these. For this study, to give a reasonable variation of datasets, the following pipelines were modelled (see Table 1):
Note that rotation operations multiply each sample by 3 (90, 180, and 270 degrees) and flipping operations multiply each by 2 (horizontally and vertically flipped), in each case excluding the original sample (to avoid repeatedly including it in the dataset).
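As a hedged illustration of this counting rule only, the sketch below shows how such multipliers combine; the example arguments are assumptions chosen to reproduce pipeline sizes of 800, 1,200 and 2,400, and are not a statement of how each pipeline was actually composed.

```python
def combined_size(base, n_rotations=0, n_flips=0, sharpened=False):
    """Images after adding rotated/flipped copies of a base set and, optionally,
    a sharpened copy of everything produced so far (the original is excluded
    from the per-image multipliers, as noted above)."""
    n = base * (1 + n_rotations + n_flips)
    return 2 * n if sharpened else n

print(combined_size(200, n_rotations=3))                             # 800
print(combined_size(200, n_rotations=3, n_flips=2))                  # 1200
print(combined_size(200, n_rotations=3, n_flips=2, sharpened=True))  # 2400
```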
### Statistics Gathered
The research undertaken represents a binary classification problem. The model produced classifies image patches as either containing garbage (the "positive" outcome) or not (the "negative" outcome). This implies that for any attempt to classify a particular image, when compared to the actual classification, there are four possibilities [35]:
1. the classification is "contains garbage" and the image does actually contain garbage: a "true positive" (TP)
2. the classification is "contains garbage" but the image does not actually contain garbage: a "false positive" (FP)
3. the classification is "does not contain garbage" and the image does not actually contain any garbage: a "true negative" (TN)
4. the classification is "does not contain garbage" but the image does actually contain garbage: a "false negative" (FN)
Most of the statistics used to assess the efficacy of a binary classification problem make use of these four quantities. Those used in this study, either directly or as components of another statistic, are detailed below.
**Accuracy**: the percentage of correctly classified predictions with respect to the entire dataset. This gives a good quick indication of how well the model is performing, especially for balanced datasets. See equation 1 for how it is calculated from the figures referred to previously.
\[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+ \text{FN}} \tag{1}\]
**Precision**: how well the model predicts positive results. See equation 2.
\[\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}} \tag{2}\]
**Recall**: the fraction of true positive instances that are correctly identified by the model. See equation 3.
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{3}\]
These metrics give valuable insight into how well a model is performing in terms of predictive accuracy, but individually they do not provide a complete picture [36]. There are various useful ways in which these statistics can be combined to give a fuller overall assessment.
**F-Score**: this balances precision and recall by calculating their harmonic mean. See equation 4.
\[\text{FScore}=2\times\frac{\text{Precision}\times\text{Recall}}{\text{ Precision}+\text{Recall}} \tag{4}\]
However, the F-Score does not take the true negative quantity into account (because neither precision nor recall does).
\begin{table}
\begin{tabular}{|l|l|} \hline
**Pipeline** & **\#Samples** \\ \hline \hline Pipeline 1 & 800 \\ \hline Pipeline 2 & 1200 \\ \hline Pipeline 3 & 2400 \\ \hline \end{tabular}
\end{table}
Table 1: Pipelined dataset combinations
**MCC**: a statistic which takes account of all parts of the confusion matrix is the Matthews Correlation Coefficient (MCC) [37]. This measures the correlation between the actual and predicted classes, and yields a value between -1 and 1: an MCC of 1 indicates perfect classification, 0 random performance, and -1 a wholly negative correlation. See equation 5. Because it takes equal account of all four parts of the confusion matrix, it is one of the preferred statistics used in this study.
\[\text{MCC}=\frac{(TP\times TN-FP\times FN)}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \tag{5}\]
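A small self-contained sketch of how all of these statistics follow from the four confusion-matrix counts (the example counts are invented):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, F-score and MCC from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_score": f_score, "mcc": mcc}

# Invented counts for a 100-patch test set
print(binary_metrics(tp=42, fp=8, tn=45, fn=5))
```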
### Frameworks/Tools
The research was conducted using the Weka machine learning toolkit (Waikato Environment for Knowledge Analysis) [38]. This is an extensible Java based toolkit that provides support for data preprocessing, implementations of many popular machine learning algorithms, visualisation tools, results statistic reporting, and much more.
For this research Weka was used in conjunction with DeepLearning4J [39], which provides Java implementations of many deep learning models, including convolutional neural networks. Various popular models are provided, such as AlexNet, LeNet, ResNet and Inception. One especially useful feature is that many of these come as "zoo models" [40] pretrained on several publicly available image datasets, including ImageNet [34]. These pretrained models significantly shorten the training time required for new datasets [22], as detailed in section 2.6.
Both Weka and DeepLearning4j provide programmatic API and command line interfaces. Weka also offers a user friendly GUI with which to supply datasets, invoke models, and visualise the results.
In addition, LAMP was used to provide simple data collection and storage utilities [41]. Data augmentation was implemented using the popular ffmpeg [42] graphics and video editing software.
All development and machine learning took place on the Linux Ubuntu 20.04 operating system.
### Limitations
A CNN project requires large and diverse datasets for effective training [10]. This project has been based on a modest baseline set of 100 images. However, data augmentation techniques have boosted the dataset size up to 2,400. Nevertheless, additional images added to the baseline set would further improve the predictive performance and generalisation capabilities of the model.
The images were gathered at a resolution of 1 pixel representing 10x10cm of land coverage. The model might not perform as well against images of a different resolution. In addition, during the research the same land area was found to have different chromatic characteristics depending on the date the satellite images were taken. An area for future exploration is to examine how the developed model performs against images that differ in this way. Possibly a model trained on greyscale images could be more robust against such variability. Converting the images to greyscale and rerunning the tests would be an interesting area for future study.
Many of the images were collected in mountainous wooded areas, which tend to predominate on the island. The effects of this were mitigated to an extent by using an independently collected test dataset from other areas of the island without these characteristics. Nevertheless a larger more diverse dataset would only improve the accuracy and robustness of the model.
It may also be the case that trees obscure some instances of garbage from satellite imagery in the visible spectrums, leading to false negatives. It would be an interesting future study to examine if expanding the image spectrum into the infrared might alleviate this.
## 4 Results and Discussion
This section evaluates the results of the various experiments: to first choose the best performing model type, then optimise the hyperparameters and finally run the model against the various datasets, both unaugmented and augmented. At each stage statistics such as accuracy, F-Score and MCC were collected (see section 3.5) and used as the basis for comparison.
### Model Selection Results
Table 2 shows the results of evaluating the various models to determine a single candidate to move forward with, as described in section 3.4.1. This shows the accuracy, F-Score and MCC metrics for a run of each model on the 100 image baseline dataset.
From this it can be seen that the ResNet-50 implementation gives the best performance against all measures 1. This is consistent with the results of other satellite based land classification studies, as highlighted in section 2.4. The ResNet-50 model was therefore selected for subsequent steps in the process. This model has 48 convolutional layers, one MaxPool layer, and one average-pool layer, with a total of around 23.5 million trainable parameters [44].
Footnote 1: The VGG implementations yielded arithmetic underflows, possibly due to the “vanishing gradient” problem appearing for these models [43]
### Singly Augmented Datasets
Table 3 summarises the results of applying a single augmentation technique to the baseline dataset and adding the augmented dataset to the baseline; the ResNet-50 algorithm was applied to each of these datasets in turn. The trained model used a mini-batch size of 8 and 5x cross-fold validation as described in [45].
From this it can be seen that each of the augmentation techniques has yielded significant accuracy improvements over the baseline dataset alone. The bigger the augmented dataset, the greater the improvement, which is broadly in line with expectations. The flipping and rotating augmentation techniques in particular gave results suggestive of very good performance, with accuracy levels of 90 percent and above and similarly good MCC and F-Score metrics.
### Pipeline Augmented Datasets
Table 4 summarises the results of the various pipelined experiments, as detailed in section 3.4.3. For each dataset the model was evaluated using each of the three validation methods: 5 times cross-fold, 70% split, and using the independent test dataset. A mini-batch size of 32 was chosen, which is fairly standard for datasets of this size and represents a good compromise between generalisation capability and computational efficiency.
The table shows the accuracy and MCC metrics for each permutation 2. An average of each measure was taken across the validation methods as a way of combining the results into a single value.
Footnote 2: The F-score metric was dropped for reasons of clarity in interpreting the table, and because it takes into account less information than the MCC statistic
This table illustrates that in all cases the models trained on the pipelined datasets perform significantly better than those trained on the singly augmented datasets in their ability to correctly classify an image as containing/not containing garbage. They do so with greater than 98% accuracy when validated against the training data, and greater than 75% against the independent test data, giving an overall average accuracy of around 90% and an MCC greater than 0.82. This represents strong performance in terms of classification accuracy and balanced predictions.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline
**Dataset** & **\#samples** & \multicolumn{2}{c|}{**5x cross-fold**} & \multicolumn{2}{c|}{**70\% split**} & \multicolumn{2}{c|}{**test dataset**} & \multicolumn{2}{c|}{**Averaged**} \\ \cline{3-10} & **Acc** & **MCC** & **Acc** & **MCC** & **Acc** & **MCC** & **Acc** & **MCC** \\ \hline Epelime 1 & 800 & 96.00 & 0.86 & 96.70 & 0.58 & 79.50 & 0.60 & 90.23 & 0.83 \\ \hline Epelime 2 & 1200 & 99.50 & 0.99 & 96.80 & 9.79 & 75.00 & 0.49 & 91.03 & 0.82 \\ \hline Epelime 3 & 2400 & 99.30 & 0.98 & 99.30 & 0.99 & 76.0 & 0.51 & 91.53 & 0.84 \\ \hline \end{tabular}
\end{table}
Table 4: Pipelined dataset evaluation (mini-batch size = 32)
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Model** & **Accuracy** & **F-Score** & **MCC** \\ \hline LeNet & 53.00 & 0.53 & 0.06 \\ \hline VGG & N/A & N/A & N/A \\ \hline AlexNet & 50.00 & 0.45 & 0.00 \\ \hline KerasInceptionV3 & 47.00 & 0.47 & -0.06 \\ \hline Xception & 50.00 & 0.37 & 0.00 \\ \hline ResNet-50 & 67.00 & 0.66 & 0.36 \\ \hline \end{tabular}
\end{table}
Table 2: Model Evaluation
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Dataset** & **\#samples** & **Accuracy** & **MCC** & **F-Score** \\ \hline Collected baseline & 100 & 56.00 & 0.12 & 0.56 \\ \hline Baseline + Cropped & 200 & 83.50 & 0.68 & 0.83 \\ \hline Baseline + Sharpened & 200 & 74.00 & 0.05 & 0.74 \\ \hline Baseline + Flipped & 300 & 90.33 & 0.82 & 0.90 \\ \hline Baseline + Rotated & 400 & 95.25 & 0.91 & 0.95 \\ \hline \end{tabular}
\end{table}
Table 3: Single Augmentation Evaluation
Conclusion
This study sought to investigate two issues:
* What are the most effective machine learning algorithms and techniques for developing a model that can accurately identify and classify small scale garbage dumps in the rural areas of Cyprus from satellite imagery?
* To what extent can data augmentation techniques improve a machine learning model for such small scale satellite image classification?
To answer these questions a baseline dataset of 100 training satellite image patches was collected using Google Earth [26], which matched verified garbage dump locations. For each garbage image a non garbage image from similar neighbouring terrain was collected. These images were correspondingly labelled as either "garbage" or "not garbage". Several popular convolutional neural network implementations were trained and evaluated on this baseline set, with the ResNet-50 model showing the most promise.
The baseline dataset was then augmented by applying various combinations of sharpening, rotating, flipping, and cropping. This resulted in "pipelines" of 800, 1200 and 2400 sized image sets. The ResNet-50 model, pretrained on the ImageNet dataset[46], was further trained on each pipelined collection of images and the performance of each model evaluated with respect to the others.
Various validation techniques were used to evaluate the accuracy and generalisation capabilities of the models, including cross-fold validation, employing a holdout dataset, and testing against a separate independently collected set of images.
The experiments showed that ResNet-50 provides an effective machine learning model for the detection of garbage dumps. With just the small baseline set of 100 images, a model could be trained that correctly classifies around 70% of novel images. When the size of this baseline set was expanded using data augmentation, the predictive capabilities of the model increased dramatically, correctly classifying images in more than 90% of cases.
### Future Work
Work is continuing in the area by:
* Increasing the quantity and quality of the baseline dataset. In time this will be made publicly available.
* Investigating automatic methods of obtaining appropriate satellite imagery.
* Developing a system that will allow the techniques explored to map the entire rural environment of Cyprus to develop a comprehensive "garbage map".
|
2305.12935
|
CrowdWeb: A Visualization Tool for Mobility Patterns in Smart Cities
|
Human mobility patterns refer to the regularities and trends in the way
people move, travel, or navigate through different geographical locations over
time. Detecting human mobility patterns is essential for a variety of
applications, including smart cities, transportation management, and disaster
response. The accuracy of current mobility prediction models is less than 25%.
The low accuracy is mainly due to the fluid nature of human movement.
Typically, humans do not adhere to rigid patterns in their daily activities,
making it difficult to identify hidden regularities in their data. To address
this issue, we proposed a web platform to visualize human mobility patterns by
abstracting the locations into a set of places to detect more realistic
patterns. However, the platform was initially designed to detect individual
mobility patterns, making it unsuitable for representing the crowd in a smart
city scale. Therefore, we extend the platform to visualize the mobility of
multiple users from a city-scale perspective. Our platform allows users to
visualize a graph of visited places based on their historical records using a
modified PrefixSpan approach. Additionally, the platform synchronizes,
aggregates, and displays crowd mobility patterns across various time intervals
within a smart city. We showcase our platform using a real dataset.
|
Yisheng Alison Zheng, Abdallah Lakhdari, Amani Abusafia, Shing Tai Tony Lui, Athman Bouguettaya
|
2023-05-22T11:30:00Z
|
http://arxiv.org/abs/2305.12935v1
|
# CrowdWeb: A Visualization Tool for Mobility Patterns in Smart Cities
###### Abstract
Human mobility patterns refer to the regularities and trends in the way people move, travel, or navigate through different geographical locations over time. Detecting human mobility patterns is essential for a variety of applications, including smart cities, transportation management, and disaster response. The accuracy of current mobility prediction models is less than 25%. The low accuracy is mainly due to the fluid nature of human movement. Typically, humans do not adhere to rigid patterns in their daily activities, making it difficult to identify hidden regularities in their data. To address this issue, we proposed a web platform to visualize human mobility patterns by abstracting the locations into a set of places to detect more realistic patterns. However, the platform was initially designed to detect individual mobility patterns, making it unsuitable for representing the crowd in a smart city scale. Therefore, we extend the platform to visualize the mobility of multiple users from a city-scale perspective. Our platform allows users to visualize a graph of visited places based on their historical records using a modified PrefixSpan approach. Additionally, the platform synchronizes, aggregates, and displays crowd mobility patterns across various time intervals within a smart city. We showcase our platform using a real dataset.
Human Mobility, Mobility Pattern, Crowd Mobility, Social Networks, Flexible Pattern
_Human Mobility Patterns_ are the series of places frequently visited by an individual [1]. Detecting these mobility patterns is crucial in various domains such as pandemic prevention [2], urban planning [3], crowd management [4, 5], and location-based services [6, 7]. Several studies have demonstrated that human mobility is highly predictable due to the regularity of daily routines [1, 8]. The acquisition of human mobility patterns involves examining spatio-temporal attributes and uncovering potential regularities in individual and population movement trajectories [2, 9]. Several models have been proposed to represent and predict human mobility patterns [10]. The availability of location-based data through social networks offers a unique opportunity to comprehensively investigate these patterns from both quantitative and detailed perspectives [2].
The detection of individual mobility patterns necessitates analyzing the historical data of the places visited by individuals. Deep learning approaches have been suggested for forecasting a user's next point of interest [10, 11]; however, these methods have limited accuracy, ranging from 8% to 25%. Detecting an exact mobility pattern for a user is challenging because of the inherent flexibility of human movement [1, 2]. For example, a user who regularly eats Thai food for lunch between 12:00 and 13:00 may visit a different Thai eatery each day, e.g., Thai Express on the first day, Seasoning Thai on the second day, and Thai Pothong on the third day. Despite the user's consistent dining habits, it is difficult to recognize this pattern due to the varying locations of the restaurants. Consequently, we proposed a platform for visualizing the mobility patterns of labeled locations to more accurately define users' mobility patterns [12].The platform displays a set of _frequent mobility patterns_ by using a modified PrefixSpan algorithm [13]. However, the platform was designed to detect individual mobility patterns, making it unsuitable for representing a group of users (i.e., a crowd) on a smart-city scale.
Detecting crowd mobility in a smart city is crucial in applications such as crowd management [4, 14, 15], IoT services [16, 17] and pandemic control [2]. Detecting crowd mobility is challenging because people have different spatio-temporal patterns [12, 18]. In this paper, we extend the aforementioned platform to compute and visualize the patterns of a crowd in a smart city over various time periods (See Fig.1). The platform utilizes users' records after labeling the locations and their computed _mobility patterns_ to compute the crowd mobility patterns and distribution (See Fig.3). The mobility patterns of each user are detected using
a modified PrefixSpan algorithm [13]. Our platform synchronizes the mobility patterns of crowds by identifying a map of their visited places (see Fig. 3). Moreover, we align the crowd's patterns using different time windows. Our platform facilitates the analysis and comprehension of a crowd's movement within a city.
## I Crowd Mobility Patterns Detection Framework
The crowd mobility patterns detection framework aims to synchronize, aggregate, and display crowd distributions and mobility patterns across various time intervals within a smart city. The framework comprises three phases: data acquisition and pre-processing, individual mobility patterns detection, and crowd distribution and mobility patterns synchronization and aggregation (See Fig. 2). In what follows, we discuss each phase in detail:
#### I-1 Data Acquisition and Pre-processing
During this phase, users' information and daily visited places data are acquired from a geo-location dataset. In this demonstration, we used a public Foursquare dataset as our default dataset [12]. The Foursquare dataset is a _geo-tagged social media_ (GTSM) dataset, where users check in at the venues they visit. We used the New York dataset, which comprises 227,428 check-in records. The dataset was collected over an 11-month period (April 2012 to February 2013). As the GTSM dataset is collected by allowing users to check in voluntarily, it is possible that users do not check in regularly. This leads to a sparse dataset. To confirm this, the average number of records per user was examined. The average is approximately 210, and the median is 153. As there are approximately 330 days in the data collection period, there would be less than one record per day; hence, the dataset is sparse. In addition, there are 1083 users in the dataset. Therefore, to address the data sparsity, we aimed to extract data from months with rich check-in records. After investigating, we found that the period with the richest check-in records is from April to June, so we employed data from these three months in the experiment. Moreover, we discovered that users were not recording their movement patterns on a daily basis. However, in order to extract a descriptive human mobility pattern for the users, we would need to ensure the user records are rich. Hence, we selected users with less than 2 hours between check-in records for more than 50 days within the 3-month period.
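A sketch of this selection step in pandas is given below. The column names, file name, and the reading of the "2-hour / more than 50 days" criterion (simplified here to users with check-ins on more than 50 distinct days in April to June) are assumptions, not the authors' exact implementation.

```python
import pandas as pd

# Assumed schema: user_id, venue_id, venue_category, timestamp
checkins = pd.read_csv("foursquare_nyc.csv", parse_dates=["timestamp"])

# Keep the record-rich April--June period
spring = checkins[checkins["timestamp"].dt.month.isin([4, 5, 6])].copy()

# Count the distinct days on which each user checked in
spring["day"] = spring["timestamp"].dt.date
days_per_user = spring.groupby("user_id")["day"].nunique()

# Keep users with check-ins on more than 50 distinct days
# (a simplified reading of the paper's selection criterion)
active_users = days_per_user[days_per_user > 50].index
selected = spring[spring["user_id"].isin(active_users)]
print(len(active_users), "users,", len(selected), "check-ins retained")
```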
#### I-2 Individual Mobility Patterns Detection
As presented in [12], a modified PrefixSpan algorithm is used to detect the mobility patterns of each user [13].
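For reference, a minimal PrefixSpan-style miner over sequences of place labels is sketched below; it is a plain textbook version, not the authors' modified variant, and the toy sequences are invented.

```python
def prefixspan(sequences, min_support):
    """Return frequent sequential patterns (as tuples) with their support counts.

    `sequences` is a list of lists of place labels; `min_support` is an
    absolute count (use ceil(fraction * len(sequences)) for a ratio)."""
    results = {}

    def project(db, item):
        # Keep the suffix after the first occurrence of `item` in each sequence.
        return [seq[seq.index(item) + 1:] for seq in db if item in seq]

    def mine(prefix, db):
        counts = {}
        for seq in db:
            for item in set(seq):          # count each item once per sequence
                counts[item] = counts.get(item, 0) + 1
        for item, cnt in counts.items():
            if cnt >= min_support:
                pattern = prefix + (item,)
                results[pattern] = cnt
                mine(pattern, project(db, item))

    mine((), sequences)
    return results

# Toy example: daily place sequences for one user
db = [["Home", "Eatery", "Shops"], ["Home", "Eatery"], ["Home", "Shops", "Eatery"]]
print(prefixspan(db, min_support=2))
```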
#### I-3 Crowd Mobility Patterns Synchronization and Aggregation
In this step, we synchronize and aggregate all the users' mobility patterns based on time. Users who frequently visit a specific location at a particular time are categorized together as a group (See Fig.3). For example, any user with a pattern of visiting a certain microcell (e.g., shops) at a certain selected time (e.g., 8:00 am) will appear in the smart city at the selected time (See Fig.3). Moreover, if we change the time, the crowd locations may change to other microcells, depending on their patterns (See Fig.4). Our platform uses the aggregated patterns and distributions to visualize the crowd movement in a smart city.
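A toy sketch of this synchronization step: given each user's detected patterns as (hour, place) pairs, users are grouped by the place their pattern predicts for a selected time window. The data structures are illustrative only.

```python
from collections import defaultdict

# Hypothetical per-user mobility patterns: user -> list of (hour, place_label)
patterns = {
    "u1": [(8, "Shops"), (12, "Eatery")],
    "u2": [(8, "Shops"), (9, "Office")],
    "u3": [(12, "Eatery"), (18, "Gym")],
}

def crowd_at(patterns, hour):
    """Group users by the place their pattern predicts for the given hour."""
    crowd = defaultdict(list)
    for user, visits in patterns.items():
        for h, place in visits:
            if h == hour:
                crowd[place].append(user)
    return dict(crowd)

print(crowd_at(patterns, 8))   # {'Shops': ['u1', 'u2']}
print(crowd_at(patterns, 12))  # {'Eatery': ['u1', 'u3']}
```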
Fig. 1: System overview
Fig. 3: The crowd in a smart city from 9-10 am
Fig. 2: Crowd Mobility Patterns Detection Framework
## II Demo Setup
Our interactive web application demonstrates crowd mobility in a smart city. We will also present a recorded video that presents the entire process of using the platform to display and interact with the default users' patterns in real-time. The video can be found at this link:. Booth visitors can select from a list of available users to visualize their networks and mobility patterns. Additionally, they can choose the city visualization and observe crowd movements across various time frames. If any audience member is willing to share their check-in history, we can upload it to the platform and visualize their patterns.
## III Mobility Patterns Exploration
To investigate the detected mobility patterns using the Foursquare dataset, we evaluated the effect of the Minimum Support Threshold \(min\_support\) on the patterns detected by the modified PrefixSpan [12]. Firstly, we examined the effect of changing \(min\_support\) on the number of sequences extracted per user. Secondly, we assessed the effect of changing \(min\_support\) on the average length of the sequences extracted per user.
The first experiment examined the effects of changing \(min\_support\) on the number of sequences extracted per user. Figure 5 illustrates the correlation between the number of sequences per user and the minimum support threshold. In general, the number of sequences per user decreased as the minimum support threshold increased. This trend occurs because a higher minimum support threshold value makes it more difficult for a pattern to be recognized as a sequential pattern. It is also notable that, as the minimum support threshold increases from 0.25 to 0.5, there is a significant decrease in the number of sequences per user. Conversely, when the minimum support threshold rises from 0.5 to 0.75, the decline in the number of sequences per user is less pronounced. To corroborate the previous evaluation, we present the distribution of the number of sequences with \(min\_support\) = 0.5 in figure 6.
The second experiment examined the impact of modifying \(min\_support\) on the average length of the sequences extracted per user. Figure 7 illustrates the relationship between the average length of sequences per user and the minimum support threshold. Generally, the average length of sequences per user decreases as the support threshold rises. Moreover, as the minimum support threshold increased, the likelihood of a longer pattern being recognized as a sequential pattern was considerably lower than that of a shorter pattern. For instance, it is reasonable to expect that the pattern 'Eatery' would appear more frequently than the pattern 'Eatery, Shops' in a sequence database, leading to a higher probability of 'Eatery' being certified as a sequential pattern compared to 'Eatery, Shops'. To corroborate the previous evaluation, we present the distribution of the average length of sequences with \(min\_support\) = 0.5 in figure 8.
Fig. 4: The crowd in a smart city from 9-10 am
Fig. 5: Average number of sequences per user vs. minimum support threshold
Fig. 6: Distribution plot of the number of sequences with \(min\_support\) = 0.5
## IV Conclusion
This paper presents a web platform that visualizes human mobility patterns at both the individual and city-scale levels, making it more suitable for applications within a smart city. In addition to individual mobility patterns, the platform enables the visualization of multiple users' mobility patterns, offering a city-scale perspective. In the future, we plan to allow users to scale the time frames for the crowd movement and automate the crowd movement animation.
## Acknowledgment
This research was partly made possible by LE220100078 and DP220101823 grants from the Australian Research Council. The statements made herein are solely the responsibility of the authors.
|
2302.08712
|
Quantile LSTM: A Robust LSTM for Anomaly Detection In Time Series Data
|
Anomalies refer to the departure of systems and devices from their normal
behaviour in standard operating conditions. An anomaly in an industrial device
can indicate an upcoming failure, often in the temporal direction. In this
paper, we make two contributions: 1) we estimate conditional quantiles and
consider three different ways to define anomalies based on the estimated
quantiles. 2) we use a new learnable activation function in the popular Long
Short Term Memory networks (LSTM) architecture to model temporal long-range
dependency. In particular, we propose Parametric Elliot Function (PEF) as an
activation function (AF) inside LSTM, which saturates lately compared to
sigmoid and tanh. The proposed algorithms are compared with other well-known
anomaly detection algorithms, such as Isolation Forest (iForest), Elliptic
Envelope, Autoencoder, and modern Deep Learning models such as Deep
Autoencoding Gaussian Mixture Model (DAGMM), Generative Adversarial Networks
(GAN). The algorithms are evaluated in terms of various performance metrics,
such as Precision and Recall. The algorithms have been tested on multiple
industrial time-series datasets such as Yahoo, AWS, GE, and machine sensors. We
have found that the LSTM-based quantile algorithms are very effective and
outperformed the existing algorithms in identifying anomalies.
|
Snehanshu Saha, Jyotirmoy Sarkar, Soma Dhavala, Santonu Sarkar, Preyank Mota
|
2023-02-17T06:03:16Z
|
http://arxiv.org/abs/2302.08712v1
|
# quantile-LSTM: A Robust LSTM for Anomaly Detection in Time Series Data
###### Abstract
Anomalies refer to the departure of systems and devices from their normal behaviour in standard operating conditions. An anomaly in an industrial device can indicate an upcoming failure, often in the temporal direction. In this paper, we make two contributions: 1) we estimate conditional quantiles and consider three different ways to define anomalies based on the estimated quantiles. 2) we use a new learnable activation function in the popular Long Short Term Memory networks (LSTM) architecture to model temporal long-range dependency. In particular, we propose Parametric Elliot Function (PEF) as an activation function (AF) inside LSTM, which saturates lately compared to _sigmoid_ and _tanh_. The proposed algorithms are compared with other well-known anomaly detection algorithms, such as Isolation Forest (iForest), Elliptic Envelope, Autoencoder, and modern Deep Learning models such as Deep Autoencoding Gaussian Mixture Model (DAGMM), Generative Adversarial Networks (GAN). The algorithms are evaluated in terms of various performance metrics, such as Precision and Recall. The algorithms have been tested on multiple industrial time-series datasets such as Yahoo, AWS, GE, and machine sensors. We have found that the LSTM-based quantile algorithms are very effective and outperformed the existing algorithms in identifying anomalies.
## 1 Introduction
Anomalies indicate a departure of a system from its normal behaviour. In Industrial systems, they often lead to failures. By definition, anomalies are rare events. As a result, from a Machine Learning standpoint, collecting and classifying anomalies pose significant challenges. For example, when anomaly detection is posed as a classification problem, it leads to extreme class imbalance (data paucity problem). Though several current approaches use semi-supervised neural network to detect anomalies [11, 19], these approaches still require some labeled data. In the recent past, there have been approaches that attempt to model normal dataset and consider any deviation from the normal as an anomaly. For instance, autoencoder-based family of models [6] use some form of thresholds to detect anomalies. Another class of approaches relied on reconstruction errors [17], as an anomaly score. If the reconstruction error of a datapoint is higher than a threshold, then the datapoint is declared as an anomaly. However, the threshold value can be specific to the domain and the model, and deciding the threshold on the reconstruction error can be cumbersome.
In this paper, we have introduced the notion of _quantiles_ in multiple versions of the LSTM-based anomaly detector. Our proposed approach is principled on:
* training models on a normal dataset
* modeling temporal dependency
* proposing an adaptive solution that does not require manual tuning of the activation
Since our proposed model tries to capture the normal behavior of an industrial device, it does not require any expensive dataset labeling. Our approach also does not require re-tuning of threshold values across multiple domains and datasets. We have exhibited through empirical results later in the paper (see Table 11 of Appendix E ) that the distributional variance does not impact the prediction quality. Our contributions are three folds:
**(1)** Introduction of _quantiles_, free from the assumptions on data distributions, in design of quantile-based LSTM techniques and their application in anomaly identification.
**(2)** Proposal of the _Parameterized Elliot_ as a 'flexible-form, adaptive, learnable' activation function in LSTM, where the parameter is learned from the dataset. Therefore, it does not require any manual retuning when the nature of the dataset changes. We have shown empirically that the modified LSTM architecture with PEF performed better than the Elliot Function (EF) and showed that such behavior might be attributed to the slower saturation rate of PEF.
**(3)** Demonstration of _superior performance_ of the proposed LSTM methods over state-of-the-art (SoTA) deep learning (Autoencoder [24], DAGMM [27], DevNet [14]) and non-deep learning algorithms (iForest [9], Elliptic envelope [15])
The rest of the paper is organized as follows. The proposal and discussion of various LSTM-based algorithms are presented in section 2. Section 3 describes the LSTM structure and introduces the PEF. This section also explains the intuition behind choosing a parameterized version of the AF and better variability due to it. Experimental results are presented in section 4. Section 5 discusses relevant literature in anomaly detection. We conclude the paper in section 6.
## 2 Anomaly detection with Quantile LSTMs
Since _distribution independent_ and _domain independent_ anomaly detection are the two key motivation behind this work, we borrow the concept of quantiles from Descriptive and Inferential Statistics to address this challenge.
### Why Quantile based approach?
Quantiles are used as a robust alternative to classical conditional means in Econometrics and Statistics [8]. In a previous work, Tambwekar et al. [21] extended the notion of conditional quantiles to the binary classification setting, allowing the uncertainty in the predictions to be quantified and providing interpretations of the functions learnt by the models via a new loss called the binary quantile regression loss (sBQC). The estimated quantiles are leveraged to obtain individualized confidence scores that provide an accurate measure of a prediction being misclassified. Since quantiles are a natural choice to quantify uncertainty, they are a natural candidate for anomaly detection. However, to the best of our knowledge, a quantile-based method has not been used for anomaly detection, however natural it seems.
Empirically, if the data being analyzed are not actually distributed according to an assumed distribution, or if there are other potential sources for anomalies that are far removed from the mean, then quantiles may be more useful descriptive statistics than means and other moment-related statistics. Quantiles can be used to identify probabilities of the range of normal data instances such that data lying outside the defined range are conveniently identified as anomalies.
The important aspect of distribution-free anomaly detection is that the anomaly threshold is agnostic to the data from different domains. Simply stated, once a threshold is set (in our case, 10-90), we do not need to tune the threshold in order to detect anomalous instances for different data sets. Quantiles allow distributions to be used for many practical purposes, including constructing confidence intervals. Quantiles divide a probability distribution into areas of equal probability, i.e., they let us quantify the chance that a given parameter lies inside a specified range of values. This allows us to determine the confidence level of an event (anomaly) actually occurring.
Though the mean of a distribution is a useful measure when the distribution is symmetric, there is no guarantee that actual data distributions are symmetric. If potential sources of anomalies are far removed from the mean, then medians are more robust than means, particularly in skewed and heavy-tailed data. It is well known that quantiles minimize the check loss [5], which is a generalized version of the Mean Absolute Error (MAE) arising from medians rather than means. Thus, quantiles are less susceptible to long-tailed distributions and outliers, in comparison to the mean [3].
Therefore, it makes practical sense to investigate the power of quantiles in detecting anomalies in data distributions. Unlike the methods for anomaly detection in the literature, our proposed quantile-based thresholds applied in the quantile-LSTM are generic and not specific to the domain or dataset. The need to isolate anomalies from the underlying distribution is significant since it allows us to detect anomalies irrespective of the assumptions on the underlying data distribution. We have introduced the notion of quantiles in multiple versions of the LSTM-based anomaly detector
in this paper, namely (i) quantile-LSTM (ii) iqr-LSTM and (iii) Median-LSTM. All the LSTM versions are based on estimating the quantiles instead of the mean behaviour of an industrial device. Note, the median is \(50\%\) quantile.
### Various quantile-LSTM Algorithms
Before we discuss quantile-based anomaly detection, we describe the data structure and processing setup, with some notation. Let \(x_{i},i=1,2,\ldots,n\) be the \(n\) time-series training datapoints. Let \(T_{k}=\{x_{i}:i=k,\cdots,k+t\}\) be a set of \(t\) datapoints, and let \(T_{k}\) be split into \(w\) disjoint windows, each of integer size \(m=\frac{t}{w}\), so that \(T_{k}=\{T_{k}^{1},\cdots,T_{k}^{w}\}\). Here, \(T_{k}^{j}=\{x_{k+m(j-1)},...,x_{k+m(j)-1}\}\). In Figure 1, we show the sliding characteristics of the proposed algorithm on a hypothetical dataset, with \(t=9,m=3\). Let \(Q_{\tau}(D)\) be the sample \(\tau\)-quantile of the datapoints in the set \(D\). The training data consists of, for every \(T_{k}\), \(X_{k,\tau}\equiv\{Q_{\tau}(T_{k}^{j})\},j=1,\cdots,w\) as predictors, with \(y_{k,\tau}\equiv Q_{\tau}(T_{k+1})\), the sample quantile at a future time-step, as the label or response. Let \(\hat{y}_{k,\tau}\) be the value predicted by an LSTM model.
A general recipe we are proposing to detect anomalies is to: (i) estimate quantile \(Q_{\tau}(x_{k+t+1})\) with \(\tau\in(0,1)\) and (ii) define a statistic that measures the outlier-ness of the data, given the observation \(x_{k+t+1}\). Instead of using global thresholds, thresholds are adaptive i.e. they change at every time-point depending on quantiles.
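A numpy sketch of this training-data construction (with \(t=9\), \(m=3\) as in Figure 1) is given below; it builds only the quantile features and labels, not the LSTM itself, and the synthetic series is a stand-in for real data.

```python
import numpy as np

def quantile_training_data(x, t=9, w=3, tau=0.1):
    """Predictors X[k] = tau-quantiles of the w windows of T_k;
    label y[k] = tau-quantile of the next (shifted-by-one) period T_{k+1}."""
    m = t // w
    X, y = [], []
    for k in range(len(x) - t - 1):
        windows = x[k:k + t].reshape(w, m)
        X.append(np.quantile(windows, tau, axis=1))     # one value per window
        y.append(np.quantile(x[k + 1:k + 1 + t], tau))  # quantile of the slid period
    return np.array(X), np.array(y)

x = np.random.default_rng(0).normal(size=200)   # stand-in series
X_low, y_low = quantile_training_data(x, tau=0.1)
print(X_low.shape, y_low.shape)                 # (190, 3) (190,)
```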
#### 2.2.1 quantile-LSTM
As the name suggests, in quantile-LSTM, we forecast two quantiles \(q_{low}\) and \(q_{high}\) to detect the anomalies present in a dataset. We assume the next quantile values of the time period after sliding the time period by one position are dependent on the quantile values of the current time period.
It is further expected that the nominal range of the data can be gleaned from \(q_{low}\) and \(q_{high}\). Using these \(q_{low}\) and \(q_{high}\) values of the current time windows, we can forecast the \(q_{low}\) and \(q_{high}\) values of the next time period after sliding by one position. Here, it is required to build two LSTM models, one for \(q_{low}\) (LSTM\(q_{low}\)) and another for \(q_{high}\) (LSTM\(q_{high}\)). Let us take the hypothetical dataset from Figure 1(a) as a training set. It has three time windows in the time period \(x_{1}\cdots x_{9}\). Table 1 defines the three time windows of the time period \(x_{1}\cdots x_{9}\) and the corresponding \(q_{low}\), \(q_{high}\) values for each time window.
The number of inputs to the LSTM depends on the number of time windows \(w\), with a single output. Since three time windows have been considered for a time period in this example, both LSTM models will have three inputs and one output. For example, the LSTM predicting the lower quantile would have \(X_{1,low}\), \(X_{2,low}\), \(X_{3,low}\) as its inputs and \(y_{1,low}\) as its output, for one time period. A total of \(n-t+1\) instances will be available for training the LSTM models, assuming no missing values.
After building the LSTM models, for each time period the model predicts the corresponding quantile value and then slides one position to the next time period on the test dataset. quantile-LSTM applies the following anomaly identification approach: if the observed value \(x_{k+t+1}\) falls outside the predicted \((q_{low},q_{high})\) interval, then the observation is declared an anomaly. For example, the observed value \(x_{10}\) will be detected as an anomaly if \(x_{10}<\hat{y}_{1,low}\) or \(x_{10}>\hat{y}_{1,high}\). Figure 1(a) illustrates the anomaly identification technique of the quantile-LSTM on a hypothetical test dataset.
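Continuing the sketch above, two small LSTMs can be fitted to the low- and high-quantile series and used to flag observations that fall outside the predicted band. The Keras architecture, loss and training settings here are illustrative assumptions and do not include the PEF activation introduced in section 3.

```python
import tensorflow as tf

def fit_quantile_lstm(X, y, epochs=50):
    """One small LSTM regressor per quantile series: w window-quantiles in, 1 value out."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(16, input_shape=(X.shape[1], 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[..., None], y, epochs=epochs, verbose=0)
    return model

# X_low/y_low come from the previous sketch; the high-quantile counterpart uses tau = 0.9
X_high, y_high = quantile_training_data(x, tau=0.9)
lstm_low, lstm_high = fit_quantile_lstm(X_low, y_low), fit_quantile_lstm(X_high, y_high)

def is_anomaly(x_next, feat_low, feat_high):
    """Flag x_{k+t+1} if it falls outside the forecast (q_low, q_high) band."""
    q_low = lstm_low.predict(feat_low[None, :, None], verbose=0)[0, 0]
    q_high = lstm_high.predict(feat_high[None, :, None], verbose=0)[0, 0]
    return x_next < q_low or x_next > q_high
```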
\begin{table}
\begin{tabular}{c c c} \hline TW & \(q_{low}\) & \(q_{high}\) \\ \hline \(x_{1},x_{2},x_{3}\) & \(X_{1,low}\equiv Q_{low}(T_{1}^{1})\) & \(X_{1,high}\equiv Q_{high}(T_{1}^{1})\) \\ \hline \(x_{4},x_{5},x_{6}\) & \(X_{2,low}\equiv Q_{low}(T_{1}^{2})\) & \(X_{2,high}\equiv Q_{high}(T_{1}^{2})\) \\ \hline \(x_{7},x_{8},x_{9}\) & \(X_{3,low}\equiv Q_{low}(T_{1}^{3})\) & \(X_{3,high}\equiv Q_{high}(T_{1}^{3})\) \\ \hline \end{tabular}
\end{table}
Table 1: The first time period and its corresponding time windows
Figure 1: Sliding movement of a time period
#### 2.2.2 Iqr-Lstm
iqr-LSTM is a special case of quantile-LSTM where \(q_{low}\) is the 0.25 and \(q_{high}\) the 0.75 quantile. In addition, another LSTM model predicts the median \(q_{0.5}\) as well. Effectively, at every time index \(k\), three predictions are made: \(\hat{y}_{k,0.25},\hat{y}_{k,0.5},\hat{y}_{k,0.75}\). Based on these, we define the Inter-Quartile Range (IQR) \(\hat{y}_{k,0.75}-\hat{y}_{k,0.25}\). Using the IQR, the following rule identifies an anomaly: the observation is flagged when \(x_{t+k+1}>\hat{y}_{k,0.5}+\alpha(\hat{y}_{k,0.75}-\hat{y}_{k,0.25})\) or \(x_{t+k+1}<\hat{y}_{k,0.5}-\alpha(\hat{y}_{k,0.75}-\hat{y}_{k,0.25})\).
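The iqr-LSTM decision rule itself reduces to a short check once the three quantile forecasts are available; the default \(\alpha=1.5\) below is the conventional Tukey multiplier, not necessarily the value used in the paper.

```python
def iqr_anomaly(x_next, q25, q50, q75, alpha=1.5):
    """Flag x_{t+k+1} when it lies more than alpha * IQR from the forecast median."""
    iqr = q75 - q25
    return x_next > q50 + alpha * iqr or x_next < q50 - alpha * iqr
```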
#### 2.2.3 Median-LSTM
Median-LSTM, unlike quantile-LSTM, does not identify the range of the normal datapoints; rather, based on a single LSTM, distance between the observed value and predicted median (\(x_{t+k+1}-\hat{y}_{k,0.5}\)) is computed, as depicted in Figure 1(b), and running statistics are computed on this derived data stream. The training set preparation is similar to quantile-LSTM.
To detect the anomalies, Median-LSTM uses an implicit adaptive threshold. It is not reasonable to have a single threshold value for the entire time series dataset when the dataset exhibits seasonality and trends. We introduce some notation to make the description concrete. Adopting the same conventions introduced before, define \(d_{k}\equiv x_{t+k+1}-Q_{0.5}(T_{k+1}),k=1,2,\ldots,n-t\) and partition the difference series into \(s\) sets of size \(t\) each, i.e., \(D\equiv\{D_{p}\},p=1,\ldots,s\), where \(D_{p}=\{d_{i}:i=(p-1)t+1,\ldots,pt\}\). After computing the differences on the entire dataset, the mean (\(\mu_{p}\)) and standard deviation (\(\sigma_{p}\)) are computed for each individual window \(D_{p}\). As a result, \(\mu_{p}\) and \(\sigma_{p}\) will differ from one time period to another. Median-LSTM detects the anomalies using the upper and lower threshold parameters of a particular time period \(D_{p}\), which are computed as follows:
\[T_{p,lower}=\mu_{p}-w\sigma_{p};\qquad T_{p,higher}=\mu_{p}+w\sigma_{p}\]
An anomaly can be flagged for \(d_{k}\in D_{p}\) when either \(d_{k}>T_{p,higher}\) or \(d_{k}<T_{p,lower}\). Now, what should be the probable value for \(w\)? If we consider \(w=2\), it means that any datapoint beyond two standard deviations from the mean on either side will be considered an anomaly. This is based on the intuition that the differences of the normal datapoints should be close to the mean value, whereas the anomalous differences will be far from the mean value. Hence 95.45% of datapoints are within two standard deviations of the mean value, and it is natural to consider \(w=2\) since there is a higher probability of the anomalies falling into the remaining 4.55% of datapoints. We could consider \(w=3\) too, where 99.7% of datapoints are within three standard deviations. However, this may miss the borderline anomalies, which are relatively close to the normal datapoints, and only detect the prominent anomalies. Therefore we have used \(w=2\) across the experiments.
Figure 2: The sigmoid function has been applied as a recurrent function, which acts on the outcome of the forget gate (\(f_{t}=\sigma(W_{f}*[h_{t-1},x_{t}]+b_{f})\)) as well as the input gate (\(i_{t}=\sigma(W_{i}*[h_{t-1},x_{t}]+b_{i})\)). PEF decides the information to store in the cell: \(\hat{c_{t}}=PEF(W_{c}*[h_{t-1},x_{t}]+b_{c})\).
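A numpy sketch of this adaptive-threshold step, starting from the difference series between observations and forecast medians (the window size and \(w=2\) follow the text; the input arrays are assumed to be aligned):

```python
import numpy as np

def median_lstm_flags(observed, predicted_median, t=9, w=2.0):
    """Flag anomalies from the difference series using per-window mean +/- w * std."""
    d = observed - predicted_median           # d_k = observation minus forecast median
    flags = np.zeros_like(d, dtype=bool)
    for start in range(0, len(d), t):         # partition D into windows D_p of size t
        window = d[start:start + t]
        mu, sigma = window.mean(), window.std()
        flags[start:start + t] = (window > mu + w * sigma) | (window < mu - w * sigma)
    return flags
```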
### Probability Bound
In this subsection, we analyze different datasets by computing the probability of occurrence of anomalies using the quantile approach. We have considered the 0.1, 0.25, 0.75, 0.9, and 0.95 quantiles and computed the probability of anomalies beyond these values, as shown in Table 10 of the appendix. The multivariate datasets are not considered since every feature may follow a different quantile threshold; hence it is not possible to derive a single quantile threshold for all the features. It is evident from Table 10 of Appendix A that the probability of a datapoint being an anomaly is high if the datapoint's quantile value is either higher than 0.9 or lower than 0.1. However, if we increase the threshold to 0.95, the probability becomes 0 across the datasets. This emphasizes that an overly high quantile threshold does not detect anomalies. It is therefore necessary to identify an appropriate threshold value, and it is apparent from the table that most of the anomalies lie near the 0.9 and 0.1 quantile values. Table 10 also demonstrates the different nature of the anomalies present in the datasets. For instance, the anomalies of Yahoo Dataset\({}_{1}\) to Yahoo Dataset\({}_{6}\) are present near the quantile value 0.9, whereas the anomalies in Yahoo Dataset\({}_{7}\) to Yahoo Dataset\({}_{9}\) are close to both quantile values 0.9 and 0.1. Therefore, it is possible to detect anomalies by two extreme quantile values. We can consider these extreme quantile values as higher and lower quantile thresholds and derive a lemma. We provide a proof in the appendix section.
**Lemma 1:** For an univariate dataset \(\mathcal{D}\), the probability of an anomaly \(\mathcal{P}(\mathcal{A})=\mathcal{P}(\mathcal{E}>\alpha_{high})+\mathcal{P}( \mathcal{F}<\alpha_{low})\), where \(\alpha_{high},\alpha_{low}\) are the higher and lower level quantile thresholds respectively.
The lemma entails the fact that anomalies are trapped outside the high and low quantile threshold values. The bound is independent of data distribution as quantiles assume nominal distributional characteristics.
## 3 LSTM with Parameterized Elliot Activation (PEF)
We introduce the novel parameterized Elliot activation function (PEF), an adaptive variant of the usual Elliot activation, and modify the LSTM architecture by replacing the activation function of the LSTM gates with PEF as follows.
A single LSTM block is composed of four major components: an input gate, a forget gate, an output gate, and a cell state. We have applied the parameterized Elliot Function (PEF) as activation.
### Parameterized Elliot Function PEF
PEF is represented by
\[f(x)=\frac{\alpha x}{1+|x|} \tag{1}\]
with the first-order derivative of PEF given by \(f^{\prime}(x)=\frac{\alpha}{(|x|+1)^{2}}\). At the origin, the function is equal to 0 and the derivative is equal to the parameter \(\alpha\). After the introduction of the PEF, the hidden state equation is \(h_{t}=O_{t}\,PEF(C_{t})=O_{t}\,\alpha_{c}\,Elliot(C_{t})\). By the chain rule,
\[\frac{\partial J}{\partial\alpha_{c}}=\frac{\partial J}{\partial h_{t}}\,O_{t}\,Elliot(C_{t})\]
After each iteration, \(\alpha_{c}\) is updated by gradient descent: \(\alpha_{c}^{(n+1)}=\alpha_{c}^{(n)}+\delta*\frac{\partial J}{\partial\alpha_{c}}\) (see Appendix C for the backpropagation of the LSTM with PEF). Salient features of the PEF are:
1. The \(\alpha\) in equation 1 is learned during the back-propagation like other weight parameters of the LSTM model. Hence, this parameter, which controls the shape of the activation, is learned from data. Thus, if the dataset changes, so does the final form of the activation, which saves the "parameter tuning" effort.
2. The cost of saturation of standard activation functions impedes training and prediction, which is an important barrier to overcome. While the PEF derivative also saturates as \(|x|\) increases, the saturation rate is lower than that of other activation functions, such as \(\tanh\) and \(sigmoid\).
3. PEF further decreases the rate of saturation in comparison to the non-parameterized Elliot function.
To the best of our knowledge, insights on 'learning' the parameters of an activation function are not available in the literature, beyond the standard smoothness or saturation properties that activation functions are supposed to possess. It is, therefore, worthwhile to investigate the possibilities of learning an activation function within a framework or architecture that uses the inherent patterns and variances of the data.
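A standalone numpy sketch of the PEF, its derivatives, and one update of \(\alpha\) in the direction given by the expression above (the upstream gradient is a stand-in value):

```python
import numpy as np

def pef(x, alpha):
    """Parameterized Elliot Function: alpha * x / (1 + |x|)."""
    return alpha * x / (1.0 + np.abs(x))

def pef_grad_x(x, alpha):
    """Derivative with respect to the input: alpha / (1 + |x|)^2."""
    return alpha / (1.0 + np.abs(x)) ** 2

def pef_grad_alpha(x):
    """Derivative with respect to alpha: the plain Elliot function x / (1 + |x|)."""
    return x / (1.0 + np.abs(x))

# One illustrative update of alpha with step size delta, following the text
alpha, delta = 0.1, 0.01
c = np.linspace(-5.0, 5.0, 11)          # stand-in cell states
upstream = np.ones_like(c)              # stand-in for dJ/dh_t * O_t
alpha = alpha + delta * np.sum(upstream * pef_grad_alpha(c))
print(alpha)
```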
\begin{table}
\begin{tabular}{|c|c|c|} \hline Dataset & \(\alpha\) after training & \(\alpha\) initial value \\ \hline AWS Dataset\({}_{1}\) & 1.612 & 0.1 \\ \hline AWS Dataset\({}_{2}\) & 0.895 & 0.1 \\ \hline AWS Dataset\({}_{3}\) & 1.554 & 0.1 \\ \hline AWS DatasetSyn\({}_{1}\) & 1.537 & 0.1 \\ \hline AWS DatasetSyn\({}_{2}\) & 0.680 & 0.1 \\ \hline AWS Dataset\({}_{5}\)\({}_{1}\) & 1.516 & 0.1 \\ \hline Yahoo Dataset\({}_{1}\) & 1.432 & 0.1 \\ \hline Yahoo Dataset\({}_{2}\) & 1.470 & 0.1 \\ \hline Yahoo Dataset\({}_{3}\) & 1.658 & 0.1 \\ \hline Yahoo Dataset\({}_{6}\) & 1.698 & 0.1 \\ \hline Yahoo Dataset\({}_{7}\) & 1.725 & 0.1 \\ \hline Yahoo Dataset\({}_{8}\) & 1.850 & 0.1 \\ \hline Yahoo Dataset\({}_{9}\) & 1.640 & 0.1 \\ \hline \end{tabular}
\end{table}
Table 2: Different \(\alpha\) values for each Dataset after the training.
Figure 3: Slow saturation rate as well as behavioral comparison of the different layers of LSTM model after the introduction of PEF with other activation functions. It also shows the final value of the learned parameter \(\alpha\) on various datasets.
### PEF saturation
The derivative of the PEF can be written as \(f^{\prime}(x)=\frac{\alpha}{x^{2}}EF(x)^{2}=\frac{\alpha}{(1+|x|)^{2}}\). While the derivatives of the sigmoid and tanh depend only on \(x\), the PEF derivative depends on both \(\alpha\) and \(x\). Even if \(\frac{EF^{2}(x)}{x^{2}}\) saturates, the learned parameter \(\alpha\) will help the PEF escape saturation. The derivatives of the sigmoid and tanh saturate when \(x>5\) or \(x<-5\). However, this is not true of PEF, as evident from fig 3(a). As empirical evidence, the layer values for every epoch of the model are captured using various activation functions such as PEF, sigmoid and tanh. It is observed that, after about 10 epochs, the values of the layers become more or less constant for sigmoid and tanh (fig 3(c) and fig 3(d)), indicating their values have already saturated, whereas for PEF, variation can be seen until around 50 epochs (fig 3(b)). This shows that, in comparison to sigmoid and tanh as activation functions, PEF escapes saturation due to its learned parameter \(\alpha\). _The parameter \(\alpha\) in PEF_ changes its value as the model trains over the training dataset while using PEF as the activation function. Since it is a self-training parameter, it returns different values for different datasets at the end of training. These values have been documented in table 2 and plotted in fig 3(e). Table 2 demonstrates the variations in \(\alpha\) values across multiple datasets as these values get updated.
## 4 Experiment
In this section, we have evaluated the performance of the quantile-LSTM techniques on multiple datasets. We have identified multiple baseline methods, such as iForest, Elliptic envelope, Autoencoder and several deep learning based approaches for comparison purposes (See section 5 for more details on baseline methods). 1
Footnote 1: LSTM code: [https://github.com/PreyankM/Quantile-LSTM](https://github.com/PreyankM/Quantile-LSTM)
### Datasets
The dataset properties have been shown in Table 11 of Appendix E. A total of 29 datasets, including real industrial datasets and synthetic datasets, have been considered in the experiments. The industrial datasets include Yahoo webscope 2, AWS cloudwatch 3, and GE. There are a couple of datasets with either one or few anomalies, such as AWS\({}_{1}\) and AWS\({}_{2}\). We have injected anomalies into the AWS, Yahoo, and GE datasets to produce synthetic data for fair comparison purposes. The datasets are univariate, unimodal or bimodal, and mostly follow Weibull, Gamma and Log-normal distributions. The highest anomaly percentage is 1.47 (GE Dataset\({}_{2}\)), whereas AWS Dataset\({}_{2}\) has the lowest percentage of anomalies, i.e. 0.08 (for more details see Table 11 of Appendix E).
Footnote 2: [https://webscope.sandbox.yahoo.com/](https://webscope.sandbox.yahoo.com/)
Footnote 3: [https://github.com/numenta/NAB/tree/master/data](https://github.com/numenta/NAB/tree/master/data)
### Results-Industrial Datasets
Table 3 demonstrates the performance comparison of the various LSTM techniques. Precision and Recall, two performance metrics, are shown in the table. The Median-LSTM has achieved a Recall of 1 on most datasets (10 out of 15). In comparison to existing benchmarks, the LSTM methods are SOTA on most of the datasets in terms of Recall. For comparison purposes, we have first compared the Recall; if the Recall is the same for two different methods, then we have compared the Precision. The method with the higher Recall and Precision is considered the better performer. On the AWS datasets, most of the techniques have achieved the highest Recall apart from DAGMM and DevNet. DevNet needs a minimum of two anomalies; hence it is not applicable to AWS1 and AWS2. However, as per Precision, iqr-LSTM has performed better than the other methods. In the case of GE1, DevNet has produced a better result, whereas the quantile based LSTM techniques have outperformed the others on GE\({}_{2}\). Median-LSTM has demonstrated a better result on Ambient temperature. In the case of the Yahoo datasets, Median-LSTM has achieved the highest Recall on four datasets; however, quantile-LSTM and iqr-LSTM have produced better results on several datasets. For example, Median-LSTM and iqr-LSTM both achieved a Recall of 1 on Yahoo\({}_{1}\); however, if we compare the Precision, iqr-LSTM has shown better results. It is evident from Table 3 that all these LSTM versions perform very well on these industrial datasets. We also compared our method with a recent anomaly detection method based on Graph Neural Networks (GNN) [2]. We observe that GNN has not shown superior performance in comparison to the quantile based techniques. For example, GNN's recall value is lower than the recall value of 1 that the quantile based techniques have produced (on AWS2, AWS3, Yahoo1, Yahoo2, Yahoo9). In terms of precision, GNN produced better results than quantile LSTM only on two datasets, namely Yahoo1 and Yahoo9.
Table 4 shows the comparison with other baseline algorithms on multiple synthetic datasets. As in the previous table, Recall and Precision have been shown as performance metrics. As per these metrics, quantile-based approaches have outperformed iForest and other deep learning based algorithms on 7 out of 13 datasets. If we consider the Precision
alone, the quantile LSTM based techniques have demonstrated better performance on 10 synthetic datasets. There are multiple reasons for the better performance demonstrated by the quantile based LSTM approaches. The first is the efficacy of the LSTM, which is well documented. Median-LSTM has detected the anomalies for each time period utilizing the mean and standard deviation, which has also helped to capture the trend and seasonality. The quantile-LSTMs do not have any predefined threshold, which has improved their performance. Additionally, the flexibility of the parameter \(\alpha\) in determining the shape of the activation helped in isolating the anomalies. This is evident from Fig 3(e), which represents the variation in \(\alpha\) values of the PEF function across the datasets. \(\alpha\) has been initialized to \(1.5\) for all the datasets.
### Results-Non-Industrial Datasets
We have tested our approach on the non-industrial datasets shown in Table 5. Here, Deviation Networks gives NA because it does not work for datasets containing a single anomaly. On analysing the results, we find that the quantile-based technique is better on three of the seven datasets, while the Autoencoder is better on two of the seven datasets.
### Comparison between Elliot Function and PEF
In order to compare the performance of the Elliot function and the parameterized Elliot function (PEF) as activation functions, we used each of them in the LSTM layer of the models and compared the results on multiple datasets. The results are shown in Table 6. According to the data gathered after running the models, we found that the parameterized Elliot function has better Precision and Recall on all except four of the datasets. Thus, we conclude that using the parameterized Elliot function as the activation function gives better performance for quantile-LSTM.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Dataset & Anomaly & \multicolumn{2}{c|}{quantile-LSTM} & \multicolumn{2}{c|}{Autoencoder} & \multicolumn{2}{c|}{GAN} & \multicolumn{2}{c|}{DevNet} & \multicolumn{2}{c|}{iForest} & \multicolumn{2}{c|}{Envelope} \\ \hline & & Precision & Recall & Precision & Recall & Precision & Recall & Precision & Recall & Precision & Recall & Precision & Recall & Precision & Recall \\ \hline TravelTime\({}_{537}\) & 3 & **0.011** & **0.67** & 1 & 0.33 & 0.024 & 0.33 & 0.01 & 0.33 & 0.0039 & 0.6667 & 0.0107 & 0.6667 \\ \hline TravelTime\({}_{537}\) & 1 & 0.006 & 1 & 0 & 0 & **0.016** & **1** & NA & NA & 0.0028 & 1 & 0.0062 & 1 \\ \hline Occupancy\({}_{40005}\) & 1 & **0.03** & **1** & 0 & 0 & 0.007 & 1 & NA & NA & 0.0019 & 1 & 0.0042 & 1 \\ \hline Occupancy\({}_{40013}\) & 2 & **0.06** & **1** & 0.438 & 0.5 & 0.014 & 0.5 & 0.02 & 1 & 0.0038 & 1 & 0.0078 & 1 \\ \hline Speed\({}_{40005}\) & 1 & 0.014 & 1 & **0.103** & **1** & 0.009 & 1 & NA & NA & 0.002 & 1 & 0.0038 & 1 \\ \hline Speed\({}_{5775}\) & 4 & 0.086 & 1 & **0.792** & **1** & 0.2 & 0.9 & 0.16 & 0.75 & 0.0153 & 1 & 0.0247 & 1 \\ \hline Speed\({}_{46013}\) & 2 & 0.053 & 1 & 0.75 & 0.5 & 0.043 & 1 & **0.1** & **1** & 0.0036 & 1 & 0.007 & 1 \\ \hline \end{tabular}
\end{table}
Table 4: Performance comparison of various quantile LSTM techniques on synthetic datasets with other state of the art algorithms.
### Impact of Varying Thresholds
Deep-learning-based algorithms such as Autoencoder [17], GAN [25], DAGMM [28] and DevNet [14] apply upper and lower thresholds on reconstruction errors or predicted values. To understand the impact of different thresholds on performance, we have considered three baseline algorithms: GAN, Autoencoder and DevNet. The baseline methods use three different sets of upper and lower threshold values. The sets are shown in the column heads of Tables 7, 8 and 9, where the first threshold is the upper percentile and the second is the lower percentile. In contrast, q-LSTM is robust against thresholds as datasets vary, i.e. it captures all anomalies successfully within the \(0.1\) and \(0.9\) quantile thresholds.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Dataset & \multicolumn{2}{|c|}{Elliot Function} & \multicolumn{2}{|c|}{Parameterized Elliot Function} \\ \hline & Precision & Recall & Precision & Recall \\ \hline AWS Dataset\({}_{1}\) & 0 & 0 & **0.041** & **1** \\ \hline AWS Dataset\({}_{2}\) & 0.002 & 1 & **0.0042** & **1** \\ \hline AWS Dataset\({}_{3}\) & **0.04** & **1** & 0.0181 & 1 \\ \hline AWS DatasetSyn\({}_{1}\) & 0.02 & 0.73 & **1** & **0.909** \\ \hline AWS DatasetSyn\({}_{2}\) & 0.39 & 0.77 & **0.6875** & **1** \\ \hline AWS DatasetSyn\({}_{3}\) & 0.06 & 0.73 & **1** & **1** \\ \hline Yahoo Dataset\({}_{1}\) & 0.006 & 0.25 & **0.0465** & **1** \\ \hline Yahoo Dataset\({}_{2}\) & **0.02** & **1** & 1 & 0.375 \\ \hline Yahoo Dataset\({}_{3}\) & 0.05 & 1 & **0.088** & **1** \\ \hline Yahoo Dataset\({}_{5}\) & 0.001 & 0.33 & **0.022** & **0.66** \\ \hline Yahoo Dataset\({}_{6}\) & 0.002 & 0.17 & **0.0275** & **1** \\ \hline Yahoo Dataset\({}_{7}\) & 0.03 & 0.09 & **0.066** & **0.54** \\ \hline Yahoo Dataset\({}_{8}\) & **0.017** & **0.4** & 0.028 & 0.3 \\ \hline Yahoo Dataset\({}_{9}\) & **0.43** & **0.75** & 0.0208 & 0.75 \\ \hline Yahoo DatasetSyn\({}_{1}\) & 0.14 & 0.86 & **0.375** & **1** \\ \hline Yahoo DatasetSyn\({}_{2}\) & **0.04** & **0.72** & 1 & 0.611 \\ \hline Yahoo DatasetSyn\({}_{3}\) & 0.1 & 0.78 & **0.6** & **1** \\ \hline Yahoo DatasetSyn\({}_{5}\) & 0.004 & 0.31 & **0.0625** & **0.578** \\ \hline Yahoo DatasetSyn\({}_{6}\) & 0.015 & 0.69 & **0.764** & **0.928** \\ \hline Yahoo DatasetSyn\({}_{7}\) & 0.35 & 0.43 & **0.411** & **0.66** \\ \hline Yahoo DatasetSyn\({}_{8}\) & 0.024 & 0.5 & **0.197** & **0.7** \\ \hline Yahoo DatasetSyn\({}_{9}\) & 0.27 & 0.67 & **1** & **0.94** \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of Precision and Recall score for LSTM with Elliot Function and PEF as Activation Function
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline GAN & \multicolumn{2}{c|}{99.25 and 0.75} & \multicolumn{2}{c|}{99.75 and 0.25} & \multicolumn{2}{c|}{99.9 and 0.1} \\ \hline Dataset & Precision & Recall & Precision & Recall & Precision & Recall \\ \hline Yahoo Dataset\({}_{1}\) & 0.09 & 1 & 0.25 & 1 & 0.5 & 1 \\ \hline Yahoo Dataset\({}_{2}\) & 0.348 & 1 & 0.333 & 0.375 & 0.4 & 0.25 \\ \hline Yahoo Dataset\({}_{3}\) & 0.28 & 0.5 & 0.444 & 0.286 & 0.28 & 0.5 \\ \hline Yahoo Dataset\({}_{5}\) & 0 & 0 & 0.375 & 0.333 & 0.6 & 0.333 \\ \hline Yahoo Dataset\({}_{6}\) & 0.5 & 0.5 & 0.5 & 1 & 0.182 & 1 \\ \hline Yahoo Dataset\({}_{7}\) & 0.154 & 0.364 & 0.3 & 0.273 & 0.5 & 0.182 \\ \hline Yahoo Dataset\({}_{8}\) & 0.038 & 0.1 & 0.1 & 0.1 & 0.25 & 0.1 \\ \hline Yahoo Dataset\({}_{9}\) & 0.192 & 0.625 & 0.5 & 0.625 & 0.5 & 0.25 \\ \hline \end{tabular}
\end{table}
Table 7: Comparison of Precision and Recall score for GAN with varying thresholds for anomaly Upper Bound and Lower Bound
It is evident from the above tables that performance varies significantly based on the thresholds chosen for the algorithm. Therefore, it is very important to choose a threshold that can identify all the probable anomalies in the dataset.
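The contrast between the two thresholding styles can be illustrated with a short sketch. This is not the paper's pipeline (in q-LSTM the quantiles are predicted by the LSTM rather than computed from the raw sample as below), and the function and variable names are placeholders.

```python
# Illustrative contrast between fixed-percentile thresholding (as used by the
# baselines on reconstruction errors or predictions) and 0.1/0.9 quantile
# bounds (the thresholds used by q-LSTM).
import numpy as np

def flag_by_percentiles(errors, upper=99.75, lower=0.25):
    hi, lo = np.percentile(errors, [upper, lower])
    return (errors > hi) | (errors < lo)

def flag_by_quantiles(values, q_low=0.1, q_high=0.9):
    lo, hi = np.quantile(values, [q_low, q_high])
    return (values < lo) | (values > hi)
```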
### Experiments on Normal Instances
A relevant question to ask is: how would the anomaly detection methods perform on normal data that does not contain any anomalies? We investigate this by removing anomalies from some datasets. We observe that on these datasets (AWS\({}_{1}\), AWS\({}_{2}\), AWS\({}_{3}\), Yahoo\({}_{1}\), Yahoo\({}_{2}\), Yahoo\({}_{3}\)), q-LSTM and its variants report very few false alarms (40 on average), while other state-of-the-art methods, such as iForest and Elliptic Envelope, flag far more false positives. Elliptic Envelope reports, on average, 137 false alarms, whereas iForest reports an average of 209 false alarms across the datasets. Autoencoder and GAN report 46 and 123 false alarms on average, respectively, both higher than the false positive count of q-LSTM. This establishes the robustness of the proposed method.
## 5 Related Work
Well-known supervised machine learning approaches such as Linear Support Vector Machines (SVM), Random Forest (RF), and Random Survival Forest (RSF) [23; 22] have been explored for fault diagnosis and the lifetime prediction of industrial systems. [1] have explored SVM and RF to detect intrusions based on anomalies in industrial data. Popular unsupervised approaches, such as Anomaly Detection Forest [20] and K-means based Isolation Forest [7], try to isolate the anomalies from the normal dataset. These methods do not require labeled data. [7] considered K-means based anomaly isolation, but the approach is tightly coupled with a clustering algorithm. Anomaly Detection Forest, like K-means based iForest, requires a training phase with a subsample of the dataset under consideration. A wrong selection of the training subsample can cause too many false alarms. The notion of "likely invariants" uses operational data to identify a set of invariants that characterize the normal behavior of a system, which is similar to our strategy. Such an approach has been attempted to discover anomalies in cloud-based systems [16]. However, such
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Autoencoders & \multicolumn{2}{c|}{99.25 and 0.75} & \multicolumn{2}{c|}{99.75 and 0.25} & \multicolumn{2}{c|}{99.9 and 0.1} \\ \hline Dataset & Precision & Recall & Precision & Recall & Precision & Recall \\ \hline Yahoo Dataset\({}_{1}\) & 0.5 & 0.07 & 0.5 & 0.036 & 0.5 & 0.019 \\ \hline Yahoo Dataset\({}_{2}\) & 0.5 & 0.4 & 0.333 & 0.5 & 0.2 & 0.5 \\ \hline Yahoo Dataset\({}_{3}\) & 0.44 & 0.5 & 0.4 & 0.5 & 0.25 & 0.333 \\ \hline Yahoo Dataset\({}_{5}\) & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ \hline Yahoo Dataset\({}_{6}\) & 0.5 & 1 & 1 & 1 & 0.25 & 1 \\ \hline Yahoo Dataset\({}_{7}\) & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\ \hline Yahoo Dataset\({}_{8}\) & 0.875 & 0.875 & 0.375 & 0.375 & 0.5 & 0.75 \\ \hline Yahoo Dataset\({}_{9}\) & 0.75 & 0.5 & 0.25 & 0.5 & 0.5 & 0.5 \\ \hline \end{tabular}
\end{table}
Table 8: Comparison of Precision and Recall score for Autoencoders with varying thresholds for anomaly Upper Bound and Lower Bound
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Devnet & \multicolumn{2}{c|}{99.25 and 0.75} & \multicolumn{2}{c|}{99.75 and 0.25} & \multicolumn{2}{c|}{99.9 and 0.1} \\ \hline Dataset & Precision & Recall & Precision & Recall & Precision & Recall \\ \hline Yahoo Dataset\({}_{1}\) & 0.002 & 1 & 0.002 & 1 & 0.001 & 1 \\ \hline Yahoo Dataset\({}_{2}\) & 0.005 & 1 & 0.005 & 1 & 0.005 & 1 \\ \hline Yahoo Dataset\({}_{3}\) & 0.0078 & 1 & 0.0078 & 1 & 0.0078 & 1 \\ \hline Yahoo Dataset\({}_{5}\) & 0.111 & 0.5 & 0.333 & 0.5 & 0.333 & 0.5 \\ \hline Yahoo Dataset\({}_{6}\) & 0.167 & 1 & 0.5 & 1 & 0.5 & 0.667 \\ \hline Yahoo Dataset\({}_{7}\) & 0.054 & 0.2 & 0.125 & 0.2 & 0.25 & 0.2 \\ \hline Yahoo Dataset\({}_{8}\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline Yahoo Dataset\({}_{9}\) & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 9: Comparison of Precision and Recall score for Devnet with varying thresholds for anomaly Upper Bound and Lower Bound
an approach requires labeling of data and retuning of parameters when the nature of the datasets varies. Recently, Deep Learning (DL) models based on auto-encoders and long short-term memory [4, 26] have been gaining attention for anomaly detection. [24] have proposed an integrated model of a Convolutional Neural Network (CNN) and an LSTM-based auto-encoder for Yahoo Webscope time-series anomaly detection. For reasons unknown, [24] have used only one Yahoo Webscope dataset to demonstrate their approach's efficacy. The DeepAnT [12] approach employs DL methods and uses unlabeled data for training. However, the approach is meant for time-series datasets such as Yahoo Webscope, Real traffic, and AWS cloudwatch. A stacked LSTM [10] is used for time-series anomaly prediction, where the network is trained on a normal dataset. The Hierarchical Temporal Memory (HTM) method has been applied recently to sequential streamed data and compared with other time-series forecasting models [13]. The authors in [18] have performed online time-series anomaly detection using a deep RNN. The incremental retraining of the neural network allows adaptation to concept drift across multiple datasets. There are various works [11, 19] which attempt to address the data imbalance issue of anomaly datasets, since anomalies are very rare; hence they propose semi-supervised approaches. However, the semi-supervised approach cannot avoid expensive dataset labeling. Some approaches [27] apply predefined thresholds, such as fixed percentile values, to detect the anomalies. However, a fixed threshold value may not be equally effective on datasets from different domains. The Deep Autoencoding Gaussian Mixture Model (DAGMM) is an unsupervised DL-based anomaly detection algorithm [27] that utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Deviation Network (DevNet) [14] is a novel method that harnesses anomaly scoring networks, a Z-score-based deviation loss, and a Gaussian prior together to increase efficiency for anomaly detection.
## 6 Discussion and Conclusion
In this paper, we have proposed multiple quantile-based versions of state-of-the-art anomaly detection algorithms along with a forecasting-based LSTM method. We have demonstrated that combining the quantile technique with LSTM can successfully detect anomalies in industrial and non-industrial datasets without labels being available for training. We have also exploited the parameterized Elliot activation function and shown the anomaly distribution against quantile values, which helps in deciding the quantile anomaly threshold. The design of a flexible-form activation, i.e. PEF, also helps in accommodating variance in unseen data, as the shape of the activation is learned from data. PEF, as seen in Table 6, captures anomalies better than the vanilla Elliot function. The quantile thresholds are generic and do not differ across datasets. The proposed techniques address the data imbalance issue and the expensive labeling of training datasets in anomaly detection. These methods are useful where data is abundant. Traditional deep learning-based methods use classical conditional means and assume normal distributions as the underlying structure of the data. These assumptions make the methods vulnerable when capturing the uncertainty in prediction and incapable of modeling tail behaviors. Quantiles in an LSTM (for time-series data) are a robust alternative that we leveraged to isolate anomalies successfully. This is fortified by the fact that quantiles make very few distributional assumptions. The distribution-agnostic behavior of quantiles turned out to be a useful tool for modeling tail behavior and detecting anomalies. Anomalous instances, by definition, are rare and could be as rare as just one anomaly in the entire dataset. Our method detects such instances (singleton anomalies), while some recent state-of-the-art algorithms such as DAGMM require at least two anomalies to be effective. Extensive experiments on multiple industrial time-series datasets (Yahoo, AWS, GE, machine sensors, Numenta and VLDB Benchmark data) and non-time-series data show evidence of the effectiveness and superior performance of LSTM-based quantile techniques in identifying anomalies. The proposed methods have a few drawbacks: (1) the quantile-based LSTM techniques are applicable only to univariate datasets; (2) a few of the methods, such as quantile-LSTM and iqr-LSTM, depend on multiple thresholds. We intend to introduce the notion of multiple dimensions in our quantile-based approaches to detect anomalies in multivariate time-series data in the future.
|
2310.07630
|
Differentiable Euler Characteristic Transforms for Shape Classification
|
The Euler Characteristic Transform (ECT) has proven to be a powerful
representation, combining geometrical and topological characteristics of shapes
and graphs. However, the ECT was hitherto unable to learn task-specific
representations. We overcome this issue and develop a novel computational layer
that enables learning the ECT in an end-to-end fashion. Our method, the
Differentiable Euler Characteristic Transform (DECT), is fast and
computationally efficient, while exhibiting performance on a par with more
complex models in both graph and point cloud classification tasks. Moreover, we
show that this seemingly simple statistic provides the same topological
expressivity as more complex topological deep learning layers.
|
Ernst Roell, Bastian Rieck
|
2023-10-11T16:23:07Z
|
http://arxiv.org/abs/2310.07630v3
|
# Differentiable Euler Characteristic Transforms for Shape Classification
###### Abstract
The _Euler Characteristic Transform_ (ECT) has proven to be a powerful representation, combining geometrical and topological characteristics of shapes and graphs. However, the ECT was hitherto unable to learn task-specific representations. We overcome this issue and develop a novel computational layer that enables learning the ECT in an end-to-end fashion. Our method DECT is fast and computationally efficient, while exhibiting performance on a par with more complex models in both graph and point cloud classification tasks. Moreover, we show that this seemingly unexpressive statistic still provides the same topological expressivity as more complex topological deep learning layers provide.
## 1 Introduction
Geometrical and topological characteristics play an integral role in the classification of complex shapes. Regardless of whether they are represented as point clouds, meshes (simplicial complexes), or graphs, a multi-scale perspective provided by methods from _topological data analysis_ (TDA) can be applied for classification tasks. Of particular relevance in this context are the _Persistent Homology Transform_ (PHT) and the _Euler Characteristic Transform_ (ECT). Originally introduced by Turner et al. [36], recent work proved under which conditions both transforms are invertible, thus constituting an injective map [7, 13]. Both transforms are based on the idea of looking at a shape from multiple directions, and evaluating a multi-scale topological descriptor for each such direction. For the PHT, this descriptor is _persistent homology_, a method for assigning multi-scale topological features to input data, whereas for the ECT, the descriptor consists of the _Euler characteristic_, an alternating sum of elements of a space. The collection of all these direction-descriptor pairs is then used to provide a classification or solve an optimisation task. This approach is mathematically sound, but evaluating _all_ possible directions is infeasible in practice, thus posing a severe limitation of the applicability of the method.
Our contributions.We overcome the computational limitations and present a _differentiable, end-to-end-trainable Euler Characteristic Transform_. Our method (i) is highly scalable, (ii) affords an integration into deep neural networks (as a layer or loss term), and (iii) exhibits advantageous
performance in different shape classification tasks for various modalities, including graphs, point clouds, and meshes.
## 2 Related Work
We first provide a brief overview of _topological data analysis_ (TDA) before discussing alternative approaches for shape classification. TDA aims to apply tools from algebraic topology to data science questions; this is typically accomplished by computing algebraic invariants that characterise the _connectivity_ of data. The flagship algorithm of TDA is _persistent homology_ (PH), which extracts multi-scale connectivity information about connected components, loops, and voids from point clouds, graphs, and other data types [2, 11]. It is specifically advantageous because of its robustness properties [34], providing a rigorous approach towards analysing high-dimensional data. PH has thus been instrumental for shape analysis and classification, both with kernel-based methods [33] and with deep neural networks [20]. Recent work even showed that despite its seemingly discrete formulation, PH is differentiable under mild conditions [5, 21, 22, 28], thus permitting integrations into standard machine learning workflows. Of particular relevance for shape analysis is the work by Turner et al. [36], which showed that a transformation based on PH provides an injective characterisation of shapes. This transformation, like PH itself, suffers from computational limitations that preclude its application to large-scale data sets. As a seemingly less expressive alternative, Turner et al. [36] thus introduced the _Euler Characteristic Transform_ (ECT), which is highly efficient and has proven its utility in subsequent applications [1, 7, 27, 30]. It turns out that despite its apparent simplicity, the ECT is also injective, thus theoretically providing an efficient way to characterise shapes [13]. A gainful use in the context of deep learning was not attempted so far, however, with the ECT and its variants [24, 26] still being used as _static_ feature descriptors that require domain-specific hyperparameter choices. By contrast, our approach makes the ECT end-to-end trainable, resulting in an efficient and effective shape descriptor that can be integrated into deep learning models. Subsequently, we demonstrate such integrations both on the level of _loss terms_ as well as on the level of _novel computational layers_.
In a machine learning context, the choice of model is typically dictated by the type of data. For _point clouds_, a recent survey [14] outlines a plethora of models for point cloud analysis tasks like classification, many of them being based on learning equivariant functions [41]. When additional structure is being present in the form of graphs or meshes, _graph neural networks_ (GNNs) are typically employed for classification tasks [42], with some methods being capable to either learn _explicitly_ on such higher-order domains [3, 4, 10, 15, 16] or harness their topological features [23, 32].
## 3 Mathematical Background
Prior to discussing our method and its implementation, we provide a self-contained description of the _Euler Characteristic Transform_ (ECT). The ECT often relies on _simplicial complexes_, the central building blocks in algebraic topology, which are extensively used for calculating homology groups and proving a variety of properties of topological spaces. While numerous variants of simplicial complexes exist, we will focus on those that are embedded in \(\mathbb{R}^{n}\). Generally, simplicial complexes are obtained from a set of points, to which higher-order elements--_simplices_--such as
lines, triangles, or tetrahedra, are being added inductively. A \(d\)-simplex \(\sigma\) consists of \(d+1\) vertices, denoted by \(\sigma=(v_{0},\ldots,v_{d})\). A \(d\)-dimensional simplicial complex \(K\) contains simplices up to dimension \(d\) and is characterised by the properties that (i) each face \(\tau\subseteq\sigma\) of a simplex \(\sigma\) in \(K\) is also in \(K\), and (ii) the non-empty intersection of two simplices is a face of both. Simplicial complexes arise 'naturally' when modelling data; for instance, _3D meshes_ are examples of 2-dimensional simplicial complexes, with 0-dimensional simplices being the vertices, the 1-dimensional simplices the edges, and 2-dimensional simplices the faces; likewise, _geometric graphs_, i.e. graphs with additional node coordinates, can be considered 1-dimensional simplicial complexes.
Euler characteristic.Various geometrical or topological properties for characterising simplicial complexes exist. A simple property is the _Euler characteristic_, defined as the alternating sum of the number of simplices in each dimension. For a simplicial complex \(K\), we define the Euler characteristic \(\chi\) as
\[\chi(K)=\sum_{n=0}^{\dim K}(-1)^{n}|K^{n}|, \tag{1}\]
where \(|K^{n}|\) denotes the cardinality of the set of \(n\)-simplices. The Euler characteristic is _invariant_ under homeomorphisms and can be related to other properties of \(K\); for instance, \(\chi(K)\) can be equivalently written as the alternating sum of the _Betti numbers_ of \(K\).
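As a small worked example (not part of the paper's code), Eq. (1) can be evaluated directly from a list of simplices given as vertex tuples:

```python
# Compute the Euler characteristic of a simplicial complex whose simplices are
# given as tuples of vertex ids; the dimension of a simplex is #vertices - 1.
from collections import Counter

def euler_characteristic(simplices) -> int:
    counts = Counter(len(s) - 1 for s in simplices)
    return sum((-1) ** dim * n for dim, n in counts.items())

# A filled triangle has chi = 3 - 3 + 1 = 1.
triangle = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
assert euler_characteristic(triangle) == 1
```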
Filtrations.The Euler characteristic is limited in the sense that it only characterises a simplicial complex \(K\) at a single scale. A multi-scale perspective can be seen to enhance the expressivity of the resulting representations. Specifically, given a simplicial complex \(K\) and a function \(f\colon\mathbb{R}^{n}\to\mathbb{R}\), we
Figure 1: We construct a simplicial complex from an image of the MNIST data set (using a _Delaunay complex_ construction on the non-zero pixels). For each choice of direction on \(S^{1}\), we obtain an _Euler Characteristic Curve_. The collection of all these curves constitutes the _Euler Characteristic Transform_. Existing work typically concatenates all these curves to obtain a static feature vector, whereas our method uses them in a _differentiable fashion_.
obtain a multi-scale view on \(K\) by considering the function \(\tilde{f}\) as the restriction of \(f\) to the \(0\)-simplices of \(K\), and defining \(\tilde{f}(\sigma):=\max_{\tau\subset\sigma}\tilde{f}(\tau)\) for higher-dimensional simplices. With this definition, \(\tilde{f}^{-1}((-\infty,r])\) is either empty or a non-empty simplicial subcomplex of \(K\); moreover, for \(r_{1}\leq r_{2}\), we have \(\tilde{f}^{-1}((-\infty,r_{1}])\subseteq\tilde{f}^{-1}((-\infty,r_{2}])\). A function \(\tilde{f}\) with such properties is known as a _filter function_, and it induces a _filtration_ of \(K\) into a sequence of nested subcomplexes, i.e.
\[\emptyset=K_{0}\subseteq K_{1}\subseteq\cdots\subseteq K_{m-1}\subseteq K_{m}=K. \tag{2}\]
Since the filter function was extended to \(K\) by calculating the maximum, this is also known as the _sublevel set filtration of \(K\) via \(f\).1_ Filter functions can either be learned [22, 23], or they can be defined based on existing geometrical-topological properties of the input data. Calculating invariants alongside this filtration results in substantial improvements of the predictive power of methods. For instance, calculating the homology groups of each \(K_{i}\) leads to _persistent homology_, a shape descriptor for point clouds. However, persistent homology does not exhibit favourable scalability properties, making it hard to gainfully use in practice.
Footnote 1: There is also the related concept of a _superlevel set filtration_, proceeding in the opposite direction. The two filtrations are equivalent in the sense that they have the same expressive power.
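As an illustration of these definitions (assuming simplices are stored as tuples of vertex ids), the extension of a vertex-level filter function by the maximum and the resulting sublevel-set complexes of Eq. (2) can be sketched as:

```python
# Extend a filter function defined on vertices to every simplex by taking the
# maximum over its vertices, then extract the sublevel-set complex K_r.
def extend_filter(vertex_values: dict, simplices):
    return {s: max(vertex_values[v] for v in s) for s in simplices}

def sublevel_complex(filtered: dict, r: float):
    # All simplices whose (extended) filter value is at most r; nested in r.
    return [s for s, val in filtered.items() if val <= r]
```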
## 4 Methods
With the _Euler characteristic_ being insufficiently expressive and _persistent homology_ being infeasible to calculate for large data sets, the _Euler Characteristic Transform_ (ECT), created by Turner et al. [36], aims to strike a balance between the two. Given a simplicial complex \(K\) and a filter function \(f\),2 the central idea of the ECT is to compute the Euler characteristic alongside a filtration, thus obtaining a _curve_ that serves to characterise a shape. If the vertices of \(K\) have coordinates in \(\mathbb{R}^{n}\), the ECT is typically calculated based on a parametric filter function of the form
Footnote 2: For notational simplicity, we drop the tilde from the function definition and assume that \(f\) constitutes a valid filter function as defined above.
\[f\colon S^{n-1}\times\mathbb{R}^{n} \to\mathbb{R} \tag{3}\] \[\xi,x \mapsto\langle x,\xi\rangle\quad,\]
where \(\xi\) is a _direction_ (living on a sphere of appropriate dimensionality), and \(\langle\cdot,\cdot\rangle\) denotes the standard inner product. For a fixed \(\xi\), we write \(f_{\xi}:=f(\xi,\cdot)\). Given \(h\in\mathbb{R}\), also known as the _height_, we obtain a filtration of \(K\) by computing the preimage \(f_{\xi}^{-1}((-\infty,h])\). The ECT is then defined as
\[\text{ECT}\colon S^{n-1}\times\mathbb{R} \to\mathbb{Z}, \tag{4}\] \[\xi,h \mapsto\chi\left(f_{\xi}^{-1}\big{(}(-\infty,h]\big{)}\right)\quad.\]
If \(\xi\) is fixed, we also refer to the resulting curve--which is only defined by a single direction--as the _Euler Characteristic Curve_ (ECC). The ECT is thus the collection of ECCs calculated from different directions. Somewhat surprisingly, it turns out that, given a sufficiently large number of directions \(\xi\) [8], the ECT is _injective_, i.e. it preserves equality [13, 36].
While the injectivity makes the ECT an advantageous shape descriptor, it is currently only used as a static feature descriptor in machine learning applications, relying on a set of pre-defined
directions \(\xi\), such as directions chosen on a grid. We adopt a novel perspective here, showing how to turn the ECT into a differentiable shape descriptor that affords the integration into deep neural networks, either as a layer or as a loss term. Our key idea that permits the ECT to be used in a differentiable setting is the observation that it can be written as
\[\begin{split}\text{ECT}\colon S^{n-1}\times\mathbb{R}&\rightarrow\mathbb{Z}\\ \xi,h&\mapsto\sum_{k=0}^{\dim K}(-1)^{k}\sum_{\sigma_{k}}\mathbb{1}_{[f_{\xi}(x_{\sigma_{k}}),\infty)}(h)\quad,\end{split} \tag{5}\]
where \(\sigma_{k}\) is a \(k\)-dimensional simplex and \(x_{\sigma_{k}}\) is its corresponding feature vector. Eq. (5) rewrites the ECT as an alternating sum of _indicator functions_. To see that this is an equivalent definition, it suffices to note that for the \(0\)-dimensional simplices we indeed get a sum of indicator functions, as the ECT counts how many points are below or above a given hyperplane. This value is also unique, and once a point is included, it will remain included. For the higher-dimensional simplices a similar argument holds. The value of the filter function of a higher-dimensional simplex is fully determined by its vertices, and once such a simplex is included by the increasing filter function, it will remain included. This justifies writing the ECT as a sum of indicator functions.
Differentiability.A large obstacle towards the development of _topological machine learning_ algorithms involves the integration into deep neural networks, with most existing works treating topological information as mere static features. We want our formulation of the ECT to be differentiable with respect to both the _directions_\(\xi\) as well as the _coordinates_ themselves. However, the indicator function used in Eq. (5) constitutes an obstacle to differentiability. To overcome this, we
Figure 2: This figure provides an overview of the ECT in a machine learning setting. (a) We are given a noisy point cloud sampled from a circle (blue dots) and a direction on the unit circle (red dot), and compute the ECC in the direction of the red dot, shown at the bottom of (b). Stacking the curves for all directions yields the image at the top of (b); the red line marks the position of the curve shown below, viewed from the top. (c) The resulting 2D image then serves as the input for a CNN that is used to classify the point cloud.
replace the indicator function with a _sigmoid function_, thus obtaining a smooth approximation to the ECT. Notably, this approximation affords gradient calculations. Using a hyperparameter \(\lambda\), we can control the tightness of the approximation, leading to
\[\begin{split}\text{ECT}\colon S^{n-1}\times\mathbb{R}&\to\mathbb{Z}\\ \xi,h&\mapsto\sum_{k=0}^{\dim K}(-1)^{k}\sum_{\sigma_{k}}S\left(\lambda\left(h-f_{\xi}(x_{\sigma_{k}})\right)\right)\end{split}\quad, \tag{6}\]
where \(S(\cdot)\) denotes the sigmoid function. Each of the summands is differentiable with respect to \(\xi\), \(x_{\sigma_{k}}\), and \(h\), thus resulting in a highly-flexible framework for the ECT. We refer to this variant of the ECT as DECT, i.e. the _Differentiable Euler Characteristic Transform_.
Our novel formulation can be used in different contexts, which we will subsequently analyse in the experimental section. First, Eq. (6) affords a formulation as a shape descriptor layer, thus enabling representation learning on different domains and making a model 'topology-aware.' Second, since Eq. (6) is differentiable with respect to the input coordinates, we can use it to create _loss terms_ and, more generally, optimise point clouds to satisfy certain topological constraints. In contrast to existing works that describe topology-based losses [12, 28, 35, 37], our formulation is highly scalable without requiring subsampling strategies or any form of discretisation in terms of \(\xi\)[30].
Integration into deep neural networks.Next to being differentiable, our novel perspective also lends itself to a better integration into deep neural networks. Traditionally, methods that employ ECTs for classification concatenate the ECCs for different directions into a _single_ vector, which is subsequently used as the input for standard classification algorithms, after having been subjected to dimensionality reduction [1, 24]. However, we find that discarding the directionality information like this results in a loss of crucial information. Moreover, the concatenation of the ECCs requires the dimensionality reduction techniques to be block permutation invariant, as reordering the ECCs should _not_ change the output of the classification. This aspect is ignored in practice, thus losing the interpretability of the resulting representation. By contrast, we aim to make the integration of our variant of the ECT _invariant_ with respect to reordering individual curves. Instead of using a static dimensionality reduction method, we use an MLP to obtain a learnable embedding of individual Euler Characteristic Curves into a high-dimensional space. This embedding is permutation-equivariant by definition. To obtain a permutation-invariant representation, we use a _pooling layer_, similar to the _deep sets_ architecture [41]. Finally, we use a simple classification network based on another MLP. We note that most topological machine learning architectures require a simplicial complex with additional connectivity information to work. This usually requires additional hyperparameters or, in the case of persistent homology, a sequence of simplicial complexes encoding the data at multiple scales. Other deep learning methods, such as deep sets, require a restriction on the number of points in each sample in the dataset. By contrast, our method can _directly_ work with point clouds, exhibiting no restrictions in terms of the number of points in each object nor any restrictions concerning the type of sample connectivity information. Hence, DECT can handle data consisting of a mixture of point clouds, graphs, or meshes _simultaneously_.
Computational efficiency and implementation.While there are already efficient algorithms for the computation of the ECT for certain data modalities, like image and voxel data [39], our method constitutes the first description of a differentiable variant of the ECT in general machine learning settings. Our method is applicable to point clouds, graphs, and meshes. To show that our formulation is computationally efficient, we provide a brief overview on how to implement Eq. (6) in practice:
1. We first calculate the inner product of all coordinates with each of the directions, i.e. with each of the coordinates from \(S^{n-1}\).
2. We extend these inner products to a valid filter function by calculating a _sublevel set filtration_.
3. We translate all indicator functions by the respective filtration value and sample them on a regular grid over \([-1,1]\), the range of possible filtration values after normalisation. This is equivalent to evaluating \(\mathbb{1}_{[f_{\xi}(x_{\sigma_{k}}),1]}\) on the interval \([-1,1]\).
4. Finally, we add all the indicator functions, weighted by \(\pm 1\) depending on the dimension, to obtain the ECT.
All these computations can be _vectorized_ and executed in parallel, making our reformulation highly scalable on a GPU.
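Putting the steps above together, the following is a minimal PyTorch sketch of the smoothed ECT of Eq. (6). It is an illustration rather than the released implementation, and the layout of `simplices_by_dim` (one index tensor of vertex ids per dimension) is our own assumption.

```python
# Minimal differentiable ECT sketch following Eq. (6): smooth indicators via a
# sigmoid, summed over simplices with alternating signs per dimension.
import torch

def dect(x, simplices_by_dim, directions, heights, lam=100.0):
    """x: (n_points, d) coordinates; directions: (n_dir, d) unit vectors;
    heights: (n_steps,) grid over [-1, 1]; simplices_by_dim: list whose k-th
    entry is an (n_k, k+1) LongTensor of vertex indices of the k-simplices."""
    vertex_heights = x @ directions.T                       # (n_points, n_dir)
    ect = torch.zeros(heights.shape[0], directions.shape[0])
    for k, simp in enumerate(simplices_by_dim):
        # Sublevel-set filtration: a simplex appears at the max of its vertices.
        filt = vertex_heights[simp].max(dim=1).values        # (n_k, n_dir)
        # Smooth indicator sigmoid(lam * (h - filtration value)), summed over simplices.
        contrib = torch.sigmoid(lam * (heights[:, None, None] - filt[None])).sum(dim=1)
        ect = ect + (-1) ** k * contrib                      # (n_steps, n_dir)
    return ect
```

For a plain point cloud, `simplices_by_dim` contains only the vertices, e.g. `[torch.arange(len(x)).unsqueeze(1)]`.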
## 5 Experiments
Having described a novel, differentiable variant of the _Euler Characteristic Transform_ (ECT), we conduct a comprehensive suite of experiments to explore and assess its properties. First and foremost, building on the intuition of the ECT being a universal shape descriptor, we are interested in understanding how well ECT-based models perform _across_ different types of data sets, such as point clouds, graphs, and meshes. Moreover, while recent work has proven theoretical bounds on the number of directions required to uniquely classify a shape (i.e. the number of directions required to guarantee injectivity) via the ECT [8], we strive to provide practical insights into how classification accuracy depends on the number of directions used to calculate the ECT. Finally, we also show how to use the ECT to _transform_ point clouds, with the ECT taking on the role of an additional optimisation objective that permits us to adjust point clouds based on a target ECT.
Preprocessing and experimental setup.We preprocess all data sets so that their vertex coordinates have at most unit norm. We also centre vertex coordinates at the origin. This scale normalisation simplifies the calculation of ECTs and enables us to use simpler implementations. Moreover, given the different cardinalities and modalities of the data, we slightly adjust our training procedures accordingly. We split data sets following an 80%/20% train/test split, reserving another 20% of the training data for validation. For the graph classification, we set the maximum number of epochs to 100. We use the ADAM optimiser with a starting learning rate of 0.001. As a loss term, we either use _categorical cross entropy_ for classification or the _mean squared error_ (MSE) for optimising point clouds and directions.
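The coordinate normalisation amounts to a few lines; this is an illustrative sketch, and the variable names are ours:

```python
import torch

def normalise_coordinates(x: torch.Tensor) -> torch.Tensor:
    # Centre the vertex coordinates at the origin, then rescale so that the
    # largest coordinate norm is 1, i.e. all points lie inside the unit ball.
    x = x - x.mean(dim=0, keepdim=True)
    return x / x.norm(dim=1).max().clamp(min=1e-12)
```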
Architectures.We showcase the flexibility of DECT by integrating it into different architectures. Our architectures are kept purposefully _simple_ and do not make use of concepts like attention, batch normalisation, or weight decay. For the synthetic data sets, we add DECT as the first layer of an MLP with 3 hidden layers. For graph classification tasks, we also use DECT as the first
layer, followed by two convolutional layers, and an MLP with 3 hidden layers for classification. By default, we use 16 different directions for the calculation of the ECT and discretise each curve into 16 steps. This results in a 16 \(\times\) 16 'image' for each input sample. When using convolutional layers, our first convolutional layer has 8 channels, followed by a layer with 16 channels, which is subsequently followed by a pooling layer. Our _classification network_ is an MLP with 25 hidden units per layer and 3 layers in total. Since we represent each graph as a 16 \(\times\) 16 image, the number of parameters is always constant in our model, ignoring the variation in the dimension of the nodes across the different datasets. We find that this makes the model highly scalable.
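For concreteness, a hedged sketch of such an ECT+CNN classifier is given below; kernel sizes and the pooling resolution are assumptions, since the description above only fixes the channel counts, the 16 \(\times\) 16 input, and the 3-layer MLP with 25 hidden units.

```python
# Sketch of a small CNN operating on the 16 x 16 ECT "image", followed by an MLP.
import torch
import torch.nn as nn

class ECTCNNClassifier(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                     # pooling layer
        )
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 25), nn.ReLU(),
            nn.Linear(25, 25), nn.ReLU(),
            nn.Linear(25, n_classes),
        )

    def forward(self, ect_image: torch.Tensor) -> torch.Tensor:
        # ect_image: (batch, 1, 16, 16), e.g. the output of a DECT layer.
        return self.mlp(self.conv(ect_image))
```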
### Classifying Synthetic Manifolds Across Different Modalities
As a motivating example, we first showcase the capabilities of DECT to classify synthetically-generated 2-manifolds. To this end, we generate 2-spheres, tori, and Mobius strips. In total, the data set consists of 300 manifolds, distributed equally across the three different classes. We then represent the objects in the form of point clouds (only vertices), graphs (vertices and edges), and meshes (coordinates, edges, and faces). To increase the difficulty of this classification task, we perturb vertex coordinates using a per-coordinate perturbation sampled uniformly from \([0,0.3)\) and a random rotation; this level of perturbation is sufficiently small to prevent major distortions between the classes. Table 1 depicts the results, and we observe that DECT exhibits perfect classification over all three modalities.
\begin{table}
\begin{tabular}{l c} \hline \hline \multicolumn{1}{c}{ECT + MLP} \\ \hline Point cloud & \(1.0\pm 0.0\) \\ Graph & \(1.0\pm 0.0\) \\ Mesh & \(1.0\pm 0.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: DECT can classify three classes of manifolds across three different modalities.
Figure 3: (a): We sample a noisy point cloud from a circle (orange). Blue dots show the directions, i.e. _angles_, used for the ECT (left: initial, right: after training). Our method DECT spreads directions properly over the unit circle, resulting in a perfect matching of the ground truth. (b): DECT also permits us to optimise existing point clouds to match a target ECT in an end-to-end differentiable fashion. Using two point clouds (blue: target; orange: input data), we train DECT with an MSE loss between the learned ECT and the target ECT. Starting from a randomly-initialised point cloud (left), point coordinates are optimised to match the desired shape (right). Notably, this optimisation _only_ involves the ECT, demonstrating its capabilities as a universal shape descriptor.
### Optimising Euler Characteristic Transforms
Following existing topology-based optimisation methods [5, 12, 28], we also employ DECT in this context. In contrast to existing methods, representations learned by DECT lend themselves to better _interpretability_, since one can analyse which directions are used during classification. The collection of all learned directions can provide valuable insights into the complexity of the data, highlighting symmetries.
Learning and visualising directions.We fix a noisy point cloud sampled from a circle, computing the full ECT with respect to a set of directions sampled uniformly from \(S^{1}\). This corresponds to the 'ground truth' ECT. We then initialise our method DECT with a set of directions set to a random point on the unit circle. Using an MSE loss between the ground truth ECT and the ECT used in our model, we may _learn_ appropriate directions. Fig. 3(a) shows the results of the training process. We observe two phenomena: first, due to the symmetry of the ECT, it suffices to only cover half the unit circle in terms of directions; indeed, each vertical slice of the ECT yields an ECC, which can also be obtained by rotation. The same phenomenon occurs, _mutatis mutandis_, when directions are initialised on the other side of the circle: the axis of symmetry runs exactly through the direction closest and furthest from the point cloud, corresponding to the 'maximum' and 'minimum' observed in the sinusoidal wave pattern that is apparent in the ground truth ECT. One may observe that the learned directions are not _precisely_ situated on the unit circle; they are only situated close to it. This is due to our model not using a spherical constraint, i.e. learned directions are just considered to be points in \(\mathbb{R}^{2}\) as opposed to being angles.3 However, the optimisation process still forces the directions to converge to the unit circle, underpinning the fact that our novel layer DECT can in fact learn the ECT of an object even if given more degrees of freedom than strictly required.
Footnote 3: We added spherical constraints for all other classification scenarios unless explicitly mentioned otherwise.
Optimising point clouds.To complement the previous experiment on ECT-based optimisation, we also show how to use DECT to _optimise_ point cloud coordinates to match a desired geometrical-topological descriptor. This type of optimisation can also be seen as an additional _regularisation_ based on topological constraints. In contrast to existing works [28, 35, 37], our method is computationally highly efficient and does _not_ require any additional constructions of simplicial complexes. To showcase the capabilities of DECT as an optimisation objective, we normalise all ECTs, thus ensuring that they operate on the same order of magnitude for an MSE loss.4 Being differentiable, DECT permits us to adjust the coordinate positions of the source point cloud as a function of the MSE loss, computed between the ECT of the model and the ECT of the target point cloud. As Fig. 3(b) demonstrates, our method is capable of adjusting coordinates appropriately. Notably, this also permits us to train with different sample sizes, thus creating _sparse approximations_ to target point clouds. We leave the approximation of structured objects, such as graphs or simplicial complexes, for future work; the higher complexity of such domains necessitates constructions of auxiliary complexes, which need to be separately studied in terms of differentiability.
Footnote 4: This is tantamount to making DECT scale-invariant. We plan on investigating additional invariance and equivariance properties in future work.
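A hedged sketch of this optimisation loop, reusing the `dect` sketch given earlier, is shown below; the optimiser and its settings are assumptions, not the paper's exact configuration.

```python
# Optimise point-cloud coordinates so that their smoothed ECT matches a target
# ECT under an MSE loss; the coordinates themselves are the trainable parameters.
import torch

def fit_point_cloud_to_ect(target_ect, x_init, simplices_by_dim, directions,
                           heights, steps=500, lr=1e-2):
    x = x_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = dect(x, simplices_by_dim, directions, heights)
        loss = torch.nn.functional.mse_loss(pred, target_ect)
        loss.backward()
        opt.step()
    return x.detach()
```

Starting from random coordinates, this loop drives the ECT of the source cloud towards the target ECT, mirroring the behaviour shown in Fig. 3(b).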
### Classifying Geometric Graphs
Moving from point clouds to graphs, we first study the performance of our method on the MNIST-Superpixel data set [9]. This data set, being constructed from image data, has a strong underlying geometric component, which we hypothesise our model should be capable of leveraging. Next to the graph version, we thus also create a meshed variant of the MNIST-Superpixel data set. To this end, we first assign to each pixel a coordinate in \(\mathbb{R}^{2}\) by regularly sampling the unit square. As usual, we set the vertices in the simplicial complex to be the non-zero pixel coordinates. We then add edges and faces by computing a _Delaunay complex_ of the data (the radius of said complex spans the non-zero pixels). The resulting complex captures both the geometry and the topology of the images in the data set. Following this, we classify the data using DECT and other methods, using a CNN architecture for the original data set and an MLP architecture for its meshed version. Interestingly, we found that our method only requires about 20 epochs for training, after which training is stopped automatically, whereas competitor methods use more of the allocated training budget of 100 epochs. Table 2 depicts the results; we find that DECT overall exhibits favourable performance given its smaller footprint. Moreover, using the meshed variant of the data set, we observe performance on a par with competitor methods; the presence of higher-order elements like faces enables DECT to leverage geometrical properties of the data better. Finally, we want to point towards computational considerations. The last column of the table shows the runtimes per epoch; here, DECT outperforms all other approaches by an order of magnitude or more. To put this into perspective, the runtime for MNIST has been the slowest in all our experiments, with most training runs for other experiments only taking about a minute for a _full_ 100 epochs. We report the values from Dwivedi et al. [9], noting that in the survey a single Nvidia 1080Ti (11GB) GPU on a compute cluster was used, whereas our model was trained on the Nvidia GeForce RTX 3070 Ti (8GB) GPU of a commodity laptop. This underlines the utility of DECT as a faster, more efficient classification
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Accuracy & Epoch runtime (s) \\ \hline GAT [38] & 95.54 \(\pm\) 0.21 & 42.26 \\ GCN [25] & 90.71 \(\pm\) 0.22 & 83.41 \\ GIN [40] & 96.49 \(\pm\) 0.14 & 39.22 \\ GraphSage [18] & 97.31 \(\pm\) 0.10 & 113.12 \\ MLP & 95.34 \(\pm\) 0.14 & 22.74 \\ \hline ECT+CNN (ours) & 93.00 \(\pm\) 0.80 & 4.50 \\ ECT+MLP (ours) & 97.20 \(\pm\) 0.10 & 10.80 \\ \hline \hline \end{tabular}
\end{table}
Table 2: A comparison of our method with other methods on the MNIST-Superpixel data set. We report overall accuracy and runtime per epoch, highlighting the fact that even on commodity hardware, our method is an order of magnitude faster than the fastest GNN methods. This yields a favourable trade-off between performance, scalability, and accuracy. Finally, we find that accuracy can be improved by considering a complex constructed from the input images; in this case, our ECT+MLP method is on a par with more complex graph neural networks, but this comes at the cost of increased runtime (due to the fact that faces have to be added to the data). Accuracy values and runtimes of all comparison partners are taken from Dwivedi et al. [9].
method.
We also use a minimal version of DECT to classify point clouds. In contrast to existing work [36], we do not use (simplicial) complexes, but restrict the ECT to _hyperplanes_, essentially merely counting the number of points above or below a given plane for each curve. We then classify shapes from ModelNet 40, sampling either 100 or 1000 points. In the former case, we achieve an accuracy of \(74\pm 0.5\) over 5 runs, while in the latter case our accuracy is \(77.1\pm 0.4\). Given the low complexity and high speed of our model, this is surprisingly close to the performance reported by Zaheer et al. [41], i.e. \(82.0\pm 2.0\) and \(87.0\pm 2.0\), respectively. Moreover, DECT is not restricted to point clouds of a specific size, and we believe that the performance gap could potentially be closed for models with more pronounced topological features and varying cardinalities.
As a final experiment, we show the performance of our DECT when it comes to analysing graphs that contain node coordinates. We use several graph benchmark data sets [29], with Table 3 depicting the results. We observe high predictive performance; our model outperforms existing graph neural networks while requiring a smaller number of parameters. We also show the benefits of substantially increasing the capacity of our model; going to a higher parameter budget yields direct improvements in terms of predictive performance. Interestingly, we observe the highest gains on the 'Letter' data sets, which are subjected to increasingly larger levels of noise. The high performance of our model in this context may point towards better robustness properties; we aim to investigate this in future work. Finally, as Fig. 4 demonstrates, accuracy remains high even when choosing a smaller number of directions for the calculation of the ECT.
## 6 Conclusion and Discussion
We described DECT, the first differentiable framework for _Euler Characteristic Transforms_ (ECTs) and showed how to integrate it into deep learning models. Our method is applicable to different data modalities--including point clouds, graphs, and meshes--and we showed its utility in a variety of learning tasks, comprising both _optimisation_ and _classification_. The primary strength of our method is its _flexibility_; it can handle data sets with mixed modalities, containing objects with varying sizes and shapes--we find that few algorithms exhibit similar aspects. Moreover, our computation lends
Figure 4: Accuracy on ‘Letter-low’ as a function of the number of directions.
\begin{table}
\begin{tabular}{l r r r r r r r} \hline \hline & Params. & BZR & COX2 & DHFR & Letter-low & Letter-med & Letter-high \\ \hline GAT & 5K & \(80.3\pm 2.0\) & \(79.2\pm 2.6\) & \(72.8\pm 3.2\) & \(90.0\pm 2.2\) & \(63.7\pm 6.0\) & \(43.7\pm 4.1\) \\ GCN & 5K & \(80.5\pm 2.4\) & \(\mathbf{79.4\pm 1.8}\) & \(\mathbf{76.7\pm 3.8}\) & \(81.4\pm 1.6\) & \(62.0\pm 2.1\) & \(43.1\pm 1.7\) \\ GIN & 9K & \(81.7\pm 4.9\) & \(77.9\pm 2.4\) & \(64.7\pm 8.3\) & \(85.0\pm 0.6\) & \(67.1\pm 2.5\) & \(50.9\pm 3.5\) \\ \hline ECT+CNN (ours) & 4K & \(\mathbf{81.8\pm 3.2}\) & \(70.4\pm 0.9\) & \(67.9\pm 5.0\) & \(\mathbf{91.5\pm 2.1}\) & \(\mathbf{76.2\pm 4.8}\) & \(\mathbf{63.8\pm 6.0}\) \\ ECT+CNN (ours) & 65K & \(84.3\pm 6.1\) & \(74.6\pm 4.5\) & \(72.9\pm 1.6\) & \(96.8\pm 1.2\) & \(86.3\pm 2.0\) & \(85.4\pm 1.3\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of 5 runs on small graph benchmark data sets. Parameter numbers are approximate because the number of classes differs. The high consistency and performance of our method on the ‘Letter’ data sets is notable.
itself to high scalability and built-in GPU acceleration; as a result, our ECT-based methods train an order of magnitude faster than existing models on the same hardware. We observe that our method exhibits scalability properties that surpass existing _topological machine learning_ algorithms [17, 19]. Thus, being fully differentiable both with respect to the directions used for its calculation and with respect to the input coordinates of a data set, our method extends ECTs to hitherto-unavailable applications.
Future work.We believe that this work paves the path towards new future research directions and variants of the ECT. Along these lines, we first aim to extend this framework to encompass variants like the _Weighted Euler Characteristic Transform_ [24] or the _Smooth Euler Characteristic Transform_ [7]. Second, while our experiments already allude to the use of the ECT to solve inverse problems for point clouds, we would like to analyse to what extent our framework can be used to reconstruct graphs, meshes, or higher-order complexes. Given the recent interest in such techniques due to their characteristic geometrical and topological properties [31], we believe that this will constitute an intriguing research direction. Moreover, from the perspective of machine learning, there are numerous improvements possible. For instance, the ECT in its current form is inherently _equivariant_ with respect to rotations; finding better classification algorithms that respect this structure would thus be of great interest, potentially leveraging spherical CNNs for improved classification [6]. Finally, we aim to improve the representational capabilities of the ECT by extending it to address node-level tasks; in this context, topology-based methods have already exhibited favourable predictive performance at the price of limited scalability [23]. We hope that extensions of DECT may serve to alleviate these issues in the future.
## Reproducibility Statement
The code and configurations are provided for our experiments for reproducibility purposes. All experiments were run on a single GPU to prohibit further randomness and all parameters were logged. Our code will be released under a BSD-3-Clause Licence and can be accessed under [https://github.com/aidos-lab/DECT](https://github.com/aidos-lab/DECT).
|
2303.10274
|
Conformal stereographic projections of sphere quotients are
Majumdar-Papapetrou manifolds
|
In this short note, we compute the conformal stereographic projection on the
standard metric of a sphere quotient. The result is a Majumdar-Papapetrou
metric, which might be useful.
|
Gautier Dietrich
|
2023-03-17T23:04:13Z
|
http://arxiv.org/abs/2303.10274v2
|
# Conformal stereographic projections of sphere quotients are Majumdar-Papapetrou manifolds
###### Abstract.
In this short note, we compute the conformal stereographic projection of a standard sphere quotient metric. The result is a Majumdar-Papapetrou metric, which might be useful.
The author was supported in part by the grant ANR-17-CE40-0034 of the French National Research Agency ANR (project CCEM).
## 2. Computation
The computation of the metric induced by \(\sigma_{p}\) on \(\mathbb{S}^{n}/\Gamma\) goes as follows:
\[G_{p,\Gamma}^{\frac{4}{n-2}}\delta_{\Gamma} =\left(\sum_{\gamma\in\Gamma}\frac{1}{|\cdot-p_{\gamma}|^{n-2}} \right)^{\frac{4}{n-2}}\delta_{\Gamma}\] \[=\left(1+\sum_{\gamma\in\Gamma^{*}}\left(\frac{|\cdot-p|}{|\cdot -p_{\gamma}|}\right)^{n-2}\right)^{\frac{4}{n-2}}G_{p,\{1\}}^{\frac{4}{n-2}} \delta_{\Gamma}.\]
Note that \(G_{p,\{1\}}^{\frac{4}{n-2}}\delta_{\Gamma}=\sigma_{p}^{*}\tilde{\delta}_{\Gamma}\), where \(\tilde{\delta}_{\Gamma}\) is the metric induced by \(\delta_{\mathrm{eucl}}\) on \(H_{p}/\Gamma\). Now, formula (1) gives:
\[\forall q\in\mathbb{S}^{n}-\Gamma p,\quad\frac{|q-p|}{|q-p_{\gamma}|}=\frac{1 }{|p-p_{\gamma}|}\cdot\frac{1}{|\sigma_{p}(q)-\sigma_{p}(p_{\gamma})|}.\]
Consequently,
\[G_{p,\Gamma}^{\frac{4}{n-2}}\delta_{\Gamma} =\left(1+\sum_{\gamma\in\Gamma^{*}}\left(\frac{1}{|p-p_{\gamma}|} \cdot\frac{1}{|\sigma_{p}(\cdot)-\sigma_{p}(p_{\gamma})|}\right)^{n-2}\right) ^{\frac{4}{n-2}}\sigma_{p}^{*}\tilde{\delta}_{\Gamma}\] \[=\left(1+\sum_{\gamma\in\Gamma^{*}}\frac{m_{\gamma}}{d_{\gamma}^ {n-2}}\right)^{\frac{4}{n-2}}\sigma_{p}^{*}\tilde{\delta}_{\Gamma},\]
where \(m_{\gamma}:=\frac{1}{|p-p_{\gamma}|^{n-2}}\) and \(d_{\gamma}:=|\sigma_{p}(\cdot)-\sigma_{p}(p_{\gamma})|\). We recover the mass of the manifold:
\[m_{\delta_{\Gamma}}:=\lim_{p}G_{p,\Gamma}-\frac{1}{|\cdot-p|^{n-2}}=\sum_{ \gamma\in\Gamma^{*}}m_{\gamma}.\]
The covering space of the conformal stereographic projection of \(\mathbb{S}^{n}/\Gamma\) relative to \(p\) is therefore a Majumdar-Papapetrou manifold \((\mathbb{R}^{n},g_{\mathrm{MP},\Gamma})\), where singularities are located at the projection of the orbit of \(p\) under \(\Gamma\).
More generally, given a compact manifold \((M,g)\) and a discrete subgroup \(\Gamma<\mathrm{Isom}(M)\), the stereographic projection of \((M/\Gamma,g_{\Gamma})\) relative to \(p\in M/\Gamma\) satisfies:
\[G_{p,\Gamma}^{\frac{4}{n-2}}g_{\Gamma}=\left(1+\sum_{\gamma\in\Gamma^{*}}G_{p, \{1\}}(p_{\gamma})\cdot\tilde{G}_{p_{\gamma},\{1\}}\right)^{\frac{4}{n-2}} \tilde{g}_{\Gamma}.\]
## Acknowledgments
The author wishes to thank Marc Herzlich for introducing him to the work of Habermann and Jost, and Julien Cortier for pointing him to the paper by Bray and Neves.
|
2302.11243
|
Magnetic Inclination Evolution of Accreting Neutron Stars in
Intermediate/Low-Mass X-ray Binaries
|
The magnetic inclination angle $\chi$, namely the angle between the spin and
magnetic axes of a neutron star (NS), plays a vital role in its observational
characteristics. However, there are few systematic investigations on its
long-term evolution, especially for accreting NSs in binary systems. Applying
the model of \citet{2021MNRAS.505.1775B} and the binary evolution code \mesa{},
we simultaneously simulate the evolution of the accretion rate, spin period,
magnetic field, and magnetic inclination angle of accreting NSs in
intermediate/low X-ray binaries (I/LMXBs). We show that the evolution of $\chi$
depends not only on the initial parameters of the binary systems, but also on
the mass transfer history and the efficiency of pulsar loss. Based on the
calculated results we present the characteristic distribution of $\chi$ for
various types of systems including ultracompact X-ray binaries, binary
millisecond pulsars, and ultraluminous X-ray sources, and discuss their
possible observational implications.
|
Hao-Ran Yang, Xiang-Dong Li
|
2023-02-22T09:47:36Z
|
http://arxiv.org/abs/2302.11243v1
|
# Magnetic Inclination Evolution of Accreting Neutron Stars in Intermediate/Low-Mass X-ray Binaries
###### Abstract
The magnetic inclination angle \(\chi\), namely the angle between the spin and magnetic axes of a neutron star (NS), plays a vital role in its observational characteristics. However, there are few systematic investigations on its long-term evolution, especially for accreting NSs in binary systems. Applying the model of Biryukov & Abolmasov (2021) and the binary evolution code _MESA_, we simultaneously simulate the evolution of the accretion rate, spin period, magnetic field, and magnetic inclination angle of accreting NSs in intermediate/low X-ray binaries (I/LMXBs). We show that the evolution of \(\chi\) depends not only on the initial parameters of the binary systems, but also on the mass transfer history and the efficiency of pulsar loss. Based on the calculated results we present the characteristic distribution of \(\chi\) for various types of systems including ultracompact X-ray binaries, binary millisecond pulsars, and ultraluminous X-ray sources, and discuss their possible observational implications.
accretion, accretion disks - stars: neutron - X-rays: binaries
## 1 Introduction
Neutron stars (NSs) in X-ray binaries accrete both mass and angular momentum from their companion stars. If the NSs are magnetized, the interaction between the magnetic field and the accreting material determines the structure of the magnetosphere and the radiation characteristics. The nature of the interaction depends on whether the NSs are wind-fed or disk-fed, usually corresponding to NSs in high-mass X-ray binaries (HMXBs) and intermediate/low X-ray binaries (I/LMXBs), respectively (Bhattacharya & van den Heuvel, 1991). It is interesting to notice that the torque exerted by the accreting material can simultaneously affect the evolution of the spin period \(P_{\rm s}\) of the NSs (Ghosh & Lamb, 1979; Wang, 1987), the spin inclination angle \(\alpha\), and the magnetic inclination angle \(\chi\)(Wang & Welter, 1981; Leahy, 1990; Bulik et al., 2003; Annala & Poutanen, 2010). Here \(\alpha\) and \(\chi\) are the angle between the spin axis of the NS and the axis of the orbital plane and the angle between the spin and magnetic axes of the NS, respectively. In the following we adopt a dipolar configuration for the NS magnetic field.
The magnetic inclination angle plays a crucial role in the characteristics of the pulsed radiation from accreting X-ray pulsars and from non-accreting radio pulsars in binaries. The latter are generally related to binary millisecond pulsars (BMSPs), which are thought to evolve from I/LMXBs. Although there have been attempts to compare theory with observation, definite conclusions cannot yet be drawn, because reliable data on \(\chi\) are still limited (Lyne & Manchester, 1988; Rankin, 1990; Young et al., 2010; Venter et al., 2009; Johnson et al., 2014; Benli et al., 2021). Meanwhile, the evolution of the magnetic inclination angles depends on the evolution of the accretion rate and the magnetic field of the NS, and on the magnetic field-disk interaction, which are currently not well understood.
Biryukov & Abolmasov (2021, hereafter BA21) recently developed a model to trace the NS's magnetic inclination angle evolution for both disk-fed and wind-fed NSs. They showed that the accretion torque can affect the magnetic inclination angle evolution when both \(\alpha\) and \(\chi\) significantly deviate from zero. As the spin axis of the NS is being aligned with the spin-up torque, the magnetic axis becomes misaligned with the spin axis, which is favorable for detection of pulsed radiation from BMSPs. This work focuses on the magnetic inclination angle evolution for disk-fed NSs in I/LMXBs based on the BA21 model. Here we include some critical factors that were not considered by BA21. The most important one is that BA21 used fixed accretion rates in their calculations for convenience, without considering the influence of binary evolution on the change of accretion rates. Their calculations are therefore limited to \(10^{5}-10^{7}\) yr of evolution. In realistic situations, the accretion history is much more complicated, depending on the initial conditions of both the NSs and the donors, the mass and angular momentum transfer between the components, and the accretion disk physics (see below). The mass transfer lifetimes in I/LMXBs vary from \(\sim 10^{8}\) yr to \(10^{10}\) yr, so the final magnetic inclination
angles could significantly deviate from those in short-time evolution.
The rest of the paper is organized as follows. In Section 2, we review and slightly modify the BA21 model, and then simulate a grid of I/LMXBs with different initial parameters using the binary evolution code _MESA_ (Paxton et al., 2011, 2013, 2015, 2018, 2019). We select five representative systems, corresponding to ultracompact X-ray binaries (UCXBs), traditional LMXBs, IMXBs, and ultraluminous X-ray binaries (ULXs), and calculate their evolution. The results are presented in Section 3. In Section 4, we discuss the possible effects of the adopted parameters and make predictions for future observational tests. Finally, the conclusions are given in Section 5.
## 2 Method
### The evolution of an accreting neutron star
In the BA21 approach, the evolution of an accreting NS can be described by a set of differential equations:
\[I\dot{\Omega} = n_{1}\cos\alpha+n_{2}+n_{3}(1+\sin^{2}\chi)-\dot{I}\Omega, \tag{1}\]
\[I\Omega\dot{\alpha} = -n_{1}\sin\alpha, \tag{2}\]
\[I\Omega\dot{\chi} = \eta\,A(\eta,\alpha,\chi)\,n_{1}\sin^{2}\alpha\cos\alpha\sin\chi\cos\chi+n_{3}\sin\chi\cos\chi, \tag{3}\]
where \(I\) and \(\Omega\) are the moment of inertia and angular velocity of the NS, \(n_{1}\), \(n_{2}\) and \(n_{3}\) represent the averaged torques acting on the NS caused by accretion, magnetic braking due to magnetic field-disk interaction, and pulsar's radiation loss, respectively. The coefficient \(0<\eta\leq 1\) is a constant to describe the accretion torque modulation within the spin period, which is set to unity in accordance with BA21, and \(A(\eta,\alpha,\chi)\) is a normalization function,
\[A(\eta,\alpha,\chi)=\left[1-\frac{\eta}{2}(\sin^{2}\chi\sin^{2}\alpha+2\cos^{ 2}\chi\cos^{2}\alpha)\right]^{-1}. \tag{4}\]
The torques acting on the NS can be written as
\[n_{1} = \dot{M}_{\rm acc}\,(GM_{*}r_{\rm in})^{1/2},\;\mbox{if $r_{\rm in}<r_{\rm co}$}; \tag{5}\]
\[n_{2} = -\frac{\mu^{2}}{3r_{\rm co}^{3}},\;\mbox{if $r_{\rm in}<r_{\rm lc}$}; \tag{6}\]
\[n_{3} = -\frac{\mu^{2}}{r_{\rm lc}^{3}}. \tag{7}\]
Here \(G\) is the gravitational constant, \(M_{*}\), \(\mu\) and \(\dot{M}_{\rm acc}\) are the mass, magnetic moment, and mass accretion rate of the NS, respectively. We assume that the accretion rate is the mass transfer rate \(\dot{M}\) limited by the Eddington limit accretion rate, that is
\[\dot{M}_{\rm acc}=\min(\dot{M},\dot{M}_{\rm Edd}), \tag{8}\]
where
\[\dot{M}_{\rm Edd}=1.43\times 10^{18}M_{*,\odot}\rm g\,s^{-1}, \tag{9}\]
and \(M_{*,\odot}=M_{*}/\rm M_{\odot}\).
In Eqs.(5)-(7), \(r_{\rm in}\), \(r_{\rm co}\) and \(r_{\rm lc}\) are the inner disk radius (or the magnetospheric radius), corotation radius, and light cylinder radius, respectively,
\[r_{\rm in} = \xi\left(\frac{\mu^{4}}{2GM_{*}\dot{M}^{2}}\right)^{1/7}\simeq 7.5\times 10^{7}\mu_{30}^{4/7}M_{*,\odot}^{1/7}(\dot{M}/\dot{M}_{\rm Edd,\odot})^{-2/7}\;{\rm cm}, \tag{10}\]
\[r_{\rm co} = \left(\frac{GM_{*}}{\Omega^{2}}\right)^{1/3}\simeq 1.5\times 10^{8}M_{*,\odot}^{1/3}P_{\rm s,1}^{2/3}\;{\rm cm}, \tag{11}\]
\[r_{\rm lc} = \frac{c}{\Omega}\simeq 4.8\times 10^{9}P_{\rm s,1}\;{\rm cm}. \tag{12}\]
where \(\xi\sim 0.5\) is a correction coefficient depending on the detailed structure of the disk (Ghosh & Lamb, 1979; Long et al., 2005; Bessolaz et al., 2008), and \(\mu_{30}=\mu/10^{30}\,{\rm G\,cm^{3}}\) and \(P_{\rm s,1}=P_{\rm s}/1\,{\rm s}\) are the normalized magnetic dipole moment and spin period of the NS, respectively. Equation (10) is derived by assuming that the total matter (ram and gas) pressure is balanced by the total magnetic pressure at \(r_{\rm in}\), which is generally consistent with axisymmetric and global 3D MHD simulations (Kulkarni & Romanova, 2013). Since there is no specific restriction on the geometry of the stellar magnetic field in this formula (Romanova & Owocki, 2015), it has also been adopted in more complicated situations, including those that are non-stationary and with tilted magnetic and rotational axes (e.g., Romanova et al., 2021).
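As an illustrative numerical sketch (not the code used in this work; the constants and example inputs below are assumptions for demonstration), Eqs. (10)-(12) can be evaluated directly in cgs units:

```python
import numpy as np

G = 6.674e-8        # gravitational constant [cgs]
C_LIGHT = 2.998e10  # speed of light [cm/s]
M_SUN = 1.989e33    # solar mass [g]

def characteristic_radii(mu, m_ns, mdot, p_spin, xi=0.5):
    """Inner (magnetospheric), corotation and light-cylinder radii, Eqs. (10)-(12).

    mu: magnetic moment [G cm^3]; m_ns: NS mass [g];
    mdot: accretion rate [g/s]; p_spin: spin period [s].
    """
    omega = 2.0 * np.pi / p_spin
    r_in = xi * (mu**4 / (2.0 * G * m_ns * mdot**2))**(1.0 / 7.0)
    r_co = (G * m_ns / omega**2)**(1.0 / 3.0)
    r_lc = C_LIGHT / omega
    return r_in, r_co, r_lc

# Example with assumed values: mu_30 = 1, M = 1.4 Msun, Mdot = Mdot_Edd, P = 5 s
print(characteristic_radii(1e30, 1.4 * M_SUN, 1.43e18 * 1.4, 5.0))
```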
We also consider the accretion-induced field decay in the following form (Shibazaki et al., 1989; Zhang & Kojima, 2006; Liu & Li, 2019),
\[\mu=\mu_{\rm min}+\mu_{0}\left(1+\frac{\Delta M_{*}}{10^{-5}\rm M_{\odot}} \right)^{-1}. \tag{13}\]
where \(\mu_{0}\), \(\mu_{\rm min}\), and \(\Delta M_{*}\) are the initial magnetic moment, the bottom magnetic moment, and the amount of matter accreted by the NS, respectively. We set \(\mu_{\rm min}=10^{26}\rm\,G\rm\,cm^{3}\), to be comparable with the weakest magnetic fields of pulsars.
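A minimal sketch of how Eqs. (1)-(7) and (13) could be assembled into a right-hand side for numerical integration is given below; it assumes cgs units and treats \(\dot{I}\) as an optional input, and is meant only to illustrate the structure of the model, not to reproduce the actual code used in this work:

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def torques(mu, m_ns, mdot_acc, r_in, r_co, r_lc):
    """Accretion, magnetic-braking and pulsar-loss torques, Eqs. (5)-(7)."""
    n1 = mdot_acc * np.sqrt(G * m_ns * r_in) if r_in < r_co else 0.0
    n2 = -mu**2 / (3.0 * r_co**3) if r_in < r_lc else 0.0
    n3 = -mu**2 / r_lc**3
    return n1, n2, n3

def field_decay(mu0, mu_min, delta_m_msun):
    """Accretion-induced field decay, Eq. (13); delta_m_msun in units of Msun."""
    return mu_min + mu0 / (1.0 + delta_m_msun / 1e-5)

def ba21_rhs(omega, alpha, chi, n1, n2, n3, I, I_dot=0.0, eta=1.0):
    """Time derivatives of (Omega, alpha, chi) following Eqs. (1)-(4)."""
    A = 1.0 / (1.0 - 0.5 * eta * (np.sin(chi)**2 * np.sin(alpha)**2
                                  + 2.0 * np.cos(chi)**2 * np.cos(alpha)**2))
    omega_dot = (n1 * np.cos(alpha) + n2 + n3 * (1.0 + np.sin(chi)**2)
                 - I_dot * omega) / I
    alpha_dot = -n1 * np.sin(alpha) / (I * omega)
    chi_dot = (eta * A * n1 * np.sin(alpha)**2 * np.cos(alpha)
               * np.sin(chi) * np.cos(chi)
               + n3 * np.sin(chi) * np.cos(chi)) / (I * omega)
    return omega_dot, alpha_dot, chi_dot
```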
### Evolution of I/LMXBs
We follow the long-term binary evolution using the stellar evolution code _MESA_ (version number 15140). The NS is taken to be a point mass whose initial mass is set to \(1.4\,\rm M_{\odot}\), and the metallicity of the donor is \(Z=0.02\). We evolved a large number of incipient NS I/LMXBs with the donor mass varying from 0.2 M\({}_{\odot}\) to 6 M\({}_{\odot}\) in steps of 0.2 M\({}_{\odot}\), and the logarithm of the orbital period (in days) ranging from \(-1\) to 3 in steps of 0.2. The Ritter (1988) scheme was used to compute the mass transfer rates via Roche-lobe overflow (RLOF). Besides, we considered magnetic braking and gravitational wave radiation for angular momentum loss from the binary.

Table 1: Parameters of the selected models in this paper. \(M_{\rm 2,i}\) and \(P_{\rm orb,i}\) represent the initial donor mass and orbital period, respectively. The initial mass of the NS is set to \(1.4\,\rm M_{\odot}\) for all models.

| Model | A | B | C | D | E |
| --- | --- | --- | --- | --- | --- |
| \(M_{\rm 2,i}\) (\(\rm M_{\odot}\)) | 1 | 1 | 3.4 | 3 | 5 |
| \(P_{\rm orb,i}\) (d) | 1 | 10 | 1.58 | 100 | 1 |
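As a small illustration of the parameter grid described above (a sketch only; the actual _MESA_ inlists are not shown here), the initial combinations can be generated as:

```python
import numpy as np

# Donor masses 0.2-6 Msun in steps of 0.2; log10(P_orb/d) from -1 to 3 in steps of 0.2
donor_masses = np.arange(0.2, 6.0 + 1e-6, 0.2)   # 30 values [Msun]
log_p_orb = np.arange(-1.0, 3.0 + 1e-6, 0.2)     # 21 values
grid = [(round(m2, 1), 10.0**lp) for m2 in donor_masses for lp in log_p_orb]
print(len(grid))  # 630 incipient binaries
```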
Figure 1 illustrates the distribution of I/LMXBs in the initial donor mass-orbital period plane. In the left panel, the color of each element represents the maximum mass transfer rate \(\dot{M}_{2}\), the magnitude of which is indicated by the color bar on the top of the figure. If the maximum mass transfer rate is larger than \(10^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\), we regard the mass transfer to be dynamically unstable, followed by common envelope (CE) evolution; if the maximum mass transfer rate is smaller than \(10^{-15}\) M\({}_{\odot}\) yr\({}^{-1}\) we regard that no mass transfer via RLOF has occurred. Systems in between are confined by the green lines, and identified to experience stable mass transfer. They are plotted in the right panel with the color denoting the magnitude of their final orbital periods \(P_{\rm orb,f}\), also indicated by the top color bar. The white stars labeled A-E in both panels are representative systems used in our following calculations whose initial parameters are listed in Table 1. The five systems have distinct evolutionary paths, three of which (A, B and C) are located in the parameter space of stable mass transfer and the other two (i.e. D and E) are outside the parameter space.
Systems A and B are LMXBs with the same initial donor mass (1 M\({}_{\odot}\)) but different initial orbital periods. They will follow different evolutionary paths. In system A, because of its relatively short orbital period (1 d), mass transfer is driven by orbital angular momentum loss caused by magnetic braking and gravitational wave radiation. It starts early and lasts \(\sim 10^{10}\) yr, and the donor always remains on the main sequence. The binary will finally evolve into a UCXB (and probably a black widow/redback binary). The initial orbital period of system B is 10 d, longer than the bifurcation period which separates the converging binary systems from the diverging binary systems (Pylyser & Savonije, 1988, 1989). Its mass transfer is driven by nuclear evolution of the donor, which will finally evolve into a helium white dwarf (He WD). The duration (\(\sim 10^{8}\) yr) of the mass transfer is significantly shorter than in system A. After the mass transfer ceases, the final evolutionary outcome is a BMSP.
Systems C, D and E start evolution as IMXBs and can appear as ULXs. Among them, system C experiences thermal-timescale mass transfer at first, and then evolves to be an LMXB after the mass ratio reverses. The donor eventually becomes a hybrid Carbon-Oxygen white dwarf (CO WD). The remaining two systems (D and E) are subject to delayed dynamically unstable mass transfer because the initial orbital period is too long (100 d) or the donor is too massive (5 M\({}_{\odot}\)).
Figure 1: Distributions of I/LMXBs in the \(M_{2,{\rm i}}-P_{\rm orb,{\rm i}}\) plane, colored by the magnitude of the maximum mass transfer rate \(\dot{M}_{2}\) (left panel) and the final orbital period \(P_{\rm orb,f}\) (right panel). The green lines in the left panel confine the parameter space that have stable mass transfer. The white stars in both panels are the selected systems used in our calculations, and their parameters are listed in Table 1.
The mass transfer rates rise rapidly to exceed the Eddington limit accretion rate, and they will enter CE evolution, probably leading to merger of both components.
Knowing the mass transfer history, we can follow the evolution of the spin period and the magnetic inclination angle for given initial parameters.
Figure 2: Evolution of different parameters for systems A-E (from left to right). The rows from top to bottom show the change of the mass transfer rates, the three radii (\(r_{\rm in}\), \(r_{\rm co}\) and \(r_{\rm lc}\)), the three torques, the spin inclination angle \(\alpha\), the magnetic inclination angle \(\chi\), the spin period \(P_{\rm s}\), and the magnetic moment \(\mu_{30}\) with time.
It is potentially possible to predict the distribution of the magnetic inclination angles of the NSs in different evolutionary stages.
## 3 Results
Figure 2 shows the calculated results (in solid lines) for the five systems. The initial parameters at the beginning of mass transfer are taken to be in accordance with BA21 for the convenience of comparison: the magnetic inclination angle \(\chi_{0}=60^{\circ}\), the spin inclination angle \(\alpha_{0}=45^{\circ}\) and \(135^{\circ}\) (indicated by the blue and orange lines, respectively), the spin period\({}^{1}\) \(P_{\rm s,0}=5\) s, and the normalized magnetic dipole moment \(\mu_{30,0}=\mu_{0}/10^{30}\,{\rm G\,cm^{3}}=1\). The panels from top to bottom in Figure 2 demonstrate the evolution of the mass transfer rate, the three radii (\(r_{\rm in}\), \(r_{\rm co}\) and \(r_{\rm lc}\)), the three torques (\(n_{1}\), \(n_{2}\) and \(n_{3}\)), the spin inclination angle \(\alpha\), the magnetic inclination angle \(\chi\), the spin period \(P_{\rm s}\), and the magnetic moment \(\mu_{30}\), respectively. The five columns from left to right correspond to systems A to E, respectively.
Footnote 1: Before the mass transfer occurs, the NS is slowed down to a relatively long spin period due to magnetic dipole radiation. We have performed calculations with \(P_{\rm S,0}\) ranging from 10 ms to 10 s, and found that the results are insensitive to the value of \(P_{\rm S,0}\).
We can easily find that the evolution of \(\chi\) with different \(\alpha_{0}\) shows a similar tendency except in the beginning phase of the mass transfer, which is related to the first term in Eq. (3) (\(\propto\sin^{2}\alpha\cos\alpha\)). Moreover, the evolution of \(\chi\) is similar for all systems except system A. During the first \(10^{4}-10^{5}\,{\rm yr}\) of mass transfer, the mass transfer rates \(\dot{M}\) are relatively low and there is no or very little accretion, thus little change occurs in both \(\alpha\) and \(\chi\). The NS is spun down mainly by the magnetic braking torque \(n_{2}\). Then, with the decrease in \(\Omega\) and increase in \(\dot{M}\), \(n_{2}\) declines and \(n_{1}\) increases accordingly. Therefore, \(\alpha\) starts evolving toward \(0^{\circ}\) rapidly (within a few \(10^{4}-10^{6}\) yr). During this period, \(\chi\) increases to \(\sim 68^{\circ}\) (in the case of \(\alpha_{0}=135^{\circ}\), \(\chi\) decreases first because \(\cos\alpha<0\)), and remains nearly unchanged after \(\alpha\to 0^{\circ}\). At the same time, the accreting NS is spun up by the accretion torque \(n_{1}\). Our calculated evolution of \(\chi\) in these circumstances is in general concordance with the results in BA21. However, we note that the long-term spin evolution of the NS in systems B-E is different despite the similarity of the \(\chi\) evolution. The spin periods \(P_{\rm s}\) in systems B and C evolve to milliseconds eventually after the accretion ends due to their relatively long accretion time and low magnetic moments (\(\sim\mu_{\rm min}\)). The latter reflects the effective accretion-induced field decay that occurred in the NS. For the other two systems (D and E), the shorter mass transfer time leads to a less decayed magnetic moment and a longer \(P_{\rm s}\). The mass transfer rates in the late stage exceed the Eddington limit by a few orders of magnitude, so these systems would behave as ULXs.
As for system A, the accretion timescale is so long (up to the Hubble time) that the evolution of \(\chi\) enters another stage. From Eq. (3) we know that the evolution of \(\chi\) depends on the accretion torque \(n_{1}\) and the pulsar loss torque \(n_{3}\). After \(\alpha\) evolves to zero, the change in \(\chi\) is completely controlled by \(n_{3}\). Although its magnitude is relatively small, as the mass transfer proceeds for a sufficiently long time, \(n_{3}\) drives \(\chi\) to decrease to \(\sim 55^{\circ}\).
To explore the influence of the magnetic moment, we set the initial parameters to be \(\alpha_{0}=45^{\circ}\), \(\chi_{0}=60^{\circ}\) and \(P_{0}=5\) s, and change \(\mu_{30,0}\) from 0.01 to 100. This range roughly covers the magnetic field strengths of weakly magnetized NSs to magnetars. Figure 3 shows the evolution of the three radii (\(r_{\rm in}\), \(r_{\rm co}\) and \(r_{\rm lc}\)) and the magnitude of the three torques (\(n_{1}\), \(n_{2}\) and \(n_{3}\)) with different \(\mu_{30,0}\), and Figure 4 shows the corresponding evolution of \(\chi\), \(\alpha\), \(P_{\rm s}\) and \(\mu_{30}\).
We first examine the evolution of system A, which is demonstrated in the first column of Figures 3 and 4. When \(\mu_{30,0}\geq 1\), \(r_{\rm in}\) is comparable with \(r_{\rm co}\) at the beginning of the mass transfer, and the NS experiences an episode of spin-down. The longest period that the NS can reach increases with \(\mu_{30,0}\). After that the NS enters the accretor phase and the accretion torque \(n_{1}\) begins to work. We note that the larger \(\mu_{30,0}\) is, the earlier the accretor phase starts, and \(\alpha\) and \(\chi\) begin to decrease and increase earlier, respectively. After about \(10^{6}\,{\rm yr}\), the mass transfer rate gets higher and accretion causes \(\mu_{30}\) to decrease significantly, leading to a smaller \(r_{\rm in}\) and a stronger \(n_{1}\). The NS starts spinning up rapidly. Accordingly, both \(r_{\rm co}\) and \(r_{\rm lc}\) become smaller, resulting in more effective spin-down torques (\(n_{2}\) and \(n_{3}\)) to balance \(n_{1}\), which not only sets the spin period to be around the equilibrium spin period\({}^{2}\), but also causes \(\chi\) to decrease. Because the magnitude of \(n_{3}\) increases with \(\mu_{30,0}\), the larger \(\mu_{30,0}\), the smaller the final \(\chi\). On the other hand, if the initial magnetic field is relatively weak (\(\mu_{30,0}<1\)), the NS directly enters the accretor phase at the beginning of the mass transfer. Smaller magnetic moments lead to less change in \(\chi\). A similar tendency can be found in systems B-E, except for the case of \(\mu_{30,0}=100\), in which the initial inner disk radius is beyond the light cylinder radius, and the NS enters the radio pulsar phase at the beginning. This causes the magnetic inclination angle \(\chi\) to decrease for \(\sim 10^{4}-10^{5}\,{\rm yr}\) before accretion begins. After that, \(\chi\) increases and then settles down. If the mass transfer lasts for a sufficiently long time, \(\chi\) will finally decrease. The early radio pulsar episode leads to the smallest \(\chi\) after accretion ends in the five systems.
Footnote 2: After the NS reaches the equilibrium period, \(r_{\rm in}\) alternates between \(\lesssim r_{\rm co}\) and \(\gtrsim r_{\rm co}\), and the accretion torque \(n_{1}\) correspondingly switches between zero and non-zero values according to Eq. (5). In Figure 3 we only show the non-zero values of \(n_{1}\) for clarity.
We then calculate the evolution for the five systems with \(\alpha_{0}\) and \(\chi_{0}\) randomly distributed between 0\({}^{\circ}\) and 180\({}^{\circ}\). Figure 5 shows the variations of the magnetic inclination angles \(\Delta\chi\) in the \(\chi_{0}-\alpha_{0}\) plane. Each panel corresponds to a system with a designated initial magnetic dipole moment \(\mu_{30,0}\). We also depict the probability distribution of the final magnetic inclination angles \(\chi_{\rm f}\) with the green histogram in each panel (to be discussed below). The up-down and left-right mirror symmetries are apparent, owing to the spherical symmetry of the NS. As mentioned above, in most cases the magnetic axis tends to misalign with the spin axis of the NSs, except for system A and for some cases in other systems with \(\mu_{30,0}\geq 100\), in which the magnetic axis tends to align with the spin axis.
## 4 Discussion
### The effect of transient accretion
In the last section we inferred the NS accretion rate from the mass transfer rate in I/LMXBs to trace the evolution of the NS.
Figure 3: Evolution of the three radii and the three torques with different initial magnetic moments. The other initial parameters are \(\alpha_{0}=45^{\circ}\), \(\chi_{0}=60^{\circ}\), and \(P_{0}=5\)s.
However, it is well known that most LMXBs are transients, with rapid accretion during short outbursts separated by long quiescence. The origin of the transient behavior is likely related to the thermal and viscous instability, which occurs when the mass transfer rate \(\dot{M}\) is below a critical value (Lasota, 2001),
\[\dot{M}_{\rm cr} \simeq 3.2\times 10^{-9}\left(\frac{M_{*}}{1.4\mathrm{M}_{\odot}} \right)^{0.5}\left(\frac{M_{\rm d}}{1.0\mathrm{M}_{\odot}}\right)^{-0.2}\] \[\times\left(\frac{P_{\rm orb}}{1.0\mathrm{d}}\right)^{1.4} \mathrm{M}_{\odot}\,\mathrm{yr}^{-1}. \tag{14}\]
Limit cycles of the accretion rate in the disk result in the transition from quiescence to outburst when the disk gets hot enough for hydrogen to be ionized out of a cooler, predominantly neutral state. This means that a transient NS would attain a higher accretion rate during outbursts than the long-term average mass transfer rate. With that in mind, the accretion rate should be reformulated as:
\[\dot{M}_{\rm acc}=\begin{cases}\dot{M}_{\rm di},&\mathrm{if}\ \dot{M}\leq\dot{M}_{\rm cr} \\ \min(\dot{M},\dot{M}_{\rm Edd}),&\mathrm{if}\ \dot{M}>\dot{M}_{\rm cr},\end{cases} \tag{15}\]
where \(\dot{M}_{\rm di}\) is the accretion rate when the disk is subject to the thermal and viscous instability. We simply assume that the accretion rate is enhanced to \(1/f\) times of \(\dot{M}\) during outbursts for a given outburst duty cycle \(f\), and declines to zero at quiescence (see also Bhattacharyya & Chakrabarty, 2017), that is
\[\dot{M}_{\rm di}=\begin{cases}\dot{M}_{\rm burst}=\min\left(\frac{\dot{M}}{f}, \dot{M}_{\rm Edd}\right),&\mathrm{during\ outbursts}\\ \dot{M}_{\rm qu}=0,&\mathrm{during\ quiescence}.\end{cases} \tag{16}\]
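A schematic implementation of this accretion-rate prescription (Eqs. (14)-(16)) might look as follows; the units (\(\rm M_{\odot}\,yr^{-1}\)) and the example duty cycle are assumptions for illustration only:

```python
def accretion_rate(mdot, m_ns_msun, m_d_msun, p_orb_days, mdot_edd,
                   in_outburst, f=0.01):
    """Accretion rate with the disk-instability prescription, Eqs. (14)-(16).

    mdot and mdot_edd are in Msun/yr; f is the outburst duty cycle.
    """
    mdot_cr = (3.2e-9 * (m_ns_msun / 1.4)**0.5 * (m_d_msun / 1.0)**-0.2
               * (p_orb_days / 1.0)**1.4)           # critical rate, Eq. (14)
    if mdot > mdot_cr:                               # persistent accretion
        return min(mdot, mdot_edd)
    # transient: enhanced accretion in outburst, none in quiescence, Eq. (16)
    return min(mdot / f, mdot_edd) if in_outburst else 0.0
```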
We recalculate the evolution of the five systems with the disk instability considered. The initial parameters are set to be: \(\alpha_{0}=45^{\circ}\), \(\chi_{0}=60^{\circ}\), \(P_{0}=5\,\mathrm{s}\), and \(\mu_{30,0}=1\). The results are compared in Figure 6 with the reference model (solid lines), which has the same parameters but does not include the disk instability. The dashed and dot-dashed lines correspond to \(f=0.1\%\) and \(1\%\), respectively. The insets in the second row demonstrate the detailed evolution of \(\alpha\), \(\chi\) and \(P_{\rm s}\) when disk instability is considered.
Except for system B, which is subject to the disk instability during the whole mass transfer process, all systems generally experience the transient behavior during the early evolutionary stage when \(\dot{M}\) is low and rising (and during the late stage for system A when \(\dot{M}\) is declining).
Figure 4: Same as Fig. 3, but for the evolution of the spin inclination angle \(\alpha\), the magnetic inclination angle \(\chi\), the spin period \(P_{\rm s}\), and the magnetic moment \(\mu_{30}\).
It is interesting to compare the magnetic inclination evolution when the mass transfer rate \(\dot{M}<\dot{M}_{\rm cr}\). In the case that disk instability is not considered, the accretion torque \(n_{1}\) is relatively small or even zero (if \(r_{\rm in}>r_{\rm co}\)), so there would be little or no change in \(\alpha\), and \(\chi\) decreases mainly due to magnetic dipole radiation according to Eqs. (2) and (3). If the disk instability is taken into account, the enhancement of the accretion rate during outbursts would exert an efficient torque on the NS, causing both \(\alpha\) and \(\chi\) to evolve more rapidly and earlier. We can estimate the time-averaged accretion torque \(\langle n_{1}\rangle\) to be
\[\langle n_{1}\rangle= \dot{M}_{\rm burst}(GM_{*}r_{\rm in,burst})^{1/2}\cdot f\] \[= n_{1}(\dot{M})\left(\frac{r_{\rm in,burst}}{r_{\rm in}}\right)^ {1/2} \tag{17}\]
where \(r_{\rm in,burst}\) is the inner disk radius during outbursts.
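As a short consistency check (assuming the outburst rate stays below the Eddington limit, so that \(\dot{M}_{\rm burst}=\dot{M}/f\)), Eq. (10) gives \(r_{\rm in}\propto\dot{M}^{-2/7}\), and therefore

\[\frac{r_{\rm in,burst}}{r_{\rm in}}=\left(\frac{\dot{M}_{\rm burst}}{\dot{M}}\right)^{-2/7}=f^{2/7},\qquad\langle n_{1}\rangle=n_{1}(\dot{M})\,f^{1/7}<n_{1}(\dot{M}).\]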
Figure 5: The variations of the magnetic inclination angles \(\Delta\chi\) for the five systems with different initial magnetic moments in the \(\chi_{0}-\alpha_{0}\) plane. The green lines represent the probability distribution of the final magnetic inclination angles \(\chi_{\rm f}\).
For systems B-E, considering the disk instability does influence the evolutionary paths of \(\chi\), but barely affects the magnitude of \(\chi_{\rm f}\). However, the disk instability has an important impact on the evolution of the spin period and the magnetic moment, especially for wide binaries such as system B, where the mass transfer rate is below the critical value at all times, so that \(\langle n_{1}\rangle\) is always \(<n_{1}\). Consequently, less material is accreted by the NS and less field decay occurs.
### The effect of pulsar loss torque enhancement
Parfrey et al. (2016) proposed that the presence of a conducting disk around the NS can increase the number of open magnetic field lines and pulsar loss, in particular for rapid rotators. Under this circumstance, the torque \(n_{3}\) related to pulsar loss is amplified to be
\[n_{3,\rm part} = \left(\zeta\frac{r_{\rm lc}}{r_{\rm in}}\right)^{2}n_{3} \tag{18}\] \[\simeq -1.65\times 10^{33}\dot{M}_{2}^{4/7}M_{*,\odot}^{2/7}\mu_{30}^{6/7} \Omega\;\rm g\,cm^{2}\,s^{-2},\]
where \(\zeta\) is used to describe the efficiency of the field line opening by the differential rotation between NS and the disk, and we take \(\zeta=1\). Therefore, the total toque exerted on the NS should be rewritten as:
\[n_{\rm tot}=\begin{cases}n_{1}+n_{2}+n_{3,\rm part},&\text{if $r_{\rm in}<r_{\rm co}$}\\ n_{2}+n_{3,\rm part},&\text{if $r_{\rm co}\leq r_{\rm in}<r_{\rm lc}$}\\ n_{3},&\text{if $r_{\rm in}\geq r_{\rm lc}$}.\end{cases} \tag{19}\]
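A minimal sketch of this case distinction (with the notation and inputs assumed as above) is:

```python
def total_torque(n1, n2, n3, r_in, r_co, r_lc, zeta=1.0):
    """Total spin torque with the enhanced pulsar-loss term, Eqs. (18)-(19)."""
    n3_enh = (zeta * r_lc / r_in)**2 * n3   # Eq. (18): open-field-line enhancement
    if r_in < r_co:
        return n1 + n2 + n3_enh
    if r_in < r_lc:
        return n2 + n3_enh
    return n3                               # ejector / radio-pulsar regime
```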
We recalculate the evolution with the initial parameters same as in the previous section and present the results in Figure 7.
Figure 6: The solid and dashed lines compare the results in the reference model and that with the disk instability considered. The initial parameters are \(\alpha_{0}=45^{\circ}\), \(\chi_{0}=60^{\circ}\), \(P_{0}=5\)s, and \(\mu_{30,0}=1\).
The upper panel compares the torque \(n_{3}\) and the lower panel shows its effect on the evolution of the magnetic inclination angles \(\chi\) for the five systems. It is evident that the torque enhancement plays an important role in the late evolution of \(\chi\). Consequently, the magnetic axis tends to align with the spin axis, which is basically consistent with the conclusion by BA21.
### Predictions of the \(\chi\) distributions
Based on the above calculations, we attempt to predict the possible distributions of the magnetic inclination angles for NSs evolved from I/LMXBs. We recall that the five selected systems (A-E) correspond to different evolutionary outcomes of NS I/LMXBs, namely X-ray pulsars in UCXBs (A), BMSPs with He WD companions (B) and with CO WD companions (C), and ULXs (D and E).
In Figure 5, we depict the probability distribution functions \(F_{\chi}\) of the final magnetic inclination angles \(\chi_{\rm f}\), defined by \(F_{\chi}=\frac{1}{N}\int n_{\chi}\,d\chi\), assuming that \(\alpha_{0}\) and \(\chi_{0}\) are uniformly distributed between \(0^{\circ}\) and \(180^{\circ}\). Here \(N=8100\) is the total sample number for each type of system, and \(n_{\chi}\) is the number of samples with \(\chi_{\rm f}\in(\chi-d\chi/2,\chi+d\chi/2)\) and \(d\chi=3^{\circ}\). Due to the symmetry in \(\chi_{\rm f}\) as seen in Figure 5, we limit the abscissa range to \(0^{\circ}-90^{\circ}\) by folding the \(\chi_{\rm f}\) values larger than \(90^{\circ}\).
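A sketch of how such a folded distribution can be built (the bin width and input format are assumptions for illustration) is:

```python
import numpy as np

def folded_chi_distribution(chi_f_deg, bin_width=3.0):
    """Normalized distribution of final magnetic inclination angles, folded to 0-90 deg."""
    chi = np.asarray(chi_f_deg) % 180.0
    chi = np.where(chi > 90.0, 180.0 - chi, chi)       # fold values above 90 deg
    bins = np.arange(0.0, 90.0 + bin_width, bin_width)
    counts, edges = np.histogram(chi, bins=bins)
    return counts / counts.sum(), 0.5 * (edges[:-1] + edges[1:])
```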
The first column of Figure 5 illustrates the probability distribution function for system A. It is clearly seen that the distribution of \(\chi_{\rm f}\) is clustered around relatively small angles (less than \(30^{\circ}\)). With increasing \(\mu_{30,0}\), the \(\chi_{\rm f}\) distribution becomes more concentrated and the peak angle becomes smaller. This might partly explain why only a small fraction of NS LMXBs show pulsations. The distribution functions for BMSPs evolved from systems B and C are presented in the second and third columns, respectively. They tend to possess large \(\chi_{\rm f}\), roughly peaked around \(90^{\circ}\) when \(\mu_{30,0}\leq 10\). When \(\mu_{30,0}\) becomes larger, the \(\chi_{\rm f}\) distribution becomes flatter because of the shorter accretion time. When \(\mu_{30,0}=100\), another peak appears at \(\sim 20-30^{\circ}\). Overall, relatively large \(\chi_{\rm f}\) in BMSPs are expected if \(\mu_{30,0}\leq 10\), which actually favors detection of pulsations from these systems. Johnson et al. (2014) simulated the observed light curves for more than 40 MSPs detected with _Fermi/LAT_ and concluded that the best-fit magnetic inclination angles are almost evenly distributed between \(10^{\circ}\) and \(90^{\circ}\). More recently, Benli et al. (2021) selected several radio-loud millisecond gamma-ray pulsars showing double peaks in their gamma-ray profiles with the spin period \(P_{\rm s}\) in the range of \(2-6\) ms. Their best fits suggested that the magnetic inclinations \(\chi\) are larger than approximately \(45^{\circ}\). The difference in the predicted \(\chi\) distributions of UCXBs and BMSPs could be useful in testing the evolutionary models and constraining the initial parameters at the NS's birth.
We use systems D and E to simulate the formation of pulsars in ULXs. The fourth and fifth columns show similar distributions of \(\chi\) when \(\mu_{30,0}\leq 10\), independent of the initial magnetic moment: relatively large values of \(\chi\) are expected for the pulsars in ULXs. When \(\mu_{30,0}=100\), the \(\chi_{\rm f}\) distribution becomes flatter in system D and peaks around \(30^{\circ}\) in system E.
We emphasize that the above results are based on the assumption that the initial spin inclination angles \(\alpha\) and the magnetic inclination angles \(\chi\) are evenly distributed between \(0^{\circ}\) and \(180^{\circ}\). The realistic distributions must be more complicated and beyond the scope of this paper, but the general feature may not change significantly. Future observations and simulations of different types of pulsars may present more stringent constraints on their original distributions.
Figure 7: The solid and dashed lines compare the evolution of the magnetic inclination angles without and with torque enhancement considered. The initial parameters are the same as in Figure 6.

## 5 Conclusions
In this paper, we investigate the long-term magnetic inclination angle evolution of accreting NSs in I/LMXBs by combining the BA21 model with detailed binary evolution calculations. We consider five representative binary systems to reflect the formation of UCXBs, BMSPs, and ULXs. We find that the evolution of \(\chi\) generally passes through at least part of the following three stages: (1) during the first \(10^{4}-10^{5}\) yr of mass transfer, \(\chi\) does not change much due to the relatively low mass transfer rate \(\dot{M}\); (2) with the growth of \(\dot{M}\) and the accretion torque, \(\chi\) increases and settles at its maximum after the spin inclination angle \(\alpha\) evolves to \(0^{\circ}\); (3) the pulsar loss torque drives \(\chi\) to decrease. We also show that a stronger initial magnetic field causes the magnetic axis to be more aligned with the spin axis for systems A, B, and C with stable mass transfer, but has little effect in systems D and E with delayed unstable mass transfer.
Moreover, considering disk instability can advance the evolution of \(\chi\), but does not significantly change the final outcome except for system A. However, the enhancement of the pulsar loss torque caused by field line opening can strongly influence the evolution of \(\chi\) if it really works.
Our results suggest possible distributions of the magnetic inclination angles in specific types of binary systems including UCXBs, BMSPs and ULXs. If the initial magnetic moments of NSs are moderate, BMSPs likely have relatively large magnetic inclination angles; if the NSs are initially magnetars, we expect more systems with small \(\chi\) to be observed in BMSPs. Moreover, relatively small and large magnetic inclination angles are anticipated for UCXBs and ULXs, respectively.
Besides the uncertainties in both theory and observation related to the \(\chi\) distribution, the main issue in this work is that the objects are limited to NSs in I/LMXBs, which are just a small portion of the NS population. Including other populations, such as isolated NSs and those embedded in HMXBs, will provide a more comprehensive understanding of the magnetic inclination evolution and is worth exploring further.
## Acknowledgments
This work was supported by the National Key Research and Development Program of China (2021YFA0718500), the Natural Science Foundation of China under grant No. 12041301, 12121003, and Project U1838201 supported by NSFC and CAS.
## Data Availability
The _MESA_ code, the input files necessary to reproduce our simulations, and the associated data products are available at Zenodo.7123729. The other data and codes underlying this article will be shared on reasonable request to the authors.
|
2308.04173
|
ExoMol line lists -- LIII: Empirical Rovibronic spectra of Yttrium Oxide
(YO)
|
Empirical line lists for the open shell molecule $^{89}$Y$^{16}$O (yttrium
oxide) and its isotopologues are presented. The line lists cover the 6 lowest
electronic states: $X {}^{2}\Sigma^{+}$, $A {}^{2}\Pi$, $A' {}^{2}\Delta$, $B
{}^{2}\Sigma^{+}$, $C {}^{2}\Pi$ and $D {}^{2}\Sigma^{+}$ up to 60000 cm$^{-1}$
($<0.167$ $\mu$m) for rotational excitation up to $J = 400.5$. An \textit{ab
initio} spectroscopic model consisting of potential energy curves (PECs),
spin-orbit and electronic angular momentum couplings is refined by fitting to
experimentally determined energies of YO, derived from published YO
experimental transition frequency data. The model is complemented by empirical
spin-rotation and $\Lambda$-doubling curves and \textit{ab initio} dipole
moment and transition dipole moment curves computed using MRCI. The \textit{ab
initio} PECs computed using the complete basis set limit extrapolation and the
CCSD(T) method with its higher quality provide an excellent initial
approximation for the refinement. Non-adiabatic coupling curves for two pairs
of states of the same symmetry $A$/$C$ and $B$/$D$ are computed using a
state-averaged CASSCF and used to built diabatic representations for the $A
{}^{2}\Pi$, $C {}^{2}\Pi$, $B {}^{2}\Sigma^{+}$ and $D {}^{2}\Sigma^{+}$
curves. Calculated lifetimes of YO are tuned to agree well with the experiment,
where available. The BRYTS YO line lists for are included into the ExoMol data
base (www.exomol.com).
|
Sergei N. Yurchenko, Ryan P. Brady, Jonathan Tennyson, Alexander N. Smirnov, Oleg A. Vasilyev, Victor G. Solomonik
|
2023-08-08T10:08:21Z
|
http://arxiv.org/abs/2308.04173v3
|
# ExoMol line lists - LIII: Empirical Rovibronic spectra of Yttrium Oxide (YO)
###### Abstract
Empirical line lists for the open shell molecule \({}^{89}\)Y\({}^{16}\)O (yttrium oxide) and its isotopologues are presented. The line list covers the 6 lowest electronic states: \(X\,^{2}\Sigma^{+}\), \(A\,^{2}\Pi\), \(A\,^{\prime}\,{}^{2}\Delta\), \(B\,^{2}\Sigma^{+}\), \(C\,^{2}\Pi\) and \(D\,^{2}\Sigma^{+}\) up to 60 000 cm\({}^{-1}\) (\(<\) 0.167 \(\mu\)m) for rotational excitation up to \(J=400.5\). An _ab initio_ spectroscopic model consisting of potential energy curves (PECs), spin-orbit and electronic angular momentum couplings is refined by fitting to experimentally determined energies of YO, derived from published YO experimental transition frequency data. The model is complemented by empirical spin-rotation and \(\Lambda\)-doubling curves and _ab initio_ dipole moment and transition dipole moment curves computed using MRCI. The _ab initio_ PECs computed using the complete basis set limit extrapolation and the CCSD(T) method with its higher quality provide an excellent initial approximation for the refinement. Non-adiabatic coupling curves for two pairs of states of the same symmetry \(A/C\) and \(B/D\) are computed using a state-averaged CASSCF and used to build diabatic representations for the \(A\,^{2}\Pi\), \(C\,^{2}\Pi\), \(B\,^{2}\Sigma^{+}\) and \(D\,^{2}\Sigma^{+}\) curves. Calculated lifetimes of YO are tuned to agree well with the experiment, where available. The BRYTS YO line lists are included in the ExoMol database (www.exomol.com).
keywords: molecular data < Physical Data and Processes; exoplanets < Planetary Systems; stars: atmospheres < Stars; stars: low-mass < Stars
## 1 Introduction
The spectrum of Yttrium oxide, YO, has been the subject of many astrophysical studies. It has been observed in spectra of cool stars (Wyckoff & Clegg, 1978) including R-Cygni (Sauval, 1978; Murty, 1982), Pi-Gruis (Murty, 1983), V838 Mon (Goranskii & Barsukova, 2007; Kaminski et al., 2009), and V4332 Sgr (Goranskii & Barsukova, 2007; Tylenda et al., 2015). YO has also been actively used in laser cooling experiments(Yeo et al., 2015; Colloy et al., 2015; Quemener & Bohm, 2016; Colloy et al., 2018). Its spectrum has been used as a probe to study high temperature materials (Badie et al., 2005a).
There are many laboratory spectroscopic studies of YO, including its \(A\,^{2}\Pi\) - \(X\,^{2}\Sigma^{+}\)(Shin & Nicholls, 1977; Linton, 1978; Bernard et al., 1979; Liu & Parson, 1979; Wijchers et al., 1980; Bagare & Murthy, 1982; Bernard & Gravina, 1983; Wijchers et al., 1984; Childs et al., 1988; Steimle & Shirley, 1990; Dye et al., 1991; Fried et al., 1993; Otis & Goodwin, 1993; Badie & Granier, 2002, 2003; Badie et al., 2005a,b; Kobayashi & Sekine, 2006; Badie et al., 2007a,b; Colloy et al., 2015, Mukund & Nakhate, 2023), \(B\,^{2}\Sigma^{+}\) - \(X\,^{2}\Sigma^{+}\)(Shin & Nicholls, 1977; Bernard et al., 1979; Bernard & Gravina, 1980; Fried et al., 1993; Leung et al., 2005; Zhang et al., 2017) \(A\,^{\prime}\,{}^{2}\Delta\) - \(X\,^{2}\Sigma^{+}\), (Chalek & Gole, 1976; Simard et al., 1992; Colloy et al., 2015) and \(D\,^{2}\Sigma^{+}\) - \(X\,^{2}\Sigma^{+}\)(Zhang et al., 2017) band systems, rotational spectrum (Uhler & Akerlind, 1961; Steimle & Alramadin, 1986; Hoeft & Torring, 1993), hyperfine spectrum (Kasai & Weltener, 1965; Steimle & Alramadin, 1986, 1987; Childs et al., 1988; Suenram et al., 1990; Knight et al., 1999; Steimle & Virgo, 2003) and chemiluminescence spectra (Manos & Parson, 1975; Chalek & Gole, 1977; Fried et al., 1993). The very recent experimental study of the \(A\,^{2}\Pi\) and \(B\,^{2}\Sigma^{+}\) systems of YO by Mukund & Nakhate (2023) provided crucial information for this work on the coupling between the \(B\,^{2}\Sigma^{+}\) and \(D\,^{2}\Sigma^{+}\) states. Relative intensity measurements of the \(A\,^{2}\Pi\) - \(X\,^{2}\Sigma^{+}\) system were performed by Bagare & Murthy (1982). Permanent dipole moments of YO in both the \(X\,^{2}\Sigma^{+}\) and \(A\,^{2}\Pi\) states have been measured using the Stark technique (Steimle & Shirley, 1990; Suenram et al., 1990; Steimle & Virgo, 2003). The lifetimes in \(A\,^{2}\Pi\), \(B\,^{2}\Sigma^{+}\), and \(D\,^{2}\Sigma^{+}\) states were measured by Liu & Parson (1977) and Zhang et al. (2017).
Our high level _ab initio_ study (Smirnov et al., 2019) forms
a prerequisite for this work, where a mixture of multireference configuration interaction (MRCI) and coupled cluster methods were used to produce potential energy curves (PECs), spin-orbit curves (SOCs), electronic angular momentum curves (EAMCs), electric dipole moment curves (DMCs), and transition dipole moment curves (TDMCs) covering the six lowest electronic states of YO. Other theoretical studies of YO include MRCI calculations by Langhoff & Bauschlicher (1988) and CASPT2 calculations of spectroscopic constants by Zhang et al. (2017).
In this paper, the _ab initio_ spectroscopic model of Smirnov et al. (2019) is extended by introducing non-adiabatic coupling curves for two pairs of states, \(A/C\) and \(B/D\), and then refined by fitting to experimentally derived energies of YO using our coupled nuclear-motion program Duo (Yurchenko et al., 2016). The energies are constructed using a combination of the spectroscopic constants and line positions taken from the literature through a procedure based on the MARVEL (Furtenbacher et al., 2007) methodology. The new empirical spectroscopic model is used to produce a hot line list BRYTS for YO as part of the ExoMol project (Tennyson & Yurchenko, 2012; Tennyson et al., 2020).
## 2 Experimental Information
Although the spectroscopy of YO has been extensively studied, some key high resolution experimental sources from the 1970-80s only provide spectroscopic constants rather than original transition frequencies; this limits their usability for high resolution applications. For cases where only constants are available we used the program Pgopher(Western, 2017) to compute the corresponding energy term values. This includes term values for the ground electronic \(X\,^{2}\Sigma^{+}\) state. In the following, experimental studies of YO are reviewed.
**61UhAk**: Uhler & Akerlind (1961) reported line positions from the \(B\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) (0,0) and \(A\,^{2}\Pi\)-\(X\,^{2}\Sigma^{+}\) (0,0) bands, but the \(B\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) band is fully covered by more recent and accurate data (Bernard & Gravina, 1980). The quantum numbers \(F_{1}\) and \(F_{2}\) of \(B\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) had to be swapped to agree with Bernard & Gravina (1980). However, due to many conflicting combination differences, this set was not included into our final analysis.
**77ShN**: Shin & Nicholls (1977) performed an analysis of the blue-green \(B\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) and orange \(A\,^{2}\Pi\)-\(X\,^{2}\Sigma^{+}\) systems but no rovibronic assignment was reported and their data are not used here.
**79BeBaLu**: Bernard et al. (1979) reported an extensive analysis of the \(A\)-\(X\) (\(v^{\prime}=0,1,2,3,4,5\)) and \(B\)-\(X\) (\(v^{\prime}=0,1\)) systems. Only spectroscopic constants were reported. This work has been superseded by more recent studies and therefore is not used here.
**80BeGr**: Bernard & Gravina (1980) reported line positions from the \(B\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) system, \(v^{\prime}=0,1\) and \(v^{\prime\prime}=0,1,2,3\) and spectroscopic constants for \(v^{\prime}=0,1,2,3\), in emission in a hollow cathode discharge with a partial analysis of the \((3,2)\) and \((4,3)\) bands (only higher \(J\geq 35.5\) and \(J\geq 57.5\), respectively). The data was included into our analysis.
**83BeGr**: Bernard & Gravina (1983) presented a study of the \(A\,^{2}\Pi\)-\(X\,^{2}\Sigma^{+}\) system of YO excited in the discharge of a hollow cathode tube. Only spectroscopic constants were reported, covering the \(X\,^{2}\Sigma^{+}\), \(v=0\ldots 10\) and \(A\), \(v=0\ldots 9\) vibronic states (\(v\geq 6\) with a limited analysis). We used these constants to generate term values for the states \(v=0\ldots 5\). The spectroscopic constants were obtained ignoring any direct coupling of the \(A\,^{2}\Pi\) state with other vibronic states in their vicinity (e.g. \(B\,^{2}\Sigma^{+}\)). Band heads of \(v^{\prime}=7-15\) (\(\Omega=0.5\)) and \(v^{\prime}=6,8,9\) (\(\Omega=1.5\)) were reported, but not used directly in the fit here.
The coverage of the spectroscopic constants of \(X\,^{2}\Sigma^{+}\) is up to \(v=10\), while the available line positions are only up to \(v=3\), which is why we opted to use the spectroscopic constants by Bernard & Gravina (1983) to generate the \(X\,^{2}\Sigma^{+}\) term values. YO has a relatively rigid structure in its ground electronic state potential, with the vibronic energies well separated from each other and from other electronic states.
**92SiJaHa**: Simard et al. (1992) reported line positions from the \(A^{\prime}\,^{2}\Delta\)-\(X\,^{2}\Sigma^{+}\) system (0,0) in their laser-induced fluorescence spectral study with a pulsed dye laser. It was included into the analysis here.
**93HoTo**: Hoeft & Torring (1993) reported a microwave spectrum of \(X\,^{2}\Sigma^{+}\) for \(v=0,1,2,3\). It was included into the analysis here.
**05LeMaCh**: Leung et al. (2005) reported a cavity ring down absorption spectrum of the \(B\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) (2,0) and (2,1) system. We included their line positions into analysis here.
**15CoHuYc**: Colloy et al. (2015) reported three THz lines from the \(A\,^{2}\Pi\)-\(X\,^{2}\Sigma^{+}\) (0,0) system with low uncertainties recorded for laser cooling application. These were included in our analysis.
**17ZnhZhZh**: Zhang et al. (2017) reported line positions from the \(D\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) system (0,0) and (1,0) which were used in the analysis here.
**23MuNa**: Mukund & Nakhate (2023) reported a high-resolution analysis of the highly excited \(A\,^{2}\Pi\)-\(X\,^{2}\Sigma^{+}\) (\(v^{\prime}=11,12,13\)) and \(B\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) (\(v^{\prime}=5,6\)) systems. For the \(A\)-\(X\) band, only the \(\Omega=0.5\) branch was provided. Their line positions were included into our analysis. There is a crossing between \(A\,^{2}\Pi\), \(v=11\) and \(D\,^{2}\Sigma^{+}\), \(v=2\) around \(J=34.5\).
The only experimental information on the transition probabilities available for YO includes the permanent dipole moments in the \(X\,^{2}\Sigma^{+}\) and \(A\,^{2}\Pi\) states measured by Steimle & Shirley (1990), Suenram et al. (1990) and Steimle & Virgo (2003), and the lifetimes of some lower lying vibrational states measured by Liu & Parson (1977) (\(A\,^{2}\Pi\)) and Zhang et al. (2017) (\(B\,^{2}\Sigma^{+}\) and \(D\,^{2}\Sigma^{+}\)).
## 3 Description of the Pseudo-MARVEL Procedure
MARVEL (Measured Active Rotational Vibrational Energy Levels) is a spectroscopic network algorithm (Furtenbacher et al., 2007), now routinely used for constructing ExoMol line lists for high-resolution applications (Tennyson et al., 2020). We did not have sufficient original experimental line positions for a proper MARVEL analysis of the YO spectroscopic data, which were mostly available only in the form of spectroscopic constants. Furthermore, there are no studies of the infrared spectrum of YO, which meant that the (lower) ground state energies could only be reconstructed from lower quality UV
transitions, which limits the quality of the MARVEL energies.
Instead, a 'pseudo-MARVEL' procedure was applied as follows (see also Yurchenko et al. (2022)). The experimental frequencies \(\tilde{\nu}_{ij}\), where available, or Pgopher-calculated frequencies, were utilised to generate rovibronic energies of YO as upper-state energies using
\[\tilde{E}^{\rm(upp)}_{j(i)}=\tilde{E}^{\rm(low)}_{i}+\tilde{\nu}_{ij}, \tag{1}\]
which were then averaged over all transitions connecting the same upper state \(j\). All experimental transitions originate from or end up at the \(X\,^{2}\Sigma^{+}\) state. We used the spectroscopic constants from Bernard and Gravina (1983) (\(v=0,1,\ldots,10\)) to generate the \(X\,^{2}\Sigma^{+}\) state energies \(\tilde{E}^{\rm(low)}_{i}\) in conjunction with the program Pgopher. This MARVEL-like analysis yielded 5089 empirically determined energy levels which we used in the fit. The final experimentally determined energy set covers the following vibronic bands: \(X\): \(v=0-10\), \(J_{\rm max}=105\); \(A'\): \(v=0\), \(J_{\rm max}=14.5\); \(A\): \(v=0,1,2,3,4,5,11,12,13\), \(J_{\rm max}=90.5\) (Pgopher); \(B\): \(v=0,1,2,3,4,5,6\), \(J_{\rm max}=102.5\); \(D\): \(v=0,1\), \(J_{\rm max}=24.5\); it is illustrated in Fig. 2. The experimental transition frequencies collected as part of this work are provided in the Supporting Information to this paper in the MARVEL format, together with the pseudo-MARVEL energies used in the fit.
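The averaging step of Eq. (1) can be sketched as follows (the identifiers and input format below are illustrative assumptions, not those of the actual MARVEL files):

```python
from collections import defaultdict

def upper_state_energies(transitions, lower_energies):
    """Average upper-state term values, Eq. (1) of this section.

    transitions    : iterable of (upper_id, lower_id, nu) with nu in cm-1
    lower_energies : dict lower_id -> X-state term value in cm-1 (e.g. from Pgopher)
    """
    collected = defaultdict(list)
    for upper_id, lower_id, nu in transitions:
        collected[upper_id].append(lower_energies[lower_id] + nu)
    return {uid: sum(vals) / len(vals) for uid, vals in collected.items()}
```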
It should be noted that the Pgopher energies do not provide any information on direct perturbations between vibronic bands caused by their crossing or any other inter-band interactions. For example, the \(A\,^{2}\Pi\) \(v=5\) and \(B\,^{2}\Sigma^{+}\) \(v=0\) states cross at around \(J=27.5\). We excluded energy values in the vicinity of such crossings from the fit. The only crossing represented by the real data is between the \(D\,^{2}\Sigma^{+}\) \(v=2\) and \(A\,^{2}\Pi\) \(v=11\) bands (Mukund and Nakhate, 2023), see Fig. 1.
## 4 _Ab initio_ calculations
Non-adiabatic couplings (NACs) or the first-order derivative couplings (DDR) between the state pairs \(X\,^{2}\Sigma^{+}-B\,^{2}\Sigma^{+}\), \(X\,^{2}\Sigma^{+}-D\,^{2}\Sigma^{+}\), \(B\,^{2}\Sigma^{+}-D\,^{2}\Sigma^{+}\) and \(A\,^{2}\Pi-C\,^{2}\Pi\) were derived by three-point central differences for CASSCF wavefunctions using the DDR procedure as implemented in MOLPRO (Werner et al., 2020). The state-averaged CASSCF method was employed with density matrix averaging over six low-lying doublet states (three \(\Sigma^{+}\), two \(\Pi\), and one \(\Delta\)) with equal weights for each of the roots. The active space included 7 electrons distributed in 13 orbitals (\(6a_{1}\), \(3b_{1}\), \(3b_{2}\), \(1a_{2}\)) that had predominantly oxygen 2p and yttrium 4d, 5s, 5p, and 6s character; all lower energy orbitals were constrained to be doubly occupied. Augmented triple-zeta quality basis sets aug-cc-pwCVTZ(Peterson and Dunning, 2002) on O and pseudopotential-based aug-cc-pwCVTZ-PP (Peterson et al., 2007) on Y were used in these calculations. The resulting NACs are illustrated in Fig. 3.
## 5 Spectroscopic model
Our starting point is the _ab initio_ spectroscopic model of YO developed in Smirnov et al. (2019), which includes PECs, SOCs, TDMCs, EAMCs for six lowest doublet states of YO in the adiabatic representation. YO exhibits non-adiabatic effects via the couplings of the two pairs of states: \(A\,^{2}\Pi\) with \(C\,^{2}\Pi\) and \(B\,^{2}\Sigma^{+}\) with \(D\,^{2}\Sigma^{+}\). Apart from the avoided crossing in these PECs, other adiabatic curves (SOCs, EAMCs, (T)DMCs) also have strongly distorted shapes exhibiting step-like behaviour, which makes the adiabatic representation far from ideal for refinement. This is not only because these curves are difficult to represent analytically as parameterised functions of the bond length (required for the fit), but also because the shapes of the curves around any avoided crossing are very sensitive to the precise position of these crossings, which are also difficult to control in the adiabatic representation. Due to these effects, SOCs, EAMCs and (T)DMCs between the \(B\,^{2}\Sigma^{+}\), \(D\,^{2}\Sigma^{+}\) and the \(A\,^{2}\Pi\), \(C\,^{2}\Pi\) states also exhibit discontinuous behaviour in the region of the avoided crossing, which can only be correctly treated in combination with the NAC curves as well as their second-order derivative couplings (which we did not compute). Vibronic intensities, for instance, are very sensitive to the description of the steep structures in the adiabatic DMCs. Because of inaccuracies in the _ab initio_ calculations, the adiabatic DMCs will be prone to large errors in their shape since the strong, steep gradient variations around the avoided crossing are sensitive to both crossing position and morphology, and so will negatively affect the corresponding spectral properties. These sharp behaviours are difficult to model, so the diabatic representation is a natural choice since DMCs and other couplings will become smooth, and less sensitive to inaccuracies of _ab initio_ calculations.
We therefore decided to work in the diabatic representation taking advantage of the recent developments in Duo(Brady et al., 2022, 2023). To this end, a diabatic spectroscopic model of YO was generated by diabatising the _ab initio_ adiabatic PECs, SOCs, EAMCs and (T)DMCs of YO (Smirnov et al., 2019) as outlined in the following.
Unfortunately, the _ab initio_ adiabatic curves reported in Smirnov et al. (2019) were not suitable for a direct diabatisation using the corresponding NACs due to incomplete _ab initio_ curves and inconsistent levels of theory used for different properties. Effectively, only the PECs of the six electronic states of YO computed using the complete basis set limit (CBS) extrapolation from awCVQZ and awCV5Z in conjunction with the CCSD(T) method were suitable for accurate descriptions of the corresponding crossings in the diabatic representations. All other property curves (SOCs, EAMCs, (T)DMCs) were computed with MRCI or even CASSCF and did not provide adequate coverage, especially at longer bond lengths (\(r>1.9\) A) beyond the avoided crossing points (see Figs. 9 and 10 in Smirnov et al. (2019)).
In order to overcome this problem, in line with the property-based diabatisation (see, e.g. Shu et al. (2022)), we constructed diabatic curves under the assumption that in the diabatic representation all the curves become smooth, without characteristic step-like shapes, and manually constructed diabatic SOCs, EAMCs and TDMCs accordingly. The existing points were inter- and extrapolated to best represent smooth diabatic curves. Admittedly, there is some arbitrariness in this approach, which is subsequently resolved, at least partially, by empirically refining the initial curves. The various curves representing our diabatic spectroscopic model of YO are illustrated in Figs. 4-6.
### Diabatisation
To represent the diabatic potential energy curves of the \(X^{\,2}\Sigma^{+}\), \(A^{\,2}\Pi\), \(A^{\,\prime\,2}\Delta\), \(B^{\,2}\Sigma^{+}\), \(C^{\,2}\Pi\) and \(D^{\,2}\Sigma^{+}\) states analytically, we used the extended Morse oscillator (EMO) (Lee et al., 1999) function as well as the Extended Hulburt-Hirschfelder (EHH) function (Hulburt & Hirschfelder, 2004) as implemented in Duo. An EMO function is given by
\[V(r)=V_{\rm e}\ +\ (A_{\rm e}-V_{\rm e})\left[1\ \ -\ \ \exp\left(-\sum_{k=0}^{N}B_{k} \xi_{p}^{k}(r-r_{\rm e})\right)\right]^{2}, \tag{2}\]
where \(A_{\rm e}\) is a dissociation asymptote, \(A_{\rm e}-V_{\rm e}\) is the dissociation energy, \(r_{\rm e}\) is an equilibrium distance of the PEC, and \(\xi_{p}\) is the Surkus variable given by:
\[\xi_{p}=\frac{r^{p}-r_{\rm e}^{p}}{r^{p}+r_{\rm e}^{p}}. \tag{3}\]
An EHH function is given by (Ushakov et al., 2023)
\[V_{\rm EHH}(r)=D_{\rm e}\left[\left(1-e^{-q}\right)^{2}+cq^{3}\left(1+\sum_{i= 1}^{3}b_{i}q^{i}\right)e^{-2q}\right], \tag{4}\]
where \(q=\alpha\left(r-r_{\rm e}\right)\). The \(X\), \(A\), \(B\) and \(D\) PECs were modelled using EMO, while EHH was used for \(A^{\prime}\) and \(C\).
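To make these functional forms concrete, the short Python sketch below evaluates an EMO potential (Eqs. 2-3) and an EHH potential (Eq. 4); the parameter values are illustrative placeholders only and are not the fitted BRYTS constants (those are given in the supplementary Duo input file).

```python
import numpy as np

def surkus(r, r_e, p):
    """Surkus variable xi_p of Eq. (3)."""
    return (r**p - r_e**p) / (r**p + r_e**p)

def v_emo(r, V_e, A_e, r_e, p, B):
    """Extended Morse Oscillator potential of Eq. (2); B = [B_0, ..., B_N]."""
    xi = surkus(r, r_e, p)
    beta = sum(B_k * xi**k for k, B_k in enumerate(B))
    return V_e + (A_e - V_e) * (1.0 - np.exp(-beta * (r - r_e)))**2

def v_ehh(r, T_e, D_e, r_e, alpha, c, b):
    """Extended Hulburt-Hirschfelder form of Eq. (4), shifted by a term origin T_e;
    b = [b_1, b_2, b_3]."""
    q = alpha * (r - r_e)
    poly = 1.0 + sum(b_i * q**(i + 1) for i, b_i in enumerate(b))
    return T_e + D_e * ((1.0 - np.exp(-q))**2 + c * q**3 * poly * np.exp(-2.0 * q))

# illustrative placeholder parameters (cm-1, Angstrom), not the refined YO values
r = np.linspace(1.4, 3.5, 151)
V_X = v_emo(r, V_e=0.0, A_e=59220.0, r_e=1.79, p=4, B=[1.8, 0.1])
V_C = v_ehh(r, T_e=24000.0, D_e=35000.0, r_e=1.95, alpha=1.9, c=0.05, b=[0.1, 0.01, 0.001])
```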
The corresponding parameters defining the PECs were first obtained by fitting to the _ab initio_ CCSD(T)/CBS potential energies and then empirically refined by fitting to empirical energies of YO (where available) as described below; these parameters are given in the supplementary material in the form of a Duo input file. The dissociation energies for all but the \(B^{\,2}\Sigma^{+}\) state were fixed to the value 59220 cm\({}^{-1}\), or 7.34 eV, which corresponds to \(D_{0}=7.290(87)\) eV determined by Ackermann & Rauh (1974), based on their mass spectrometric measurements. For the \(B^{\,2}\Sigma^{+}\) state, \(D_{\rm e}\) was fixed to a higher value of 75000 cm\({}^{-1}\) in order to achieve a physically sensible shape of the PEC. Otherwise, the \(B^{\,2}\Sigma^{+}\) curve tended to cross the \(D^{\,2}\Sigma^{+}\) curve a second time at \(r\sim 4\) A.
In principle, the property-based diabatisation does not require usage of the DDR curves. However, in order to assist our diabatisations of the YO _ab initio_ curves, we used the _ab initio_ CASSCF NACs of YO of \(A\)-\(C\) and \(B\)-\(D\) shown in Fig. 3 as a guide. These curves were fitted using the following Lorentzian functions:
\[\phi_{ij}(r)=\frac{1}{2}\frac{\gamma}{\gamma^{2}+(r-r_{\rm c})^{2}}, \tag{5}\]
where \(\gamma\) is the corresponding half-width-at-half-maximum (HWHM), while \(r_{\rm c}\) is its centre, corresponding to the crossing point of diabatic curves.
The diabatic and adiabatic representations are connected via a unitary \(2\times 2\) transformation given by
\[{\bf U}(\beta(r))=\begin{bmatrix}\cos(\beta(r))&-\sin(\beta(r))\\ \sin(\beta(r))&\cos(\beta(r))\end{bmatrix}, \tag{6}\]
where the \(r\)-dependent mixing angle \(\beta(r)\) is obtained via the integral
\[\beta(r)=\int_{-\infty}^{r}\phi_{12}(r^{\prime})dr^{\prime}. \tag{7}\]
For the Lorentzian-type NAC in Eq. (5), the angle \(\beta\) is given by
\[\beta=\frac{1}{2}\arctan\left(\frac{r-r_{\rm c}}{\gamma}\right)+\frac{\pi}{4}. \tag{8}\]
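As a simple numerical illustration of Eqs. (5), (7) and (8), the sketch below (with an assumed, not fitted, crossing position and width) integrates a Lorentzian NAC on a grid and checks the result against the closed-form mixing angle.

```python
import numpy as np

gamma, r_c = 0.04, 1.94          # HWHM and crossing point (Angstrom); illustrative values

def nac(r):
    """Lorentzian NAC of Eq. (5)."""
    return 0.5 * gamma / (gamma**2 + (r - r_c)**2)

def beta_closed(r):
    """Mixing angle of Eq. (8)."""
    return 0.5 * np.arctan((r - r_c) / gamma) + np.pi / 4.0

# cumulative integration of Eq. (7), starting far below the crossing
r = np.linspace(r_c - 50.0, r_c + 1.0, 200001)
beta_num = np.cumsum(nac(r)) * (r[1] - r[0])

print(beta_num[-1], beta_closed(r[-1]))   # both tend towards pi/2 well past the crossing
```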
Figure 2: Experimentally derived energy term values of YO used in the refinement of the _ab initio_ spectroscopic model.

The diabatic representation is defined by two PECs \(V_{1}(r)\) and \(V_{2}(r)\) coupled with a diabatic term \(D(r)\) as a \(2\times 2\) diabatic matrix
\[\mathbf{A}=\left(\begin{array}{cc}V_{1}(r)&D(r)\\ D(r)&V_{2}(r)\end{array}\right). \tag{9}\]
The two eigenvalues of the matrix \(\mathbf{A}\) provide the adiabatic PECs as the solutions of a quadratic equation, given by

\[V_{\rm low}(r) = \frac{V_{1}(r)+V_{2}(r)}{2}-\frac{\sqrt{[V_{1}(r)-V_{2}(r)]^{2}+4\,D^{2}(r)}}{2}, \tag{10}\]
\[V_{\rm upp}(r) = \frac{V_{1}(r)+V_{2}(r)}{2}+\frac{\sqrt{[V_{1}(r)-V_{2}(r)]^{2}+4\,D^{2}(r)}}{2}, \tag{11}\]

where \(V_{\rm low}(r)\) and \(V_{\rm upp}(r)\) are the lower and upper adiabatic PECs, respectively.
Assuming the diabatic PECs \(V_{1}(r)\) and \(V_{2}(r)\) as well as NAC \(\phi_{12}(r)\) are known, the diabatic coupling function \(D(r)\) can be re-constructed using the condition that the non-diagonal coupling should vanish upon the unitary transformation \(U(r)\) in Eq. (6) such that the adiabatic potential matrix is diagonal and is then given by:
\[D(r)=\frac{1}{2}\tan(2\beta(r))\left(V_{2}(r)-V_{1}(r)\right). \tag{12}\]
Assuming also the EMO functions for the PECs \(V_{1}(r)\) and \(V_{2}(r)\) as in Eq. (2) and the 'Lorentzian'-type angle \(\beta(r)\) in Eq. (8), the diabatic coupling curves for YO have an asymmetric Gaussian-like shape, see the right panel of Fig. 3; this is not always the case as the \(V_{2}-V_{1}\) term in Eq. (12) heavily influences the morphology of \(D(r)\).
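The interplay of Eqs. (8)-(12) can be checked numerically. The sketch below uses two placeholder diabatic curves, deliberately constructed to cross exactly at \(r_{\rm c}\) (a requirement for Eq. (12) to remain finite), builds \(D(r)\) from the Lorentzian-type angle and confirms that the closed-form adiabatic PECs of Eqs. (10) and (11) coincide with a direct diagonalisation of the matrix in Eq. (9); none of the numbers correspond to the refined YO curves.

```python
import numpy as np

# placeholder diabatic PECs (cm-1) that cross exactly at r_c; the grid avoids r = r_c
r = np.linspace(1.5, 2.5, 1000)
r_c, gamma = 2.0, 0.05
V1 = 20000.0 + 15000.0 * (r - r_c)
V2 = 20000.0 - 8000.0 * (r - r_c) + 6000.0 * (r - r_c)**2

beta = 0.5 * np.arctan((r - r_c) / gamma) + np.pi / 4.0     # Eq. (8)
D = 0.5 * np.tan(2.0 * beta) * (V2 - V1)                    # Eq. (12); finite since V2-V1 -> 0 at r_c

# adiabatic PECs from the closed-form eigenvalues of Eqs. (10) and (11)
mean = 0.5 * (V1 + V2)
gap = np.sqrt((V1 - V2)**2 + 4.0 * D**2)
V_low, V_upp = mean - 0.5 * gap, mean + 0.5 * gap

# cross-check one grid point by diagonalising the 2x2 diabatic matrix of Eq. (9)
k = 300
eig = np.linalg.eigvalsh(np.array([[V1[k], D[k]], [D[k], V2[k]]]))
assert np.allclose(eig, [V_low[k], V_upp[k]])
```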
For the \(B\,^{2}\Sigma^{+}\)-\(D\,^{2}\Sigma^{+}\) pair, where the experimental data is better represented, in order to introduce some flexibility into the fit, we decided to model the diabatic coupling by directly representing it using an inverted EMO function from Eq. (2). This gives the asymmetric Gaussian-like shape, with the asymptote \(A_{\rm e}\) set to zero and \(V_{\rm e}\) representing the maximum of the diabatic coupling \(D(r)\). The \(A\)-\(C\) diabatic coupling was modelled using Eq. (12) with the two parameter 'Lorentzian'-type angle \(\beta(r)\) from Eq. (8).
### Other coupling curves
For the SOC and EAMC curves of YO we used the expansion:
\[F(r)=\sum_{k=0}^{N}B_{k}\,z^{k}(1-\xi_{p})+\xi_{p}\,B_{\infty}, \tag{13}\]
where \(z\) is either taken as the Surkus variable \(z=\xi_{p}\) or a damped-coordinate given by:
\[z=(r-r_{\rm ref})\,e^{-\beta_{2}(r-r_{\rm ref})^{2}-\beta_{4}(r-r_{\rm ref})^ {4}}, \tag{14}\]
see also Prajapat et al. (2017) and Yurchenko et al. (2018). Here \(r_{\rm ref}\) is a reference position equal to \(r_{\rm e}\) by default and \(\beta_{2}\) and \(\beta_{4}\) are damping factors. For the \(X\,^{2}\Sigma^{+}\) state, a BOB (Born-Oppenheimer Breakdown) correction curve modelled using Eq. (13) was used. These parameterised representations were then used to refine the _ab initio_ curves by fitting to the experimentally derived rovibronic energies of YO. The final coupling curves are shown in Figs. 3 (right display), 4 and 5.
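The expansion of Eqs. (13) and (14) is straightforward to evaluate; the following sketch uses placeholder coefficients (not the refined YO parameters) for an SOC-like curve.

```python
import numpy as np

def surkus(r, r_e, p=4):
    """Surkus variable of Eq. (3)."""
    return (r**p - r_e**p) / (r**p + r_e**p)

def coupling_curve(r, B, B_inf, r_e, r_ref=None, beta2=0.0, beta4=0.0, p=4, damped=True):
    """Expansion of Eq. (13); z is either the Surkus variable itself or the
    damped coordinate of Eq. (14)."""
    r_ref = r_e if r_ref is None else r_ref
    xi = surkus(r, r_e, p)
    if damped:
        z = (r - r_ref) * np.exp(-beta2 * (r - r_ref)**2 - beta4 * (r - r_ref)**4)
    else:
        z = xi
    poly = sum(B_k * z**k for k, B_k in enumerate(B))
    return poly * (1.0 - xi) + xi * B_inf

r = np.linspace(1.4, 3.5, 151)
# an SOC-like curve in cm-1 with illustrative coefficients
soc = coupling_curve(r, B=[-280.0, 30.0, -5.0], B_inf=-180.0, r_e=1.82, beta2=0.4)
```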
We also included spin-rotation and \(\Lambda\)-doubling \(p(r)+2q(r)\) (Brown & Merer, 1979) curves as empirical objects for some of the electronic states, modelled using Eq. (13), see Fig. 5.
### Dipoles
We diabatise our DMCs using a combination of cubic-spline interpolation, to smooth out the region around the avoided crossing, and knowledge of the shape of the diabatised target curves. Figure 7 illustrates our property-based diabatising 'transformation' for the \(\langle B\,^{2}\Sigma^{+}|\mu_{z}|X\,^{2}\Sigma^{+}\rangle\) and \(\langle D\,^{2}\Sigma^{+}|\mu_{z}|X\,^{2}\Sigma^{+}\rangle\) transition dipole moment pair: the effect is that the two curves 'swap' beyond the avoided crossing and are now smooth. Figure 6 shows all diabatised _ab initio_ diagonal and off-diagonal DMCs, which are smooth over all bond lengths.
Figure 3: CASSCF NACs and empirical diabatic couplings \(D(r)\) of YO, A–C and B–D.

Within nuclear motion and intensity calculations, dipoles are sometimes represented as a grid of _ab initio_ points; however, one then sees a flattening of the ground state IR overtone bands with vibrational excitation. The source of this nonphysical flattening has been discussed by Medvedev et al. (2015, 2016); Medvedev & Ushakov (2022); Ushakov et al. (2023): it is caused by numerical noise in the calculations, which is enhanced by the interpolation of the given Molpro dipole grid points onto the Duo defined grid. The most effective method to reduce this numerical noise is to represent the input dipole moments analytically (Medvedev et al., 2015). We chose to represent our \(X\,^{2}\Sigma^{+}\) DMC using the 'irregular DMC' proposed in Medvedev & Ushakov (2022), which takes the form
\[\mu_{\rm irreg}(r)=\chi(r;c_{2},...,c_{6})\sum_{i=0}^{6}b_{i}T_{i}(z(r)) \tag{15}\]
where \(T_{i}\) are Chebyshev polynomials of the first kind, \(b_{i}\) are summation coefficients to be fitted, and \(z(r)\) is a reduced variable in bond length given by
\[z(r)=1-2e^{-c_{1}r} \tag{16}\]
which maps the \(r\in[0,\infty]\) A interval to the \(z\in[-1,1]\) reduced interval (the region in which the Chebyshev polynomials have zeros), and finally \(\chi(r;c_{2},...,c_{6})\) is an \(r\)-dependent term parametrically dependent on 5 \(c_{k}\) parameters to be fitted and is given by
\[\chi(r;c_{2},...,c_{6})=\frac{(1-e^{-c_{2}r})^{3}}{\sqrt{(r^{2}-c_{3}^{2})^{2}+c_{4}^{2}}\sqrt{(r^{2}-c_{5}^{2})^{2}+c_{6}^{2}}}.\]
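For illustration, the irregular DMC of Eqs. (15) and (16) can be evaluated as in the following sketch; the \(b_{i}\) and \(c_{k}\) values shown are arbitrary placeholders rather than the fitted \(X\,^{2}\Sigma^{+}\) coefficients.

```python
import numpy as np
from numpy.polynomial import chebyshev

def mu_irreg(r, b, c):
    """Irregular DMC of Eq. (15); b = (b_0, ..., b_6), c = (c_1, ..., c_6)."""
    c1, c2, c3, c4, c5, c6 = c
    z = 1.0 - 2.0 * np.exp(-c1 * r)                          # Eq. (16)
    chi = (1.0 - np.exp(-c2 * r))**3 / (
        np.sqrt((r**2 - c3**2)**2 + c4**2) * np.sqrt((r**2 - c5**2)**2 + c6**2))
    return chi * chebyshev.chebval(z, b)                     # sum of b_i * T_i(z)

r = np.linspace(1.0, 6.0, 500)
b = [4.0, 1.0, -0.5, 0.2, -0.05, 0.01, -0.002]               # placeholder b_0..b_6
c = [0.8, 1.5, 1.2, 0.3, 2.5, 0.4]                           # placeholder c_1..c_6
mu = mu_irreg(r, b, c)
```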
The irregular DMC form has the desirable properties of quickly converging to the correct long-range limit, having enough parameters (13) to ensure minimal local oscillations, and providing a straight Normal Intensity Distribution Law (NIDL) (Medvedev & Ushakov, 2022; Medvedev, 2012; Medvedev et al., 2015). The straight NIDL is a major restriction on the model DMC and means that the logarithm of the overtone vibrational transition dipole moments (VTDM) \(\langle v^{\prime}|\mu(r)|v=0\rangle\) (\(v^{\prime}>1\)) should evolve linearly with the square root of the upper state energy over the harmonic frequency, or \(\sqrt{v^{\prime}+\frac{1}{2}}\). Here we compute VTDMs \(\langle v^{\prime}|\mu(r)|v=0\rangle\) up to dissociation for the \(X^{\,2}\Sigma^{+}\) state using both the grid-defined dipole and the fitted analytical form; Fig. 8 shows their behaviour. The expected linear behaviour of the NIDL is shown in Fig. 8 as a gray line, which is seen to agree better with the TDM computed using the analytical \(X^{\,2}\Sigma^{+}\) DMC than with the calculation using the grid-interpolated DMC. At the \(v^{\prime}=6\) overtone the grid-interpolated DMC causes a non-physical flattening of the VTDM at \(\sim 3.4\times 10^{-5}\) Debye, whereas we only see a departure from the straight NIDL at \(v^{\prime}=15\) when using the analytical form, which flattens at \(\sim 7.4\times 10^{-10}\) Debye. The analytically represented \(X^{\,2}\Sigma^{+}\) DMC therefore provides a more physically meaningful behaviour of the vibrational overtone VTDMs, but still departs from the expected NIDL at high overtones, where the intensities are much lower and therefore less important.
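To illustrate how such a NIDL check works in practice, the sketch below computes overtone matrix elements \(\langle v^{\prime}|\mu|0\rangle\) with harmonic-oscillator wavefunctions (a simplification of the actual sinc-DVR solutions) and a smooth model dipole; all parameters are illustrative and bear no relation to the YO values quoted above.

```python
import numpy as np

def ho_wavefunctions(q, vmax):
    """Normalised harmonic-oscillator wavefunctions psi_0..psi_vmax on a
    dimensionless grid q, built with the stable three-term recurrence."""
    psi = np.zeros((vmax + 1, q.size))
    psi[0] = np.pi**-0.25 * np.exp(-0.5 * q**2)
    if vmax > 0:
        psi[1] = np.sqrt(2.0) * q * psi[0]
    for v in range(2, vmax + 1):
        psi[v] = np.sqrt(2.0 / v) * q * psi[v - 1] - np.sqrt((v - 1.0) / v) * psi[v - 2]
    return psi

q = np.linspace(-12.0, 12.0, 4001)
dq = q[1] - q[0]
psi = ho_wavefunctions(q, vmax=15)

# smooth model dipole as a function of the bond length r = r_e + s*q (placeholder values)
r_e, s = 1.79, 0.04
r = r_e + s * q
mu = 2.0 * r * np.exp(-0.6 * r**2)

vtdm = np.array([np.sum(psi[v] * mu * psi[0]) * dq for v in range(psi.shape[0])])
nidl_x = np.sqrt(np.arange(psi.shape[0]) + 0.5)
# plotting log10(abs(vtdm)) against nidl_x gives the NIDL-style diagnostic of Fig. 8
```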
Following Smirnov et al. (2019), we scaled the _ab initio_ DMC of \(X^{\,2}\Sigma^{+}\) by the factor 1.025 to match the experimental value of the equilibrium dipole \(\langle X,v=0|\mu(r)|X,v=0\rangle\) determined by Suenram et al. (1990). The DMCs of \(A^{\prime\,2}\Delta\), \(A^{\,2}\Pi\) and \(B^{\,2}\Sigma^{+}\) were scaled by 0.97, 0.86 and 0.6, respectively, to match the more accurate CCSD(T)/CBS single point calculations from Smirnov et al. (2019).
The \(A^{\,2}\Pi\)-\(X^{\,2}\Sigma^{+}\), \(B^{\,2}\Sigma^{+}\)-\(X^{\,2}\Sigma^{+}\) and \(D^{\,2}\Sigma^{+}\)-\(X^{\,2}\Sigma^{+}\) TDMCs had to be scaled by 0.8, 0.75 and 2.8, respectively, to improve the agreement of the corresponding calculated values of the \(A^{\,2}\Pi\), \(B^{\,2}\Sigma^{+}\) and \(D^{\,2}\Sigma^{+}\) lifetimes with the measurements of Liu & Parson (1977) and Zhang et al. (2017), see discussion below.

Figure 4: Refined SOCs of YO in the diabatic representation: diagonal and non-diagonal.

Figure 5: Refined EAMCs of YO in the diabatic representation, empirical spin-rotation corrections and \(\Lambda\)-doubling curves.
## 6 Refinement of the spectroscopic model
We use the diatomic code Duo (Yurchenko et al., 2016) to solve the coupled system of Schrodinger equations. Duo is a free-access rovibronic solver for diatomic molecules available at [https://github.com/exomol/Duo/](https://github.com/exomol/Duo/). The hyperfine structure was ignored. For the nuclear motion calculations, a vibrational sinc-DVR basis set was defined as a grid of 151 internuclear geometries in the range of 1.4-3.5 A. We select the lowest 30, 30, 35, 30, 30, 30 vibrational wavefunctions of the \(X^{\,2}\Sigma^{+}\), \(A^{\prime\,2}\Delta\), \(A^{2}\Pi\), \(B^{\,2}\Sigma^{+}\), \(C^{\,2}\Pi\), and \(D^{\,2}\Sigma^{+}\) states, respectively, to form the contracted vibronic basis. A refined spectroscopic model of YO was obtained by fitting the expansion parameters representing different properties to 5089 empirically derived rovibronic energy term values of \({}^{89}\)Y\({}^{16}\)O described above.

Figure 8: \(X^{\,2}\Sigma^{+}\to X^{\,2}\Sigma^{+}\) \(v^{\prime}-0\) overtone TDMs are plotted on a log scale vs. \(\sqrt{v^{\prime}+\frac{1}{2}}\) and are computed using the grid interpolated _ab initio_ DMC (shown as blue crosses) and our fitted analytical model DMC (Eq. (15), shown as red crosses). A simple exponential decay is shown for comparison which simulates the expected NIDL-like behaviour (Medvedev, 2012).

Figure 6: Diabatised _ab initio_ dipole moment matrix elements (in a.u.) as a function of bond length. The middle panel gives diagonal dipoles while the top and bottom panels give transition dipole moments.
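For readers unfamiliar with the sinc-DVR approach, the sketch below solves a single, uncoupled vibrational problem on the same type of 151-point grid using the Colbert-Miller kinetic-energy matrix; the Morse potential and reduced mass are rough placeholders, and the full coupled rovibronic problem solved by Duo is considerably more involved.

```python
import numpy as np

# grid of 151 internuclear geometries, as in the text (Angstrom)
npts, r_min, r_max = 151, 1.4, 3.5
r = np.linspace(r_min, r_max, npts)
dr = r[1] - r[0]

mu_amu = 13.56                      # approximate reduced mass of 89Y16O (amu)
C = 16.857629 / mu_amu              # hbar^2/(2 mu) in cm-1 Angstrom^2

# placeholder Morse potential loosely mimicking the X state (cm-1)
De, a, r_e = 59220.0, 1.59, 1.79
V = De * (1.0 - np.exp(-a * (r - r_e)))**2

# Colbert-Miller sinc-DVR kinetic-energy matrix
idx = np.arange(npts)
dif = idx[:, None] - idx[None, :]
T = np.where(dif == 0, np.pi**2 / 3.0,
             2.0 * (-1.0)**dif / np.where(dif == 0, 1, dif)**2)
T = (C / dr**2) * T

E, wf = np.linalg.eigh(T + np.diag(V))
print(E[:4] - E[0])                 # vibrational term values; spacings of order 850 cm-1
```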
The refined (diabatic) PECs of YO are illustrated in Fig. 9. The CCSD(T)/CBS _ab initio_ energies from Smirnov et al. (2019), shown with circles, appear to closely follow the refined curves, indicating the excellent quality of the _ab initio_ CCSD(T) PECs.
Diabatic representations of the \(A\,^{2}\Pi\), \(B\,^{2}\Sigma^{+}\), \(C\,^{2}\Pi\) and \(D\,^{2}\Sigma^{+}\) states are illustrated in Fig. 10, where the corresponding experimental energy term values are also shown (\(J=0.5\) or \(J=1.5\)). Due to the very close positioning of the \(D\,^{2}\Sigma^{+}\) (\(v=2\)) and \(B\,^{2}\Sigma^{+}\) (\(v=6\)) states, the \(B\,^{2}\Sigma^{+}\) (\(v=6\)) rovibronic wavefunctions appear strongly mixed with the \(D\,^{2}\Sigma^{+}\) (\(v=2\)) wavefunctions in the Duo solution, especially at higher \(J\).
The \(B\,^{2}\Sigma^{+}\) vibronic energies of \(v\geq 4\) are strongly affected by the diabatic coupling with the \(D\,^{2}\Sigma^{+}\) state. Introduction of the diabatic coupling to the \(B\,^{2}\Sigma^{+}\) states makes the shape of the PEC broader and pushes the positions of the \(B\,^{2}\Sigma^{+}\) energies down. It is interesting to note that the \(A\,^{2}\Pi\) state vibronic energies for \(v=11,12,13\) do not appear to be very perturbed by the presence of the close-by \(C\,^{2}\Pi\) state, unlike the interaction of the \(B/D\) diabatic pair. This can be attributed to the difference in the corresponding NACs of the \(B/D\) and \(A/C\) pairs in Fig. 3.
By construction, all Duo eigenfunctions and eigenvalues are automatically assigned the rigorous quantum numbers \(J\) and parity \(\tau=\pm 1\). To assign the non-rigorous rovibronic quantum numbers, Duo first defines the spin-electronic components ('State' and \(\Omega\)) using the largest contribution from eigen-coefficients approach (Yurchenko et al., 2016). Within each rotation-spin-electronic state, the vibrational excitation is then defined by a simple count of the increasing energies starting from \(v=0\).
The refined SOCs, EAMCs, TDMCs of YO are shown in Fig. 4. The refined diabatic couplings for the \(B\)-\(D\) and \(A\)-\(C\) pairs are shown in Fig. 3.
All parameters defining the final spectroscopic model of YO are included in the supplementary material as a Duo input file.
The results of the fit are illustrated in Fig. 11, where \(|\)obs.-calc.\(|\) residuals are shown for different electronic states. The root-mean-square error achieved is 0.24 cm\({}^{-1}\) for all 5089 energies, covering \(J\) up to 105.5.
There are no experimental data on the \(C\,^{2}\Pi\) state due to its large displacement from the Franck-Condon region of the \(X\,^{2}\Sigma^{+}\) state. We therefore had to rely on the available _ab initio_ curves associated with this state and on their quality. Unlike the CCSD(T) PECs, the corresponding coupling curves were computed with MRCI and are less accurate. Moreover, there are no data representing perturbations caused by the \(C\,^{2}\Pi\) rovibronic states on other vibronic states in the limited experimental data on YO. However, theoretically, using the _ab initio_ data, we do see such perturbations in the \(A\,^{2}\Pi\), \(B\,^{2}\Sigma^{+}\) and \(D\,^{2}\Sigma^{+}\) states due to the spin-orbit and EAM couplings with \(C\,^{2}\Pi\) (see Smirnov et al. (2019)), which makes the fit especially difficult. We therefore decided to switch off all the couplings with the \(C\,^{2}\Pi\) state in this work.
The experimental data on the \(A^{\prime}\,^{2}\Delta\) state are limited to \(v=0\) (\(J\leq 17.5\)), which means that only the potential minimum \(V_{\rm e}\) of the \(A^{\prime}\,^{2}\Delta\) state and the corresponding equilibrium constant could be usefully refined, but not its shape, which was fixed to the _ab initio_ CCSD(T) curve via the corresponding EHH potential parameters.
## 7 Line List
Using our final semi-empirical spectroscopic model, a rovibronic line list of \({}^{89}\)Y\({}^{16}\)O called BRYTS was produced, covering the lowest 6 doublet electronic states and wavelengths down to 166.67 nm. In total 60 678 140 Einstein \(A\) coefficients between 173 621 bound rovibronic states were computed with a maximum total rotational quantum number \(J_{\rm max}=400.5\). \({}^{89}\)Y is the only stable isotope of yttrium; however, using the same model, line lists for the two minor isotopologues \({}^{89}\)Y\({}^{17}\)O and \({}^{89}\)Y\({}^{18}\)O have also been generated.
The line lists are presented in the standard ExoMol format (Tennyson et al., 2013, 2020) consisting of a States file and Transition file, with extracts shown in Tables 1 and 2, respectively. The calculated energies in the States file are 'MARVELised', i.e. we replace them with the (pseudo-)MARVEL values where available. The uncertainties are taken as the experimental (pseudo-MARVEL) uncertainties for the substituted values, otherwise the following empirical and rather conservative expression is used:
\[{\rm unc.}=\Delta T+\Delta\omega\,v+\Delta B\,J(J+1), \tag{17}\]
with the state-dependent parameters listed in Table 3.
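A minimal sketch of how the conservative estimate of Eq. (17) is applied (with made-up \(\Delta T\), \(\Delta\omega\) and \(\Delta B\) values rather than those of Table 3):

```python
def energy_uncertainty(v, J, dT, dOmega, dB):
    """Conservative uncertainty estimate of Eq. (17), in cm-1."""
    return dT + dOmega * v + dB * J * (J + 1)

# illustrative parameters only; the state-dependent values are listed in Table 3
print(energy_uncertainty(v=3, J=20.5, dT=0.1, dOmega=0.5, dB=0.001))
```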
The partition function of YO computed using the new line list is shown in Fig. 12, where it is compared to the partition functions by Barklem & Collet (2016) and Vardya (1970), showing close agreement once correction is made for the fact that \({}^{89}\)Y has nuclear spin \(\frac{1}{2}\). We also generate temperature- and pressure-dependent opacities of YO using the BRYTS line list by following the ExoMolOP procedure (Chubb et al., 2021) for four exoplanet atmosphere retrieval codes: ARCiS (Min, Michiel et al., 2020), TauREx (Al-Refaie et al., 2021), NEMESIS (Irwin et al., 2008) and petitRADTRANS (Molliere et al., 2019).
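The partition function of Fig. 12 can be recomputed directly from the States file as a simple Boltzmann sum, \(Q(T)=\sum_i g_i \exp(-c_2 \tilde{E}_i/T)\). The sketch below assumes the column layout of Table 1 (energy in the second column and total degeneracy, which already includes the nuclear-spin factor discussed above, in the third); the file name is illustrative.

```python
import numpy as np

c2 = 1.4387769   # second radiation constant, cm K

def partition_function(states_file, T):
    """Q(T) from an ExoMol .states file: energies (cm-1) in column 2 and total
    degeneracies in column 3, as in Table 1."""
    data = np.loadtxt(states_file, usecols=(1, 2))
    energy, g_tot = data[:, 0], data[:, 1]
    return np.sum(g_tot * np.exp(-c2 * energy / T))

# example call (file name is illustrative):
# Q = partition_function("89Y-16O__BRYTS.states", T=2000.0)
```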
The BRYTS line lists, partition function and opacities are available at www.exomol.com.
## 8 Lifetimes

In Table 4, our computed lifetimes are compared to the measurements of Liu & Parson (1977), the \(B\,^{2}\Sigma^{+}\) (\(v=0\)) and \(D\,^{2}\Sigma^{+}\) (\(v=0,1\)) lifetimes by Zhang et al. (2017), as well as to the theoretical values by Langhoff & Bauschlicher (1988) and Smirnov et al. (2019). The theoretical values correspond to the lowest \(J\) values, \(J=0.5\) for \(X\,^{2}\Sigma^{+}\), \(B\,^{2}\Sigma^{+}\) and \(D\,^{2}\Sigma^{+}\), \(J=1.5\) for \(A\,^{2}\Pi\) and \(C\,^{2}\Pi\), and \(J=1.5\) for \(A^{\prime}\,^{2}\Delta\), which we consider a good proxy for the experimental values (\(J\) unspecified) due to the very slow \(J\) dependence of the lifetimes. The good agreement is partly due to the adjustment of the corresponding TDMCs to match the corresponding lifetimes, as specified above. Our result is the best we could do for the complicated \(D\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) system, with its complex diabatic coupling (Fig. 3) and a diabatised \(D\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) TDMC that involves some level of arbitrariness (Fig. 7).
## 9 Comparisons to experimental spectra
Figure 15 compares the experimental \(A\,^{2}\Pi_{1/2}\)\(\rightarrow\)\(X\,^{2}\Sigma^{+}\) \(v=0\to 0\) emission band measured by Bernard et al. (1979) via Fourier Transform spectroscopy (black, extracted from their Fig. 2) to our computed spectrum (red). We simulate our spectrum at a temperature of 600 K to match the rotational structure of the experiment. We see good agreement with experiment in both line position and band structure. The band head is, however, found to be shifted by 0.7 cm\({}^{-1}\) as a result of the accumulated error of the line positions at high values of \(J\), in combination with the lack of accurate experimental line positions. We also plot the same spectrum using the original rotational constant from Bernard et al. (1979), which shows the same shift. Effectively, our model is based on the same constants, which explains the agreement with their synthetic spectrum and the disagreement with the experimental graph. Some discrepancies can be seen in the line intensities, but this could be due to our assumptions about the temperature and line broadening.
As in Smirnov et al. (2019), for the sake of completeness, we provide comparisons with the \(B\,^{2}\Sigma^{+}\) - \(X\,^{2}\Sigma^{+}\) and \(D\,^{2}\Sigma^{+}\) - \(X\,^{2}\Sigma^{+}\) absorption spectra from Zhang et al. (2017) and the \(D\,^{2}\Sigma^{+}\)-\(X\,^{2}\Sigma^{+}\) spectrum from Simard et al. (1992) using the BRYTS line list, now with improved agreement; see also the corresponding discussions in Smirnov et al. (2019). In Fig. 17, non-LTE conditions with a rotational temperature of \(T_{\rm rot}\) = 50 K and a vibrational temperature of \(T_{\rm vib}\) = 800 K were assumed to better reproduce the experimental spectrum.
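Purely to illustrate the principle behind such temperature-dependent simulations, the toy sketch below converts Einstein \(A\) coefficients into LTE absorption intensities and convolves them with a normalised Gaussian (Doppler-like) profile; a non-LTE treatment of the kind used in Fig. 17 would replace the single Boltzmann factor with separate vibrational (\(T_{\rm vib}\)) and rotational (\(T_{\rm rot}\)) factors. All line parameters below are made up.

```python
import numpy as np

c2 = 1.4387769            # second radiation constant, cm K
c = 2.99792458e10         # speed of light, cm s-1

def lte_intensities(nu, A, g_up, E_low, T, Q):
    """Schematic LTE absorption line intensities (cm/molecule)."""
    return (g_up * A / (8.0 * np.pi * c * nu**2)
            * np.exp(-c2 * E_low / T) * (1.0 - np.exp(-c2 * nu / T)) / Q)

def gaussian_profile(grid, nu0, hwhm):
    """Area-normalised Gaussian profile with the given half-width at half-maximum."""
    alpha = hwhm / np.sqrt(np.log(2.0))
    return np.exp(-((grid - nu0) / alpha)**2) / (alpha * np.sqrt(np.pi))

# two made-up lines on a small wavenumber grid
grid = np.linspace(16000.0, 17000.0, 5000)
nu, A = np.array([16400.0, 16650.0]), np.array([1.0e6, 5.0e5])
g_up, E_low = np.array([8.0, 8.0]), np.array([0.0, 100.0])

S = lte_intensities(nu, A, g_up, E_low, T=2000.0, Q=2.0e4)
sigma = sum(s * gaussian_profile(grid, n, hwhm=0.5) for s, n in zip(S, nu))
```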
Figure 10: Diabatic PECs of the \(B/D\) and \(A/C\) pairs with the corresponding experimental energy term values (\(J=0.5\)).
Figure 9: Refined (lines) and _ab initio_ (points) PECs of YO: diabatic (left) and adiabatic (right).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c c} \hline \hline \(i\) & Energy (cm\({}^{-1}\)) & \(g_{i}\) & \(J\) & unc & \(\tau\) & \(g\) & Parity & \(e/f\) & State & \(v\) & \(\Lambda\) & \(\Sigma\) & \(\Omega\) & Ma/Ca & Energy (cm\({}^{-1}\)) \\ \hline
329 & 17109.844600 & 8 & 1.5 & 0.060200 & 0.000000 & 0.000199 & + & f & A2P1 & 1 & 1 & -0.5 & 0.5 & Ma & 17109.788545 \\
330 & 17501.577299 & 8 & 1.5 & 0.220200 & 0.002893 & -0.400448 & + & f & X2Sigma+ & 22 & 0 & 0.5 & 0.5 & Ca & 17501.577299 \\
331 & 17538.465800 & 8 & 1.5 & 0.060200 & 0.00000 & 0.798921 & + & f & A2P1 & 1 & 0.5 & 1.5 & Ma & 17538.38615 \\
332 & 17635.336115 & 8 & 1.5 & 4.030000 & 0.000762 & 0.399545 & + & f & Ap2Delta & 4 & 2 & -0.5 & 1.5 & Ca & 17635.33615 \\
333 & 17916.571400 & 8 & 1.5 & 0.110200 & 0.00000 & 0.000194 & + & f & A2P1 & 2 & 1 & -0.5 & 0.5 & Ma & 17916.519129 \\
334 & 18229.072587 & 8 & 1.5 & 0.230200 & 0.002792 & -0.400448 & + & f & X2Sigma+ & 23 & 0 & 0.5 & 0.5 & Ca & 18229.072587 \\
335 & 18345.767700 & 8 & 1.5 & 0.110200 & 0.000000 & 0.798927 & + & f & A2P1 & 2 & 1 & 0.5 & 1.5 & Ma & 18345.681930 \\
336 & 18403.2444677 & 8 & 1.5 & 5.030000 & 0.000561 & 0.399553 & + & f & Ap2Delta & 5 & 2 & -0.5 & 1.5 & Ca & 18403.2444677 \\ \hline \end{tabular}
\(i\): State counting number.
\(E\): State energy term value in cm\({}^{-1}\), MARVEL or calculated (Duo).
\(g_{i}\): Total statistical weight, equal to \(g_{\rm ns}(2J+1)\).
\(J\): Total angular momentum.
unc: Uncertainty, cm\({}^{-1}\).
\(\tau\): Lifetime (s).
\(g\): Landé \(g\)-factor (Semenov et al., 2016).
\(+/-\): Total parity.
\(e/f\): Rotational parity.
State: Electronic state.
\(v\): State vibrational quantum number.
\(\Lambda\): Projection of the electronic angular momentum.
\(\Sigma\): Projection of the electronic spin.
\(\Omega\): Projection of the total angular momentum, \(\Omega=\Lambda+\Sigma\).
Ma/Ca: ‘Ma’ is for MARVEL and ‘Ca’ is for calculated.
Energy: State energy term value in cm\({}^{-1}\), calculated (Duo).
\end{table}
Table 1: Extract from the states file of the line list for YO.
Figure 11: Observed minus calculated residuals for YO using the refined spectroscopic model for different vibronic states.
## 10 Conclusions
Accurate and extensive empirical BRYTS line lists for \({}^{89}\)Y\({}^{16}\)O, \({}^{89}\)Y\({}^{17}\)O and \({}^{89}\)Y\({}^{18}\)O are produced, covering the six lowest doublet electronic states and ranging up to 60 000 cm\({}^{-1}\). The line lists are based on a refined set of curves in the diabatic representation, obtained by fitting to a set of experimentally derived rovibronic energies of YO. The latter is based on the experimental data from the literature, either original laboratory line positions whenever available or spectroscopic constants. Using effective Hamiltonians to reconstruct molecular energies in place of the original experimental data is less than ideal, as it lacks the information on any local perturbations, which is critical when using it to fit the spectroscopic model.
Although ExoMol line lists, including BRYTS, are usually intended for astrophysical applications in hot atmospheric environments, YO is one of the molecules used in cooling applications, where our line list may also be useful.
_Ab initio_ calculations, especially MRCI, for transition metal species are still a big challenge, and therefore ultimately the laboratory data (transition frequencies, intensities, dipoles, lifetimes) are a crucial source of information for producing useful line lists. For YO we were lucky to have _ab initio_ PECs of the excited electronic states of CCSD(T) quality, while everything else had to rely on the fit to the experiment.
In this work, the hyperfine structure of the YO rovibronic states was ignored, mostly due to the lack of experimental data.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(f\) & \(i\) & \(A_{fi}\) (s\({}^{-1}\)) & \(\bar{\nu}_{fi}\) \\ \hline
109581 & 110319 & 3.2942E+00 & 10000.000083 \\
29391 & 29538 & 2.8508E+03 & 10000.000408 \\
125490 & 124575 & 4.783E+01 & 10000.000458 \\
68522 & 16749 & 2.8177E+03 & 10000.000446 \\
37951 & 38099 & 2.4501E-01 & 10000.001410 \\
76842 & 76441 & 2.7473E-01 & 10000.001697 \\
53135 & 52194 & 5.5318E-03 & 10000.001811 \\
48271 & 48407 & 2.4789E-01 & 10000.001935 \\
41206 & 40812 & 2.4740E-02 & 10000.002216 \\
124421 & 124076 & 2.1636E+01 & 10000.002623 \\
13395 & 14082 & 1.7944E-02 & 10000.003270 \\
14204 & 13272 & 3.2344E-04 & 10000.003821 \\ \hline \hline \end{tabular}
\(f\): Upper state counting number;
\(i\): Lower state counting number;
\(A_{fi}\): Einstein-\(A\) coefficient in s\({}^{-1}\);
\(\bar{\nu}_{fi}\): transition wavenumber in cm\({}^{-1}\).
\end{table}
Table 2: Extract from the transitions file of the line list for YO.
Figure 14: Different electronic band components of the absorption spectrum simulated at 2000 K using Lorentzian line broadening of 1 cm\({}^{-1}\) for each line computed at a resolution of 1 cm\({}^{-1}\).
Figure 12: Partition functions of YO: from this work (solid line), from Vardya (1970) (filled circles) and from Barklem & Collet (2016) (open squares). The latter two were multiplied by a factor of 2 to account for the different treatment of nuclear statistics.
Figure 13: The simulated YO absorption spectrum computed at different temperatures. We adopt a Lorentzian line broadening of 1 cm\({}^{-1}\) for each line which is computed at a resolution of 1 cm\({}^{-1}\). We see the intensity deviation is greatest around 0.5 and 0.6 \(\mu\)m where the \(X\,^{2}\Sigma^{+}\)\(\rightarrow\)\(A\,^{2}\Pi\) and \(X\,^{2}\Sigma^{+}\)\(\rightarrow\)\(B\,^{2}\Sigma^{+}\) bands dominate opacity.
Should it become important for YO spectroscopic applications to include hyperfine effects, the methodology to compute hyperfine-resolved energies and spectra is readily available as implemented in Duo (Qu et al., 2022, 2023; Bowesman et al., 2023).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline State & \(v\) & [77LiPa] & [17ZnZhZh] & [19SmSoYu] & This work \\ \hline \(A\,^{2}\Pi_{1/2}\) & 0 & \(33.0\pm 1.3\) & & 22.6 & 35.3 \\ & 1 & \(36.5\pm 2.4\) & & 23.0 & 35.9 \\ \(A\,^{2}\Pi_{3/2}\) & 0 & \(32.3\pm 0.9\) & & 20.9 & 20.9 \\ & 1 & \(30.4\pm 1.8\) & & 21.3 & 32.6 \\ & 2 & \(33.4\pm 1.5\) & & 21.6 & 33.2 \\ & 6 & \(41.6\pm 2.1\) & & 29.2 & 39.5 \\ \(B\,^{2}\Sigma^{+}\) & 0 & \(30.0\pm 0.9\) & \(38\pm 5\) & 32.5 & 31.1 \\ & 1 & \(32.5\pm 1.2\) & & 34.3 & 30.8 \\ \(D\,^{2}\Sigma^{+}\) & 0 & \(79\pm 5\) & 30.1 & 63.1 \\ & 1 & \(79\pm 5\) & 29.2 & 57.1 \\ \hline \end{tabular}
\end{table}
Table 4: Lifetimes of \({}^{89}\)Y\({}^{16}\)O states in ns: comparison with the measurements of [77LiPa] (Liu & Parson, 1977) and [17ZnZhZh] (Zhang et al., 2017), and the _ab initio_ calculations of [19SmSoYu] (Smirnov et al., 2019).
Figure 16: Comparison of the computed emission \(A\,^{2}\Pi\) – \(X\,^{2}\Sigma^{+}\) (0,0) band with the measurements of Simard et al. (1992) at \(T=77\) K and Lorentzian line profile of HWHM = 0.04 cm\({}^{-1}\).
Figure 17: Comparison of our computed \(D\,^{2}\Sigma^{+}\)–\(X\,^{2}\Sigma^{+}\) absorption spectra to the measurements of Zhang et al. (2017). The simulations assumed a cold rotational temperature of \(T_{\rm rot}\) = 50 K and a hot vibrational temperature of \(T_{\rm vib}\) = 800 K. A Doppler line profile corresponding to \(T_{\rm rot}\) = 50 K was used.
## Acknowledgements
This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme through Advanced Grant number 883830 and the STFC Projects No. ST/M001334/1 and ST/R000476/1. The authors acknowledge the use of the Cambridge Service for Data Driven Discovery (CSD3) as part of the STFC DiRAC HPC Facility (www.dirac.ac.uk), funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. A.N.S. and V.G.S. acknowledge support from Project No. FZZW-2023-0010.
## Data Availability
The states, transition, opacity and partition function files for the YO line lists can be downloaded from www.exomol.com. The open access programs Duo and ExoCross are available from github.com/exomol.
## Supporting Information
Supplementary data are available at MNRAS online. This includes the spectroscopic model in the form of the Duo input file, containing all the curves, parameters as well as the experimentally derived energy term values of YO used in the fit and the experimental line positions collected from the literature in the MARVEL format.
|
2301.08242
|
Modelling the cosmological Lyman-Werner background radiation field in
the Early Universe
|
The Lyman-Werner (LW) radiation field is a key ingredient in the
chemo-thermal evolution of gas in the Early Universe, as it dissociates H2
molecules, the primary cooling channel in an environment devoid of metals and
dust. Despite its important role, it is still not implemented in cosmological
simulations on a regular basis, in contrast to the ionising UV background. This
is in part due to uncertainty in the source modelling, their spectra and
abundance, as well as the detailed physics involved in the propagation of the
photons and their interactions with the molecules. The goal of this work is to
produce an accurate model of the LW radiation field at $z\geq6$, by
post-processing the physics-rich high-resolution FiBY simulation. Our novelties
include updated cross sections for H$_2$, H$^-$ and H$^+_2$ chemical species,
IGM absorption by neutral Hydrogen and various spectral models for Population
III and Population II stars. With our fiducial set of parameters, we show that
the mean LW intensity steadily increases by three orders of magnitude from
$z\sim23$ to $z\sim6$, while spatial inhomogeneities originate from massive
star-forming galaxies that dominate the photon budget up to a distance of
$\sim100$ proper kpc. Our model can be easily applied to other simulations or
semi-analytical models as an external radiation field that regulates the
formation of stars and massive black hole seeds in high-$z$ low-mass halos.
|
Andrea Incatasciato, Sadegh Khochfar, Jose Oñorbe
|
2023-01-19T18:57:57Z
|
http://arxiv.org/abs/2301.08242v2
|
# Modelling the cosmological Lyman-Werner background radiation field in the Early Universe
###### Abstract
The Lyman-Werner (LW) radiation field is a key ingredient in the chemo-thermal evolution of gas in the Early Universe, as it dissociates H\({}_{2}\) molecules, the primary cooling channel in an environment devoid of metals and dust. Despite its important role, it is still not implemented in cosmological simulations on a regular basis, in contrast to the general UV background. This is in part due to uncertainty in the source modelling, their spectra and abundance, as well as the detailed physics involved in the propagation of the photons and their interactions with the molecules. To overcome these difficulties, we present here a model (with the relative fit) for the mean LW intensity during the first billion years after the Big Bang, obtained by post-processing the high-resolution FiBY simulations with an approximated radiative transfer method that employs accurate cross sections for H\({}_{2}\), as well as for H\({}^{-}\) and H\({}_{2}^{+}\), the chemical species associated with its formation. Absorption by neutral Hydrogen in the IGM and various spectral models for Population III and Population II stars are also included. Our model can be easily applied to other simulations or semi-analytical models as an external homogeneous source of radiation that regulates the star formation in low-mass halos at high-z. We also show how to account for spatial inhomogeneities in the LW radiation field, originating from massive star-forming galaxies that dominate the photon budget up to distances of \(\sim 100\) proper kpc. Such inhomogeneities have a strong impact on the H\({}_{2}\) abundance and the feasibility of scenarios such as the formation of Direct Collapse Black Holes (DCBHs).
keywords: astrochemistry - molecular processes - stars: Population III - early Universe - radiative transfer - methods: numerical
## 1 Introduction
Molecular Hydrogen (H\({}_{2}\)) is a key ingredient of the early-universe chemistry, as it represents the main cooling channel of pristine gas at T \(<10^{4}\) K (Saslaw & Zipoy, 1967; Peebles & Dicke, 1968). Light primordial elements such as Hydrogen and Helium are efficient in their atomic form only above that temperature. On the other hand, heavier elements (collectively referred to as _metals_) do not form during the Big Bang Nucleosynthesis and are a product of the evolution and explosion of stars (Kobayashi et al., 2020), either in isolation or in binary systems; hence cooling due to metal-line transitions (Smith et al., 2008), C-, F-, and O-based molecules and dust grains (Hirashita & Ferrara, 2002) starts dominating the energy budget of the interstellar medium (ISM) only after the first chemical enrichment episodes.
The abundance of H\({}_{2}\) (and secondarily of other simple molecules, e.g. HD and HeH\({}^{+}\)) strongly influences the thermo-dynamical evolution of the gas that condenses in the first mini-halos forming at redshift \(z\leq 30\)(see e.g. Abel et al., 2000, or Galli & Palla, 2013 for a review). Molecular cooling allows to reach temperatures as low as \(\sim 200\) K, condense to high densities and form the first Population III (PopIII) stars (Haiman et al., 1996; Tegmark et al., 1997). Analytical models, 1D and 3D simulations all show that the compressional heating that develops while gas falls into dark matter halos is efficiently dissipated with a central H\({}_{2}\) fractional abundance of at least \(10^{-5}-10^{-4}\)(Abel et al., 2000; Machacek et al., 2001; Yoshida et al., 2006; Latif & Khochfar, 2019). This sets a clear consensus about the initial phase of metal-free PopIII star formation episodes, while models diverge on the final outcome of this process (the multiplicity and the Initial Mass Function - IMF - of PopIII stars), due to differences in the spatial and mass resolution, and in the treatment of accretion, gas chemistry and turbulence. (see Bromm & Larson, 2004 for a review, or e.g. Hirano et al., 2015; Chiaki & Yoshida, 2022 and Latif et al., 2022b for more recent discussions).
Nevertheless, PopIII stars are generally thought to be massive and hot (Bromm et al., 1999; Abel et al., 2002) and are predicted to emit a copious amount of energetic photons during their very short lifetime (Schaerer, 2002). They explode as violent supernovae, leaving black hole remnants with masses \(\sim 10-100\) M\({}_{\odot}\)(Fryer et al., 2001; Madau & Rees, 2001) and enriching the universe with metals (Heger & Woosley, 2002), that pave the way for the formation of the first proto-galaxies made of metal-poor Population II (PopII) stars (Bromm & Loeb, 2003).
Due to their peculiar features, PopIII stars represent also the most important source of Lyman-Werner (LW) photons at the Cosmic Dawn (e.g. Haiman et al., 2000; Agarwal et al., 2012). The LW radia
|
2310.11052
|
Investigating Threats Posed by SMS Origin Spoofing to IoT Devices
|
The short message service (SMS) is a service for exchanging texts via mobile
networks that has been developed not only as a means of text communication
between subscribers but also as a means to remotely manage Internet of Things
(IoT) devices. However, the originating number of an SMS can be spoofed. If IoT
devices authenticate administrators based on the originating number of an SMS,
the authentication is bypassed via SMS origin spoofing. Consequently, IoT
devices are at risk of accepting commands from attackers and performing
unauthorized actions. Accordingly, in this study, the specifications of major
cellular IoT gateways were evaluated by focusing on remote management via SMS,
and the authentication bypass hypothesis was verified. The results showed that
25 of the 32 targeted products supported SMS-based remote management, and 20
implemented authentication based on the originating number of the SMS.
Furthermore, by spoofing the originating number of the SMS, one product was
demonstrated to be remotely exploitable through authentication bypassing. Thus,
this study revealed the threats posed by SMS origin spoofing to IoT devices and
proved that SMS origin spoofing not only threatens text communication between
people but also puts machine communication at risk.
|
Akaki Tsunoda
|
2023-10-17T07:41:04Z
|
http://arxiv.org/abs/2310.11052v3
|
# Investigating Threats Posed by SMS Origin Spoofing to IoT Devices
###### Abstract.
The short message service (SMS) is a service for exchanging texts via mobile networks that has been developed not only as a means of text communication between subscribers but also as a means to remotely manage Internet of Things (IoT) devices. However, the originating number of an SMS can be spoofed. If IoT devices authenticate administrators based on the originating number of an SMS, the authentication is bypassed via SMS origin spoofing. Consequently, IoT devices are at risk of accepting commands from attackers and performing unauthorized actions. Accordingly, in this study, the specifications of major cellular IoT gateways were evaluated by focusing on remote management via SMS, and the authentication bypass hypothesis was verified. The results showed that 25 of the 32 targeted products supported SMS-based remote management, and 20 implemented authentication based on the originating number of the SMS. Furthermore, by spoofing the originating number of the SMS, one product was demonstrated to be remotely exploitable through authentication bypassing. Thus, this study revealed the threats posed by SMS origin spoofing to IoT devices and proved that SMS origin spoofing not only threatens text communication between people but also puts machine communication at risk.
Keywords: SMS, IoT, cellular IoT, remote management, spoofing
## 1. Introduction
The short message service (SMS) is used for remotely managing Internet of Things (IoT) devices. Originally, the SMS was developed for mobile subscribers to exchange text messages [(1)]. The capability of SMS to send and receive text over long-range wireless communications can also be utilized for communication with machines, such as IoT devices [(32)]. IoT devices are actively used as sensors to detect crop growth and weather conditions in farmlands, or vibrations in buildings, bridges, and other structures, and as tracking devices for delivery trucks [(14; 27; 44)]. Data observed from the real world using IoT devices are collected and analyzed by a management server to create new values. In these cases, most IoT devices are installed outdoors or in hard-to-reach locations. Thus, in many cases, short-range wireless communication cannot be used to access IoT devices, and it is not economically feasible to develop a new wired network to solve the problem. IoT devices installed in hard-to-reach locations can be accessed using existing SMS technology, as long as they are within the coverage area of the mobile network. Thus, the remote management of IoT devices can be facilitated by delivering commands to the devices via SMS.
However, the sender information in an SMS can be spoofed by attackers. The sender information of an SMS is specified by the sender's phone number (henceforth referred to as the "originating number") or a display name called the "sender ID" composed of alphanumeric characters. The sender ID can be specified as any string using the SMS gateway service. Thus, it can be spoofed as the name of a trusted entity [(20)]. However, short messages with specified sender IDs cannot receive a response and are unavailable in some countries [(34)]. Therefore, the originating number is generally used as the sender information to enable two-way communication between subscribers or machines via SMS. Previous studies have shown that originating numbers can also be spoofed as arbitrary phone numbers [(40; 45; 41; 24)]. Short messages with spoofed sender information have been exploited in various cybercrimes, such as phishing scams [(18; 33; 39)]. For example, an attacker can impersonate a widely trusted entity and send a message including the URL of a website that requires the victim to enter authentication or payment information. Unfortunately, because the sender's information displayed in short messages cannot be trusted, detection and mitigation measures for such attacks have become crucial research topics [(28; 4; 21; 4)].
If IoT devices authenticate the sender of an SMS command based on the originating number, the authentication can be bypassed via SMS origin spoofing. Remote management by an administrator to perform reboots and configuration changes is essential for IoT devices installed in hard-to-reach locations. An IoT device that receives a
|
2307.03099
|
Couplet scoring for research based assessment instruments
|
Contemporary content-focused research-based assessment instruments (RBAI's)
typically use instrument items (i.e., questions) as the unit of assessment for
instrument scoring, reporting, and validation. Couplet scoring, introduced in
this paper, employs the couplet as an alternative unit of assessment. A couplet
is essentially an item viewed and scored through the lens of a specific
assessment objective (AO), which is a proficiency the assessment aims to
measure. With couplet scoring, a single item may have more than one AO and
therefore more than one couplet. Here, we introduce couplet scoring, discuss
its affordances and limitations, and use both a recently developed content RBAI
on measurement uncertainty, as well as an established RBAI (the Force Concept
Inventory) to ground our discussion.
|
Michael Vignal, Gayle Geschwind, Marcos D. Caballero, H. J. Lewandowski
|
2023-07-06T16:15:23Z
|
http://arxiv.org/abs/2307.03099v2
|
# Couplet scoring for research based assessment instruments
###### Abstract
Contemporary content-focused research-based assessment instruments typically use instrument items (i.e., questions) as the unit of assessment for instrument scoring, reporting, and validation. However, traditional item-based scoring has a number of limitations, including several arising from the use of the common assessment development conventions of single-construct items, unidimensionality, and single-correct-answer items. Couplet scoring, introduced in this paper, employs the couplet as an alternative unit of assessment, where a couplet is essentially an item viewed and scored through the lens of a specific assessment objective (AO). With couplet scoring, a single item may have more than one AO and therefore more than one couplet. In this paper, we outline the limitations of traditional item scoring, introduce couplet scoring and discuss its affordances (especially as they relate to limitations of item scoring), and use a recently developed content RBAI to ground our discussion.
## I Introduction
Research-based assessment instruments (RBAIs) are surveys, questionnaires, and other tools that help educators make informed pedagogical and curricular decisions [1, 2, 3, 4]. These instruments can be used to gather information on student beliefs, experiences, proficiencies, and other aspects of education that are of interest to educators. Unlike course quizzes and summative exams, which typically assess individual students, RBAIs are intended to identify trends in populations of students [1, 2]. Here, we focus on primarily RBAIs that measure student proficiency in specific content areas, which we refer to as _content RBAIs_.
Recently, we created a physics content RBAI for measurement uncertainty [5], employing assessment objectives (AOs) [6] throughout the development process. AOs are statements (similar in structure to learning objectives [7]) about the content the instrument aims to assess. For our RBAI, these AOs are integral to the interpretation, scoring, and reporting of student responses, as each item is designed to align with one or more AO.
Our use of AOs supported developing an instrument that aligned with our assessment priorities. Indeed, the usefulness of RBAIs depends, in large part, on the degree to which an instrument measures what it purports to measure and how meaningfully these measures are reported to implementers [1, 8, 9, 10, 11, 12, 13, 4]. Typically, these measures are item scores, and they are often reported individually or as a total instrument score [1, 13].
In this paper, we introduce and formalize our novel, AO-aligned scoring paradigm for content RBAIs called _couplet scoring_, where a couplet is a scorable item-AO pair. In this paradigm, it is couplet scores, rather than item scores, that serve as the unit of assessment for reporting student proficiencies and validating the instrument.
We posit that couplet scoring offers a number of affordances as compared to traditional item scoring. However, this new scoring paradigm challenges a number of common conventions of contemporary assessment development, specifically single-construct items, unidimensionality, and single-correct-answer items. Through exploring these conventions, we aim to identify their purposes and consider how couplet scoring can achieve these same goals while also addressing many of their limitations.
The instrument for which we developed couplet scoring is the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE) [5]. SPRUCE was created using a modified implementation of Evidence Centered Design (ECD) [14], an assessment development framework. SPRUCE is designed to fulfill the need for a widely-administable assessment of measurement uncertainty appropriate for introductory (first and second year) physics lab classes. Although SPRUCE was developed in parallel with couplet scoring and is used in this paper to demonstrate how this scoring paradigm might be applied to content RBAIs, the focus of this paper is couplet scoring and not SPRUCE. Currently, several other papers making use of couplet scoring to validate SPRUCE and examine student proficiency with measurement uncertainty are in development.
Specifically, this paper has the following research goals:
1. Review item scoring, including the affordances and limitations of single-construct items, unidimensionality, and single-correct-answer items;
2. Introduce couplet scoring and explore its affordances and limitations; and
3. Demonstrate how couplet scoring can be employed in content RBAI development.
In Sec. II, we detail the affordances and limitations of single-construct items, unidimensionality, and single correct answer options. Sec. III then introduces our new scoring paradigm and how it may address some of the limitations of traditional item scoring. Details of the implementation of couplet scoring are shared in Sec. IV, and Sec. V discusses possible implications for other types of evaluation. A summary and discussion of future work are presented in Sec. VI.
## II Assessment goals, conventions, and assumptions
Research-based assessment instruments (RBAIs) are tools used by educators and researchers to gather information from students about teaching, learning, student experiences, and other aspects of education to inform curricular and pedagogical decisions. These instruments are "developed based on research into student thinking...[to] provide a standardized assessment of teaching and learning" when administered to students [3]. Importantly, these instruments are not intended to help educators evaluate individual students or assign grades [1; 2].
In physics, most RBAIs are designed to measure student proficiencies in specific content areas, such as in mechanics [15; 16], electricity and magnetism [17; 18; 19], quantum mechanics [20], thermodynamics [21], or laboratory settings [5; 22; 23; 24; 25]. These _content RBAIs_ (sometimes called concept inventories [2] or conceptual assessment instruments [11]) have proven valuable in identifying instructional weaknesses [2; 26] and evaluating the effectiveness of instructional changes [27; 28; 29; 30; 2; 2] in physics education. These and other instruments can be found on the PhysPort website [31].
The effectiveness of an RBAI is largely contingent on how well it "measures what it says it measures," a property known as validity [1]. As validity is a major concern of assessment developers and users, the investigation of various types of validity and their metrics is a major topic of scholarship [1; 8; 9; 10; 11; 12; 13; 32]. The 'things' that an RBAI attempts to measure are typically constructs (e.g., proficiency, knowledge, affect), which are ideas of interest to the researcher that are not directly measurable (unlike, for example, length) [10].
With as important as validity is, it is perhaps not surprising that many conventions have arisen to help assessment developers provide evidence of validity for assessment instruments. Three such conventions are single-construct items, unidimensionality, and single-correct-answer items. Importantly, we note that these conventions are ultimately in service to validity and intended to improve interpretations of instrument results: they are not themselves features or metrics of validity.
The rest of this section explores these three conventions, including their origins and affordances. We also discuss their limitations for content RBAIs (especially in physics), as well as how some researchers have navigated these limitations.
### Single-construct items
The first common convention of RBAI development we discuss is that of having each item in an assessment measure only a single construct [1; 8; 9; 10; 12; 13; 33]. We refer to this convention as single-construct items, which we discuss first as it informs the latter two conventions.
This convention impacts all stages of item development and validation. When creating items, each item is typically designed to align with a particular construct. The alignment of the created items can then be verified through consultation with independent experts via exercises such as sorting items into categories based on their constructs [1; 8; 9]. Finally, after the instrument has been piloted and sufficient data has been collected from students using the instrument, a number of statistical approaches can be used to empirically verify the association of each item with a construct [10].
#### ii.1.1 Reasons for having single-construct items
One reason for having only single-construct items is that it helps avoid artificial statistical correlations between multiple instrument constructs, as two constructs can become inappropriately correlated if both are predictive of the same response to an item [8; 9]. This consideration, as with much of assessment development theory and practice, emerged from the field of psychology, specifically the study of correlations between psychological constructs. While the correlation of constructs has been a focus of some RBAIs in physics (e.g., [34]), such RBAIs do not typically measure student proficiencies in specific content areas.
Having only single-construct items in an assessment instrument can also simplify the scoring of item responses and the interpretation of those scores. That is, for content RBAIs, if an item is a measure of only one construct (i.e., one proficiency), then its score can be interpreted as a quantitative representation of student proficiency regarding that construct (as opposed to a representation of some other construct or factor). Scoring along constructs is discussed in more detail in Sec. II.3.
The fidelity of these interpretations can be further improved by looking at all of the scores for a single construct, obtaining "a reliable total score out of unreliable items" [10]. Essentially, by viewing each item as a measure of the construct that has some random uncertainty (i.e., unreliability), that uncertainty can be minimized by considering the collective result of multiple, standalone items [10].
#### ii.1.2 Limitations resulting from single-construct items
Single-construct items are limited in terms of what they can assess as, by design, they cannot measure proficiency in multiple constructs within a single item. This limitation impacts how developers address the assessment of practices (which "require the coordination of both knowledge and skill" [35]) and of the synthesis of multiple constructs in more complicated scenarios. Here, we call such synthesis and practices constructs "composite constructs" to distinguish them from the "simple" component constructs that contribute to them.
We note that the goal of many contemporary RBAIs, especially in STEM, is to measure not just knowledge of concepts, but also student proficiency with practices [36, 37, 38, 32]. Instrument developers thus have three options for creating single-construct items that assess both simple and composite constructs, with each of these options reflective of a trade-off between the length of an instrument and its scope and resolution.
The first option is to assign separate constructs (and items) to each simple and composite construct an instrument aims to assess. As an example, such an instrument that aims to measure student proficiency with collisions would need to have separate items for the simple construct of conservation of momentum for collisions, the simple construct of conservation of energy for collisions, and the composite construct of conservation of both energy and momentum together. This approach requires a relatively large number of items and risks feeling redundant for users, but it can provide a lot of detailed information about the topic being assessed.
The second option is rooted in the psychometric concept of linear separability, which describes how some knowledge structures can be cleanly partitioned into constituent ideas (i.e., constructs) [39, 40]. Essentially, this option assumes that proficiency with simple constructs provides sufficient information such that proficiency with composite constructs can be inferred and need not be measured separately or directly. This approach allows instruments to have relatively few items, as none are needed for assessing composite concepts. However, as practices [35] definitionally require the coordination of multiple constructs, this approach is not appropriate for many content RBAIs. We also note that this approach avoids introducing artificial statistical correlations between constructs, as the constructs are, by design, separable and independent. However, while avoiding artificial correlations is an important consideration in psychometrics, content RBAI developers and users are generally more concerned with measures of proficiency than with correlations between proficiencies.
The final option for instrument developers is less of a design decision and more of a framing decision: if a construct is defined in broad enough terms to incorporate simple and composite concepts, then many items (of various complexities) can contribute to the measurement of this single construct. While this approach has many practical benefits, there are many instances (for content RBAIs in STEM, especially) in which empirical analysis of responses has shown that various items do not statistically align with the single intended construct and that a set of finer-grain constructs are needed to adequately describe the instrument items [41, 36, 42].
Given the frequent violations of this convention in physics content RBAIs (e.g., [41, 42]), it would be easy to assume that content RBAI developers are doing a poor job of categorizing their instrument constructs and items appropriately. However, we posit that these violations are a reflection of a tension between content and psychometric priorities in assessment design. The idea that single-construct items can sufficiently capture student reasoning and proficiencies is not always reasonable [10], especially in physics and other sciences [36]. Additionally, items created to align with only one objective may have other undesirable properties, such as feeling overly contrived and unrealistic.
While we have not encountered a content RBAI that intentionally violates this convention, researchers have proposed ways of adapting instrument scoring when this issue arises. For example, in their paper discussing how the Force Concept Inventory (FCI) has many multi-construct items, Stewart _et al._ suggest that it may be desirable for "a linear combination of the scores on a subset of items [to provide] an estimate of the ability for a each [sic] principle, thus giving practitioners a detailed characterization of their learning outcomes" [41].
### Unidimensionality
A unidimensional assessment instrument is one that "measures only one thing" [10] with all of its items: it is a single-construct instrument. Unidimensionality is often an explicit goal of assessment developers: Engelhardt states, in her manuscript on Classical Test Theory (CTT), "the objective of any test is to have all items highly correlated with the total test score" [1], which is only reasonable for single-construct items in a unidimensional instrument. Unidimensional instruments offer a number of advantages over multi-dimensional instruments and, perhaps unsurprisingly, many of the measures, affordances, and limitations of unidimensionality overlap with those of single-construct items.
#### ii.2.1 Reasons for developing a unidimensional instrument
Assessments that are unidimensional benefit from streamlined interpretation of student scores for both individual items (as with single-construct items) and the instrument as a whole [10]. Most instruments add together scores for every item to obtain a total assessment score, which allows for fast and easy judgments about student proficiency by comparing this number to that of
other implementations, such as over time, before and after a course transformation, or between similar courses.
For assessment developers, unidimensional instruments also allow for relatively straightforward statistical validation techniques, including CTT [1; 12] and simple item response theory (IRT) models [10]. Indeed, Crocker and Algina note that "most applications of [item response] theory assume that a single latent trait [(i.e. proficiency)] accounts for the responses to items on a test" [12], and thus that the instrument measures one and only one construct.
#### ii.2.2 Limitations resulting from unidimensionality
Despite the affordances described above, unidimensionality also constrains assessments in undesirable ways beyond those described for single-construct items. The same analyses that found the FCI to have multi-construct items found that "the assumption of unidimensional IRT, that a single ability parameter captures the students' facility with the test material...seems unlikely for the FCI, which measures a number of different facets of Newton's laws and kinematics" [41]. This finding suggests that even if items on this instrument were re-worked to be single-construct, the instrument itself would likely still contain many constructs.
Additionally, while unidimensionality has a clear theoretical definition, there are many different ways to establish how many dimensions an instrument has, and the various methods will often yield conflicting results [10]. Therefore, researchers should be wary of selecting a method based on the result it produces and of adopting an 'umbrella construct' approach (as discussed in Sec. II.1.2) in which the umbrella inappropriately extends to cover the entire instrument.
As with single-construct items, rather than simply regarding instruments that violate unidimensionality as being poorly developed, it is worth considering that the limitations of unidimensionality may simply be untenable in many situations. Nunnally and Bernstein state that "measures designed to assess broad, useful traits may not fit any of these [unidimensional] models, and the misfit may reflect desirable variation" [10].
To capture more information from existing RBAIs that may not be truly unidimensional, researchers have worked to identify and report sub-scale mean scores in addition to an overall mean score [13; 41; 42], where each sub-scale essentially represents an instrument construct. However, such sub-scale analyses are typically time- and labor-intensive to develop and external to the instrument report, and such approaches are often not considered in the design and validation of the instrument [13]. As an example, Stewart _et al._ [41] and Hansen and Stewart [42] found the FCI and another popular physics content RBAI to have incoherent sub-scales, limiting the usefulness of such post hoc sub-scale approaches.
### Single-correct-answer items
We previously stated that the convention of having single-construct items allows for easy scoring of items. An underlying reason for this easy scoring is that it is often assumed that an item addressing a single construct will have a single correct response [1; 12]. Closed-response format items, especially multiple-choice items, are generally developed such that one response is considered the correct choice, as typically determined by alignment with expert response [1]. This convention especially supports the choice of multiple-choice items as opposed to multiple-response (sometimes called multiple-select or multiple-choice-multiple-response [43; 44]) and coupled multiple response items [19].
#### ii.3.1 Reasons for single-correct-answer items
Single-correct-answer items require minimally complex scoring mechanisms: the correct answer is given full credit, and a number of tempting but incorrect answer options typically award zero credit. This is done to ensure that items are scored as objectively as possible [1; 10] and to generate scores that work well with validation algorithms [10]. Even when using item formats other than multiple-choice, having a single correct answer aligns with instructor and student expectations around assessment, which we believe is a benefit for uptake of an instrument.
#### ii.3.2 Limitations of single-correct-answer items
While single-correct-answer items are easy to interpret and score, there are many limitations to such simple items. For example, although experts would generally provide the same responses to items probing factual knowledge, items that probe practices may involve multiple equally valid approaches with potentially multiple equally valid conclusions. This may be especially true in physics and other STEM contexts [36].
Even for items that do have a single correct or best answer, traditional scoring schemes often fail to capture useful information from student incorrect answers. Generally, tempting distractors are conceptually correct in some ways but incorrect in others, and so which distractors are selected can provide information on student conceptual reasoning or proficiency that is not typically captured in the item score.
Much work has been done to learn about student reasoning and proficiencies from their incorrect answers [45; 46], however, work of this kind is generally outside the scope of instrument development and scoring. To try and overcome this limitation, researchers have made efforts to encode some information about the quality of incorrect answers through partial credit scoring schemes, where some distractors, rather than being worth
no points, are worth a fraction of the points that are given for the correct answer. While this scoring model can be more sensitive to student proficiencies, the fraction of a point earned for these answers is subjective and can restrict which validation algorithms one can use, removing one of the advantages gained by having single-correct items in the first place. Additionally, the nuance of what part of a response earned a student credit is difficult, if not impossible, to convey in an overall item score.
## III Couplet Scoring for Research-Based Assessment Instruments
In the previous section, we discussed some affordances and limitations of three common conventions of traditional assessment development: single-construct items, unidimensionality, and single-correct-answer items. In this section, we present our novel scoring paradigm, couplet scoring, and explore how it might address some of the limitations of traditional item scoring, including those arising from these conventions.
Central to couplet scoring is the use of assessment objectives (AOs), which we introduced previously [5; 6] and summarize below. We then introduce couplet scoring, including scoring details, examples, and affordances. Sec. IV discusses practical details, considerations, and limitations of implementing couplet scoring.
### Assessment Objectives
AOs are "concise, specific articulations of measurable desired student performances regarding concepts and/or practices targeted by the assessment" [6]. In the language of assessment development, AOs are the constructs the assessment aims to measure, only they are articulated as objectives. Table 1 includes some example AOs from SPRUCE.
In a previous paper [6], we outlined four broad affordances of AOs for instrument development: they facilitate incorporating instructor priorities into the instrument, they provide a means for evaluating and scoring authentic items, they provide a structure for feedback to implementers, and they are a means for communicating the content of the instrument to potential implementers [6].
### Couplet Scoring
Couplet scoring, as the name suggests, is a scoring paradigm in which item-AO couplets (or simply "couplets") are scored. This is in contrast to traditional scoring paradigms in which each item is scored.
Conceptually, a couplet is an assessment item viewed and scored through the lens of a particular AO. In other words, in couplet scoring, multi-AO (i.e., multi-construct) items have a couplet for each AO, and each couplet is scored by considering only that couplet's AO. This feature of couplet scoring violates the common conventions of single-construct items and unidimensionality.
It is possible (and, for SPRUCE, fairly common) for a couplet to award the maximum number of points to several different possible responses, even for closed-form items such as multiple-choice items. This feature of couplet scoring violates the common convention of single-correct-answer items.
As an example of an item scored using couplet scoring, SPRUCE item 3.3 (shown in Fig. 1) tasks students with determining the period of oscillation for a mass hanging vertically from a spring. This item has two AOs:

* H2: Propagate uncertainties using formulas
* H3: Report results with uncertainties with correct significant digits
We refer to item 3.3's two couplets as "3.3 H2" and "3.3 H3".
Figure 1: SPRUCE item 3.3 (with alternate numbers to protect test security), in which students are attempting to determine the period of oscillation for a mass hanging vertically from a spring. This item has two AOs, H2 and H3. The item prompt reads: "You and your lab mates decide to measure 20 oscillations at a time. Using a handheld digital stopwatch, you measure a time of 28.42 seconds for 20 oscillations. You estimate the uncertainty in your measurement of 20 oscillations to be 0.4 seconds, based on an online search for human reaction time. What value and uncertainty do you report for the period of **a single oscillation**?" The six answer options are listed in Table 2.

| AO | Description |
| --- | --- |
| S2 | Identify actions that might improve precision |
| S3 | Identify actions that might improve accuracy |
| H1 | Identify when to use fractional versus absolute uncertainty |
| H2 | Propagate uncertainties using formulas |
| H3 | Report results with uncertainties with correct significant digits |
| H4 | Use concepts of error propagation to identify the largest contributor to uncertainty in a calculated value |
| D7 | Determine if two measurements (with uncertainty) agree with each other |

Table 1: Examples of AOs from SPRUCE

Even though the proficiencies represented by AOs H2 and H3 both impact the response a student would provide on item 3.3, these proficiencies are conceptually independent, and so it is straightforward to assess student responses independently for whether they correctly propagated uncertainty (H2) and whether they reported their value and uncertainty with correct significant digits (H3). For 3.3 H2, applying the appropriate uncertainty propagation formula simplifies to dividing the uncertainty in the time for 20 oscillations by 20 to obtain the uncertainty for the period of a single oscillation, yielding a value of \(\pm 0.02\) s. This uncertainty value appears in three of the answer options, and so the selection of any of these three responses awards a full point for couplet 3.3 H2. Similarly, two of the six answer options are presented with appropriate numbers of significant digits for the value of the period based on the value of the uncertainty, and thus two answer options receive full credit for 3.3 H3.
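The propagation step for couplet 3.3 H2 can be written out explicitly. Using the numbers quoted in the item prompt, and writing \(t_{20}\) and \(\delta t_{20}\) for the measured time and its uncertainty over 20 oscillations (our notation), the period of a single oscillation and its uncertainty are

\[T = \frac{t_{20}}{20}, \qquad \delta T = \frac{\delta t_{20}}{20} = \frac{0.4\ \mathrm{s}}{20} = 0.02\ \mathrm{s}.\]

Any answer option carrying \(\pm 0.02\) s therefore earns the point for 3.3 H2, independent of how many significant digits are used for the period itself, which is what 3.3 H3 assesses.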
It is also possible for AOs (and therefore couplets) for an item to have conceptual overlap, and since each AO is reported independently, the overlap is not obscured in a single item score. For example, SPRUCE item 1.2 (shown in Fig. 2) has two AOs relating to error propagation:
* H1: Identify when to use fractional versus absolute uncertainty
* H4: Use concepts of error propagation to identify the largest contributor to uncertainty in a calculated value
While H1 and H4 have some conceptual overlap with each other (and with H2), these AOs are included in the assessment as separate AOs because instructors explicitly discussed these proficiencies as being valuable in and of themselves even if students do not use them to eventually perform a full, formal error propagation [47]. For example, these proficiencies may help students determine how to best focus their efforts to reduce uncertainty in a lab experiment.
The scoring for item 1.2's couplets awards points for the answer options representing the correct use of fractional versus absolute uncertainty (AO H1) and for overall correct reasoning regarding which variable contributes more to the uncertainty in the final calculated result (AO H4). The ability to evaluate an item and response along two (or more) overlapping AOs, along with a number of other features of couplet scoring, is a benefit of couplet scoring elaborated on in the following section.
### Couplet Scoring Affordances
We now describe the affordances of couplet scoring, expanding on the affordances provided by AOs discussed previously [6], as couplet scoring is built upon AOs. We also discuss, when appropriate, the limitations of traditional item scoring addressed by these affordances. This information is summarized in Table 3 and elaborated on below.
#### iii.3.1 Item Alignment with Assessment Priorities
During instrument development, once items have been created, they undergo an iterative process of piloting and refining, which may change elements of the item such as the item context, the amount of information provided in the prompt, and the available answer options. Each of these changes has the potential to shift the item away from its intended objective. With couplet scoring, the scoring process itself requires that developers revisit the intended objective(s), and so the chance that the final item has shifted away from its intended objective(s) is minimized. Alternatively, if the item does shift, it may be appropriate to align the item with a different AO, either replacing or adding to the previously targeted AO(s).
In traditional item scoring, where the scoring schemes do not existentially depend on the item's objective(s), developers must be careful and take additional steps to ensure that the final products align with the intended assessment goals. This is necessary to ensure that the correctness of a response, and therefore the item score, is actually a measure of the targeted proficiency and not of another (though likely related) proficiency.
| Answer Option | Response | H2 | H3 |
| --- | --- | --- | --- |
| A | \(1.412\pm 0.02\) s | 1 | 0 |
| B | \(1.412\pm 0.4\) s | 0 | 0 |
| C | \(1.41\pm 0.02\) s | 1 | 1 |
| D | \(1.41\pm 0.4\) s | 0 | 0 |
| E | \(1.4\pm 0.02\) s | 1 | 0 |
| F | \(1.4\pm 0.4\) s | 0 | 1 |

Table 2: Example scoring for couplets of item 3.3

Figure 2: SPRUCE item 1.2 was developed to measure two AOs related to error propagation, H1 and H4 (see Table 1). The item provides measurements and uncertainties for \(t\) and \(d\): \(t = 0.987\) s, \(d = 1.31\) m, \(\delta t = 0.006\) s (the uncertainty in \(t\)), and \(\delta d = 0.01\) m (the uncertainty in \(d\)).
Additionally, having the instrument constructs clearly articulated as AOs facilitates effective verification of alignment between items and constructs by independent expert consultants [1; 8; 9].
#### iii.3.2 Item Alignment with Instructional Priorities
Prior to using a content RBAI, instructors and researchers need to ensure that the instrument aligns with the content they wish to assess. By having the instrument constructs articulated as AOs (that are used in item development and scoring), it is straightforward to determine if the instrument objectives align with a course's learning objectives. If an instructor finds that the AOs for an assessment match their own learning objectives, then it is likely that the assessment will be of use to them. This alignment, known as curricular validity, is "how well test items represent the objective of the curriculum" [48].
For many instruments, a clear list of instrument constructs is not articulated. With other instruments, the constructs are only listed in academic articles and not presented to implementers alongside the instrument. In either instance, an implementer would need to either go through all of the instrument items and interpret the intent of the developer and/or review published academic articles detailing instrument development in order to establish curricular validity.
#### iii.3.3 Authentic Items
Though not unique to physics, the synthesis of multiple ideas is often valued in physics instruction and evaluation. This means that items that more authentically depict interesting and relevant physics scenarios often incorporate multiple concepts as a reflection of the interconnected nature of physics [36] and thus may include both composite and simple constructs, as discussed in Sec. II.1.2.
While traditional assessment approaches require each assessment item to relate to only a single construct, couplet scoring allows for more interesting and appropriate assessment items that can be scored and interpreted to produce meaningful feedback to implementers. This allowance serves to alleviate the tension, described in Sec. II.2.2, between what makes a physics assessment good in terms of the physics and what makes it good in terms of psychometric properties.
Additionally, it is worth reiterating that, for content RBAIs, implementers are generally more interested in individual proficiencies than in statistical correlations between proficiencies. As a result, developers using couplet scoring are free to include rich, authentic items incorporating multiple instrument constructs without concern that such items will introduce artificial, statistical correlations between constructs (a major concern in psychology and other fields) when conducting empirical analyses of instrument data.
| Affordance | Couplet-Scoring Design Feature | Addressed Item-Scoring Limitation |
| --- | --- | --- |
| Item alignment with assessment priorities | Alignment is embedded into item development and scoring, as items are developed to be scored by AO, and verification of this alignment is supported by having concise and explicit AOs | Item scoring schemes do not existentially depend on alignment with instrument constructs or objectives, and verification of alignment between items and constructs requires consistent and uniform interpretation of the constructs |
| Item alignment with instructional priorities | Instrument AOs (that inform and contextualize scoring) can be directly compared with course learning objectives | Instrument constructs are not necessarily framed in terms of objectives |
| Authentic items | Scoring by AO allows for complex, authentic items with nuanced scoring | Items are designed to assess only one construct, which can lead to long assessments and items that feel contrived |
| Scaffolded scoring | Developers creating a scoring scheme need only consider one AO at a time | Developers must ensure item score fully captures all relevant ideas |
| Partial credit | Largely precluded by scoring by AO, which often resolves what is "partially" correct and incorrect along separate AOs | Does not capture "partially" correct responses, or this information is obscured by arbitrarily-weighted overall item scores |
| Data yield | Items with multiple AOs yield more data than items with just one AO | All items effectively have one AO |
| User experience | Indistinguishable from traditional instruments | |
| Validation | Can use many traditional approaches with couplet scores as the unit of assessment | |
| Reporting scores by AO | Scores are reported by AO in a manner that is clear and actionable | Scores are reported in aggregate and are typically not actionable |

Table 3: Affordances
#### iii.3.4 Scaffolded Scoring
With couplet scoring, the development of a scoring scheme is scaffolded by considering which item responses indicate proficiency in a particular AO. This feature is especially important for scoring schemes with more complex item types, such as multiple response items and coupled multiple response items.
Anecdotally, when developing couplet scoring schemes for SPRUCE, several members of the research team expressed that this scaffolding made developing a scoring scheme for SPRUCE's coupled multiple response items feel faster, easier, and less subjective than for coupled multiple response items developed during previous projects.
Scoring by AO also reduces the need for partial credit scoring, as discussed below.
#### iii.3.5 Partial Credit
For closed-form items, students select a response from a list of options that are generally made up of a correct answer and several tempting distractors. These distractors are most effective when they represent an answer that one would arrive at by employing mostly correct reasoning but also a common misunderstanding or an easy mistake [1]. However, educators often wish to distinguish between different incorrect responses, such as between a response resulting from a simple mistake and one resulting from a fundamental misunderstanding or misapplication of a core concept. As a result, researchers will sometimes employ partial credit scoring schemes.
In couplet scoring, each of the lines of reasoning that one needs to employ to obtain the correct answer can often be represented by an AO, and so various distractors may be completely correct in terms of a specific AO, while being incorrect in terms of others. As the item is scored by AO, it is possible for multiple responses to receive full credit in terms of one AO, while not receiving credit for another AO. This can largely eliminate the need for partial credit, which requires arbitrary weights for partially correct responses. It also better captures and reports the elements of desired reasoning that students employ, since two mostly correct responses will result in meaningfully different couplet scores that do not get obscured by representing the measures of student reasoning with a single number.
For SPRUCE, couplet scoring eliminated the need for partial credit on all items except coupled multiple response items. In instances where, under item scoring, we might have considered awarding partial credit for a particular response, we instead were able to award full credit for one AO and zero credit for another. For example, this can be seen in the couplet scores in Table 2 all being zero or one.
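To make the mechanics concrete, the sketch below shows one way the Table 2 scheme could be encoded and applied; the data structures and function are our own illustration, not SPRUCE's actual scoring code.

```python
# Hypothetical encoding of the Table 2 couplet scores for SPRUCE item 3.3.
# Each answer option maps to one score per couplet (AO); no weights are needed.
ITEM_3_3_COUPLETS = {
    "A": {"H2": 1, "H3": 0},  # 1.412 +/- 0.02 s
    "B": {"H2": 0, "H3": 0},  # 1.412 +/- 0.4  s
    "C": {"H2": 1, "H3": 1},  # 1.41  +/- 0.02 s
    "D": {"H2": 0, "H3": 0},  # 1.41  +/- 0.4  s
    "E": {"H2": 1, "H3": 0},  # 1.4   +/- 0.02 s
    "F": {"H2": 0, "H3": 1},  # 1.4   +/- 0.4  s
}

def score_response(option):
    """Return one score per couplet (AO) for a single student's response."""
    return ITEM_3_3_COUPLETS[option]

# Two "mostly correct" responses stay distinguishable in their couplet scores:
print(score_response("A"))  # {'H2': 1, 'H3': 0} -- propagation right, significant digits wrong
print(score_response("F"))  # {'H2': 0, 'H3': 1} -- significant digits right, propagation wrong
```

Because each couplet is scored on its own, there is no need to decide what fraction of the item a response like A or F is "worth."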
#### iii.3.6 Data Yield
Items that align with more than one AO will have more than one couplet, and, as the couplet is the unit of assessment in couplet scoring, this means that using couplet scoring allows researchers and instructors to get more data from the same number of items as compared to traditional item scoring. This feature can help to reduce the overall number of items in an instrument, making it easier for students to complete.
As an example, for SPRUCE, numeric-open-response items allowed us to evaluate students' use of significant digits independent of the numeric value they chose to report, and we were able to do so without presenting students with additional items.
#### iii.3.7 User Experience
As couplet scoring is essentially a back-end feature, the process of completing an RBAI that uses couplet scoring is virtually indistinguishable from completing an RBAI using traditional item scoring. The only perceivable difference might be that the items may be more complex and less contrived, potentially making the items more engaging for students.
#### iii.3.8 Validation
As couplets replace items as the unit of assessment, couplet scores replace item scores in statistical validation procedures. Many of the common statistical validation approaches can be easily adapted to work with couplets instead of items, and, for SPRUCE, this is the focus of an in-development paper. It is also worth reiterating that the validation of instruments using couplet scoring should primarily focus on ensuring that AO scores from couplets are meaningful measures of those constructs, and that common statistical metrics and thresholds may need to be reevaluated when used with couplet scores.
#### iii.3.9 Reporting Scores by AO
Central to couplet scoring is the idea that scores for different AOs should not be consolidated into a single score when reporting instrument results. We achieve this by reporting scores independently for each AO. While many instruments could report scores by objective, with couplet scoring, this is no longer just an option, but a core feature of the RBAI.
This reporting approach means we do lose the convenience of having a single number to ostensibly represent student proficiencies for a particular implementation of the assessment. However, we believe that the convenience of having scores expressed explicitly in terms of AOs makes the presentation of multiple scores (one for
each AO) more meaningful for instructors and does not require much more work or effort to understand or act upon. Indeed, we argue that scores for individual AOs are more actionable for instructors than is an overall instrument score.
## IV Implementation
Now that we have introduced the concept of a couplet and described many of the affordances of couplet scoring, we discuss details and limitations of implementing a couplet scoring scheme while designing a physics content RBAI. This section draws primarily on our experience developing SPRUCE, which employed and expanded on the assessment development framework of evidence-centered design (ECD) [14].
The five layers of assessment development used by ECD, modified for couplet scoring, are:
* _Domain Analysis_: the collection of information about the topic to be assessed from texts, research literature, interviews with experts, etc.
* _Domain Model_: the distillation of information from the _domain analysis_ into AOs and potential item contexts, including detailing acceptable evidence of proficiencies and the methods for gathering this evidence.
* _Conceptual Assessment Framework_: the determination of appropriate and desirable assessment features and item formats to support gathering evidence of student proficiencies based on the domain model.
* _Assessment Implementation_: the writing and piloting of items (and couplets), and the revising of items, AOs, and couplets, to establish evidentiary arguments linking student responses to student reasoning.
* _Assessment Delivery_: the implementation of the finalized items, scoring scheme, and feedback for implementers.
Many of these layers are similar to steps described in other assessment development frameworks [49, 32], and many of the steps are largely the same for traditional item scoring and for couplet scoring. Below, we highlight when these steps are different for couplet scoring, contextualized in examples from our recent development of a couplet scoring scheme for SPRUCE.
### How to develop Assessment Objectives
The development of AOs begins similarly to any effort to determine the priorities and objectives of an assessment. Such efforts include consulting the education research literature and commonly used textbooks, identifying and reviewing existing assessments on similar topics, and soliciting priorities and feedback from instructors and other content specialists.
However, as the name "assessment objective" suggests, the distillation of this information into constructs should be expressed in terms of desired student proficiency, not just the name of the topic to be addressed. For example, SPRUCE contains the AO "S2 - Identify actions that might improve precision." Articulated in this way, AO S2 describes a measurable objective (much like a course learning objective [7]), as opposed to just stating that the instrument assesses the construct of "precision," which is ambiguous as to what specific knowledge or skills around precision will be assessed. As discussed in the previous section, these AOs can have conceptual overlap and need not be wholly independent, since they will be evaluated and reported independent of one another.
AOs may be added, removed, split, consolidated, or otherwise refined throughout instrument development, as long as they continue to align with the information gathered in the first stages of development (i.e., in the _domain analysis_ for ECD).
### How to develop Item-AO Couplets
The process of creating multi-construct items (with one couplet per construct) is largely similar to that of creating single-construct items, at least initially. An analysis of the topic to be assessed and the assessment priorities of instructors and other experts has already been used to develop a set of AOs. This information can now also inform the development of a set of tasks, with logistical considerations informing decisions around the types of items that are viable and reasonable for the tasks. These steps (for items, not for AOs) are described in many assessment development frameworks, such as in layers one through three of ECD [14].
The difference for couplet scoring at this stage, when items are being crafted and before they have been piloted, is that the tasks need not be narrowed to address a single instrument construct: they can instead reflect a level of complexity representative and reflective of instruction. For closed-form items, initial distractors can be developed by considering the responses that respondents might provide if they were to employ correct reasoning along some of the item's AOs but incorrect reasoning along others, and the scoring of such distractors should vary between different couplets of the same item.
Once the items have been drafted, the process of item refinement continues much like that of traditionally scored items: an iterative process of piloting the items and refining the item prompts, answer options, and scoring. However, if a particular couplet is for some reason found to be inappropriate or too difficult to assess, it may be possible to remove that particular couplet from the instrument without discarding the entire item. This happened with SPRUCE where, for example, a task asking students to report a best estimate of a value based on a set of repeated measurements dropped a couplet relating to removing outliers, but the item remained in SPRUCE because other couplets for this item were still viable.
### Scoring Couplets
For single-AO items (i.e., items with just one couplet), the process of developing a scoring scheme is much the same as with traditional items, with the added benefit of having an explicit AO guiding scoring to help ensure that the item measures what it was intended to measure.
It can initially be tricky for content experts, who are used to coordinating many different ideas at once, to look at a multi-AO item and consider how each response relates to only one AO at a time. Such compartmentalization is relatively straightforward for multiple-choice items with AOs that have no conceptual overlap, such as with SPRUCE item 3.3 and the scoring scheme shown in Table 2; however, it can be more challenging for multiple response or coupled multiple response item formats and AOs that are more closely related. Fortunately, having the instrument and item constructs clearly articulated as objectives provides an easy reference for developers, and the process of scoring by AO can quickly become intuitive. In fact, for SPRUCE, the developers ultimately found that the process of scoring by AO made scoring easier for complex item types such as coupled multiple response items, as the AOs themselves productively narrowed the idea-space the developers were considering at any one time.
Additionally, couplet scoring facilitates making checks for multi-AO item scoring schemes, where developers can consider all of the item's couplets simultaneously as though the item had a partial credit scoring scheme. For example, a response that aligns with only one of three AOs could be considered to award a third of the total points for the item. Such checks can help identify large issues in a scoring scheme: a response that is somewhat correct but receives either no credit or full credit could indicate that the item is missing an AO. However, since each AO is scored and reported on independently, we do not recommend using this approach to analyze fine-grain scoring nuances: a response that receives points on one of two couplets should indeed award "half credit" on the item, even if the developers feel that that is too much or too little credit.
### Statistical Validation
As discussed in Sec. III.3, by replacing items with couplets as the unit of assessment, instruments employing couplet scoring schemes can use many common statistical validation approaches. However, just as considering a 'total item score' for a multi-AO item can serve as a check to identify large issues in the scoring of couplets for an item, validation metrics that use a total instrument score can be used as a check to identify large issues or oddities in an instrument. As couplet scoring is designed to score and report proficiencies on an AO-by-AO basis, it may not be reasonable to expect that metrics based on a total score necessarily adhere to the conventional thresholds of single-construct items and instruments. For example, pure guessing on SPRUCE item 3.3 (Fig. 1) would result in an average score of 50% for AO H2 (Table 2), which is higher than what is generally desired for item scores [12]. However, it is important to keep in mind that this item also serves as a measure of AO H3, and that the instrument score for AO H2 also takes into account other items, contributing to an overall more reliable measure of the AO as discussed in Sec. II.1.1.
There are a number of adaptations to traditional statistical validation approaches that researchers have employed, including modifications for multi-dimensional instruments. For example, researchers have employed multidimensional item response theory (MIRT) on items that were shown to not be unidimensional [41; 42]. A MIRT analysis of SPRUCE is the topic of future work, after an initial CTT analysis and once sufficient data have been collected.
### Instructor Reports
For SPRUCE, we have previously presented an example figure from an instructor report and discussed how the instructor of the course would use the results of the instrument to inform instruction in future iterations of the course [5]. More broadly, instruments that employ couplet scoring will produce a score for each of the instrument's AOs, typically the mean score of several couplets that all target the same AO. These AO scores can be presented to instructors with minimal elaboration. As long as the AOs are clearly written, AO scores should thus provide specific and actionable feedback for instruction, at least compared to instruments that use traditional item scores to present instructors with a single number and/or a number for each item.
We also recommend using instruments with couplet scoring in a pre-instruction then post-instruction modality, as this allows instructors to see how proficiency with each of the AOs changed across the course. This is the intended modality of SPRUCE [50].
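A minimal sketch of how such a report could be assembled from couplet scores is shown below; the numeric values are illustrative placeholders, and only the AO labels come from SPRUCE.

```python
from statistics import mean

def ao_means(couplet_scores_by_ao):
    """Average all couplet scores that target the same AO into one AO-level score."""
    return {ao: mean(scores) for ao, scores in couplet_scores_by_ao.items()}

# Illustrative placeholder couplet scores pooled by AO for one course section.
pre_instruction = {"H2": [1, 0, 1, 0], "H3": [0, 0, 1, 0]}
post_instruction = {"H2": [1, 1, 1, 0], "H3": [1, 0, 1, 1]}

for ao in pre_instruction:
    print(f"{ao}: pre = {ao_means(pre_instruction)[ao]:.2f}, "
          f"post = {ao_means(post_instruction)[ao]:.2f}")
```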
## V Implications for other assessment and evaluation
While couplet scoring was developed in parallel with SPRUCE, a physics content RBAI, we believe this scoring model can be employed in a variety of assessment settings, including in other fields and in formative and summative assessment within a course. Alignment between
assessment items and objectives, and having scores reported by objective, has been heralded as a valuable aspect of course assessment [7; 26; 48]. While content RBAIs are not intended to provide instructors with feedback about individual students, instructors may find some of the details of couplet scoring discussed in this paper helpful in efforts to ensure that course instruction and evaluation are truly aligned. Additionally, our descriptions of scoring and reporting by objective may prove useful for instructors who are interested in providing students scores or feedback in terms of specific course objectives.
## VI Summary
In this paper, we discussed conventions of traditional item scoring, including single-construct items, unidimensionality, and single-correct-answer items, and the affordances and limitations of these conventions. We then introduced a new scoring paradigm, couplet scoring, in which each instrument item is scored potentially multiple times, once for each of the assessment objectives (AOs) that the item aims to measure. We explored how couplet scoring and the use of AOs may address many of the limitations of traditional item scoring while still producing meaningful measures of student proficiency, despite not adhering to the conventions of single-construct items, unidimensionality, and single-correct-answer items. We then discussed some of the nuances and challenges of implementing a couplet scoring scheme for a research-based assessment instrument (RBAI) by using our experience developing the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE) [5] as an example. Finally, we discussed how couplet scoring might inform physics assessment outside of formal content RBAIs.
Future work will use the results of SPRUCE's scoring scheme to statistically validate the instrument and to analyze student reasoning around measurement uncertainty.
## VII Acknowledgements
This work is supported by NSF DUE 1914840, DUE 1913698, and PHY 1734006. We would also like to thank Bethany Wilcox and Katie Rainey for their contributions regarding assessment objectives, and Rachel Henderson for her insights regarding how couplet scoring might be used with statistical validation approaches.
|
2303.09725
|
Policy/mechanism separation in the Warehouse-Scale OS
|
"As many of us know from bitter experience, the policies provided in extant
operating systems, which are claimed to work well and behave fairly 'on the
average', often fail to do so in the special cases important to us" [Wulf et
al. 1974]. Written in 1974, these words motivated moving policy decisions into
user-space. Today, as warehouse-scale computers (WSCs) have become ubiquitous,
it is time to move policy decisions away from individual servers altogether.
Built-in policies are complex and often exhibit bad performance at scale.
Meanwhile, the highly-controlled WSC setting presents opportunities to improve
performance and predictability.
We propose moving all policy decisions from the OS kernel to the cluster
manager (CM), in a new paradigm we call Grape CM. In this design, the role of
the kernel is reduced to monitoring, sending metrics to the CM, and executing
policy decisions made by the CM. The CM uses metrics from all kernels across
the WSC to make informed policy choices, sending commands back to each kernel
in the cluster. We claim that Grape CM will improve performance, transparency,
and simplicity. Our initial experiments show how the CM can identify the
optimal set of huge pages for any workload or improve memcached latency by 15%.
|
Mark Mansi, Michael M. Swift
|
2023-03-17T01:52:02Z
|
http://arxiv.org/abs/2303.09725v1
|
# Policy/mechanism separation in the Warehouse-Scale OS
###### Abstract
_"As many of us know from bitter experience, the policies provided in extant operating systems, which are claimed to work well and behave fairly 'on the average', often fail to do so in the special cases important to us"_[43]. Written in 1974, these words motivated moving policy decisions into user-space. Today, as warehouse-scale computers (WSCs) have become ubiquitous, it is time to move policy decisions away from individual servers altogether. Built-in policies are complex and often exhibit bad performance at scale. Meanwhile, the highly-controlled WSC setting presents opportunities to improve performance and predictability.
We propose moving all policy decisions from the OS kernel to the cluster manager (CM), in a new paradigm we call Grape CM. In this design, the role of the kernel is reduced to monitoring, sending metrics to the CM, and executing policy decisions made by the CM. The CM uses metrics from all kernels across the WSC to make informed policy choices, sending commands back to each kernel in the cluster. We claim that Grape CM will improve performance, transparency, and simplicity. Our initial experiments show how the CM can identify the optimal set of huge pages for any workload or improve memcached latency by 15%.
## 1 Introduction
In the early 2000s, service providers realized that building bigger, faster, more fault-tolerant servers was an impractical way to handle more traffic. They turned instead to large clusters of commodity hardware and general-purpose operating systems. Today, warehouse-scale computers (WSCs) have become a ubiquitous technique for service providers to operate large-scale services [5]. However, while general-purpose OSes have allowed the rise of WSCs, they present challenges and missed opportunities, too. In particular, their built-in policies largely ignore a WSC's unique combination of relative homogeneity and slow-changing workload mix. We assert that in a WSC setting, the cluster manager (CM), not the OS kernel, is best suited to make policy decisions.
General-purpose kernel code for making policy decisions is forced to handle all cases under unknown workloads, fostering implementation and performance complexity. For example, in Linux fast-path failures are handled by complex fallback paths [18]. At scale, they lead to performance anomalies that are hard to debug and harder to fix. For example, several databases recommend disabling the kernel's automatic huge page promotion due to unpredictable latency spikes [1, 12, 33, 34].
Meanwhile, leveraging the relative homogeneity of hardware and software in many WSCs can improve performance. Thousands of identical tasks run across a WSC for hundreds of machines-years, and the workload mix changes incrementally as software teams update their services. Yet the kernel treats each process as if it is the first and last of its kind. It assumes little about new processes and uses the same stock policies for decisions.
To address kernel policy deficiencies and leverage WSC workload opportunities, we propose moving all policy decisions from the kernel to the CM, in a new paradigm we call Grape CM. Each kernel monitors local system behavior, sending metrics to the CM. The CM aggregates historical and cluster-wide metrics to make more optimal policy decisions, which it sends to the kernels. The CM may also download a _preset_ into the kernel - a limited policy for handling frequent or latency-sensitive decisions without a network round-trip. Like software-defined networking [24, 2], where a central controller makes policy choices and individual switches use simple rules and tables, Grape CM benefits from global planning and simple, fast individual nodes.
Grape CM can use historic workload metrics to identify workloads suitable for eager memory allocation, resulting in a 15% improvement in memcached response latency. It can also automatically run experiments to identify the best set of pages to promote to huge pages. Notably, our examples are low-hanging fruit; our design exposes opportunity for much improvement over the status quo.
## 2 Target Setting
**Definitions.** We define a _warehouse-scale computer (WSC)_ as a fairly homogeneous set of machines inter-connected via a high-speed network and running relatively stable, large-scale distributed systems. A _policy_ is any kernel component that makes a runtime decision dynamically based on environmental inputs, including application behavior. Examples include when to schedule processes or flush dirty blocks, whether to use huge pages, or whether to run a background thread such as memory compaction. A _mechanism_ is an operation implemented in the kernel to accomplish some (usually hardware-related) objective. Examples include context-switching, low-level I/O primitives, virtual memory mapping, or physical memory allocation.
**System Model.** We target settings in which (1) large amounts of cluster metrics can be aggregated and used over time, (2) communication within the WSC has a latency of dozens of microseconds or less, and (3) humans rarely interact with machines, and then only through a CM that automatically manages the life cycle and resource allocation of applications and machines.
Our target setting has relatively stable and homogeneous hardware and software. Most machines in the cluster are similar to a large number of other machines (not necessarily a majority) in both hardware and software mix. Major changes occur infrequently, but there may be frequent incremental software updates. Many production WSCs satisfy these conditions [39, 17, 21, 32, 40].
## 3 Kernel policies considered harmful
In WSCs, built-in kernel policies beget many unnecessary kernel complexities and performance anomalies. Also, abundant cluster metrics are available for policy design and execution, but they are not used to their full potential.
For example, when a huge page allocation fails, Linux falls back to mechanisms such as page compaction, swapping, and the synchronous memory reclamation slow-path. It will try and retry each of these mechanisms to the extent allowed by the context of the allocation before resorting to the out-of-memory killer. However, in a WSC, memory allocation and overcommitment are carefully controlled by operators; beyond simple fallback paths, allocation failures should raise an alert.
Moreover, generality harms performance. Figure 1 shows page fault latency on Linux for a 350GB workload. Page fault latency varies over 6 orders of magnitude! Linux must somehow decide whether to allocate a base or huge page, attempt huge page promotion, share a COW page (e.g., a zero page), reclaim memory, etc. Often, its first choice fails and a fallback path executes, leading to high latency. In contrast, prior work has found that failing fast allows more predictable WSC behavior [18].
**Cluster Metrics and History.** WSC workloads are ripe to be accurately and automatically characterized. They are controlled and carefully allocated. Changes occur gradually as software teams update their services. The same applications run for thousands of machine-hours on the same machines continually [5, 7, 17, 39, 32, 39, 40].
Cluster managers can experiment with policies on a subset of nodes and improve policies for all nodes. For example, we measure the benefit of varying amounts of huge pages for three programs: a microbenchmark (ubmk) that allocates and writes to memory sequentially, and xz and mcf from SPEC (Figure 2). ubmk sees up to 60% reduction in page walks but bottlenecks on memory bandwidth, so runtime does not decrease. xz sees 7% improvement in runtime from promotion of a small number of pages. mcf sees 5% improvement in runtime from promotion of two large regions. This characterization gives the precise benefit of promoting different pages in each workload and shows the optimal set of pages to promote given a budget. The CM can generate this data by instructing different machines to map different sets of huge pages and aggregating the results. Similarly, prior work has explored how to quantify performance and security isolation in clusters by aggregating data from many experiments [31, 14, 36].
Historical data presents another major opportunity. Commodity kernels assume little prior knowledge of a program, but in a WSC, completely new binaries are uncommon. Rather, metrics from previous executions of a binary can inform policy for future executions. For example, with _eager paging_, the kernel eagerly allocates physical memory, rather than lazily on a page fault (the default) [22]. If the process uses its entire allocation, eager paging avoids page faults during the workload. Figure 3 shows the CDF of operation latency for a memcached workload. Eager paging improves latency by 3ms (15%) for this workload without changing throughput or memory usage. However, other workloads see up to 11% longer latency for memory allocations or up to 125% bloat in memory usage when using eager paging [22]. The first time a process runs, the CM can passively monitor it to determine whether it would benefit from eager paging. Subsequently, the CM can instruct all machines to use eager paging for this program.
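As a loose illustration of this kind of history-driven rule (the function, thresholds, and data layout below are our own invention, not the paper's implementation), the CM could gate eager paging on how much of its allocation a binary has historically touched:

```python
# Hypothetical CM-side rule: enable eager paging only for binaries whose past
# runs touched nearly all of the memory they allocated, so eager allocation
# avoids page faults without bloating memory usage.
def should_use_eager_paging(touch_fractions, min_runs=3, min_touch=0.9):
    """touch_fractions: fraction of allocated memory actually used, per past run."""
    return len(touch_fractions) >= min_runs and min(touch_fractions) >= min_touch

print(should_use_eager_paging([0.97, 0.99, 0.95]))  # True: memcached-like workload
print(should_use_eager_paging([0.40, 0.55, 0.45]))  # False: eager paging would bloat memory
```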
## 4 Look Ma! No kernel policies!
Local kernel policies harm system performance and predictability and complicate the kernel implementation. Moreover, current systems do not take advantage of the relative homogeneity and stability of the WSC setting. _Our key claim is that the CM, not the kernel, should make all policy decisions by leveraging WSC metrics._ Moving policy making out of the kernel simplifies system behavior and implementation. It also enables the CM to use cluster-wide and historical metrics to improve policy decisions. Figure 4 shows an overview of our design, which we call Grape CM.
We implement a partial prototype which focuses on huge page policy. In this section, we use huge page management as a running example of our proposal.
### From Kernel to CM
In Grape CM, local kernels never make policy decisions independently; instead, they query the CM when a policy decision is needed. Additionally, the
CM may initiate a policy change (e.g., to move idle memory to far-memory, as in Google's far-memory system [27]). Listing 1 exemplifies a query request and response after a huge page allocation failure. In the example, the CM asks the kernel to stop allocating huge pages for some time and reclaim idle memory from a process. Figure 5 shows other policies and past work complemented by Grape CM.
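Listing 1 itself is not reproduced in this text, so the exchange below is only a hypothetical sketch of what such a query and response might look like; every field name and value is our own illustration rather than Grape CM's actual message format.

```python
# Hypothetical kernel -> CM policy query sent after a huge page allocation failure.
query = {
    "machine": "node-1337",
    "event": "hugepage_alloc_failed",
    "pid": 4242,
    "free_memory_mb": 512,
    "external_fragmentation": 0.87,
}

# Hypothetical CM -> kernel response: back off on huge pages for a while and
# reclaim idle memory from the offending process.
response = {
    "actions": [
        {"op": "disable_hugepage_alloc", "duration_s": 60},
        {"op": "reclaim_idle_memory", "pid": 4242, "target_mb": 256},
    ],
}
```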
In our implementation, we insert hooks at policy decision points in the kernel, allowing control through a sysfs interface. For example, we add two hooks in the page fault handler and khugepaged. We run hundreds of experiments sequentially to simulate gathering data from a large cluster (Figure 2). Using this data, we select a subset of pages to promote. On our system, mcf achieves 86% of the benefit of THP with 42% less internal fragmentation overhead due to huge pages. Our implementation's simplicity allows greater insight into and control over the performance of the system.
Figure 4: Design Overview of Grape CM
Figure 3: 64GB memcached workload with and without eager paging. 1 operation = 100 insertions.
Figure 2: Improvement in runtime (yellow), store page walks (red), and load page walks (blue) as more huge pages are used for ubmk, xz, and mcf, respectively.
**Preset Policies.** The local kernel must make some policy decisions when contacting the CM is impractical. For example, scheduling and page fault handling are frequent and performance-critical; querying the CM each time would have massive performance and network overheads. Thus, the CM downloads a _preset policy_ into the local kernel. A preset is a policy that allows the kernel to make limited decisions without contacting the CM. We do not specify what a preset policy looks like, but possible forms include a match-action table (like in an SDN [8]), an eBPF program, or an automaton. Preset policies are limited; they do not handle edge cases or errors but fall back to the CM. This keeps the policy simple and fast to execute. It also informs the CM of exceptions, so it can improve the preset or alert an operator.
Listing 2 exemplifies a preset policy. It specifies both default policy choices and actions to take in specific cases. For example, on a page fault, the kernel checks if the faulting address is in use-huge-pages. If it is, it attempts to allocate a huge page (otherwise, a base page). If an error occurs, the kernel will query the CM.
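Listing 2 is likewise not reproduced here. The sketch below is one hypothetical rendering of such a preset as a small match-action structure (all names are ours); as the text notes, a real preset could instead take the form of a match-action table, an eBPF program, or an automaton.

```python
# Hypothetical preset policy downloaded from the CM into a local kernel.
PRESET = {
    "defaults": {"page_size": "base"},
    "use_huge_pages": [(0x7f0000000000, 0x7f0040000000)],  # address ranges to back with huge pages
    "on_error": "query_cm",  # escalate to the CM instead of running local slow paths
}

def handle_page_fault(addr: int) -> str:
    """Decide what to allocate on a page fault using only the preset (no CM round-trip)."""
    for lo, hi in PRESET["use_huge_pages"]:
        if lo <= addr < hi:
            return "huge"
    return PRESET["defaults"]["page_size"]

print(handle_page_fault(0x7f0000001000))  # 'huge'
print(handle_page_fault(0x560000000000))  # 'base'
```

Under this sketch, any allocation error would follow the preset's on_error entry back to the CM rather than triggering, for example, local synchronous compaction.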
Preset policies can improve performance over current systems. For example, when Linux's page fault handler fails to allocate a huge page, it attempts page compaction or swapping, which can take dozens of milliseconds, often without fruit (Figure 1). In contrast, preset policies fall back to the CM in uncommon cases, averting costly computation and long tail latency when it is wasteful.
### Policy Generation
The CM makes policy decisions for all machines in the cluster using the metrics it collects from the cluster (see Section 4.3). Policy decisions may apply cluster-wide (e.g., all machines should move 2GB to far-memory) or for specific machines (e.g., swap out a particular page). We do not specify how the CM acquires policies, but many possibilities exist. Google's far-memory system uses a Q-learning algorithm [27]. Other work suggests using neural networks [19]. Our prototype uses a simple parameterized template that accepts a list of address ranges that have the highest impact when stored in huge pages.
Figure 5: Other example policies that benefit from the CM’s scale and metrics collection.
The CM explores the space of policy decisions using a data-driven, partially-automated process. Human experts provide a set of tunable kernel mechanisms and service-level objectives or performance goals. As a workload runs, the CM tests different parameters across subsets of the cluster. As data builds up, the CM uses statistical methods (possibly including machine learning) to find the best parameters under different conditions or to eliminate parts of the search space. For example, our prototype measures the TLB-miss reduction of huge pages for large chunks of the address space and then narrows down to more promising regions. Prior work also demonstrates the potential of CM-based policy exploration; an autotuner was able to increase far-memory efficacy by 5% even after months of expert hand-tuning [27].
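A rough sketch of the narrowing search our prototype description suggests is shown below; the two helper callbacks stand in for cluster operations (promoting a region to huge pages on a subset of machines and collecting the resulting TLB-miss counts) and are assumptions for illustration, not an actual CM API.

```python
def narrow_hugepage_regions(address_range, trial_on_subset, measure_tlb_misses,
                            rounds=3, keep_fraction=0.25):
    """Iteratively keep the address-space chunks whose promotion to huge pages
    yields the fewest TLB misses, halving chunk size each round."""
    candidates = [address_range]  # e.g., (start, end) of the workload's heap
    for _ in range(rounds):
        # Split every surviving chunk in half and trial each half on a machine subset.
        halves = [half
                  for lo, hi in candidates
                  for half in ((lo, (lo + hi) // 2), ((lo + hi) // 2, hi))]
        results = []
        for region in halves:
            trial_on_subset(region)                       # promote region on a few machines
            results.append((measure_tlb_misses(region), region))
        results.sort()                                    # fewest misses first
        keep = max(1, int(len(results) * keep_fraction))
        candidates = [region for _, region in results[:keep]]
    return candidates  # most promising regions to promote cluster-wide
```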
### Metrics Collection
The CM collects metrics from machines to make policy decisions for the whole cluster and individual machines. Useful metrics may include amount of free, remote, idle, or fragmented memory, memory access patterns, per-process resource usage, core temperature, TLB/cache misses, IPC, and device performance. Metrics collection must be efficient but frequent enough to detect changes in behavior (e.g., daily load variations). As a baseline, if 10,000 machines send 100KB of metrics per second (e.g., 25,000 4-byte counters), the CM will receive merely 1 GB/s of metrics. Other metrics may be too large to send frequently or may not need frequent reporting (e.g., memory usage data may be reported every 30s [27]). Moreover, stable metrics may be collected less frequently or from a subset of machines. The change in each metric and allowable staleness are measured to inform the frequency of collection, as in [27]. Our experiments gather TLB miss data and other metrics once at workload termination - roughly 1000B every 15 minutes.
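As a loose illustration of staleness-informed collection (the thresholds and scaling below are our own invention), a kernel could lengthen the reporting interval for metrics that are not changing:

```python
def next_report_interval(prev_value, new_value,
                         base_interval_s=1.0, max_interval_s=30.0, change_threshold=0.05):
    """Report a metric often while it is changing, rarely once it is stable."""
    relative_change = abs(new_value - prev_value) / max(abs(prev_value), 1e-9)
    if relative_change >= change_threshold:
        return base_interval_s                       # volatile: report every second
    return min(max_interval_s, base_interval_s / max(relative_change, 1e-3))

print(next_report_interval(100.0, 130.0))  # 1.0  -> changing quickly, report often
print(next_report_interval(100.0, 100.1))  # 30.0 -> stable, back off to max allowed staleness
```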
### Discussion
**Coordination.** Moving policies to the CM enables it to coordinate across machines to avoid bottlenecks in a distributed computation. Google found that background activities, such as garbage collection, increase tail latencies because at any given time at least one machine is slow. Cluster-wide coordination can eliminate this bottleneck [13]. Similarly, Grape CM enables coordinating kernel-level background tasks, such as memory compaction.
**Practical Implementation.** Unlike other kernel designs (e.g., unikernels [30] or exokernels [15]), our design can be retrofitted into existing commodity kernels, such as Linux or Windows. The relevant kernel code can be enabled or disabled using compile-time configuration. Also, our proposal can be implemented incrementally by moving individual policies to the CM. For example, Google moved far-memory management to the CM while leaving other memory management policies intact [27]. Thus, our proposal is compatible with high-availability requirements that make sweeping changes impossible.
## 5 Related Work
There has been significant work on cluster management and scheduling [23, 41, 42, 9, 20]. Prior work uses cluster-wide metrics and policies to improve efficiency [27, 37, 40] and performance isolation [31, 36, 14] in WSCs. Grape CM goes further by suggesting that all kernel policies should move to the CM. Our work can leverage prior work to identify sources of performance unpredictability in WSCs [44, 45, 16].
Software-defined networks are the networking analogue of Grape CM: a global network controller sets policies, while individual switches route traffic based on simple tables. This yields higher performance and flexibility while keeping the switches themselves simple [24, 25, 2].
Improving policy decisions requires accurate and precise data [35, 3, 4, 26], but it can be expensive to collect. Google and Facebook report a cost of 10% CPU for memory usage metrics [10]. Grape CM amortizes costs over the cluster and makes more efficient use of large-scale deployments for metrics gathering.
Cluster-wide workload traces have been published [17, 21, 32, 39, 40] and used to reduce memory fragmentation [29]. Other work uses live
profiling data on individual machines to improve fragmentation [11], I/O scheduling [19], and NUMA placement [28]. Our work complements and extends this work by using cluster-wide live metrics to make policy decisions for all nodes.
There has been much prior work on the structure of OS kernels. Hydra proposed separating policy from mechanism, and included a similar mechanism to our preset policies [43]. Unikernels hard-code policies into a library OS linked to the application [30]. Exokernels move policy decisions into userspace [15]. These approaches are complementary to Grape CM, providing a way to move policies out of the kernel.
## 6 Conclusion
We propose moving all policy making into the CM and removing it entirely from the OS kernel. This leads to better decision-making across the cluster by taking advantage of ample cluster-wide and historical workload metrics and a relatively constant workload mix.
By aggregating cluster-wide profiles, the CM is able not only to make better decisions itself, but to give operators greater visibility into their systems. We believe this will open doors for future optimizations that are currently impossible to implement. We also believe it will simplify system behavior and kernel implementation significantly.
## Acknowledgements
We thank our colleagues and anonymous reviewers for helpful feedback on our work.
This work was funded by NSF grants CNS 1815656 and CNS 1900758.
|
2306.05919
|
Reconstruction of Quantum Particle Statistics: Bosons, Fermions, and
Transtatistics
|
Identical quantum particles exhibit only two types of statistics: bosonic and
fermionic. Theoretically, this restriction is commonly established through the
symmetrization postulate or (anti)commutation constraints imposed on the
algebra of creation and annihilation operators. The physical motivation for
these axioms remains poorly understood, leading to various generalizations by
modifying the mathematical formalism in somewhat arbitrary ways. In this work,
we take an opposing route and classify quantum particle statistics based on
operationally well-motivated assumptions. Specifically, we consider that a) the
standard (complex) unitary dynamics defines the set of single-particle
transformations, and b) phase transformations act locally in the space of
multi-particle systems. We develop a complete characterization, which includes
bosons and fermions as basic statistics with minimal symmetry. Interestingly,
we have discovered whole families of novel statistics (dubbed transtatistics)
accompanied by hidden symmetries, generic degeneracy of ground states, and
spontaneous symmetry breaking -- effects that are (typically) absent in
ordinary statistics.
|
Nicolás Medina Sánchez, Borivoje Dakić
|
2023-06-09T14:22:38Z
|
http://arxiv.org/abs/2306.05919v2
|
# Reconstruction of Quantum Particle Statistics: Bosons, Fermions, and Transtatistics
###### Abstract
Identical quantum particles exhibit only two types of statistics: bosonic and fermionic. Theoretically, this restriction is commonly established through the symmetrization postulate or (anti)commutation constraints imposed on the algebra of creation and annihilation operators. The physical motivation for these axioms remains poorly understood, leading to various generalizations by modifying the mathematical formalism in somewhat arbitrary ways. In this work, we take an opposing route and classify quantum particle statistics based on operationally well-motivated assumptions. Specifically, we consider that a) the standard (complex) unitary dynamics defines the set of single-particle transformations, and b) phase transformations act locally in the space of multi-particle systems. We develop a complete characterization, which includes bosons and fermions as basic statistics with minimal symmetry. Interestingly, we have discovered whole families of novel statistics (dubbed transtatistics) accompanied by hidden symmetries, generic degeneracy of ground states, and spontaneous symmetry breaking- effects that are (typically) absent in ordinary statistics.
## I Introduction
The concept of identical particles was introduced by Gibbs in 1902 [1] as a way to resolve the problem of the extensivity of entropy, the so-called _Gibbs paradox_. According to Gibbs, a system consists of identical particles if its physical magnitudes are invariant under any permutation of its elements. Bose brought this idea into quantum mechanics in his derivation of Planck's law of blackbody radiation [2]. It was further developed by Dirac [3] and Heisenberg [4], who formulated the well-known _symmetrization postulate_: physical states must be symmetric in such a way that the exchange of particles does not produce any observable effect. Put in the standard language of wavefunctions, if the state of, say, two particles is given by \(\psi(x_{1},x_{2})\), then
\[\psi(x_{2},x_{1})=e^{i\varphi}\psi(x_{1},x_{2}). \tag{1}\]
Applying the particle swap twice returns the original state, so \(e^{2i\varphi}=1\) and hence \(e^{i\varphi}=\pm 1\). This is the origin of the two types of particle statistics: _bosons_ (symmetric) and _fermions_ (antisymmetric).
Another approach to explain the origin of quantum statistics is the _topological_ argument [5; 6]. Namely, the exchange symmetry is directly related to the continuous movement of particles in a physical (configuration) space, which implies that only bosonic and fermionic phases are allowed, given that the number of spatial dimensions is three or greater. In lower dimensions, one gets fractional phases and anyonic statistics [7].
The third common way of addressing the question of particle statistics is to take the _algebraic (field)_ approach [8], i.e., by postulating the set of canonical relations
\[[a_{i},a_{j}^{\dagger}]_{\pm}=\delta_{ij}\openone, \tag{2}\]
where \(\pm\) stands for (anti)commutator of operators (for fermions and bosons, respectively). Starting with an assumption of a unique vacuum state, one can build the multi-particle state space (Fock space) for two types of particle statistics.
While these approaches agree at the level of ordinary statistics (bosons and fermions), all of them have been criticized for their _ad hoc_ nature [9; 10; 11]. This leaves the door open for various generalizations, many of which resort to somewhat arbitrary assumptions added to the quantum formalism. The earliest work along these lines dates back to Gentile and his attempt to interpolate between the two statistics [12], and since then, we have seen dozens of generalized and exotic statistics, such as parastatistics [13], quons and intermediate statistics [14; 15; 16; 17], infinite statistics [18; 19], generalizations of fractal and topology-dependent statistics [20; 21; 22; 23], ewkons [24], modifications of statistics due to quantum gravity [25; 26; 27], non-commutative geometry [28], and others [29; 30; 31; 32].
### Operational approach and particle statistics
So far, exotic statistics have never been observed in nature. This situation can be interpreted at least in two ways: we need more sophisticated and precise experiments, or (some) generalizations are in collision with basic laws of physics (believed to hold universally). An excellent example of the latter point is a question of the parity superselection rule (PSR) for fermions derived from the impossibility of discriminating a \(2\pi\)-rotation from the
identity in three-dimensional space [33]. One may wonder how to apply this reasoning in a more abstract scenario, such as fermions occupying some discrete degrees of freedom (e.g., energy) where no notion of rotation (_a priori_) exists. An elegant study was provided in a recent work [34] based on techniques from quantum information, showing that a PSR violation would allow for superluminal communication. Thus, the parity superselection rule can be derived from a more basic law, i.e., the _no signaling_ principle [35]. Such an approach to physical theories (from physical laws to mathematical formalism) resembles Einstein's original presentation of special relativity. In that case, a concise set of physical postulates, namely the covariance of physical laws and the constancy of the speed of light in all frames of reference, paved the way for the formalism of Lorentz transformations. In the realm of quantum foundations, the application of this methodology was particularly successful. With the pioneering work of Hardy [36], the field of _operational reconstructions_ of quantum theory [36; 37; 38; 39; 40; 41] was established where one recovers the abstract machinery of Hilbert spaces starting from a set of information-theoretic axioms. Considering the significance of identical particles in quantum information processing (such as in linear optical quantum computing [42]), it becomes evident that utilizing this operational approach holds significant potential to derive particle statistics based on physically grounded assumptions. Rather than exploring possible modifications of the existing formalism, a more constructive approach may begin by defining a typical quantum experiment and addressing straightforward physical questions. For example, how do we define particle (in)distinguishability from an experimental standpoint? Is it possible to establish a clear operational differentiation between various types of identical particles, and if so, how do we characterize the corresponding mathematical formalism? Our work can be understood as an attempt to answer these questions. Along these lines, promising research studies appeared in the context of the symmetrization postulate [43], anyonic statistics [44], quantum field theory [45; 46] and identical particles in the framework of generalized probabilistic theories [47; 48].
### Reconstruction, mathematical foundations and summary of the results
Following the instrumentalist approach of Hardy [36], we study identical quantum particles in an operationally well-defined setup composed of laboratory primitives, such as preparations, transformations, and measurements (see Fig. 1). Our starting point is a single quantum particle which we assume is an ordinary quantum particle described by standard formalism and unitary dynamics. This appears rather natural, as a single quantum particle is insensitive to statistics. We introduce a typical apparatus for a single-particle transformation described by a unitary channel on \(d\)-modes (\(d\times d\) unitary matrix) and a set of \(d\) detectors at the output. For such a fixed circuit, we investigate the scenario with multiple identical particles at the input and analyze the probability of detecting them after the transformation. Detectors can register only particle numbers but cannot distinguish them; thus, indistinguishability is built in from the beginning. As we shall see, the Fock-space structure will naturally arise as an ambient space for multi-particle states. Two central mathematical ingredients will thus figure prominently in our reconstruction of particle statistics:
1. unitary group \(U(d)\) describing single-particle transformations, and
2. the Fock space structure encompassing multi-particle states.
Paired with the locality assumption (i.e., phase transformations acting locally in Fock space), these two elements will determine how particles are organized in multiparticle states. Mathematically, the problem concerns the classification of representations of the \(U(d)\) group in Fock space subjected to locality constraint. We found a one-to-one correspondence to the well-studied mathematical problem of characterizing completely-positive sequences [49; 50; 51; 52]. This, in turn, provided us with a _complete categorization of particle statistics_ based on integral polynomials. To be more precise, a list of integers
\[[q_{0},q_{1},\dots]_{\pm}, \tag{3}\]
defines a type of particle statistics, provided that \(Q_{\pm}(x)=\sum_{s}(\mp 1)^{s}q_{s}x^{s}\) are polynomials with all positive (\(+\)) or negative (\(-\)) roots. We coin the term _transtatistics_ for this generalized statistics. It is a natural generalization of ordinary statistics into two types, fermionic-like \([\dots]_{-}\) (_transfermions_) and bosonic-like \([\dots]_{+}\) (_transbosons_), and to the best of our knowledge has not been presented in the literature before. Ordinary statistics is the simplest possibility (degree one), \([1,1]_{\pm}\), with the multiparticle Fock states completely specified by irreducible representations (IRs) of \(U(d)\). General transtatistics, on the other hand, requires additional quantum numbers to identify states of indistinguishable particles; thus, _hidden symmetries_ [53] and _new degrees of freedom_ emerge exclusively from these types of particles. We discuss further physical consequences by analyzing the thermodynamics of non-interacting gases. In doing so, we find a generic degeneracy of ground states accompanied by _spontaneous symmetry breaking_ [54], which (usually) does not exist in ordinary statistics.
Symmetry is central to our reconstructions. In particular, the \(U(d)\) symmetry of single-particle transformations is uniquely related to ordinary statistics and transtatistics. This is also the main difference from other generalized statistics, which rely on different symmetries. Apart from the foundational relevance, our findings apply to quantum information and quantum many-body physics. Concretely speaking, transtatistics brings novel theoretical models for non-interacting identical particles. The
latter is relevant for studying strongly-correlated quantum systems (see [55] and references therein), many of which are reducible to non-interacting models of indistinguishable particles [56]. Therefore, one may find new integrable models among strongly interacting quantum systems reducible to our non-interacting model. On the quantum information side, quantum statistics is essential in complexity theory and intermediate quantum computing models, such as in boson sampling [57]. In this respect, our classification is relevant as it may lead to the discovery of new intermediate computational models. These points are only summarized here and will be discussed in more detail in the last section of the manuscript.
## II Operational setup for indistinguishable particles
The operational framework for indistinguishable particles is illustrated in Fig. 1\(a\)). The apparatus consists of \(d\) modes into which particles can be injected, followed by a transformation \(g\) and a set of \(d\) detectors that register particles after the transformation. The transformation \(g\) is fixed and independent of the particle number at the input and the particle-statistics type. One can think of this transformation as a quantum circuit composed of elementary gates, such as beam-splitters and phase shifters used in quantum linear optics to produce a general unitary transformation \(g\in U(d)\) on \(d\) modes [58], where \(U(d)\) is the set of \(d\times d\) unitary matrices. As long as just one particle is injected into the setup, e.g., in mode \(i\), the \(j\)th detector will register the particle with the probability \(p_{j}=|g_{ji}|^{2}\), with \(g_{ji}\) being the matrix element of \(g\). In other words, \(g\) represents the standard complex unitary dynamics of a single quantum particle with \(d\) levels (modes). The critical question to be answered is what will happen if more than one particle is injected into such an apparatus? To formalize the situation, three points need to be addressed first:
1. We shall determine the ambient Hilbert space describing the multi-particle system,
2. we have to find the corresponding representation of transformations (as defined by the \(U(d)\) group) in such a space, and
3. finally, determine the Born rule to calculate probabilities of detection events.
To identify the Hilbert space of many particles, we use the fact that particles are indistinguishable, i.e., detectors can register only particle numbers (how many particles land in a particular detector without distinguishing them). Thus the overall measurement outcome is described by a set of numbers \((n_{1},n_{2},\dots,n_{d})\), with \(n_{k}=0,1,2,\dots\). This outcome fully specifies the physical configuration; thus, we associate to it the measurement vector \(|n_{1},\dots n_{d}\rangle\) such that the Born rule gives detection probabilities
\[p_{n_{1},\dots,n_{d}}=|\bra{n_{1},\dots,n_{d}}|\psi_{g}\rangle|^{2}, \tag{4}\]
where \(|\psi_{g}\rangle\) is the state of the system after the transformation \(g\). From here, we directly see that \(|\psi_{g}\rangle\in\mathcal{F}_{d}\) resides in a Fock space defined as a span over number states, i.e.,
\[\mathcal{F}_{d}=\text{span}\{|n_{1},n_{2},...,n_{d}\rangle\;\;|\;n_{k}=0,1,2, \dots p\}. \tag{5}\]
Here \(span\) denotes the complex linear span (hull) of basis vectors. Since outcomes \((n_{1},n_{2},\dots,n_{d})\) are perfectly distinguishable, vectors \(|n_{1},n_{2},...,n_{d}\rangle\) form an orthonormal set. We introduced the possibility of there being a maximal occupation number \(p\in\mathbb{N}\), which is the _generalized Pauli exclusion principle_. As we shall see, \(p=1\) will correspond to fermionic statistics, while bosons are associated with the case \(p=+\infty\). At this stage, \(p\) is characteristic of statistics and is kept as an integer parameter (possibly infinite). Note that the Fock space in (5) shall not be _a priori_ identified with the standard (textbook) Fock space constructed as a direct sum of particle sectors. Our Fock space is an ambient Hilbert space for multi-particle states naturally emerging from operational considerations and the measurement postulate defined in (4). Note also that the Fock space in (5) is of the tensor product form, i.e., \(\mathcal{F}_{d}=\mathcal{F}_{1}^{\otimes d}\).
Now, we shall find an appropriate unitary representation of \(g\in U(d)\) in the ambient space \(\mathcal{F}_{d}\), i.e. \(\Delta_{d}:U(d)\mapsto GL(\mathcal{F})\) such that
\[|\psi_{g}\rangle=\Delta_{d}(g)\,|\psi_{in}\rangle\,, \tag{6}\]
with \(\Delta_{d}(g)\) being a unitary representation and \(|\psi_{in}\rangle\in\mathcal{F}_{d}\) some input state to the circuit in Fig. 1\(a\)). For example, \(|\psi_{in}\rangle=|1,1,0,\dots,0\rangle\) represents the input state of two particles injected in modes 1 and 2. In general, \(|\psi_{in}\rangle\) may involve a superposition of number states. The representation \(\Delta_{d}\) is reducible in general, and its decomposition into irreducible (IR) sectors is completely determined by the _group character_ [59], a function defined over the elements of the group
\[\chi_{d}(g)=\text{Tr}(\Delta_{d}(g)),\quad\forall g\in U(d). \tag{7}\]
Figure 1: **Operational setup**. a) Quantum circuit represented by \(U(d)\)-transformation for a single particle (example of \(d=4\) is shown). For many particles injected into the setup, detectors can register their number only. This incorporates the notion of indistinguishability. b) Disconnected (independent) phase gates acting on particles locally, in individual modes.
As we shall see, the irreducible decomposition of Fock space (5) will be in one-to-one correspondence to the type of particle statistics. So, the group character will be our main object of interest.
### Locality assumption
To evaluate character on \(U(d)\) group, recall that any unitary matrix can be diagonalized, i.e., \(g=StS^{\dagger}\), with \(t=\text{diag}[x_{1},\ldots,x_{d}]\in T_{d}=U(1)\times\cdots\times U(1)\) being an element of the maximal torus (also known as the phase group) with \(x_{k}=e^{i\theta_{k}}\in U(1)\). Therefore, the character of \(U(d)\) is entirely specified by the character evaluated on \(T_{d}\), that is, \(\chi_{d}(StS^{\dagger})=\text{Tr}\Delta_{d}(StS^{\dagger})=\text{Tr}\Delta_{d }(t)=\chi_{d}(t)\) (i.e., class function), thus it effectively becomes a function of phase variables, i.e., \(\chi_{d}(\vec{x})=\chi_{d}(x_{1},\ldots,x_{d})\).
Consider the case of a single-mode (\(d=1\)) with the Fock space \(\mathcal{F}_{1}=\text{span}\{\left|n\right\rangle|\ n=0,1,\ldots,p\}\) on which the group \(U(1)\) acts with representation \(\Delta_{1}(x)\), with \(x=e^{i\theta}\). We can think of \(\Delta_{1}(x)\) representing a simple device providing a phase shift to the state of a single particle placed in a mode. We can now consider the collection of \(d\) such devices disconnected from each other and operating independently in separate modes, as illustrated in Fig. 1\(b)\). These transformations form the phase group \(T_{d}\) acting in the entire Fock space, and given their operational independence, it appears natural to assume the following.
**Assumption 1** (**Locality)**.: The action of the phase group \(T_{d}\) in Fock space is local, i.e.,
\[\Delta_{d}(\vec{x})=\Delta_{1}(x_{1})\otimes\cdots\otimes\Delta_{1}(x_{d}), \tag{8}\]
for \(\vec{x}\in T_{d}\).
By taking the trace of the last equation, one gets
\[\chi_{d}(\vec{x})=\prod_{k=1}^{d}\chi_{1}(x_{k}), \tag{9}\]
with \(\chi_{1}(x)=\text{Tr}(\Delta_{1}(x))\) being the single-mode character. One can also go in the reversed direction, i.e., starting with the character factorization in (9), we may derive the tensor factorization in (8), which follows from general character theory [59].
Assumption 1 is our central assumption. We see that the single-mode character \(\chi_{1}\), a function of a single variable, entirely specifies the character of the whole \(U(d)\). The problem then simplifies, and our goal is to determine the most general form of \(\chi_{1}(x)\) such that \(\chi_{d}(\vec{x})\) in (9) is a valid character of \(U(d)\).
### Generalized number operator and conserved quantities
What follows from Assumption 1 and the factorization given in (9) is that the single-mode character \(\chi_{1}(x)\) completely specifies the character of the whole \(U(d)\) and consequently determines the decomposition of Fock space into IR sectors. Note that the action of the single-mode phase transformation \(x=e^{i\theta}\in U(1)\) can be seen as an instance of the Hamiltonian evolution. Thus we can write \(\theta=\epsilon t/\hbar\), where \(\epsilon\) is the single-particle energy associated with this mode. With this, the representation of the phase transformation becomes \(\Delta_{1}(e^{i\epsilon t/\hbar})=e^{i\hat{H}t/\hbar}\), where \(\hat{H}\) is the single-mode Hamiltonian (the generator of the phase). From the invariance under \((2\pi)\)-rotations, i.e., \(e^{i(\theta+2\pi)}=e^{i\theta}\), we conclude that all eigenvalues of \(\hat{H}\) are integer multiples of \(\epsilon\), that is, \(\hat{H}=\epsilon\tilde{N}\) with \(\tilde{N}\) being an operator with integer eigenvalues. This defines the _generalized number operator_ or _excitation operator_ \(\tilde{N}\). This work will consider only the case \(\tilde{N}\geq 0\). Without loss of generality, we can assume the \(U(1)\) action to be number preserving, thus
\[\tilde{N}=\sum_{n=0}^{p}f_{n}\left|n\right\rangle\left\langle n\right|, \tag{10}\]
with \(f_{n}\) being non-negative integers. Note that \(\tilde{N}\) is in general different from the standard number operator \(\hat{N}=\sum_{n=0}^{p}n\left|n\right\rangle\left\langle n\right|\). The two will coincide only if \(f_{n}=n\), and as we shall see, this happens only in the case of ordinary statistics.
Finally, we can write the single-mode character \(\chi_{1}(e^{i\theta})=\text{Tr}(e^{i\theta\tilde{N}})\) as
\[\chi_{1}(x)=\sum_{s=0}^{+\infty}a_{s}x^{s}=x^{f_{0}}+x^{f_{1}}+\cdots+x^{f_{p}}, \tag{11}\]
with \(a_{s}\) being a non-negative integer. Mathematically speaking, the formula above is the decomposition of \(\chi_{1}\) into irreducible representations of \(U(1)\). For fermions, we have \(\chi_{1}^{(-)}(x)=1+x\), while for bosons \(\chi_{1}^{(+)}(x)=1+x+x^{2}+\cdots=\frac{1}{1-x}\).
For the case of \(d\) modes, the action of the phase group in (8) becomes
\[\Delta_{d}(\vec{x})=e^{i\theta_{1}\tilde{N}_{1}+\cdots+i\theta_{d}\tilde{N}_{ d}}, \tag{12}\]
where \(\tilde{N}_{k}=\openone^{\otimes(k-1)}\otimes\tilde{N}\otimes\openone^{\otimes(d-k)}\) are generators of local phases. The vector \(\vec{x}=\theta(1,1,\ldots,1)^{T}\in T_{d}\) corresponds to the scalar \(d\times d\) matrix \(e^{i\theta}\openone_{d}\) commuting with all \(U(d)\) matrices, thus the operator
\[\tilde{N}=\sum_{k=1}^{d}\tilde{N}_{k}, \tag{13}\]
is a conserved quantity (Casimir operator) and represents the total number of excitations. We can also write (12) as being generated by the following Hamiltonian
\[\hat{H}=\sum_{k=1}^{d}\epsilon_{k}\tilde{N}_{k}, \tag{14}\]
where \(\theta_{k}=\epsilon_{k}t/\hbar\) and \(\epsilon_{k}\) are the single-particle energies.
## III Particle statistics and its classification
### On exchange symmetry
In the 1st quantization approach, particle statistics are classified via the exchange of particles and symmetrization postulate as given in equation (1). However, this method does not apply to the Fock-space approaches simply because there is no particle label (they are indistinguishable). A partial solution to this problem is to introduce _permutation of modes operator_[60]
\[\Delta_{d}(\sigma)\ket{n_{1},n_{2},\ldots,n_{d}}=\ket{n_{\sigma(1)},n_{\sigma( 2)},\ldots,n_{\sigma(d)}}, \tag{15}\]
for some permutation \(\sigma\in S_{d}\) of \(d\) elements. In this way, the permutation group acts in Fock space and plays the same role as the exchange of particles in the 1st-quantized picture. For ordinary statistics, we have the usual sign change, i.e., \(\Delta_{d}(\sigma)\ket{1,1,\ldots,1}=(\pm)^{\sigma}\ket{1,1,\ldots,1}\), where \((..)^{\sigma}\) denotes the parity of permutation (\(+1\) for bosons and \((-1)^{\sigma}\) for fermions). Nevertheless, the permutation of modes is only a discrete subgroup of the group of single-particle transformation, thus insufficient for the whole physical picture. For example, in our case, it is the subgroup of the unitary group, i.e., \(S_{d}<U(d)\). But it can also be a subgroup of some other group, such as an orthogonal group, in which case one gets parastatistics [61; 62]. Therefore, to fully understand how different types of particles integrate into multi-particle states in Fock space, one must study transformation properties under the action of the whole group of single-particle transformations. This work concerns \(U(d)\) as our premise is that standard unitary quantum mechanics governs the physics of one particle.
### Physical consequences
To illustrate how single-particle transformations affect the physical behavior of indistinguishable particles, take the example of two particles entering a 50/50 beam splitter (BS) at different ports (modes), as shown in Fig. 2. The beam splitter is defined via the unitary matrix \(u_{bs}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\). Now, if the particles are bosons, then the input state is \(\ket{1,1}=a_{1}^{\dagger}a_{2}^{\dagger}\ket{0,0}\), where \(a_{1(2)}^{\dagger}\) are bosonic ladder operators associated with the two different modes (ports of the BS). The output state (after the BS) is given by \(\frac{1}{2}(a_{1}^{\dagger}+a_{2}^{\dagger})(a_{1}^{\dagger}-a_{2}^{\dagger}) \ket{0,0}=\frac{1}{\sqrt{2}}(\ket{2,0}-\ket{0,2})\). We see that bosons exit the BS bunched together, and this is the well-known Hong-Ou-Mandel effect [63]. In contrast, if the particles are fermions, the calculation remains the same but with fermionic ladder operators \(a_{1/2}^{\dagger}\), thus we have the output state \(\frac{1}{2}(a_{1}^{\dagger}+a_{2}^{\dagger})(a_{1}^{\dagger}-a_{2}^{\dagger}) \ket{0,0}=-a_{1}^{\dagger}a_{2}^{\dagger}\ket{0,0}=-\ket{1,1}\). This means that fermions exit the BS antibunched (in different ports). These two complementary behaviors can be deduced from the decomposition of the Fock space (5) into IR sectors of the \(U(2)\) group and the action of the \(u_{bs}\) element. In the case of two bosons, the relevant IR is the three-dimensional subspace \(\text{span}\{\ket{2,0},\ket{1,1},\ket{0,2}\}\) (bosonic IR), which encompasses the bunching effect. For two fermions, we have the one-dimensional IR spanned by \(\{\ket{1,1}\}\) (fermionic IR), directly resulting in fermionic antibunching.
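As a numerical cross-check of this discussion (an illustration, not part of the original derivation), the output probabilities can be computed from the standard permanent/determinant rule for bosonic/fermionic amplitudes:

```python
import numpy as np
from math import factorial
from itertools import permutations

u_bs = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 50/50 beam splitter

def permanent(m):
    n = m.shape[0]
    return sum(np.prod([m[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def amplitude(pattern, kind):
    """Amplitude for detecting `pattern` = (n1, n2) given the input |1,1>.
    Bosons: permanent of the input/output submatrix; fermions: determinant."""
    in_modes = [0, 1]
    out_modes = [m for m, n in enumerate(pattern) for _ in range(n)]
    sub = u_bs[np.ix_(out_modes, in_modes)]
    norm = np.sqrt(np.prod([factorial(n) for n in pattern]))
    return (permanent(sub) if kind == "boson" else np.linalg.det(sub)) / norm

for pattern in [(2, 0), (1, 1), (0, 2)]:
    print("boson", pattern, round(abs(amplitude(pattern, "boson")) ** 2, 3))
print("fermion (1,1)", round(abs(amplitude((1, 1), "fermion")) ** 2, 3))
# bosons: 0.5, 0.0, 0.5 (bunching); fermions: 1.0 (antibunching)
```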
### Particle statistics
As explained at the beginning of this section, the group of single-particle transformations determines the physical behavior of non-interacting indistinguishable particles, and different types of particle statistics arise due to the Fock space's \(U(d)\)-IR decomposition. Therefore, what we mean by classification of particle statistics is a _classification of all possible ways_ the Fock space (5) decomposes into IR sectors, i.e.
\[\mathcal{F}_{d}=\bigoplus_{\lambda}c_{\lambda}\mathcal{V}_{\lambda}, \tag{16}\]
where \(\mathcal{V}_{\lambda}\) is an \(U(d)\)-IR. These are indexed [59] by a partition (Young diagram) \(\lambda=(\lambda_{1},\ldots\lambda_{d})\) with \(\lambda_{1}\geq\cdots\geq\lambda_{d}\), and \(c_{\lambda}\in\mathbb{N}_{0}\) is the frequency of the IR. Now, recall that the character of a representation completely determines its decomposition into IR sectors. A well-known fact from representation theory is that IRs of \(U(d)\) have _Schur polynomials_\(s_{\lambda}(\vec{x})\) as characters (see Appendix A for definition). Thus, equation (16) translates to decomposition of character (9) into Schur-polynomials, i.e.
\[\prod_{k=1}^{d}\chi_{1}(x_{k})=\sum_{\lambda}c_{\lambda}s_{\lambda}(\vec{x}), \ \ c_{\lambda}\in\mathbb{N}_{0}. \tag{17}\]
We see that the single-mode character \(\chi_{1}(x)\)_completely specifies particle statistics_ (in the sense of definition (16)) and this is a direct consequence of our locality assumption 1.
Figure 2: **Hong-Ou-Mandel effect [63]**. a) Boson bunching, and b) fermion antibunching. See main text for details.

To clarify the point, we provide examples of bosonic and fermionic statistics. For fermions, the maximal occupation number is \(p=1\), thus the single-mode character in (11) reduces to \(\chi_{1}^{(-)}(x)=1+x\). For \(d\) modes, character (9) can be expanded as
\[\chi_{d}^{(-)}(\vec{x})=\prod_{k=1}^{d}(1+x_{k})=1+(x_{1}+\cdots+x_{d})+(x_{1}x_{2}+\cdots+x_{d-1}x_{d})+\cdots+x_{1}x_{2}\ldots x_{d}. \tag{18}\]
Written in terms of Schur-polynomials, this equation reads
\[\chi_{d}^{(-)}(\vec{x})=s_{(0,0,\ldots,0)}(\vec{x})+s_{(1,0,\ldots,0)}(\vec{x})+s_{(1,1,\ldots,0)}(\vec{x})+\cdots+s_{(1,1,\ldots,1)}(\vec{x}). \tag{19}\]
This expansion corresponds to the decomposition of the Fock space \({\cal F}_{d}=\bigoplus_{N=0}^{d}{\cal V}_{-}^{(N)}\) into fermionic irreducible subspaces \({\cal V}_{-}^{(N)}\) associated with particle sectors.
Similarly, for the case of bosons and \(p=+\infty\), equation (11) reads \(\chi_{1}^{(+)}(x)=1+x+x^{2}+\cdots=\frac{1}{1-x}\). For \(d\) modes, (9) reads
\[\chi_{d}^{(+)}(\vec{x})=\prod_{k=1}^{d}\frac{1}{1-x_{k}}=1+(x_{1}+\cdots+x_{d})+(x_{1}^{2}+x_{1}x_{2}+x_{2}^{2}+\cdots+x_{d-1}x_{d}+x_{d}^{2})+(x_{1}^{3}+x_{1}^{2}x_{2}+x_{1}x_{2}^{2}+x_{2}^{3}+\cdots+x_{d-1}x_{d}^{2}+x_{d}^{3})+\ldots \tag{20}\]
or written in terms of bosonic Schur polynomials
\[\chi_{d}^{(+)}(\vec{x})=s_{(0,0,\ldots,0)}(\vec{x})+s_{(1,0,\ldots,0)}(\vec{x})+s_{(2,0,\ldots,0)}(\vec{x})+s_{(3,0,\ldots,0)}(\vec{x})+\ldots \tag{21}\]
Again, this corresponds to the decomposition of the Fock space \({\cal F}_{d}=\bigoplus_{N=0}^{+\infty}{\cal V}_{+}^{(N)}\) into bosonic irreducible subspaces \({\cal V}_{+}^{(N)}\) associated with particle sectors.
An important remark is in order about the single-particle sector \({\cal F}_{d}^{(1)}={\rm span}\{|n_{1},n_{2},...,n_{d}\rangle\;|\;\sum_{k}n_{k }=1\}\) which is \(d\)-dimensional. This subspace is associated with the character \(s_{(1,0,\ldots,0)}(\vec{x})=x_{1}+\cdots+x_{d}\), same for bosons and fermions, i.e. we have \({\cal F}_{d}^{(1)}={\cal V}_{+}^{(1)}={\cal V}_{-}^{(1)}\). This is consistent with the fact that the quantum physics of one particle is insensitive to the type of statistics. This also agrees with our operational setup in Fig. 1\(a)\), which was defined through \(d\times d\) unitary matrices acting in the space of one particle. Such representation is called the _standard_ or _defining_ representation.
### Partition theorem and general statistics
Generally, not all \(U(1)\)-characters \(\chi_{1}(x)=\sum_{s\in{\mathbb{N}}_{0}}a_{s}x^{s}\) induce a valid \(U(d)\)-character in (9). To see this, take the simple example of \(\chi_{1}(x)=1+x^{2}\). For two modes, equation (9) reads \((1+x_{1}^{2})(1+x_{2}^{2})=s_{(0,0)}(x_{1},x_{2})+s_{(2,0)}(x_{1},x_{2})+s_{(2,2)}(x_{1},x_{2})-s_{(1,1)}(x_{1},x_{2})\), and we have a negative expansion coefficient \(c_{(1,1)}<0\), which contradicts \(c_{\lambda}\geq 0\) in equation (16).
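Both expansions, the failing \(\chi_{1}(x)=1+x^{2}\) above and the fermionic \(\chi_{1}(x)=1+x\) (cf. Eq. (19) for \(d=2\)), can be verified symbolically. The following sketch is only an illustration (not part of the paper's derivation) and uses the two-variable bialternant formula for Schur polynomials:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def schur2(l1, l2):
    """Schur polynomial s_(l1,l2)(x1, x2) from the bialternant formula."""
    return sp.cancel((x1**(l1 + 1) * x2**l2 - x2**(l1 + 1) * x1**l2) / (x1 - x2))

# chi_1(x) = 1 + x^2: the two-mode character needs a NEGATIVE Schur coefficient,
# so it cannot come from a U(2) representation.
bad = sp.expand((1 + x1**2) * (1 + x2**2)
                - (schur2(0, 0) + schur2(2, 0) + schur2(2, 2) - schur2(1, 1)))
# chi_1(x) = 1 + x (fermions): all Schur coefficients are +1.
good = sp.expand((1 + x1) * (1 + x2) - (schur2(0, 0) + schur2(1, 0) + schur2(1, 1)))
print(bad == 0, good == 0)   # True True
```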
Next, suppose that the first \(k\) coefficients in the single-mode character expansion vanish. Then, we can always write \(\chi_{1}(x)=\sum_{s=k}^{+\infty}a_{s}x^{s}=x^{k}\sum_{s=0}^{+\infty}a_{s+k}x^{s}=x^{k}\tilde{\chi}_{1}(x)\). For the general \(d\)-mode character in (9), we will have
\[\chi_{d}(\vec{x})=(x_{1}\ldots x_{d})^{k}\tilde{\chi}_{d}(\vec{x}). \tag{22}\]
The term \((x_{1}\ldots x_{d})^{k}=(\det g)^{k}\) is a power of the determinant of the unitary matrix \(g\) with eigenvalues \(x_{1},\ldots,x_{d}\). From here, we recognize that \(\chi_{d}\) and \(\tilde{\chi}_{d}\) are equivalent up to a determinant factor. Therefore, without loss of generality, we will assume \(a_{0}>0\).
The problem of classifying all single-mode characters that induce valid representation of \(U(d)\) involves non-trivial mathematics. Luckily, we found an equivalent formulation to the well-studied combinatorial problem of characterizing completely-positive sequences [49; 50; 51; 52]. Details are provided in the Appendix B together with the proof of our main theorem:
**Theorem 1** (Partition).: _For \(\chi_{1}(x)=\sum_{s\in{\mathbb{N}}_{0}}a_{s}x^{s}\) with \(a_{0}>0\), a symmetric function \(\prod_{k=1}^{d}\chi_{1}(x_{k})\) is a \(U(d)\) character for all \(d\in{\mathbb{N}}\) if and only if the generating function is of the form_
\[\chi_{1}(x)=\frac{Q_{-}(x)}{Q_{+}(x)}, \tag{23}\]
_where \(Q_{\pm}(x)\) is an integral polynomial with all positive (negative) roots. Furthermore \(Q_{+}(0)=1\)._
In other words, \(Q_{\pm}(x)=c_{\pm}\prod_{i}(1\mp\alpha_{i}x)\) are polynomials with integer coefficients, where \(\alpha_{1}>\alpha_{2}>\cdots>0\), \(c_{+}=1\) and \(c_{-}\in{\mathbb{N}}\). From here, it follows that \(Q_{\pm}(x)\) is a polynomial with all non-zero coefficients.
Note that we are interested only in _irreducible_ statistics, i.e., the Fock space cannot be factorized as a tensor product \({\cal F}={\cal F}_{1}\otimes{\cal F}_{2}\), with \({\cal F}_{1/2}\) being associated with different particle types. Therefore, the character of irreducible statistics cannot be factorized as \(\chi_{1}(x)=\mu_{1}(x)\nu_{1}(x)\), with \(\mu_{1}(x)\) and \(\nu_{1}(x)\) being of the type (23). Thus equation (23) for irreducible statistics is either \(\chi_{1}=Q_{-}\) or \(\chi_{1}=1/Q_{+}\). We conclude that statistics is of two kinds, i.e., _fermionic-like_\([\ldots]_{-}\) and _bosonic-like_\([\ldots]_{+}\) specified by
\[Q_{\pm}(x)=\sum_{s=0}^{\deg[Q_{\pm}]}(\mp 1)^{s}q_{s}x^{s}:=[q_{0},q_{1}, \ldots]_{\pm},\;\;q_{s}\in{\mathbb{N}}, \tag{24}\]
with \(Q_{\pm}(x)\) being irreducible polynomials over the integers satisfying the conditions in (23). The corresponding single-mode characters are \(Q_{-}(x)\) and \(1/Q_{+}(x)\), respectively. This classification naturally generalizes ordinary statistics, and we term it _transtatistics_, with two possible types: _transfermions_ (type \([\ldots]_{-}\)) and _transbosons_ (type \([\ldots]_{+}\)). Here \(\deg[Q_{\pm}]\) is the degree of \(Q_{\pm}(x)\), to which we also refer as the _order of statistics_. Order \(0\) is a trivial case, thus we assume \(\deg[Q_{\pm}]\geq 1\). For \([\ldots]_{-}\) statistics, the generalized Pauli principle applies with
\(p=Q_{-}(1)-1<+\infty\) being the maximal number of particles per mode, while for \([\ldots]_{+}\) we have \(p=+\infty\). From now on, we shall use the label \([q_{0},q_{1},\ldots]_{\pm}\) to refer to a particular type of particle statistics.
Note that one can find the eigenvalues of the excitation operator \(\tilde{N}\) defined in (10) by solving the following equation
\[x^{f_{0}}+x^{f_{1}}+\cdots+x^{f_{p}}=(Q_{\pm}(x))^{\mp 1}\,. \tag{25}\]
## IV Irreducible particle sectors: bosons and fermions
Ordinary statistics is order-one statistics of the type \([1,1]_{\pm}\). To answer what makes bosons and fermions special in the whole family of generalized statistics classified in (24), we introduce the following assumption:
**Assumption 2** (Irreducibility).: All symmetries of the system of indistinguishable particles are determined by the \(U(d)\) group.
Assumption 2 essentially states that the Fock space decomposes into \(U(d)\)-IR sectors without multiplicity; thus, no additional symmetries (conserved quantities) are present in the system. We show now that only ordinary statistics has this property.
We start with a general single-mode character \(\chi_{1}(x)=\sum_{s=0}^{+\infty}a_{s}x^{s}\). Character equation (9) for \(d\)-modes can be expanded as follows
\[\chi_{d}(\vec{x})=a_{0}^{d}+a_{0}^{d-1}a_{1}(x_{1}+\cdots+x_{d})+W(\vec{x})=a_{0}^{d}s_{(0,0,\ldots,0)}(\vec{x})+a_{0}^{d-1}a_{1}s_{(1,0,\ldots,0)}(\vec{x})+W(\vec{x}), \tag{26}\]
where \(W(\vec{x})\) is the symmetric function that contains quadratic and higher-order terms in the variables \(\vec{x}=(x_{1},\ldots,x_{d})^{T}\). Since Schur polynomials of degree \(l\) form a basis in the space of degree-\(l\) symmetric polynomials, the constant and linear terms in equation (26) are already IR-decomposed. Because Assumption 2 requires no multiplicities, we have \(a_{0}=0,1\) and \(a_{1}=0,1\).
Now we turn to concrete cases. For transfermions, the single-mode character reads
\[Q_{-}(x)=c_{-}\prod_{i}(1+\alpha_{i}x)=c_{-}+c_{-}\left(\sum_{i}\alpha_{i}\right)x+c_{-}\left(\sum_{i<j}\alpha_{i}\alpha_{j}\right)x^{2}+\ldots=a_{0}+a_{1}x+a_{2}x^{2}+\ldots \tag{27}\]
with \(\alpha_{1}>\alpha_{2}>\cdots>0\) and \(c_{-}\in\mathbb{N}\). This is consistent with the previous analysis of (26) only if \(c_{-}=a_{0}=1\) and \(\sum_{i}\alpha_{i}=a_{1}=1\). For the quadratic coefficient in (27) we have \(a_{2}=\sum_{i<j}\alpha_{i}\alpha_{j}=\frac{1}{2}(\sum_{i}\alpha_{i})^{2}-\frac{1}{2}\sum_{i}\alpha_{i}^{2}=\frac{1}{2}-\frac{1}{2}\sum_{i}\alpha_{i}^{2}\in\mathbb{N}_{0}\) because \(Q_{-}\) is an integral polynomial. This is possible only if \(\alpha_{1}=1\) and \(\alpha_{2}=\cdots=0\). Thus, we recover the fermionic character \(\chi_{1}(x)=1+x\).
For the case of transbosons, we have the single-mode character
\[1/Q_{+}(x)=1/\prod_{i}(1-\alpha_{i}x)=1+\left(\sum_{i}\alpha_{i}\right)x+\left(\sum_{i}\alpha_{i}^{2}+\sum_{i<j}\alpha_{i}\alpha_{j}\right)x^{2}+\ldots=a_{0}+a_{1}x+a_{2}x^{2}+\ldots \tag{28}\]
By the same analysis as for transfermions, we conclude \(\sum_{i}\alpha_{i}=1\). For the quadratic term in (28), we have \(a_{2}=\sum_{i}\alpha_{i}^{2}+\sum_{i<j}\alpha_{i}\alpha_{j}=\frac{1}{2}(\sum_{i}\alpha_{i})^{2}+\frac{1}{2}\sum_{i}\alpha_{i}^{2}=\frac{1}{2}+\frac{1}{2}\sum_{i}\alpha_{i}^{2}\in\mathbb{N}_{0}\). Again, this is satisfied only if \(\alpha_{1}=1\) and \(\alpha_{2}=\cdots=0\). Thus \(\chi_{1}(x)=\frac{1}{1-x}\) and we recover the bosonic character. We conclude that only bosonic and fermionic statistics are consistent with Assumption 2.
For ordinary statistics, the excitation operator in (10) coincides with the standard number operator. The Casimir operator in (13) becomes the total number of particles which is a conserved quantity linked with \(N\)-particle sectors
\[\mathcal{F}_{d}^{(N)}=\mathrm{span}\{|n_{1},n_{2},...,n_{d}\rangle\;\mid\;\sum_{ k}n_{k}=N\}. \tag{29}\]
These are also \(U(d)\)-IR sectors associated with the standard bosonic (fermionic) subspaces \(\mathcal{V}_{\pm}^{(N)}\).
It is worth pointing out that only in the case of ordinary statistics is the solution to the equation (25) for spectrum \(f_{n}\) of the excitation operator \(\tilde{N}\) non-degenerate (in this case \(f_{n}=n\)). In all other cases, degeneracy necessarily appears. This follows from the fact that coefficients in the polynomial \(Q_{\pm}(x)\) are all non-zero, and at least one of them is 2 or greater (otherwise, all coefficients are equal to 1, and we have ordinary statistics). Given this, at least one expansion coefficient on the right-hand side of (25) is 2 or greater. Thus at least two \(f_{n}\) numbers on the left-hand side of (25) are the same.
## V Hidden symmetry and transtatistics
We learned from the previous analysis that multiplicities in the Fock space decomposition (16) will necessarily appear for all transtatistics apart from the bosonic and fermionic ones. These multiplicities cannot be resolved without an additional, so-called _hidden symmetry_ present in the system [53]. The latter is typically identified as a higher symmetry of the Hamiltonian required to fully resolve the degeneracy of the energy spectrum (sometimes called 'accidental' degeneracy). The classic example is the degeneracy of the hydrogen-atom spectrum, which is not captured by the rotational symmetry (the SO(3) group) of the Hamiltonian and requires a higher (hidden) symmetry, the SO(4) group, for its resolution [64]. In our case, the situation is similar; the multiplicities in the Fock space
decomposition are in one-to-one correspondence with the degeneracy of the Hamiltonian \(\hat{H}=\sum_{k=1}^{d}\epsilon_{k}\tilde{N}_{k}\) defined in (14) (the generator of the \(U(d)\) action). The total energy is given by
\[E=\sum_{k=1}^{d}\epsilon_{k}f_{k}, \tag{30}\]
with \(f_{k}\) being the eigenvalues of the excitation operator \(\tilde{N}\) defined in (10). As long as this operator is non-degenerate, the energy spectrum \(E\) is well-resolved with the set of quantum numbers \((f_{k_{1}},\ldots,f_{k_{d}})\). Nevertheless, we have seen that this happens only in the case of ordinary statistics. For all other cases, degeneracy in spectrum \(f_{n}\) necessarily appears, which is to be resolved by different quantum numbers unrelated to the \(U(d)\) group. Without the specification of these numbers, the representation of \(U(d)\) in Fock space remains unspecified, defined only up to IR-multiplicity.
We will study these effects in detail for the first nontrivial case beyond ordinary statistics, i.e., the order-one statistics \([1,q]_{\pm}\), with \(q\in\mathbb{N}\). To make the analysis more accessible, we will separate the notation for transbosons (type \([1,\beta]_{+}\)) and transfermions (type \([1,\alpha]_{-}\)) with \(\alpha,\beta\in\mathbb{N}\). The reason we set the first coefficient in \([\ldots]_{\pm}\) to \(1\) is that we restrict our analysis to the case of a unique vacuum state \(\ket{0}^{\otimes d}\). To be more precise, we will study the cases in which the only invariant state under \(U(d)\) is the vacuum state. This is possible only if the first coefficient in the single-mode character \(\chi_{1}(x)=\sum_{s=0}^{+\infty}a_{s}x^{s}\) is set to \(a_{0}=1\) (see the discussion around equation (26)).
To begin with, take the example of transfermions \([1,\alpha]_{-}\) with \(\alpha=2\). In this case, the single-mode Fock space is three-dimensional (this follows from \(\chi_{1}(x)=1+2x=1+2e^{i\theta}\)), and the maximal occupation number is \(p=\alpha=2\). For the case of two modes, the character reads \(\chi_{2}(x_{1},x_{2})=1+2(x_{1}+x_{2})+4x_{1}x_{2}=s_{(0,0)}(x_{1},x_{2})+2s_{(1,0)}(x_{1},x_{2})+2^{2}s_{(1,1)}(x_{1},x_{2})\). Thus, the Fock space decomposes into fermionic IRs appearing with multiplicities \(\alpha^{N}=2^{N}\), for \(N=0,1,2\). This exponential growth of multiplicity is generic to order-one transtatistics. It is formalized in the following theorem (see Appendix C for the proof).
**Theorem 2**.: _Fock spaces for \([1,\alpha]_{-}\) and \([1,\beta]_{+}\) decompose into IR sectors as_
\[\mathcal{F}_{d} =\bigoplus_{N=0}^{d}\alpha^{N}\mathcal{V}_{-}^{(N)}, \alpha\in\mathbb{N}, \tag{31}\] \[\mathcal{F}_{d} =\bigoplus_{N=0}^{+\infty}\beta^{N}\mathcal{V}_{+}^{(N)}, \beta\in\mathbb{N}, \tag{32}\]
_where \(\mathcal{V}_{-}^{(N)}\) and \(\mathcal{V}_{+}^{(N)}\) are the fermionic and bosonic IRs (\(N\)-particle sectors for ordinary statistics), respectively._
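A quick consistency check of Eq. (31) (an illustrative sketch, not from the paper): counting states mode by mode must agree with summing over the fermionic IR sectors, whose dimensions are binomial coefficients.

```python
from math import comb

alpha, d = 3, 5
direct = (alpha + 1) ** d                                      # alpha + 1 states per mode
via_irreps = sum(alpha**N * comb(d, N) for N in range(d + 1))  # dim V_-^(N) = C(d, N)
print(direct, via_irreps, direct == via_irreps)                # equal, by the binomial theorem
```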
In the next section, we will build the concrete ansatz to identify auxiliary quantum numbers to resolve the degeneracy in (31)-(32). Based on this, we will construct the \(U(d)\) representation in Fock space.
### Hidden quantum numbers
We start with transfermions \([1,\alpha]_{-}\) for some \(\alpha\geq 2\). For this case, the single-mode character reads \(\chi_{1}(x)=1+\alpha x\) with \(x=e^{i\theta}\in U(1)\). The single-mode Fock space is \((\alpha+1)\)-dimensional \(\mathcal{F}_{1}=\text{span}\{\ket{n}\mid n=0,1,2,\ldots,p=\alpha\}\). Equation (25) reads
\[x^{f_{0}}+x^{f_{1}}+\cdots+x^{f_{p}}=1+\alpha x, \tag{33}\]
with the solution \(f_{0}=0\) and \(f_{n}=1\) for \(n=1,\ldots,\alpha\). The generator of the \(U(1)\) action \(\Delta_{1}(e^{i\theta})=e^{i\theta\tilde{N}}\) is the excitation operator \(\tilde{N}\) defined in (10), and in our case it acts as follows
\[\tilde{N}\ket{n}=\begin{cases}0&n=0,\\ +1\ket{n}&n=1,\ldots,\alpha.\end{cases} \tag{34}\]
Given this, one can re-interpret the single-mode states \(\ket{n}\) for \(n\geq 1\) as _de facto_ being single-particle excitations distinguished by some auxiliary degree of freedom with \(\alpha\) values. Therefore, we can introduce the decomposition \(n=k+z\), with \(k=0,1\) being the 'real' occupation number of the fermionic type and \(z=0,\ldots,\alpha^{k}-1\) an auxiliary quantum number accounting for the degeneracy. With this, the formula (34) takes the standard form, i.e., \(\tilde{N}\ket{k+z}=k\ket{k+z}\). Now, to separate the degrees of freedom captured by the \(k\) and \(z\) quantum numbers, we introduce the mapping
\[L_{1}\ket{k+z}=\begin{cases}\ket{0}_{F}&k=0,\\ \ket{1}_{F}\otimes\ket{z}_{A}&k=1,\end{cases} \tag{35}\]
where \(\ket{k}_{F}\) is the ordinary fermionic number state with \(k=0,1\), while \(\ket{z}_{A}\) (with \(z=0,\ldots,\alpha-1\)) is a new degree of freedom emerging solely from the _statistics type_. The ansatz straightforwardly generalizes to the \(d\)-mode Fock space. We define
\[L_{d}\ket{n_{1},\ldots,n_{d}}=\mathcal{T}L_{1}^{\otimes d}\ket{n_{1},\ldots,n_{ d}}, \tag{36}\]
where \(\mathcal{T}\) is the shift operator needed to separate degrees of freedom, i.e., to shift all auxiliary states to the right. For example, \(\mathcal{T}\ket{k_{1}}\ket{z_{1}}\ket{k_{2}}\ket{z_{2}}=\ket{k_{1},k_{2}}_{F} \otimes\ket{z_{1},z_{2}}_{A}\). To fully clarify the mapping in (36), let \(\ket{n_{1},\ldots,n_{d}}=\ket{k_{1}+z_{1},\ldots,k_{d}+z_{d}}\), where again, \(k_{s}=0,1\) and \(z_{s}=0,\ldots,\alpha^{k_{s}}-1\). We form the ordered list \((z_{s_{1}},\ldots,z_{s_{N}})\) for which \(k_{s_{r}}=1\), i.e. the list of all non-zero fermionic excitations. Here \(N=d-(\delta_{0,n_{1}}+\cdots+\delta_{0,n_{d}})\) is the total number of them. Then, the equation (36) reads
\[L_{d}\ket{n_{1},\ldots,n_{d}}=\ket{k_{1},\ldots,k_{d}}_{F}\otimes\ket{z_{s_{1} },\ldots,z_{s_{N}}}_{A} \tag{37}\]
where \(\ket{k_{1},\ldots,k_{d}}_{F}\) is the ordinary \(N\)-particle fermionic state, with the auxiliary label of particles \(\ket{z_{s_{1}},\ldots,z_{s_{N}}}_{A}\).
This brings us precisely to the decomposition in (31), which can also be written as
\[\mathcal{F}_{d}=\bigoplus_{N=0}^{d}\mathcal{V}_{-}^{(N)}\otimes\mathcal{H}_{A}^{ \otimes N}, \tag{38}\]
where \(\mathcal{H}_{A}=\text{span}\{\left|z\right\rangle\;|\;z=0,\dots,\alpha-1\}\) is the auxiliary space. Given this factorization, it is clear that \(U(d)\) acts only in the fermionic part \(\mathcal{V}_{-}^{(N)}\), while \(\mathcal{H}_{A}^{\otimes N}\) remains untouched. An additional \(U(\alpha)\) group acting in the space \(\mathcal{H}_{A}^{\otimes N}\) can be added to resolve the degeneracy completely. Now, for an element \(g\in U(d)\), let the standard action on the fermionic number state be \(\Delta_{d}^{(F)}(g)\left|k_{1},\dots,k_{d}\right\rangle_{F}\). This induces the action \(\Delta_{d}(g)\) in the Fock space (5) as
\[\Delta_{d}(g)=L_{d}^{-1}\left(\Delta_{d}^{(F)}(g)\otimes\mathds{1}_{A}\right) L_{d}, \tag{39}\]
where \(L_{d}\) is the mapping given in (36). With this, we have defined the action of \(U(d)\) in the Fock space.
In complete analogy, we provide an ansatz for transbosons of \([1,\beta]_{+}\) type with \(\beta\geq 2\). In this case, we have the single mode character \(\chi_{1}(x)=\frac{1}{1-\beta x}\) with \(x=e^{i\theta}\in U(1)\) and the single-mode Fock space is infinite-dimensional \(\mathcal{F}_{1}=\text{span}\{\left|n\right\rangle\,\left|\ n=0,1,2,\dots\right\}\). As before, we shall solve equation (25)
\[x^{f_{0}}+x^{f_{1}}+x^{f_{2}}+\dots=\frac{1}{1-\beta x}. \tag{40}\]
It is convenient to write the particle number \(n\) in the form
\[n=1+\beta+\beta^{2}+\dots+\beta^{k-1}+z=\frac{\beta^{k}-1}{\beta-1}+z, \tag{41}\]
with \(k=0,1,2,\dots\) and \(z=0,\dots,\beta^{k}-1\). Having this notation, the solution to (40) is simple, i.e., \(f_{\frac{\beta^{k}-1}{\beta-1}+z}=k\). We have the following action of the excitation operator \(\tilde{N}\)
\[\tilde{N}\left|\frac{\beta^{k}-1}{\beta-1}+z\right\rangle=k\left|\frac{\beta ^{k}-1}{\beta-1}+z\right\rangle. \tag{42}\]
Here \(k\) represents the 'new' occupation number of the bosonic type, while \(z\) is an auxiliary quantum number. Since \(z=0,\dots,\beta^{k}-1\) counts all possible states associated to \(k\) bosonic excitations, it is convenient to write \(z\) in the \(\beta\)-base, i.e. \(z=z_{k-1}\beta^{k-1}+z_{k-2}\beta^{k-2}+\dots+z_{0}\beta^{0}:=z_{k-1}z_{k-2} \dots z_{0}\), where \(z_{s}=0,\dots,\beta-1\) are the digits. With this, we can introduce the mapping
\[L_{1}\left|\frac{\beta^{k}-1}{\beta-1}+z\right\rangle=\begin{cases}\left|0 \right\rangle_{B}&k=0,\\ \left|k\right\rangle_{B}\otimes\left|z_{k-1}z_{k-2}\dots z_{0}\right\rangle_{ A}&k>0,\end{cases} \tag{43}\]
where \(\left|k\right\rangle_{B}\) is the ordinary bosonic Fock (number) state with \(k=0,1,2,\dots\), while \(\left|z_{k-1}z_{k-2}\dots z_{0}\right\rangle_{A}\) (with \(z_{s}=0,\dots,\beta-1\)) is associated to the statistics degree of freedom. The generalization to the \(d\)-mode Fock state is as for the case of transfermions, i.e., we use the same equation (36). In this case, we have
\[L_{d}\left|n_{1},\dots,n_{d}\right\rangle=\left|k_{1},\dots,k_{d}\right\rangle _{B}\otimes\left|z_{s_{1}},\dots,z_{s_{N}}\right\rangle_{A}, \tag{44}\]
where \(\left|k_{1},\dots,k_{d}\right\rangle_{B}\) is the ordinary bosonic number state with \(k_{s}=0,1,2,\dots\), while \(\left|z_{s_{1}}\dots z_{s_{N}}\right\rangle_{A}\) comes from the type of statistics. The action of \(U(d)\) is introduced in complete analogy with the fermionic case, via equation (39).
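The relabelling in Eqs. (41)-(43) is easy to make explicit. The sketch below (illustrative only, assuming \(\beta\geq 2\) as in the text) maps a raw occupation number \(n\) to the bosonic excitation number \(k\) and the base-\(\beta\) digits of the hidden label \(z\):

```python
def transboson_labels(n, beta):
    """Map a raw single-mode state |n> of [1, beta]_+ transbosons to (k, digits),
    where k is the bosonic excitation number and `digits` are the base-beta digits
    of the auxiliary label z, cf. Eqs. (41)-(43). Assumes beta >= 2."""
    k = 0
    while (beta ** (k + 1) - 1) // (beta - 1) <= n:   # largest k with (b^k - 1)/(b - 1) <= n
        k += 1
    z = n - (beta ** k - 1) // (beta - 1)
    digits = []
    for _ in range(k):                                 # z has exactly k base-beta digits
        digits.append(z % beta)
        z //= beta
    return k, digits[::-1]

# beta = 2: raw states |0>, |1>, |2>, ... regroup into levels k = 0, 1, 1, 2, 2, 2, 2, 3, ...
print([transboson_labels(n, 2) for n in range(8)])
```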
### Is hidden symmetry an ordinary internal symmetry?
We may question whether the hidden quantum numbers introduced in the previous section are related to some genuine degree of freedom emerging from the type of statistics. Could these numbers be associated with standard internal degrees of freedom, such as spin? For example, the degeneracy in (30) could potentially be explained by the argument that the energy is spin-independent, so that transtatistics may be just ordinary (fermionic or bosonic) statistics where \(U(d)\) affects only external degrees of freedom (such as modes represented by the paths of particles in Fig. 1\(a\)). However, this argument cannot be aligned with the Fock-space decomposition in (31)-(32), even though only multiplets of ordinary statistics appear in the decomposition. This is due to the dimension discrepancy between ordinary statistics and transtatistics. To see this, suppose that we deal with ordinary fermions with \(d\) real degrees of freedom (\(d\) modes on which \(U(d)\) acts) and some internal degree of freedom (e.g., spin) with \(z=0,\dots,\alpha-1\) values, which is unaffected by \(U(d)\). The overall dimension of the single-particle space is \(\alpha d\); hence the dimension of the fermionic Fock space is \(2^{\alpha d}\). This starkly contrasts with the dimension \((\alpha+1)^{d}\) of the transfermionic Fock space of the \([1,\alpha]_{-}\) type. As we shall see, this dimension discrepancy differentiates the thermodynamics of non-interacting systems of ordinary statistics and transtatistics. The latter is accompanied by a generic spontaneous symmetry breaking absent in ordinary statistics.
Note that the analogy to, e.g., a spin degree of freedom discussed here is only possible for order-one statistics. For higher-order statistics, no (obvious) similarities can be drawn. We will discuss this point later.
### Relation to thermodynamics
To study thermodynamics, we consider the single-particle energy spectrum \(\epsilon_{1},\dots,\epsilon_{d}\), where the \(\epsilon_{k}\) represent the energies associated with different modes. This situation is similar to the one discussed in Section II.2. In that section, we examined the unitary evolution generated by the Hamiltonian given in equation (14). However, in the present case the system is in contact with a thermal bath. The thermodynamical quantities (e.g., for the canonical ensemble) can be derived from the partition function
\(Z_{d}(\beta)=\mathrm{Tr}\,e^{-\beta\hat{H}}\), and its explicit form follows directly from the form of \(\hat{H}\), i.e.,
\[Z_{d}(\beta)=\prod_{k=1}^{d}Z_{1}(e^{-\beta\epsilon_{k}})=\prod_{k=1}^{d}\chi_{1} (e^{-\beta\epsilon_{k}}) \tag{45}\]
with \(\beta=1/k_{B}T\) being the inverse temperature.
The physical relevance of the character \(\chi\) can also be understood through thermodynamics [65]. This is because we can get the partition function from the character via a Wick rotation, that is, \(i\epsilon_{k}t/\hbar\rightarrow-\beta\epsilon_{k}\). In this respect, the product form of equation (45) arises directly from our central Assumption 1, i.e., the overall partition function can be expressed as a product of individual partition functions (associated with individual modes). This aligns with the expected behavior for independent systems, such as a set of independent modes. Therefore, Assumption 1 is in one-to-one correspondence with independence in the thermodynamical sense. The formula (45) trivially holds for ordinary statistics (bosons and fermions) [66].
When the system is capable of exchanging excitations (particles) with a reservoir, we can analyze its behavior using the grand canonical partition function \(\mathcal{Z}_{d}=\mathrm{Tr}e^{-\beta(\hat{H}-\mu\tilde{N})}\). In this expression, \(\tilde{N}\) represents the excitation operator as defined in equation (10), and \(\mu\) corresponds to the chemical potential (variable conjugated to \(\tilde{N}\)). The explicit form of the grand canonical partition function \(\mathcal{Z}_{d}\) is as follows
\[\mathcal{Z}_{d}=\prod_{k=1}^{d}Z_{1}(e^{-\beta(\epsilon_{k}-\mu)})=\prod_{k=1}^{d}\chi_{1}(e^{-\beta(\epsilon_{k}-\mu)}). \tag{46}\]
### Thermodynamics of ideal gasses and spontaneous symmetry breaking
Let us examine the thermodynamical properties of a non-interacting system for general order-one statistics \([1,q]_{\pm}\) with \(q\in\mathbb{N}\). Ordinary statistics is recovered for \(q=1\). We consider a grand-canonical ensemble defined by a set of single-particle energies \(\epsilon_{1},\ldots,\epsilon_{d}\) associated with different modes. The system is described by an equilibrium state \(\rho=\frac{1}{\mathcal{Z}_{d}}e^{-\beta(\hat{H}-\mu\tilde{N})}\), where \(\mathcal{Z}_{d}\) is the grand-canonical partition function defined in (46). All thermodynamical quantities can be evaluated from the grand canonical potential \(\Omega=-\frac{1}{\beta}\log\mathcal{Z}_{d}\). For example, \(N=-\frac{\partial\Omega}{\partial\mu}\) gives the mean particle number. For the case of transtatistics \([1,q]_{\pm}\), we get
\[N=\sum_{i}n_{i}=\sum_{i}\frac{1}{\frac{1}{q}e^{\beta(\epsilon_{i}-\mu)}\pm 1}. \tag{47}\]
This expression reduces to the Fermi-Dirac and Bose-Einstein distributions for \(q=1\). The plots of \(n_{i}\) in (47) for various statistics are presented in Fig. 3. For the fermionic-type statistics, equation (47) reduces to the zero-temperature Fermi-Dirac step function \(n_{i}=\theta(\mu-\epsilon_{i})\) for all \(q\). For the bosonic type, the mean number diverges at the energy \(\epsilon=\mu+\frac{1}{\beta}\log q\), where Bose-Einstein condensation occurs. In the classical limit \(\beta(\epsilon-\mu)\gg 1\), the formula (47) reduces to the standard Maxwell-Boltzmann distribution, i.e., \(n_{i}\approx qe^{-\beta(\epsilon_{i}-\mu)}\), where the factor \(q\) appears as a degeneracy factor. The same factor appears in the classical limit for standard quantum gases with \(q=2s+1\) coming from spin \(s\) (see, for example, Chapter 8.3 in [67]). This is because the energy is independent of spin, and thus the energy spectrum is degenerate.
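A minimal sketch evaluating the occupation formula (47) and its Maxwell-Boltzmann limit with the degeneracy factor \(q\); the energies and parameters are illustrative.

```python
import numpy as np

def mean_occupation(eps, mu, beta, q, kind="fermi"):
    """Mean occupation number of Eq. (47): n = 1 / ((1/q) exp(beta*(eps - mu)) +/- 1)."""
    x = np.exp(beta * (np.asarray(eps, dtype=float) - mu)) / q
    return 1.0 / (x + 1.0) if kind == "fermi" else 1.0 / (x - 1.0)

eps = np.linspace(1.0, 4.0, 4)
beta, mu = 2.0, 0.0
for q in (1, 3):
    n = mean_occupation(eps, mu, beta, q, kind="fermi")
    mb = q * np.exp(-beta * (eps - mu))      # classical (Maxwell-Boltzmann) limit with factor q
    print(q, np.round(n, 4), np.round(mb, 4))
```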
Note that the chemical potential \(\mu\) in the formulas above is temperature dependent. To be more precise, the standard approach to the thermodynamics of ideal gases is to keep the total particle number \(N\) as a fixed parameter and then invert (47) to calculate the chemical potential \(\mu=\mu(N,T)\) as a function of the total number of particles and the temperature [66]. Given this, one can introduce a simple change of variables \(\mu\rightarrow\mu-\frac{1}{\beta}\log q\), and the formula (47) reduces to the one for ordinary statistics. This means that the solution for the chemical potential of order-one transtatistics is
\[\mu_{q}=\mu_{q=1}-k_{B}T\log q, \tag{48}\]
where \(\mu_{q=1}\) is the chemical potential of ordinary statistics. It follows that almost all thermodynamical quantities (e.g., mean energy, heat capacity, etc.) remain the same as in the case of ordinary statistics for arbitrary \(q\). Nevertheless, the entropy changes. To see this, note that \(S=k_{B}\left(-\beta\Omega+\beta\langle E\rangle-\beta\mu N\right)\); thus the shift of \(-k_{B}T\log q\) in the chemical potential introduces a change in the entropy, i.e.
\[S_{q}=S_{q=1}+k_{B}N\log q. \tag{49}\]
The entropy of ordinary statistics \(S_{q=1}\) vanishes at \(T=0\); hence, a residual entropy of \(k_{B}N\log q\) remains at zero temperature for all \(q>1\). This is consistent with the
Figure 3: **Mean particle number** for ordinary (blue and green) and generalized (orange and red) statistics.
fact that the fermionic (bosonic) \(N\)-particle IRs in the Fock space decomposition (31)-(32) appear \(q^{N}\) times; therefore, the ground state is \(q^{N}\)-fold degenerate. This degeneracy is known to result in _residual entropy at zero temperature_ and is associated with _spontaneous symmetry breaking_ [54], here present for transtatistics. This is one of the main differences compared to ordinary quantum gases, which exhibit non-degenerate ground states.
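As a consistency check of Eqs. (48)-(49), the chemical-potential shift and the entropy offset can be verified numerically; a minimal sketch for fermionic-type order-one statistics, assuming the per-mode grand-partition factor \(1+q\,e^{-\beta(\epsilon-\mu)}\) (consistent with Eq. (47)), with an illustrative spectrum and \(k_B=1\).

```python
import numpy as np
from scipy.optimize import brentq

eps = np.linspace(0.0, 3.0, 30)        # illustrative single-particle spectrum
beta, N_target = 4.0, 10.0

def mean_N(mu, q):
    return np.sum(1.0 / (np.exp(beta * (eps - mu)) / q + 1.0))

def entropy(mu, q):                    # S/k_B = beta*(<E> - mu*N - Omega)
    n = 1.0 / (np.exp(beta * (eps - mu)) / q + 1.0)
    omega = -np.sum(np.log1p(q * np.exp(-beta * (eps - mu)))) / beta
    return beta * (np.sum(eps * n) - mu * np.sum(n) - omega)

mu = {q: brentq(lambda m: mean_N(m, q) - N_target, -20.0, 20.0) for q in (1, 5)}
print("mu_q - mu_1 :", mu[5] - mu[1], " expected:", -np.log(5) / beta)        # Eq. (48)
print("S_q  - S_1  :", entropy(mu[5], 5) - entropy(mu[1], 1),
      " expected:", N_target * np.log(5))                                     # Eq. (49)
```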
## VI Discussion and Outlook
### Statistics of higher order
Here we briefly analyze some of the technical and conceptual difficulties that arise when dealing with statistics of higher order. As an illustration, we take the example of statistics of order two \([1,q_{1},q_{2}]_{\pm}\). A simple inspection shows that polynomial \(Q_{\pm}(x)=1\mp q_{1}x+q_{2}x^{2}\) has non-negative (positive) roots for \(q_{1}^{2}>4q_{2}\). To see how the Fock space decomposes in some simple cases, consider transfermions \([1,q,1]_{-}\) and the corresponding two-mode character
\[\chi_{2}(x_{1},x_{2}) = (1+qx_{1}+x_{1}^{2})(1+qx_{2}+x_{2}^{2}) \tag{50}\] \[= s_{(0,0)}(x_{1},x_{2})+qs_{(1,0)}(x_{1},x_{2})+(q^{2}-1)s_{(1,1)}(x_{1},x_{2})\] \[+ s_{(2,0)}(x_{1},x_{2})+qs_{(2,1)}(x_{1},x_{2})+s_{(2,2)}(x_{1},x_{2}).\]
The IR characters \(s_{(2,1)}\) and \(s_{(2,2)}\), which are of neither fermionic nor bosonic type, show up in the decomposition. This is a typical feature of any higher-order statistics. In turn, finding the Fock space decomposition for general \(d\) modes, such as the one provided for order-one statistics in (31)-(32), is more difficult. Next, the dimension of the single-mode Fock space is \(q+2\), and the maximal occupation number is \(p=q+1\). The solution to the single-mode character equation (25)
\[x^{f_{0}}+\cdots+x^{f_{q+1}}=1+qx+x^{2} \tag{51}\]
is \(f_{0}=0\) and \(f_{q+1}=2\), while \(f_{n}=1\) for \(n=1,\ldots,q\). Recall that these are the eigenvalues of the excitation operator \(\tilde{N}\) in (10), and as we see, there are three distinct values \(k=0,1,2\). Again, we have degeneracy of the spectrum, but resolving it is a more delicate issue than for the order-one statistics presented in Section V. This is partially because a clear interpretation is missing. For example, we may try to label the single-mode Fock states with two quantum numbers, \(k=0,1,2\) (for excitations), and one auxiliary number \(z_{k}\) to account for degeneracy. As before, we have \(\tilde{N}\ket{k,z_{k}}=k\ket{k,z_{k}}\), with \(z_{k}=0\) for \(k=0,2\), while for \(k=1\) we have \(z_{k}=0,\ldots,q-1\). This appears paradoxical because degeneracy is present for one excitation but disappears for two. Unfortunately, due to such issues in categorizing hidden quantum numbers and the involvement of 'non-standard' IRs, the analysis becomes significantly more complicated, and we leave it for future investigations.
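The two-mode decomposition (50) can be verified symbolically; a minimal sketch using sympy, with the two-variable Schur polynomials written out explicitly (the expanded product and the Schur expansion agree identically in \(q\)).

```python
import sympy as sp

x1, x2, q = sp.symbols("x1 x2 q")

# two-variable Schur polynomials appearing in Eq. (50)
s = {
    (0, 0): sp.Integer(1),
    (1, 0): x1 + x2,
    (1, 1): x1 * x2,
    (2, 0): x1**2 + x1 * x2 + x2**2,
    (2, 1): x1 * x2 * (x1 + x2),
    (2, 2): x1**2 * x2**2,
}

chi2 = (1 + q * x1 + x1**2) * (1 + q * x2 + x2**2)
decomposition = (s[(0, 0)] + q * s[(1, 0)] + (q**2 - 1) * s[(1, 1)]
                 + s[(2, 0)] + q * s[(2, 1)] + s[(2, 2)])

print(sp.simplify(sp.expand(chi2 - decomposition)))   # prints 0
```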
### Relation to other generalized statistics
An obvious question is whether and how the statistics classified in (24) differ from other generalized statistics presented in the literature. Of course, we are not in a position to make an exhaustive comparison, but we can analyze the most common cases. The first remark is that the main difference lies in the underlying symmetries. Our classification relies on the \(U(d)\) group, while in most cases other generalized statistics are based on a different group. Take the example of fractional statistics [5; 7], where topological defects and representations of braid groups [68] play the central role. We can have, for example, the action of a \(2\pi\)-rotation leaving a non-trivial phase. This contrasts with the \(2\pi\)-periodicity essential to derive the integer spectrum of the excitation operator in our equation (10). This suggests that we speak of different kinds of particle statistics due to the involvement of different symmetry groups. On the other hand, the recent work [69] suggests that fractional statistics can be phrased in terms of Jack polynomials, which generalize Schur polynomials (our primary tool to classify statistics). This relationship is worth looking into in the future.
A very similar situation holds for many generalized statistics related to deformed canonical commutation relations. Take the example of \(q\)-deformations (quons) with \(a_{i}a_{j}^{\dagger}-qa_{j}^{\dagger}a_{i}=\delta_{ij}\mathbb{1}\) [14]. However, \(q\)-deformations introduce new symmetries even at the level of a single particle, i.e., the \(q\)-deformed \(U(d)\) group [70], while our statistics is directly tied to the \(U(d)\) symmetry. Still, some comparison might be possible for order-one statistics, where our ansatz of Section V provides the means to construct the algebra of creation and annihilation operators and evaluate the corresponding commutation relations.
Finally, the question is how our generalization is related to parastatistics [13]. As already pointed out, the group behind the parastatistics is different [61; 62]. This leads to the different Fock space decomposition, i.e., for parastatistics of order \(p\), we have [71; 72]
\[\mathcal{F}_{\text{parab}}=\bigoplus_{l(\lambda)\leq p}\mathcal{V}_ {\lambda} \tag{52}\] \[\mathcal{F}_{\text{paraf}}=\bigoplus_{l(\lambda^{\prime})\leq p} \mathcal{V}_{\lambda}, \tag{53}\]
where the sum runs over Young diagrams \(\lambda\) (parabose case) or \(\lambda^{\prime}\) (parafermi case) of the length \(l(\lambda)\) (number of rows). Here \(\lambda^{\prime}\) is the conjugated diagram of \(\lambda\) and \(\mathcal{V}_{\lambda}\) is an \(U(d)\)-IR associated to \(\lambda\). This decomposition contains no multiplicities and thus is compatible with our classification only for the case of ordinary statistics.
### Some open questions and applications
The broad range of possibilities for generalized statistics introduced here leaves many interesting open questions and potential applications.
Firstly, an open question is what else transtatistics brings on the physical side, beyond the new effects already discussed. As we already discussed, there are technical difficulties with higher-order statistics, mainly in the context of hidden quantum numbers. Nevertheless, we may study thermodynamics directly via the ansatz defined in Section V.4. One has to calculate the partition functions given in (45) for more general characters. In this case, a simple shift of the chemical potential as in (48) will not reduce the thermodynamical quantities to those of ordinary statistics, as happens for order-one statistics. Given this, we can expect other novel physical effects to appear.
The next exciting point to analyze is the application of our method to diagonalizing solid-state Hamiltonians, as is done for spin chains via the spin-fermion mapping (Jordan-Wigner transformation) [56]. For example, the transfermionic Fock space for \([1,\alpha]_{+}\) is isomorphic to \((\mathbb{C}^{\alpha+1})^{\otimes d}\), which is suitable for studying higher-dimensional spin chains. In complete analogy to the spin-fermion mapping, one can expect to find other integrable many-body Hamiltonians that reduce to our non-interacting model.
An interesting perspective on our results comes from the quantum computational complexity of quantum statistics. Namely, it is well known that non-interacting bosons are computationally hard to simulate [57], while non-interacting fermions are not [73]. One can ask a similar question here, i.e., what is the computational power of the non-interacting model for transtatistics? Any answer is relevant and may find applications in quantum computing.
Finally, on the speculative side, an interesting idea of applying generalized statistics in the context of dark-matter modeling was recently presented [74]. The main point is to study thermodynamics and the effects of a negative relation between pressure and energy density, emphasized for many existing dark-energy candidates. Our methods provide a direct way to calculate the thermodynamical properties of transtatistics, and investigating their relation to dark-matter models might therefore be worthwhile.
###### Acknowledgements.
The authors thank S. Horvat, J. Morris and C. Brukner for their helpful comments. This research was funded in whole, or in part, by the Austrian Science Fund (FWF) [F7115] (BeyondC). For the purpose of open access, the author(s) has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
|
2303.06199
|
Turning Strengths into Weaknesses: A Certified Robustness Inspired
Attack Framework against Graph Neural Networks
|
Graph neural networks (GNNs) have achieved state-of-the-art performance in
many graph learning tasks. However, recent studies show that GNNs are
vulnerable to both test-time evasion and training-time poisoning attacks that
perturb the graph structure. While existing attack methods have shown promising
attack performance, we would like to design an attack framework to further
enhance the performance. In particular, our attack framework is inspired by
certified robustness, which was originally used by defenders to defend against
adversarial attacks. We are the first, from the attacker perspective, to
leverage its properties to better attack GNNs. Specifically, we first derive
nodes' certified perturbation sizes against graph evasion and poisoning attacks
based on randomized smoothing, respectively. A larger certified perturbation
size of a node indicates this node is theoretically more robust to graph
perturbations. Such a property motivates us to focus more on nodes with smaller
certified perturbation sizes, as they are easier to be attacked after graph
perturbations. Accordingly, we design a certified robustness inspired attack
loss, when incorporated into (any) existing attacks, produces our certified
robustness inspired attack counterpart. We apply our framework to the existing
attacks and results show it can significantly enhance the existing base
attacks' performance.
|
Binghui Wang, Meng Pang, Yun Dong
|
2023-03-10T20:32:09Z
|
http://arxiv.org/abs/2303.06199v1
|
# Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks
###### Abstract
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph learning tasks. However, recent studies show that GNNs are vulnerable to both test-time evasion and training-time poisoning attacks that perturb the graph structure. While existing attack methods have shown promising attack performance, we would like to design an attack framework to further enhance the performance. In particular, our attack framework is inspired by certified robustness, which was originally used by defenders to defend against adversarial attacks. We are the first, from the attacker perspective, to leverage its properties to better attack GNNs. Specifically, we first derive nodes' certified perturbation sizes against graph evasion and poisoning attacks based on randomized smoothing, respectively. A larger certified perturbation size of a node indicates this node is _theoretically_ more robust to graph perturbations. Such a property motivates us to focus more on nodes with smaller certified perturbation sizes, as they are easier to be attacked after graph perturbations. Accordingly, we design a certified robustness inspired attack loss which, when incorporated into (any) existing attacks, produces our certified robustness inspired attack counterpart. We apply our framework to the existing attacks and results show it can significantly enhance the existing base attacks' performance.
## 1 Introduction
Learning with graphs, such as social networks, citation networks, chemical networks, has attracted significant attention recently. Among many methods, graph neural networks (GNNs) [14, 33, 38, 41, 44] have achieved state-of-the-art performance in graph related tasks such as node classification, graph classification, and link prediction. However, recent studies [8, 19, 20, 23, 30, 34, 36, 37, 39, 40, 50, 51] show that GNNs are vulnerable to both test-time graph evasion attacks and training-time graph poisoning attacks1. Take GNNs for node classification as an instance, graph evasion attacks mean that, given a learnt GNN model and a (clean) graph, an attacker carefully perturbs the graph structure (i.e., inject new edges to or remove the existing edges from the graph) such that as many testing nodes as possible are misclassified by the GNN model. Whereas, graph poisoning attacks mean that, given a GNN algorithm and a graph, an attacker carefully perturbs the graph structure in the training phase, such that the learnt GNN model misclassifies as many testing nodes as possible in the testing phase. While existing methods have shown promising attack performance, we want to ask: Can we design a general _attack framework_ that can further enhance both the existing graph evasion and poisoning attacks to GNNs? The answer is yes.
Footnote 1: We mainly consider the graph structure attack in the paper, as it is more effective than the feature attack. However, our attack framework can be easily extended to the feature attack.
We design an attack framework inspired by certified robustness. Certified robustness was originally used by _defenders_ to guarantee the robustness of classification models against evasion attacks. Generally speaking, a testing example (e.g., an image or a node) with a better certified robustness guarantee indicates this example is _theoretically_ more robust to adversarial (e.g., pixel or graph) perturbations. While certified robustness is mainly derived for doing the good, _attackers_, on the other hand, can also leverage its property to do the bad. For instance, when an attacker knows the certified robustness of nodes in a graph, he can base on nodes' certified robustness to _reversely_ reveal the vulnerable region of the graph and leverage this vulnerability to design better attacks. We are inspired by such property of certified robustness and design the first certified robustness inspired attacks to GNNs.
Our attack framework consists of three parts: i) Inspired by the state-of-the-art randomized smoothing based certified robustness against _evasion attacks_ to image models [7, 28] and GNN models [35], we first propose to generalize randomized smoothing and derive the node's certified perturbation size against graph _poisoning attacks_ to GNNs. Particularly, a larger certified perturbation size of a node indicates this node is _theoretically_ more robust to adversarial graph perturbations. In other words, an attacker needs to perturb more edges during the training phase in order to make this node wrongly predicted by the learnt GNN model. This property inspires us to focus more on disrupting nodes with relatively smaller certified perturbation sizes under a given perturbation budget. ii) We design a certified robustness inspired attack loss. Specifically, we modify the classic node-wise loss by assigning each node a weight based on its certified perturbation size--a node with a larger/smaller certified perturbation size will be assigned a smaller/larger weight. In doing so, losses for nodes with smaller certified perturbation sizes will be enlarged, and most of the perturbation budget will be automatically allocated to perturb these nodes. Thus, more nodes will be misclassified with the given perturbation budget. iii) We design the certified robustness inspired attack framework to generate adversarial graph perturbations to GNNs, based on our certified robustness inspired attack loss. We emphasize that, as our new attack loss only modifies the existing attack loss with certified perturbation size defined node weights, any existing graph evasion or poisoning attack method can be used as the base attack in our framework.
We apply our certified robustness inspired attack framework to the state-of-the-art graph evasion and poisoning attacks [40, 51] to GNNs. Evaluation results on multiple benchmark datasets show our attack framework can substantially enhance the attack performance of the base attacks. Our contributions are as follows:
* We propose a certified robustness inspired attack framework to GNNs. Our framework can be plugged into any existing graph evasion and poisoning attacks.
* To our best knowledge, we are the first work to use certified robustness for an attack purpose.
* Evaluation results validate the effectiveness of our attack framework when applied to the existing attacks to GNNs.
## 2 Background and Preliminaries
### Graph Neural Networks (GNNs)
Let \(G=(\mathcal{V},\mathcal{E})\) be a graph, where \(u\in\mathcal{V}\) is a node, \((u,v)\in\mathcal{E}\) is an edge between \(u\) and \(v\). Let \(\mathbf{A}\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\) be the adjacency matrix. _As \(\mathbf{A}\) contains all graph structure information, we will interchangeably use \(\mathbf{A}\) and \(G\) to indicate the graph in the paper._ We mainly consider GNNs for node classification. Each node \(u\in\mathcal{V}\) has a label \(y_{u}\) from a label set \(\mathcal{Y}\). Let \(\mathcal{V}_{Tr}\) and \(\mathcal{V}_{Te}\) be the set of training nodes and testing nodes, respectively. Given a GNN algorithm \(\mathcal{A}\), which takes the graph \(G(\mathbf{A})\) and training nodes \(\mathcal{V}_{Tr}\) as an input and produces a node classifier \(f_{\theta}\) parameterized by \(\theta\), i.e., \(f_{\theta}=\mathcal{A}(\mathbf{A},\mathcal{V}_{Tr})\). The node classifier \(f_{\theta}\) inputs \(G(\mathbf{A})\) and outputs labels for all nodes, i.e., \(f_{\theta}:\mathbf{A}\rightarrow\mathcal{Y}^{|\mathcal{V}|}\). To learn \(f_{\theta}\), a common way is to minimize a loss function \(\mathcal{L}\) defined on the training nodes \(\mathcal{V}_{Tr}\) and the graph \(G(\mathbf{A})\) as follows:
\[\min_{\theta}\mathcal{L}(f_{\theta},\mathbf{A},\mathcal{V}_{Tr})=\sum_{u\in \mathcal{V}_{Tr}}\ell(f_{\theta}(\mathbf{A};u),y_{u}), \tag{1}\]
where \(f_{\theta}(\mathbf{A};u)\) is the predicted label of a node \(u\). After learning \(f_{\theta^{*}}\), a testing node \(v\in\mathcal{V}_{Te}\) is then predicted a label as \(\hat{y}_{v}=f_{\theta^{*}}(\mathbf{A};v)\).
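As a concrete illustration of the training objective in Equation 1, the following is a minimal self-contained PyTorch sketch of a two-layer GCN trained on a toy random graph; the architecture, sizes, and data are illustrative stand-ins, not the exact setup of [14] or of this paper's experiments.

```python
import torch
import torch.nn.functional as F

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(dim=1)
    return torch.diag(d.pow(-0.5)) @ A_hat @ torch.diag(d.pow(-0.5))

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, n_feat, n_hid, n_class):
        super().__init__()
        self.W1 = torch.nn.Linear(n_feat, n_hid)
        self.W2 = torch.nn.Linear(n_hid, n_class)

    def forward(self, A_norm, X):
        H = F.relu(A_norm @ self.W1(X))
        return A_norm @ self.W2(H)            # logits for every node

# toy graph standing in for G(A): 30 nodes, 16 features, 3 classes
torch.manual_seed(0)
A = (torch.rand(30, 30) < 0.1).float(); A = ((A + A.t()) > 0).float(); A.fill_diagonal_(0)
X, y = torch.randn(30, 16), torch.randint(0, 3, (30,))
train_idx = torch.arange(0, 10)               # V_Tr

model = TwoLayerGCN(16, 32, 3)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    logits = model(normalize_adj(A), X)
    loss = F.cross_entropy(logits[train_idx], y[train_idx])   # Eq. (1) over V_Tr
    loss.backward()
    opt.step()
```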
### Adversarial Attacks to GNNs
We denote by \(\delta\in\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\) the adversarial _graph perturbation_, where \(\delta_{s,t}=1\) (or \(0\)) means the attacker perturbs (or keeps) the edge status between a node pair \((s,t)\). Moreover, we denote \(\mathbf{A}\oplus\delta\) as the perturbed graph, with \(\oplus\) the element-wise XOR operator. For instance, if there is an (or no) edge between \((u,v)\), i.e., \(A_{u,v}=1\) (or \(A_{u,v}=0\)), perturbing this edge status (i.e., \(\delta_{u,v}=1\)) means removing the edge (i.e., \(A_{u,v}\oplus\delta_{u,v}=0\)) or injecting a new edge (i.e., \(A_{u,v}\oplus\delta_{u,v}=1\)). We assume an attacker has a perturbation budget \(\Delta\), i.e., \(\|\delta\|_{0}\leq\Delta\), meaning at most \(\Delta\) number of edges can be perturbed by the attacker.
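A minimal numerical illustration of this perturbation model: the perturbed graph is the element-wise XOR of \(\mathbf{A}\) and \(\delta\), and the budget counts the flipped entries. The tiny adjacency and the convention of counting each undirected edge once are illustrative choices, not the paper's exact bookkeeping.

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=np.int8)          # clean adjacency
delta = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [1, 0, 0]], dtype=np.int8)      # flip edge statuses of (0,1) and (0,2)

A_pert = A ^ delta                                # element-wise XOR: removes (0,1), adds (0,2)
budget_used = int(np.triu(delta, k=1).sum())      # count each undirected edge once
print(A_pert)
print("perturbed edges:", budget_used)
```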
**Graph evasion attacks to GNNs.** In graph evasion attacks, given a learnt node classifier \(f_{\theta^{*}}\), an attacker carefully crafts a graph perturbation \(\delta\) to the graph \(G\) such that \(f_{\theta^{*}}\) predicts nodes' labels using the perturbed graph \(\mathbf{A}\oplus\delta\) as the attacker desires. For instance, an attacker desires as many testing nodes as possible to be misclassified by \(f_{\theta^{*}}\) (called _untargeted attack_) under the perturbation budget \(\Delta\). Formally, an attacker aims to maximize the following 0-1 (_attack loss_):
\[\max_{\delta}\sum_{v\in\mathcal{V}_{Te}}\mathbf{1}[f_{\theta^{*}}(\mathbf{A} \oplus\delta;v)\neq y_{v}],\text{s.t.}\ ||\delta||_{0}\leq\Delta, \tag{2}\]
where \(\mathbf{1}[\cdot]\) is an indicator function, whose value is 1 if the condition satisfies and 0, otherwise.
The above problem is challenging to solve in that the indicator function is hard to optimize. In practice, an attacker instead solves the following alternative optimization problem:
\[\max_{\delta}\sum_{v\in\mathcal{V}_{Te}}\ell(f_{\theta^{*}}(\mathbf{A}\oplus \delta;v),y_{v}),\,\text{s.t.}\ ||\delta||_{0}\leq\Delta. \tag{3}\]
For instance, [40] design the state-of-the-art PGD evasion attack by solving Equation 3.
**Graph poisoning attacks to GNNs.** In graph poisoning attacks, an attacker specifies a GNN algorithm \(\mathcal{A}\) and carefully perturbs the graph \(G\) with a graph perturbation \(\delta\) in the training phase, such that the learnt node classifier \(f_{\theta^{*}}\) misclassifies as many testing nodes as possible on the perturbed graph \(\mathbf{A}\oplus\delta\) in the testing phase. Formally, it solves the following bilevel optimization problem:
\[\max_{\delta}\sum_{v\in\mathcal{V}_{Te}}\mathbf{1}[f_{\theta^{*}}(\mathbf{A} \oplus\delta;v)\neq y_{v}], \tag{4}\] \[\text{s.t.}\ \theta^{*}=\arg\min_{\theta}\sum_{u\in\mathcal{V}_{Tr}} \mathbf{1}[f_{\theta^{*}}(\mathbf{A}\oplus\delta;u)\neq y_{u}],\,||\delta||_{0 }\leq\Delta,\]
where the inner optimization problem is learning the node classifier \(f_{\theta^{*}}\) on the perturbed graph \(\mathbf{A}\oplus\delta\) with training nodes \(\mathcal{V}_{Tr}\), while the outer optimization problem is learning to generate the graph perturbation \(\delta\) to maximally misclassify testing nodes \(\mathcal{V}_{Te}\) with the learnt node classifier \(f_{\theta^{*}}\).
In practice, the testing nodes \(\mathcal{V}_{Te}\)'s labels are unavailable during training, and thus we cannot directly optimize Equation 4. In addition, the indicator function in Equation 4 is hard to optimize. A common strategy to address this issue is by instead maximizing the loss on the _training nodes_\(\mathcal{V}_{Tr}\)[40, 51] and using an alternative continuous loss. Specifically, it solves the following alternative bilevel optimization problem:
\[\max_{\delta}\sum_{v\in\mathcal{V}_{Tr}}\ell(f_{\theta^{*}}( \mathbf{A}\oplus\delta;v),y_{v}), \tag{5}\] \[\text{s.t. }\theta^{*}=\arg\min_{\theta}\sum_{u\in\mathcal{V}_{Tr} }\ell(f_{\theta}(\mathbf{A}\oplus\delta;u),y_{u}),\,||\delta||_{0}\leq\Delta.\]
This is based on the intuition that if a node classifier misclassifies a large number of training nodes, then it generalizes poorly and thus is also very likely to misclassify a large number of testing nodes.
### Certified Robustness to Graph Evasion Attacks
We introduce certified robustness achieved via the state-of-the-art randomized smoothing [17, 15, 7]. Randomized smoothing was originally designed to build certified robust machine learning classifiers against evasion attacks. It is applicable to any classifier and scalable to large models, e.g., deep neural networks. Here, we introduce randomized smoothing that defends against graph evasion attacks to GNNs [35]. It consists of the following three steps.
**Constructing a smoothed node classifier.** Given a base node classifier \(f\), a graph \(G\), and a testing node \(u\) with label \(y_{u}\), randomized smoothing builds a _smoothed node classifier_\(g\) via adding a random noise matrix \(\epsilon\) to \(G\). Formally,
\[g(\mathbf{A};u)=\arg\max_{c\in\mathcal{Y}}\text{Pr}(f(\mathbf{A}\oplus\epsilon; u)=c), \tag{6}\]
where \(\text{Pr}(f(\mathbf{A}\oplus\epsilon;u)=c)\) is the probability that the base node classifier \(f\) predicts label \(c\) on the noisy graph \(\mathbf{A}\oplus\epsilon\) and \(g(\mathbf{A};u)\) is the predicted label for \(u\) by the smoothed node classifier \(g\). \(\epsilon\) has the following probability distribution in the binary space \(\{0,1\}^{|\mathcal{V}|\times|\mathcal{V}|}\):
\[\text{Pr}(\epsilon_{s,t}=0)=\beta,\,\text{Pr}(\epsilon_{s,t}=1)=1-\beta,\, \forall s,t\in\mathcal{V}. \tag{7}\]
Equation 7 means that for each pair of nodes \((s,t)\) in the graph, we keep its edge status (i.e., \(A_{s,t}\)) with probability \(\beta\) and change its edge status with probability \(1-\beta\).
**Deriving the certified robustness of graph evasion attacks to GNNs.** Suppose \(g(\mathbf{A};u)=y_{u}\), meaning that the smoothed node classifier \(g\) correctly predicts \(u\). Then, \(g\) provably predicts the correct label for \(u\) once the graph perturbation \(\delta\) is bounded. Formally [35]:
\[g(\mathbf{A}\oplus\delta;u)=y_{u},\forall||\delta||_{0}\leq K(\underline{p_{y _{u}}}), \tag{8}\]
where \(\underline{p_{y_{u}}}\leq\text{Pr}(f(\mathbf{A}\oplus\epsilon;u)=y_{u})\) is a lower bound of the probability that \(f\) predicts the correct label \(y_{u}\) on the noisy graph \(\mathbf{A}\oplus\epsilon\). \(K(\underline{p_{y_{u}}})\) is called node \(u\)'s _certified perturbation size_, indicating that \(g\) provably predicts the correct label when an attacker _arbitrarily_ perturbs (at most) \(K(\underline{p_{y_{u}}})\) edge status in the graph \(G\). _In other words, if a node has a larger certified perturbation size, then it is certifiably more robust to adversarial graph perturbation._
**Computing the certified perturbation size in practice.** Note that \(K(\underline{p_{y_{u}}})\) is (positively) related to \(\underline{p_{y_{u}}}\), which can be estimated via the Monte Carlo algorithm [35, 7]. Specifically, given a node classifier \(f\), a graph \(G(\mathbf{A})\), and a testing node \(u\), we first sample \(N\) random noise matrices \(\epsilon^{1},\cdots,\epsilon^{N}\) from the noise distribution defined in Equation 7 and add each noise matrix \(\epsilon^{j}\) to the graph \(G\) to construct \(N\) noisy graphs \(\mathbf{A}\oplus\epsilon^{1},\cdots,\mathbf{A}\oplus\epsilon^{N}\). Then, we use the node classifier \(f\) to predict \(u\)'s label on the \(N\) noisy graphs and compute the frequency of each label \(c\), i.e., \(N_{c}=\sum_{j=1}^{N}\mathbb{I}(f(\mathbf{A}\oplus\epsilon^{j};u)=c)\) for \(c\in\mathcal{Y}\). Then, we can estimate \(\underline{p_{y_{u}}}\) as
\[\underline{p_{y_{u}}}=B(\alpha;N_{y_{u}},N-N_{y_{u}}+1), \tag{9}\]
where \(1-\alpha\) is the confidence level and \(B(\alpha;a,b)\) is the \(\alpha\)-th quantile of the Beta distribution with shape parameters \(a\) and \(b\). With \(\underline{p_{y_{u}}}\), we can compute \(K(\underline{p_{y_{u}}})\); the details of computing \(K(\underline{p_{y_{u}}})\) can be found in [35].
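A minimal sketch of this Monte Carlo estimation (Equations 7 and 9). The function `predict_fn` is a hypothetical stand-in for the trained node classifier \(f\), and the degree-based dummy classifier in the usage example is purely illustrative; the default parameters mirror the values stated in Section 5 (\(\beta=0.999\), \(N=200\), confidence \(1-\alpha=0.9\)).

```python
import numpy as np
from scipy.stats import beta as beta_dist

def estimate_p_lower(predict_fn, A, u, y_u, N=200, keep_prob=0.999, alpha=0.1, seed=0):
    """Monte Carlo estimate of the lower bound p_{y_u} in Eq. (9)."""
    rng = np.random.default_rng(seed)
    n_correct = 0
    for _ in range(N):
        flip = rng.random(A.shape) < (1.0 - keep_prob)       # epsilon_{s,t} = 1 w.p. 1 - beta
        A_noisy = np.logical_xor(A, flip).astype(A.dtype)    # noisy graph A XOR epsilon
        if predict_fn(A_noisy, u) == y_u:
            n_correct += 1
    if n_correct == 0:
        return 0.0
    # alpha-quantile of Beta(N_y, N - N_y + 1); 1 - alpha is the confidence level
    return beta_dist.ppf(alpha, n_correct, N - n_correct + 1)

# toy usage with an illustrative dummy classifier based only on node degree
A = (np.random.rand(20, 20) < 0.2).astype(np.int8); A = ((A + A.T) > 0).astype(np.int8)
dummy_f = lambda A_noisy, u: int(A_noisy[u].sum() > A_noisy.mean() * len(A_noisy))
print(estimate_p_lower(dummy_f, A, u=0, y_u=1))
```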
## 3 Certified Robustness to Graph Poisoning Attacks via Randomized Smoothing
Existing randomized smoothing mainly certifies robustness against _evasion attacks_. In this section, we generalize it and derive certified robustness against graph poisoning attacks. Our key idea is to extend randomized smoothing from the _classifier_ perspective to a general _function_ perspective. In particular, we will build a base function and a smoothed function, and then adapt randomized smoothing to certify robustness to poisoning attacks using the smoothed function. Such certified robustness guides us to design more effective graph poisoning attacks, as shown in Section 4.
**Building a base function.** Suppose we have a graph \(G(\mathbf{A})\), training nodes \(\mathcal{V}_{Tr}\), and a GNN algorithm \(\mathcal{A}\) that takes the graph and training nodes as an input and learns a node classifier \(f\), i.e., \(f=\mathcal{A}(\mathbf{A},\mathcal{V}_{Tr})\). We use the learnt \(f\) to predict the label for a testing node \(v\). Then, we can integrate the entire process of training the node classifier \(f\) and testing the node \(v\) as a function \(\tilde{f}(\mathbf{A},\mathcal{V}_{Tr};v)\). In other words, the function \(\tilde{f}\) is the composition of learning the node classifier \(f\) and predicting a node \(v\). We view \(\tilde{f}\) as the base function.
**Constructing a smoothed function.** In graph poisoning attacks, an attacker aims to perturb the graph in the training phase. To apply randomized smoothing, we first add a random noise matrix \(\epsilon\) to the graph, where each entry \(\epsilon_{s,t}\) is drawn from a discrete distribution, e.g., defined in Equation 7. As we add random noise \(\epsilon\) to the graph \(G\), the output of the base function \(\tilde{f}\) is also random. Then, inspired by Equation 6, we define the smoothed function \(\tilde{g}\) as follows:
\[\tilde{g}(\mathbf{A},\mathcal{V}_{Tr};v)=\arg\max_{c\in\mathcal{Y}}\text{Pr} (\tilde{f}(\mathbf{A}\oplus\epsilon,\mathcal{V}_{Tr};v)=c), \tag{10}\]
where \(\text{Pr}(\tilde{f}(\mathbf{A}\oplus\epsilon,\mathcal{V}_{Tr};v)=c)\) is the probability that \(v\) is predicted to be a label \(c\) by a GNN model trained on a noisy graph \(\mathbf{A}\oplus\epsilon\) using training nodes \(\mathcal{V}_{Tr}\). \(\tilde{g}(\mathbf{A},\mathcal{V}_{Tr};v)\) is the predicted label for \(v\) by the smoothed function \(\tilde{g}\).
**Deriving the certified robustness of graph poisoning attacks to GNNs.** An attacker adds an adversarial graph perturbation \(\delta\) to the graph \(G(\mathbf{A})\) to produce a perturbed graph \(\mathbf{A}\oplus\delta\), where \(\delta_{s,t}\) is the perturbation added to change the edge status of the node pair \((s,t)\) in the graph \(G\) during training. Then, we can leverage the results in Equation 8 to derive the certified perturbation size against graph poisoning attacks. Specifically, we have:
\[\tilde{g}(\mathbf{A}\oplus\delta,\mathcal{V}_{Tr};v)=y_{v},\;\forall||\delta ||_{0}\leq K(\underline{p_{y_{v}}}), \tag{11}\]
where \(\underline{p_{y_{v}}}\leq\text{Pr}(\tilde{f}(\mathbf{A}\oplus\epsilon,\mathcal{V}_{Tr};v)=y_{v})\) is a lower bound probability. Our result means the smoothed function \(\tilde{g}\) provably predicts the correct label for \(v\) when (at most) \(K(\underline{p_{y_{v}}})\) edge statuses in the graph are _arbitrarily_ poisoned by an attacker _in the training phase_.
**Computing the certified perturbation size in practice.** Given a GNN algorithm \(\mathcal{A}\), a graph \(G(\mathbf{A})\), training nodes \(\mathcal{V}_{Tr}\), a discrete noise distribution defined in Equation 7, and a node \(v\), we first sample \(N\) random noise matrices \(\epsilon^{1},\cdots,\epsilon^{N}\) from the discrete noise distribution and add each noise to the graph \(G(\mathbf{A})\) to construct \(N\) noisy graphs \(\mathbf{A}\oplus\epsilon^{1},\cdots,\mathbf{A}\oplus\epsilon^{N}\). Then, we train \(N\) node classifiers \(\tilde{f}^{1}=\mathcal{A}(\mathbf{A}\oplus\epsilon^{1},\mathcal{V}_{Tr}),\cdots,\tilde{f}^{N}=\mathcal{A}(\mathbf{A}\oplus\epsilon^{N},\mathcal{V}_{Tr})\). We use each of the \(N\) node classifiers to predict \(v\)'s label and compute the frequency of each label \(c\), i.e., \(N_{c}=\sum_{j=1}^{N}\mathbb{I}(\tilde{f}^{j}(\mathbf{A}\oplus\epsilon^{j},\mathcal{V}_{Tr};v)=c)\) for \(c\in\mathcal{Y}\). Finally, we estimate \(\underline{p_{y_{v}}}\) using Equation 9 and use it to calculate the certified perturbation size, following [35]. Note that the \(N\) trained node classifiers are re-used to predict node labels and compute the certified perturbation sizes for different nodes.
## 4 Certified Robustness Inspired Attack Framework against GNNs
In this section, we will design our attack framework to GNNs inspired by certified robustness. Our attack framework can be seamlessly plugged into the existing graph evasion and poisoning attacks to design more effective attacks.
### Motivation and Observation
Certified robustness, more specifically, certified perturbation size derived in Section 2.3 and Section 3, was used by _defenders_ to defend GNN models against attacks. On the other hand, from the _attacker_ perspective, he can leverage the properties of certified robustness to better attack GNN models. Specifically, certified perturbation size of a node characterizes the extent to which the GNN model _provably_ and accurately predicts this node against the worst-case graph perturbation. An attacker can base on nodes' certified perturbation sizes to _reversely_ reveal the vulnerable region of the graph and leverage this vulnerability to design better attacks. In particular, we have the following observation that reveals the _inverse_ relationship between a node's certified perturbation size and the perturbation associated with this node when designing the attack.
_Observation 1:_ _A node with a larger (smaller) certified perturbation size should be disrupted with a smaller (larger) number of perturbed edges._
If a node has a larger (smaller) certified perturbation size, it means this node is more (less) robust to graph perturbations. To misclassify this node, an attacker should allocate a larger (smaller) number of perturbed edges. Thus, to design more effective attacks (i.e., misclassify more nodes) with a perturbation budget, an attacker should avoid disrupting nodes with larger certified perturbation sizes, but focus on nodes with smaller certified perturbation sizes.
Based on the above observation, our attack needs to solve three correlated problems: i) How to obtain the node's certified perturbation size for both graph evasion and poisoning attacks? ii) How to allocate the perturbation budget in order to disrupt the nodes with smaller certified perturbation sizes? iii) How to generate the adversarial graph perturbation for both evasion and poisoning attacks? To address i), we adopt the derived node's certified perturbation size against graph evasion attacks (Section 2.3) and graph poisoning attacks (Section 3). To address ii), we design a certified robustness inspired loss, by maximizing which an attacker will put more effort into disrupting nodes with smaller certified perturbation sizes. To address iii), we design a certified robustness inspired attack framework, where any existing graph evasion/poisoning attacks to GNNs can be adopted as the base attack in our framework.
### Certified Robustness Inspired Loss Design
Suppose we have obtained nodes' certified perturbation sizes. To perform a more effective attack, a naive solution is that the attacker sorts all nodes' certified perturbation sizes in ascending order, and then carefully perturbs the edges to misclassify the sorted nodes one-by-one until reaching the perturbation budget. However, this solution is both computationally intensive--as it needs to solve an optimization problem for each node--and suboptimal--as all nodes and the associated edges collectively make predictions, and perturbing an edge could affect the predictions of many nodes.
We design a certified perturbation size inspired loss that helps to _automatically_ seek the "ideal" edges to be perturbed for both evasion attacks and poisoning attacks. Particularly, we notice that the loss function of evasion attacks in Equation 3 or poisoning attacks in Equation 5 is defined per node. Then, we propose to modify the loss function in Equation 3 or Equation 5 by assigning each node a weight and multiplying each node's loss by the corresponding weight, where the node weight has a strong connection with the node's certified perturbation size. Formally, we design the certified perturbation size inspired loss as follows:
\[\mathcal{L}_{CR}(f_{\theta},\mathbf{A},\mathcal{V}_{T})=\sum_{u\in\mathcal{V} _{T}}w(u)\cdot\ell(f_{\theta}(\mathbf{A};u),y_{u}), \tag{12}\]
where \(\mathcal{V}_{T}=\mathcal{V}_{Te}\) for evasion attacks and \(\mathcal{V}_{T}=\mathcal{V}_{Tr}\) for poisoning attacks; and \(w(u)\) is the weight of the node \(u\). Note that when setting all nodes with an equal weight, our certified perturbation size inspired loss reduces to the conventional loss in Equation 3 or Equation 5. Next, we show the _inverse_ relationship between the node's certified perturbation size and the assigned weight.
_Observation 2: A node with a larger (smaller) certified perturbation size is assigned a smaller (larger) weight._
As shown in **Observation 1**, we should disrupt more nodes with smaller certified perturbation sizes, as these nodes are more vulnerable. In other words, we should put more weights on nodes with smaller certified perturbation sizes to enlarge these nodes' losses--making these nodes easier to be misclassified with graph perturbations. In contrast, we should put smaller weights on nodes with larger certified perturbation sizes, in order to save the usage of the perturbation budget. Formally, we assign the node weight such that \(w(u)\sim 1/K(\underline{p_{y_{u}}})\). There are many ways to assign node weights satisfying the inverse relationship. In this paper, for instance, we propose to define node weights as
\[w(u)=\frac{1}{1+\exp(a\cdot K(\underline{p_{y_{u}}}))}, \tag{13}\]
where \(a\) is a tunable hyperparameter. We can observe that the node weight decreases exponentially as the node's certified perturbation size increases. Such a property ensures that the majority of the perturbed edges are used for disrupting nodes with smaller certified perturbation sizes (see Figure 3) when performing the attack.
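A minimal sketch of the weight assignment in Equation 13 and the weighted loss of Equation 12; the certified perturbation sizes and per-node losses used here are illustrative numbers.

```python
import numpy as np

def cr_node_weights(cert_sizes, a=1.0):
    """Certified-robustness inspired weights of Eq. (13): w(u) = 1 / (1 + exp(a * K_u))."""
    return 1.0 / (1.0 + np.exp(a * np.asarray(cert_sizes, dtype=float)))

def cr_weighted_loss(node_losses, cert_sizes, a=1.0):
    """Weighted attack loss of Eq. (12): sum_u w(u) * loss_u."""
    return float(np.sum(cr_node_weights(cert_sizes, a) * np.asarray(node_losses)))

K = np.array([0, 1, 3, 8])              # illustrative certified perturbation sizes
losses = np.array([1.2, 0.9, 0.7, 0.4]) # illustrative per-node attack losses
print(cr_node_weights(K))               # more robust nodes get exponentially smaller weights
print(cr_weighted_loss(losses, K))
```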
### Certified Robustness Inspired Attack Design
Based on the derived certified perturbation size and our certified robustness inspired loss, we now propose to generate graph perturbations against GNNs with both graph evasion and poisoning attacks.
**Certified robustness inspired graph evasion attacks to generate graph perturbations.** We can choose any graph evasion attack to GNNs as the base evasion attack. In particular, given the attack loss from any existing evasion attack, we only need to modify the loss by multiplying it with the node weights defined by our certified perturbation sizes. For instance, we can use the PGD attack [40] as the base evasion attack. We replace its attack loss by our certified robustness inspired loss \(\mathcal{L}_{CR}\) in Equation 12. Then, we have our certified robustness inspired PGD (CR-PGD) evasion attack that iteratively generates graph perturbations as follows:
\[\delta=\text{Proj}_{\mathbb{B}}(\delta+\eta\cdot\nabla_{\delta}\mathcal{L}_{ CR}(f_{\theta},\mathbf{A}\oplus\delta,\mathcal{V}_{Te})), \tag{14}\]
where \(\eta\) is the learning rate in PGD, \(\mathbb{B}=\{\delta:\mathbf{1}^{T}\delta\leq\Delta,\delta\in[0,1]^{|\mathcal{ V}|\times|\mathcal{V}|}\}\) is the allowable perturbation set, and
\[\text{Proj}_{\mathbb{B}}(\mathbf{a})=\begin{cases}\Pi_{[0,1]}( \mathbf{a}-\mu\mathbf{1}),&\text{if }\mathbf{1}^{T}\Pi_{[0,1]}(\mathbf{a}-\mu\mathbf{1})= \Delta,\\ \Pi_{[0,1]}(\mathbf{a}),&\text{if }\mathbf{1}^{T}\Pi_{[0,1]}(\mathbf{a})\leq \Delta,\end{cases} \tag{15}\]
where \(\mu>0\), \(\Pi_{[0,1]}(x)=x\) if \(x\in[0,1]\), 0 if \(x<0\), and 1 if \(x>1\). The final graph perturbation is used to perform the evasion attack.
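A minimal sketch of the projection in Equation 15 together with one relaxed ascent step of Equation 14. The bisection on \(\mu\) is one standard way to satisfy the budget constraint, and the random gradient below is only a stand-in for \(\nabla_{\delta}\mathcal{L}_{CR}\).

```python
import numpy as np

def project_budget(a, budget, tol=1e-8):
    """Projection of Eq. (15) onto {delta : 0 <= delta <= 1, sum(delta) <= budget}.

    If clipping alone satisfies the budget, return clip(a); otherwise bisect on
    mu > 0 so that sum(clip(a - mu, 0, 1)) meets the budget.
    """
    clipped = np.clip(a, 0.0, 1.0)
    if clipped.sum() <= budget:
        return clipped
    lo, hi = 0.0, float(a.max())            # sum(clip(a - mu)) is non-increasing in mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.clip(a - mu, 0.0, 1.0).sum() > budget:
            lo = mu
        else:
            hi = mu
    return np.clip(a - hi, 0.0, 1.0)

rng = np.random.default_rng(0)
delta, grad_delta, eta, budget = np.zeros(50), rng.normal(size=50), 0.5, 5
delta = project_budget(delta + eta * grad_delta, budget)   # one CR-PGD ascent step, Eq. (14)
print(delta.sum() <= budget + 1e-6, delta.min() >= 0.0, delta.max() <= 1.0)
```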
**Certified robustness inspired graph poisoning attacks to generate graph perturbations.** Likewise, we can choose any graph poisoning attack to GNNs as the base poisoning attack. Given the bilevel loss from any existing poisoning attack, we simply modify each loss by multiplying it with the node weights defined by our certified perturbation sizes. Specifically, we have
\[\max_{\delta}\mathcal{L}_{CR}(f_{\theta^{*}},\mathbf{A}\oplus \delta,\mathcal{V}_{Tr}),\] (16) s.t. \[\theta^{*}=\arg\min_{\theta}\mathcal{L}_{CR}(f_{\theta},\mathbf{A}\oplus \delta,\mathcal{V}_{Tr}),\,||\delta||_{0}\leq\Delta, \tag{17}\]
where \(\mathcal{L}_{CR}(f_{\theta},\mathbf{A}\oplus\delta,\mathcal{V}_{Tr})=\sum_{v\in\mathcal{V}_{Tr}}w(v)\cdot\ell(f_{\theta}(\mathbf{A}\oplus\delta;v),y_{v})\). Then, solving Equation 16 and Equation 17 produces the poisoning attack graph perturbations with our framework.
Algorithm 1 and Algorithm 2 in the Appendix show two instances of applying our CR inspired attack framework to the PGD evasion attack and the Minmax [40] poisoning attack, respectively. To save time, we calculate nodes' certified perturbation sizes every \(INT\) iterations. Then, compared with PGD, the computational overhead of our CR-PGD is calculating the nodes' certified perturbation sizes with a set of \(N\) sampled noises every \(INT\) iterations, which only involves making predictions on \(N\) noisy matrices and is efficient. Note that the predictions are independent and can also be parallelized. Compared with Minmax, the computational overhead of our CR-Minmax is to independently train (a small number of) \(N\) models every \(INT\) iterations, which can be implemented in parallel.
## 5 Experiments
### Setup
**Datasets and GNN models.** Following [40, 51], we evaluate our attacks on benchmark graph datasets, i.e., Cora, Citeseer [29], and BlogCatalog [27]. Table 3 in the Appendix shows basic statistics of these graphs. We choose the Graph Convolutional Network (GCN) [14] as the targeted GNN model, also following [40, 51].
**Base attack methods.** For graph evasion attacks, we choose the PGD attack [40]2 that uses the cross-entropy loss and CW loss [3] as the base attack methods, and denote the two attacks as CE-PGD and CW-PGD, respectively. For graph poisoning attacks, we choose the Minmax attack [40] and MetaTrain attack [51]3 as the base attack methods. We apply our CR inspired attack framework to these evasion and poisoning attacks and denote them as CR-CE-PGD, CR-CW-PGD, CR-Minmax, and CR-MetaTrain, respectively. All attacks are implemented in PyTorch and run on a Linux server with 96 core 3.0GHz CPU, 768GB RAM, and 8 Nvidia A100 GPUs.
Footnote 2: [https://github.com/KaidiXu/GCN_ADV_Train](https://github.com/KaidiXu/GCN_ADV_Train)
Footnote 3: [https://www.kdd.in.tum.de/gnn-meta-attack](https://www.kdd.in.tum.de/gnn-meta-attack)
**Training and testing.** Following [51], we split the datasets into 10% training nodes, 10% validation nodes, and 80% testing nodes. The validation nodes are used to tune the hyperparameters, and the testing nodes are used to evaluate the attack performance. We repeat all attacks on 5 different splits of the training/evaluation/testing nodes and report the mean attack accuracy on testing nodes, i.e., fraction of testing nodes are misclassified after the attack.
**Parameter settings.** Without otherwise mentioned, we set the perturbation budget \(\Delta\) as 20% of the total number of edges in a graph (before attack). We set the parameter \(\beta=0.999\) in the noise distribution Equation 7, the confidence level \(1-\alpha=0.9\), the number of samples \(N\) in Monte Carlo sampling to calculate node's certified perturbation size is set to be 200 and 20 in evasion attacks and poisoning attacks, respectively, and \(a=1\) in Equation 13. The number of iterations \(T\) is 100 and 10, and the interval is set to be \(INT=10\) and \(INT=2\) in evasion attacks and poisoning attacks, respectively. The other hyperparameters in CE-PGD, CW-PGD, Minmax, and MetaTrain are selected based on their source code, and we set equal values in our CR inspired attack counterparts. We also study the impact of the important hyperparameters that could affect our attack performance: \(\Delta\), \(N\), \(1-\alpha\), \(\beta\), and \(a\). When studying the impact of a hyperparameter, we fix the other hyperparameters to be their default values.
### Attack Results
**Our attack framework is effective.** Figure 1 and Figure 2 show the evasion attack accuracy and poisoning attack accuracy of the base attacks and those with our attack framework vs. perturbation budget, respectively. We can observe that _our certified robustness inspired attack framework can enhance the base attack performance in all datasets_. For instance, when attacking GCN on Cora and the perturbation ratio is \(20\%\), our CR-CE-PGD and CR-CW-PGD have a relative \(7.0\%\) and \(5.6\%\) gain over the CE-PGD and CW-PGD evasion attacks. Moreover, CR-Minmax and CR-MetaTrain have a relative \(12.2\%\) and \(10.3\%\) gain over the Minmax and MetaTrain poisoning attacks. These results demonstrate that the node's certified robustness can indeed guide our attack framework to find the more vulnerable region in the graph to be perturbed, which helps to better allocate the perturbation budget, and thus makes the base attacks with our attack framework misclassify more nodes.
Figure 1: Evasion attack accuracy vs. perturbation budget.
Figure 2: Poisoning attack accuracy vs. perturbation budget.
To further understand the effectiveness of our framework, we visualize the distribution of the perturbed edges vs. node's certified perturbation size. Specifically, we first obtain the perturbed edges via the base attacks and our CR inspired attacks, and calculate testing/training nodes' certified perturbation sizes for evasion/poisoning attacks, respectively. Then we plot the distribution of the perturbed edges vs node's certified perturbation size. Specifically, if a perturbed edge is connected with a testing/training node in the evasion/poisoning attack, then we map this perturbed edge to this node's certified perturbation size. Our intuition is that a perturbed edge affects its connected node the most. Figure 3 shows the results on Citeseer (We observe that the conclusions on the other datasets are similar). We can see that a majority number of the perturbed edges connect with testing/training nodes that have relatively smaller certified perturbation sizes in our CR inspired attacks. In contrast, a significant number of the perturbed edges in the base attacks connect with nodes with relatively larger certified perturbation sizes. Hence, under a fixed perturbation budget, our attacks can misclassify more nodes.
**Comparing with other weight design strategies.** Recall that our weight design is based on node's certified robustness: nodes less provably robust to graph perturbations are assigned larger weights, in order to enlarge these node attack losses. Here, we consider three other possible strategies to design node weights that aim to _empirically_ capture this property: 1) **Random**, where we uniformly assign node weights between \([0,1]\) at random; 2) **Node degree**, where a node with a smaller degree might be less robust to graph perturbations, and we set a larger weight. Following our weight design, we set \(w_{\text{deg}}(u)=\frac{1}{1+\exp(a\cdot\text{deg}(u))}\); 3) **Node centrality**[25], where a node with a smaller centrality might be less robust to graph perturbations, and we set a larger weight. Similarly, we set \(w_{\text{cen}}(u)=\frac{1}{1+\exp(a\cdot\text{cen}(u))}\). As a baseline, we also consider no node weights.
Table 1 shows the attack results by applying these weight design strategies to the existing graph evasion and poisoning attacks. We have the following observations: 1) **Random** obtains the attack performance even worse than **No weight**'s. This indicates an inappropriate weight design can be harmful to the attack. 2) Node **Degree** and **Centrality** perform slightly better than **No weight**. One possible reason is that nodes with larger degree and centrality are empirically more robust to perturbations, which are also observed in previous works, e.g., [34, 50]. 3) Our weight design strategy performs the best. This is because our weight design _intrinsically_ captures nodes' certified robustness and thus yields more effective attacks.
**Ablation study.** In this experiment, we study the impact of hyperparameters: \(\beta\) in Equation 7, confidence level \(1-\alpha\) in Equation 9, and \(N\) in Equation 9, \(a\) in Equation 13, as well the running time vs. \(N\). Figure 4 shows the results of \(\beta\), \(1-\alpha\), and \(N\), and running time vs. \(N\) on our attack. We observe that: 1) Our attack is not sensitive to \(\beta\); 2) Our attack slightly becomes worse as the confidence level \(1-\alpha\) increases. Such an observation can guide an attacker to set a relatively small \(\beta\) in practice. 3) Our attack becomes better as \(N\) increases, but already works well with a relatively smaller \(N\). From this observation, an attacker can choose a small \(N\) in practice to save the time and cost when performing the attack. 4) Running time does not increase too much with \(N\) on the evasion attacks and is linear to \(N\) on poisoning attacks, consistent with our analysis in Section 4.3.
Table 2 shows the impact of \(a\). We see the performances are stable across different \(a\). This is largely because our weight design already ensures the node weight is inversely and exponentially to the node's certified perturbation size.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Dataset** & **Method** & **CW-PGD** & **CE-PGD** & **Minmax** & **MetaTrain** \\ \hline \multirow{5}{*}{**Cora**} & **No weight** & 0.74 & 0.71 & 0.62 & 0.68 \\ \cline{2-6} & **Random** & 0.77 & 0.75 & 0.65 & 0.72 \\ \cline{2-6} & **Degree** & 0.72 & 0.70 & 0.61 & 0.66 \\ \cline{2-6} & **Centrality** & 0.73 & 0.70 & 0.60 & 0.66 \\ \cline{2-6} & **Ours** & **0.70** & **0.60** & **0.55** & **0.62** \\ \hline \hline \multirow{5}{*}{**Citeseer**} & **No weight** & 0.64 & 0.63 & 0.63 & 0.61 \\ \cline{2-6} & **Random** & 0.66 & 0.66 & 0.68 & 0.64 \\ \cline{2-6} & **Degree** & 0.64 & 0.61 & 0.60 & 0.59 \\ \cline{2-6} & **Centrality** & 0.64 & 0.62 & 0.60 & 0.38 \\ \cline{2-6} & **Ours** & **0.60** & **0.60** & **0.57** & **0.52** \\ \hline \hline \multirow{5}{*}{**BlogCatalog**} & **No weight** & 0.48 & 0.51 & 0.53 & 0.31 \\ \cline{2-6} & **Random** & 0.54 & 0.53 & 0.40 & 0.35 \\ \cline{2-6} & **Degree** & 0.46 & 0.30 & 0.32 & 0.28 \\ \cline{2-6} & **Centrality** & 0.47 & 0.49 & 0.32 & 0.27 \\ \cline{2-6} & **Ours** & **0.44** & **0.46** & **0.29** & **0.24** \\ \hline \end{tabular}
\end{table}
Table 1: Attack performance with different weight design.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Dataset** & \(a\) & **CR-CW-PGD** & **CR-CE-PGD** & **CR-Minmax** & **CR-MetaTrain** \\ \hline \multirow{3}{*}{**Cora**} & \(0.5\) & 0.70 & 0.67 & 0.55 & 0.62 \\ \cline{2-6} & \(1\) & 0.70 & 0.60 & 0.53 & 0.62 \\ \cline{2-6} & \(2\) & 0.70 & 0.66 & 0.54 & 0.62 \\ \hline \hline \multirow{3}{*}{**Citeseer**} & \(0.5\) & 0.60 & 0.61 & 0.58 & 0.54 \\ \cline{2-6} & \(1\) & 0.60 & 0.50 & 0.57 & 0.52 \\ \cline{2-6} & \(2\) & 0.60 & 0.59 & 0.57 & 0.53 \\ \hline \hline \multirow{3}{*}{**BlogCatalog**} & \(0.5\) & 0.44 & 0.47 & 0.31 & 0.25 \\ \cline{2-6} & \(1\) & 0.44 & 0.46 & 0.29 & 0.24 \\ \cline{2-6} & \(2\) & 0.44 & 0.46 & 0.29 & 0.24 \\ \hline \end{tabular}
\end{table}
Table 2: Attack performance with different \(a\).
Figure 3: Distribution of the perturbed edges vs. node’s certified perturbation size
## 6 Discussion
**Evaluations on other GNNs.** We mainly follow existing attacks [40, 51], which only evaluate GCN. Here, we also test SGC [38] on Cora and results show our CR-based GNN attacks also have a 6%-12% gain over the base attacks. This validates our strategy is generic to design better attacks.
**Transferability between different GNNs.** We evaluate the transferability of the graph perturbations generated by our 4 CR-based attacks on GCN to SGC on Cora, when the attack budget is 15. Accuracy on SGC under the 4 attacks are: 73%, 76%, 66%, and 67%, while accuracy on GCN are 71%, 73%, 63%, and 65%, respectively. This implies a promising transferability between GCN and SGC.
**Defenses against our attacks.** Almost all existing empirical defenses [10, 11, 13, 39, 45, 48, 49] are ineffective to adaptive attacks [24]. We adopt adversarial training [22], which is the only known effective empirical defense. Specifically, we first generate graph perturbations for target nodes via our attack and use the perturbed graph to retrain GNN with true node labels. The trained GNN is used for evaluation. We test on Cora and show this defense is effective to some extent, but has a nonnegligible utility loss. For instance, when budget=15, the accuracy under the CR-CW-PGD (CR-CE-PGD) attack increases from 73% (71%) to 76% (73%), but the normal accuracy reduces from 84% to 73% (72%).
## 7 Related Work
**Attacking graph neural networks (GNNs).** We classify the existing attacks to GNNs as evasion attacks [8, 20, 21, 23, 36, 39, 50] and poisoning attacks [8, 19, 30, 40, 50, 51, 46, 50]. E.g., Xu et al. [40] proposed an untargeted PGD graph evasion attack to the GCN. The PGD attack leverages first-order optimization and generates discrete graph perturbations via convexly relaxing the binary graph structure, and obtains the state-of-the-art attack performance. Regarding graph poisoning attacks, Zugner et al. [51] proposed a graph poisoning attack, called Metattack, that perturbs the whole graph based on meta-learning. Our attack framework can be seamlessly plugged into these graph evasion and poisoning attacks and enhance their attack performance.
**Attacking other graph-based methods.** Besides attacking GNNs, other adversarial attacks against graph data include attacking graph-based clustering [6], graph-based collective classification [34, 32], graph embedding [1, 4, 5, 9], community detection [18], graph matching [47], etc. For instance, Chen et al. [6] proposed a practical attack against spectral clustering, which is a well-known graph-based clustering method. Wang and Gong [34] designed an attack to the collective classification method, called linearized belief propagation, by modifying the graph structure.
**Certified robustness and randomized smoothing.** Randomized smoothing [15, 16, 17, 14, 42, 7] was the first method to obtain certified robustness of large models and achieved state-of-the-art performance. For instance, Cohen et al. [7] leveraged the Neyman-Pearson Lemma [26] to obtain a tight \(l_{2}\) certified robustness for randomized smoothing with Gaussian noise on normally trained image models. Salman et al. [28] improved the certified robustness by combining the design of an adaptive attack against smoothed soft image classifiers and adversarial training on the attacked classifiers. [12, 35], and [2] applied randomized smoothing in the graph domain and derived certified robustness for community detection and node/graph classifications methods against graph perturbations. In this paper, we use randomized smoothing to design better attacks against GNNs.
## 8 Conclusion
We study graph evasion and poisoning attacks on GNNs and propose a novel attack framework motivated by certified robustness. To our knowledge, this is the first work that uses certified robustness for attack purposes. In particular, we first derive each node's certified perturbation size by extending randomized smoothing from the classifier perspective to a general function perspective. Based on it, we design certified-robustness-inspired node weights, which can be seamlessly plugged into existing graph perturbation attacks' losses to produce our certified-robustness-inspired attack loss and attack framework. Evaluations on multiple datasets demonstrate that the performance of existing attacks can be significantly enhanced by applying our attack framework.
**Acknowledgments.** This work was supported by Wang's startup funding, the Cisco Research Award, and the National Science Foundation under grant No. 2216926. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies.
Figure 4: Impact of (a) \(\beta\), (b) \(1-\alpha\), (c) \(N\) (the number in brackets on the x-axis is for poisoning attacks), and (d) running time vs. \(N\) on Citeseer. Note that in (c) the “No evasion attack” and “No poisoning attack” curves overlap; in (d), \(INT\)=10 (2) for our evasion (poisoning) attacks.
|
2305.02071
|
Application of the disk instability model to all Quasi-Periodic
Eruptions
|
After the first quasi-periodic eruptions (QPEs, GSN069) was reported in 2019,
four other sources have been identified as QPEs or candidates. However, the
physics behind QPEs is still unclear so far, though several models have been
proposed. Pan et al. (2022) proposed an instability model for the accretion
disk with magnetically driven outflows in the first QPEs GSN 069, which is able
to reproduce both the light curve and the evolution of spectrum fairly well. In
this work, we apply this model to all the QPEs. We improve the calculations
of the disk spectrum by introducing a hardening factor, which is caused by
the deviation of opacity from the blackbody. We find that the light curves and
evolution of the spectra of the four QPEs or candidates can all be well
reproduced by our model calculations.
|
Xin Pan, Shuang-Liang Li, Xinwu Cao
|
2023-05-03T12:18:23Z
|
http://arxiv.org/abs/2305.02071v1
|
# Application of the disk instability model to all Quasi-Periodic Eruptions
###### Abstract
After the first quasi-periodic eruptions (QPEs, GSN069) was reported in 2019, four other sources have been identified as QPEs or candidates. However, the physics behind QPEs is still unclear so far, though several models have been proposed. Pan et al. (2022) proposed an instability model for the accretion disk with magnetically driven outflows in the first QPEs GSN 069, which is able to reproduce both the light curve and the evolution of the spectrum fairly well. In this work, we apply this model to all the QPEs. We improve the calculations of the disk spectrum by introducing a hardening factor, which is caused by the deviation of the opacity from the blackbody. We find that the light curves and the evolution of the spectra of the four QPEs or candidates can all be well reproduced by our model calculations.
## 1 Introduction
After the first discovery of quasi-periodic eruptions (QPEs) in GSN 069, four other QPE sources, i.e., RX J1301.9+2747, eRO-QPE1, eRO-QPE2 and XMMSL1 J024916.6-041244 (the most probable candidate), have been discovered (Miniutti et al., 2019; Giustini et al., 2020; Arcodia et al., 2021; Chakraborty et al., 2021). All the QPEs show similar features, such as short eruption periods, high-amplitude bursts, and emission mainly in the soft X-ray band. The primary challenge posed by this phenomenon is how to construct a physical scenario that produces such short-period eruptions (several to a dozen hours). A number of models have been proposed, which can be roughly divided into two categories: the first suggests that the periodic outbursts in QPEs originate from the periodic orbital motion of a star captured by the black hole (King, 2020; Ingram et al., 2021; Xian et al., 2021; Sukova et al., 2021; Metzger et al., 2022; Wang et al., 2022; Krolik and Linial, 2022; King, 2022; Lu and Quataert, 2022; Chen et al., 2022; Linial and Sari, 2022), while the other ascribes the periodic behavior to an instability of the inner accretion disk dominated by radiation pressure (Sniegowska et al., 2020; Pan et al., 2021; Raj and Nixon, 2021; Pan et al., 2022; Sniegowska et al., 2022; Kaur et al., 2022). Notably, only the model of Pan et al. (2022) is able to fit both the light curves and the phase-resolved X-ray spectra simultaneously during outbursts in GSN 069.
It has been suggested that there is some evidence of tidal disruption events (TDEs) in two QPEs (GSN 069 and XMMSL1 J024916.6-041244) (Esquej et al., 2007; Shu et al., 2018; Sheng et al., 2021). If this is the case, a remnant core or a white dwarf that continually orbits the black hole may produce QPEs through partial TDEs (Sheng et al., 2021; Miniutti et al., 2023). However, such a model may only apply to these two QPEs, and the detailed physical processes of such TDE evolution and their radiation properties are still quite unclear (Piran et al., 2015; Metzger and Stone, 2016; Bonnerot and Stone, 2021).
It is well known that the inner part of a thin disk dominated by radiation pressure is both thermally and viscously unstable, leading to limit-cycle behaviour (Shakura and Sunyaev, 1973, 1976). Such an instability may be responsible for the outbursts observed in cataclysmic variables (CVs) and X-ray novae (e.g., Meyer and Meyer-Hofmeister, 1982; Smak, 1982; Cannizzo, 1993), and it has also been suggested as a possibility for accretion disk eruptions in AGNs (Siemiginowska et al., 1996). Furthermore, disk instability may also be the probable physical origin of the multiple changing-look AGNs that challenge the AGN paradigm (Yang et al., 2018; MacLeod et al., 2019; Wang et al., 2022), though many models have
been proposed (Merloni et al., 2015; Ricci et al., 2020; Sniegowska et al., 2020; Wang and Bon, 2020; Pan et al., 2021; Lyu et al., 2022). The main difficulty of this disk instability model for QPEs is that the viscous timescale of a thin accretion disk is significantly larger than the observed periods of QPEs (e.g., Pan et al., 2021, 2022). It has been suggested that the viscous timescale of a disk driven predominantly by magnetic outflows can be substantially shortened (Cao and Spruit, 2013; Li and Begelman, 2014; Li and Cao, 2019; Feng et al., 2021; Kaur et al., 2022; Pan et al., 2022; Sniegowska et al., 2022). Pan et al. (2022) constructed an instability model of the disk with magnetically driven outflows for the QPE GSN 069, and both its light curve and phase-resolved X-ray spectra are fitted by their model fairly well. In this work, we apply this model to the other QPEs based on the archived observational data, such as the burst periods and spectra.
## 2 Model
Similar to our previous work (Pan et al., 2022), we consider a thin accretion disk with winds driven by large-scale magnetic fields around a spinning supermassive black hole (SMBH), where the general relativistic correction factors, the general form of the viscous torque, and the non-zero torque condition at the innermost stable circular orbit (ISCO) are adopted to modify our basic equations. The steady outer thin disk can be described as:
\[\frac{\mathrm{d}\dot{M}}{\mathrm{d}R}+4\pi R\dot{m}_{\mathrm{w}}=0, \tag{1}\]
\[-\frac{1}{2\pi}\frac{\mathrm{d}(\dot{M}l_{\mathrm{k}})}{\mathrm{d}R}-\frac{ \mathrm{d}}{\mathrm{d}R}(R^{2}\mathscr{B}\mathscr{C}^{-1/2}\mathscr{D}T_{r \phi})+T_{\mathrm{m}}R=0, \tag{2}\]
\[P_{\mathrm{tot}}=(1+\frac{1}{\beta_{1}})(P_{\mathrm{gas}}+P_{\mathrm{rad}}), \tag{3}\]
\[-\frac{3}{2}\Omega_{\mathrm{k}}T_{r\phi}\frac{\mathscr{B}\mathscr{D}}{ \mathscr{C}}=\frac{8acT_{c}^{4}}{3\tau}. \tag{4}\]
For a thin disk with relatively high accretion rates, the inner disk region dominated by radiation pressure will produce limit-cycle bursts. In some specific parameter space, this unstable zone can be confined to a narrow annulus (Sniegowska et al., 2020; Pan et al., 2021, 2022). The evolution equations of the surface density and central temperature of this narrow zone can be written as
\[\begin{split}&\left[u^{t}-\frac{C_{\mathrm{H}}H\left(1-\beta_{2} \right)}{\Sigma\left(1+\beta_{2}\right)}\right]\frac{\mathrm{d}\Sigma}{ \mathrm{d}t}+\frac{C_{\mathrm{H}}H\left(4-3\beta_{2}\right)}{T\left(1+\beta_ {2}\right)}\frac{\mathrm{d}T}{\mathrm{d}t}\\ &-\frac{\dot{M}_{0}-\dot{M}-4\pi R\dot{m}_{\mathrm{w}}\Delta R}{ 2\pi R\Delta R}=0,\end{split} \tag{5}\]
\[\begin{split}\frac{\mathrm{d}T}{\mathrm{d}t}=& \frac{T(Q^{+}-Q^{-}-Q_{\mathrm{adv}})(1+\frac{1}{\beta_{1}})(1+ \beta_{2})}{2PHu^{t}(28-22.5\beta_{2}-1.5\beta_{2}^{2}+\frac{12-9\beta_{2}}{ \beta_{1}})}\\ &+2\frac{T\mathrm{d}\Sigma}{\Sigma\mathrm{d}t}\frac{4-3\beta_{2} +\frac{2-\beta_{2}}{\beta_{1}}}{28-22.5\beta_{2}-1.5\beta_{2}^{2}+\frac{12-9 \beta_{2}}{\beta_{1}}},\end{split} \tag{6}\]
respectively. The meanings of all above symbols are the same as those in Pan et al. (2022).
The disk temperatures derived from the X-ray continuum spectra in the quiescent state of the other four QPEs, \(kT_{\mathrm{disk}}[=11.5\left(M/10^{8}M_{\odot}\right)^{-1/4}\dot{m}^{1/4}(\mathrm{eV})]\), are all higher than 50 eV, with black hole masses roughly ranging from \(10^{5}\) to \(10^{6}M_{\odot}\) (Giustini et al., 2020; Chakraborty et al., 2021; Chen et al., 2022). As argued by Pan et al. (2022), the maximum effective temperature of a thin accretion disk surrounding a black hole (\(M>2\times 10^{5}M_{\odot}\)) can hardly exceed 50 eV. It has been argued that the disk radiation is more complex than a sum of blackbody emission from the disk, as the electron scattering opacity is much greater than the absorption opacity in the inner disk (see, e.g., Done et al., 2012). Thus, a color correction factor for the effective temperature is required in the calculation of the emergent spectrum of the disk. A precise calculation of the disk spectrum should include the vertical structure of the disk, for which the full radiative transfer equation must be solved. This is beyond the scope of this work; instead, we adopt a diluted blackbody as a reasonable approximation (Shimura and Takahara, 1995):
\[F_{\nu}^{\mathrm{db}}=\frac{1}{f_{\mathrm{cor}}^{4}}\pi B_{\nu}(f_{\mathrm{ cor}}T_{\mathrm{eff}}), \tag{7}\]
where \(f_{\mathrm{cor}}\) and \(B_{\nu}\) are the hardening factor and Planck function, respectively. A typical value of hardening factor, \(f_{\mathrm{cor}}\sim 1.7\), is usually adopted for black hole binaries (BHBs). In principle, it may vary with temperature, which can be written as (Davis et al., 2006; Done et al., 2012):
\[f_{\mathrm{cor}}\sim(72/T_{\mathrm{keV}})^{1/9}\,. \tag{8}\]
This equation is valid for the accretion disk in AGN when \(T_{\mathrm{max}}>10^{5}\mathrm{K}\). However, when the disk temperature is sufficiently low, the disk spectra will return to the disk blackbody as electron scattering no longer dominates the opacity. Therefore it is necessary to calculate the hardening factor at each radius because the temperature is much lower than \(10^{5}\mathrm{K}\) for the outer accretion disk in AGN. We set a threshold of disk temperature \(T_{\mathrm{disk}}=10^{5}\mathrm{K}\). When \(T_{\mathrm{disk}}>10^{5}\mathrm{K}\), we adopt equation (8) to calculate \(f_{\mathrm{cor}}\), otherwise \(f_{\mathrm{cor}}=1\) is chosen. The discontinuity value of \(f_{\mathrm{cor}}\) near the threshold makes the
spectrum slightly less smooth, but this has little effect on the X-ray band we are concerned with.
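The following minimal sketch illustrates how the diluted blackbody of equation (7) and the radius-dependent hardening factor with the \(10^{5}\,\mathrm{K}\) threshold can be evaluated numerically; it assumes SI units and standard constants, and the function names are ours rather than part of the code used for the fits.

```
import numpy as np

H_PLANCK = 6.626e-34   # Planck constant [J s]
K_B = 1.381e-23        # Boltzmann constant [J / K]
C_LIGHT = 2.998e8      # speed of light [m / s]
KEV_IN_K = 1.16e7      # 1 keV expressed in Kelvin

def planck_nu(nu, temperature):
    """Planck function B_nu(T) in SI units."""
    x = H_PLANCK * nu / (K_B * temperature)
    return 2.0 * H_PLANCK * nu**3 / C_LIGHT**2 / np.expm1(x)

def hardening_factor(t_eff):
    """f_cor from equation (8) above the 1e5 K threshold, otherwise 1."""
    if t_eff > 1.0e5:
        t_kev = t_eff / KEV_IN_K
        return (72.0 / t_kev) ** (1.0 / 9.0)
    return 1.0

def diluted_blackbody_flux(nu, t_eff):
    """Diluted blackbody of equation (7): F_nu = pi * B_nu(f_cor * T_eff) / f_cor**4."""
    f_cor = hardening_factor(t_eff)
    return np.pi * planck_nu(nu, f_cor * t_eff) / f_cor**4
```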
## 3 Results
In this work, we extend our model in Pan et al. (2022) to all the QPEs. \(a_{\ast}=0.98\) and \(f=0.9\) are always adopted for convenience. All other parameters adopted for each QPE are shown in Table 1. The spectra used in this work are corrected for the instrumental effective area and galaxy absorption, as in our previous work. Since the hardening factor used in this study varies with radius, we list in Table 1 only the hardening factor \(f_{\rm cor,in}\) at the inner radius of the outer stable disk, as it has the greatest impact on the soft X-ray spectrum.
### RX J1301.9+2747
Several rapid flares have been observed in this source from 1991 to 2019 (Dewangan et al., 2000; Sun et al., 2013; Giustini et al., 2020). Compared with GSN 069, the flare recurrence time of RX J1301.9+2747 appears to be more complex and to evolve rapidly. We suppose that its burst mechanism is the same as in GSN 069 and that the evolution of the light curve may be caused by disturbances of the accretion rate or the magnetic field strength.
Two sets of parameters with the same black hole mass are adopted to fit the spectral evolution and light curves of RX J1301.9+2747 in 2000 and 2019, respectively. As shown in Figures 1 and 2, our model can roughly reproduce the observational data. Note that the light curve data in Figure 1 are taken from EPIC-MOS1, since the EPIC-PN data covered only one eruption in 2000. Except for Figure 1, all other XMM-Newton data come from EPIC-PN. The mass of this source was estimated as \(8\times 10^{5}M_{\odot}\) with a scatter of 0.5 dex (Sun et al., 2013), which is roughly consistent with the value \(3\times 10^{5}M_{\odot}\) in Table 1. Since the two recurrence times in 2019 are distinct from each other (Figure 2), we take the average interval time as the period in our model to calculate the limit cycle.
### eRO-QPE1
This source had four observational campaigns, i.e., one with eROSITA, two with XMM-Newton and one with NICER, of which the XMM-Newton data have the best quality (Arcodia et al., 2021). We therefore adopt the XMM-Newton data to analyse the spectral evolution. However, due to the short duration of the XMM-Newton campaign, the NICER data (including 15 complete eruptions) are adopted to investigate the light curve (see Figure 3). There are two observations available from XMM-Newton. The first one, on 27 July 2020 (eRO-QPE1-XMM1), showed a complex profile that seems to be formed by the overlapping of several eruptions, which cannot be simply explained by an instability model. We thus adopt the observation on 4 August 2020 (eRO-QPE1-XMM2), which shows a single isolated burst (Arcodia et al., 2022), to compare with our spectral result.
In Figure 3, we compare the calculated spectrum and light curves with the observations. The black hole mass in this source is adopted as \(M_{\rm BH}=1\times 10^{6}M_{\odot}\), which is consistent with the estimate of Chen et al. (2022) (\(\sim 9.1\times 10^{5}M_{\odot}\)) obtained using an empirical scaling relation. Excluding the effect of the background, both the spectrum and the light curves can be qualitatively reproduced by our model.
### eRO-QPE2
The timing properties and spectral evolution of this source seem to be similar to those of GSN 069, i.e., showing a small duty cycle and alternating longer and shorter recurrence times. However, the eruption period of eRO-QPE2 is much shorter and its spectral temperature is much higher than those of GSN 069. We can
Figure 1: Phase-resolved spectral analysis and \(0.2-2\)keV light curve of RX J1301.9+2747 during 10-11 December 2000. Upper panel: the red square and the yellow circles selected from the eruptions represent the peak phase and the plateau phase, respectively. The dashed lines are given by our model. All the observational data are corrected for the instrumental effective area and absorption. Lower panel: the black dots and the blue line represent the data obtained from XMM-Newton observations (EPIC-MOS1) and the light curve produced by our model, respectively.
therefore infer that the mass of eRO-QPE2 should be smaller than that of GSN 069. In our calculation, a black hole mass \(M_{\rm BH}=1\times 10^{5}M_{\odot}\) is adopted, smaller than the mass of eRO-QPE2 adopted by Chen et al. (2022) (\(2.3\times 10^{5}M_{\odot}\)).
We present the results in Figure 4. Both the spectral evolution and light curves are well reproduced by our model, just like the results of our previous work for GSN 069.
### XMMSL1 J024916.6-041244
This source is regarded as the most probable QPE candidate because it showed 1.5 QPE-like flares in 2006 and had a spectral evolution similar to that of GSN 069.
Two black hole mass estimates for XMMSL1 J024916.6-041244 have been given by previous works (Strotjohann et al., 2016; Wevers et al., 2019), i.e., \(M_{\rm BH}\sim 8.5\times 10^{4}M_{\odot}\) and \(M_{\rm BH}\sim 5\times 10^{5}M_{\odot}\). However, the black hole mass in this source should be much smaller than that of GSN 069 due to its higher temperature in the low state. Here we adopt \(M_{\rm BH}=7\times 10^{4}M_{\odot}\). Figure 5 compares our model with the observations. We find that our model can qualitatively reproduce the light curve and the spectral evolution as a whole.
### GSN 069
To investigate the effect of the hardening factor on GSN 069, we also compare our numerical results with the observed light curve and X-ray spectra in Figure 6. It is found that, by including the hardening factor, we can still achieve satisfactory results with a higher black hole
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Source & \(M(\times 10^{5}M_{\odot})\) & \(\dot{m}(\dot{M}_{\rm Edd})\) & \(\alpha\) & \(\beta_{1}\) & \(\mu\) & \(n_{\rm H}(\times 10^{20}{\rm cm}^{-2})\) & \(\Delta R(R_{*})\) & \(f_{\rm cor,in}\) \\ \hline RX J1301.9+2747 (2000) & 3 & 0.15 & 0.15 & 5.5 & 0.1 & 0.4 & 0.088 & 2.37 \\ RX J1301.9+2747 (2019) & 3 & 0.15 & 0.15 & 5 & 0.1 & 0.4 & 0.083 & 2.41 \\ eRO-QPE1 & 10 & 0.15 & 0.1 & 8 & 0.15 & 2 & 0.048 & 2.51 \\ eRO-QPE2 & 1 & 0.19 & 0.1 & 10.5 & 0.1 & 10 & 0.119 & 2.24 \\ XMMSL1 J024916.6-041244 & 0.7 & 0.08 & 0.15 & 24.5 & 0.22 & 6 & 0.099 & 2.19 \\ GSN 069 & 6 & 0.11 & 0.15 & 4.5 & 0.13 & 1 & 0.047 & 2.5 \\ GSN 069 (2) & 4 & 0.1 & 0.15 & 18 & 0.24 & 1 & 0.067 & 1.55 \\ \hline \end{tabular}
\end{table}
Table 1: Detailed parameter of our model
Figure 3: Phase-resolved spectral analysis and \(0.3-1\)keV light curve of eRO-QPE1. The spectral data were obtained from XMM-Newton on 4 August 2020 and the light curve data were observed by NICER on 19 August 2020.
Figure 2: Same as Figure 1, but observed during 30-31 May 2019.
mass (\(M_{\rm BH}=6\times 10^{5}M_{\odot}\)) compared with that in Pan et al. (2022).
## 4 Conclusions and Discussion
In this work, we adopt the model of Pan et al. (2022) to fit the observed X-ray spectra of the five QPEs, while improving the calculation of the emergent spectra of the disk. A hardening factor (\(f_{\rm cor}\)) is introduced to modify the effective temperature of the accretion disk in the model of Pan et al. (2022), which is reasonable because the radiative spectrum of an accretion disk is much more complex than a simple sum of disk blackbodies. Several methods have been proposed to estimate the value of the hardening factor (e.g., Chiang, 2002; Davis et al., 2006; Done et al., 2012; Davis and El-Abd, 2019). We adopt the formula given by Done et al. (2012) in this work rather than the more recent one in Davis and El-Abd (2019) for
Figure 4: Phase-resolved spectral analysis and \(0.2-10\)keV light curve of eRO-QPE2. Both the spectral and light curve data were obtained from XMM-Newton on 6 August 2020.
Figure 5: Phase-resolved spectral analysis and \(0.3-2\)keV light curve of XMMSL1 J024916.6-041244. Both the spectral and light curve data were obtained from XMM-Newton on 14 July 2006.
Figure 6: Phase-resolved spectral analysis and \(0.4-2\)keV light curve of GSN 069. The seven spectral segments are the same as in Miniutti et al. (2019).
the reason that our model involves a high-spin black hole with an inner disk temperature of \(\sim 2\times 10^{5}\)K, while the updated fitting equation provided by Davis and El-Abd (2019) is designed for a non-spinning black hole and an accretion disk with higher temperature (Zdziarski et al., 2022). Furthermore, it is found that the results of our model depend relatively weakly on the value of the hardening factor. For example, we also adopt the hardening factor given by equation (9) in Davis and El-Abd (2019) to fit the observational data of GSN 069, with the parameters employed shown in the GSN 069 (2) row of Table 1. A similarly good fit can be achieved by slightly reducing the black hole mass. Therefore, we adopt the color correction of Done et al. (2012) for all of our other calculations. It is found that all five QPEs can be qualitatively described by this model. However, there are still some inconsistencies in the details, e.g., the profile and the irregular period of eruptions, which may be partly resolved by considering more physical processes in the model (see Pan et al., 2022 for details).
In general, introducing the hardening factor effectively increases the effective temperature of the disk and thus hardens the radiation spectrum of the accretion disk (Done et al., 2012), which can help to resolve the inconsistency between the temperature given by a standard thin disk and that required by observations (see Section 2 for details). The hardening factors adopted in this work are all within the range 2.1 to 2.6, which is also consistent with previous works (e.g., Ross et al., 1992; Done et al., 2012). In addition, although the presence of large-scale magnetic fields can drive outflows from the disk, the disk structure always remains optically thick, as required to generate a diluted blackbody spectrum. The reason is that the magnetic pressure is far smaller than the sum of the gas and radiation pressure in our model. Indeed, the magnetically driven outflows in this case reduce the temperature of the accretion disk and simultaneously increase the surface density of the disk in the inner region, resulting in an increase of the effective optical depth (see Li, 2014 for details).
We need to emphasize that the spin and mass of the black hole are coupled to some extent when fitting the observational data. A higher spin adopted in our model produces a disk extending closer to the central black hole, implying a higher temperature in the inner disk and a shorter timescale of the limit-cycle behaviour. The same trend can be obtained by adopting a smaller black hole mass. However, a black hole mass somewhat smaller than that inferred from observations (but within the error bars) usually has to be adopted in order to match the effective temperature observed in the low state, even though we have adopted a high black hole spin.
Two of the five QPEs, i.e., GSN 069 and XMMSL1 J024916.6-041244, show long-term light curves consistent with TDEs, a fraction much higher than in normal galaxies. Therefore, some authors have suggested that QPEs may be triggered by the partial disruption of a remnant core or star after a TDE (Miniutti et al., 2022). If TDEs did happen in these two sources, the disk instability may still be at work while the gas is being swallowed by the BHs. At the beginning of the decay phase of a TDE, the accretion rate could be super-Eddington, corresponding to a slim disk (Abramowicz et al., 1988), which is thermally stable. However, the disk will slowly turn into an unstable thin accretion disk as the accretion rate decreases. QPEs will appear when the unstable region of the disk is small enough, given a suitable accretion rate and magnetic fields. Analyzing the detailed parameter range that can generate QPEs is very complicated because our model has many parameters. However, we can roughly constrain part of the parameters, such as the mass accretion rate and magnetic pressure, by fixing the others. For example, we present how \(\beta_{1}\) varies with the mass accretion rate in GSN 069 by fixing the unstable region as \(0.1R_{\rm s}\) and the other parameters (see Figure 7). However, we note that the period of QPEs depends not only on the width of the unstable region, the mass accretion rate and the magnetic pressure, but also on other parameters, such as the black hole mass. For QPEs with different black hole masses and periods, it is therefore necessary to adopt different parameters in order to achieve good fits.
Miniutti et al. (2022) showed that the QPEs of GSN 069 disappeared when a rebrightening of the quiescent state occurred after January 2020. According to the above discussion, the unstable zone in a thin disk will expand
Figure 7: The distribution of magnetic field strength and mass accretion rate with a fixed unstable region width of \(0.1R_{\rm s}\). Other parameters are fixed as: \(M=6\times 10^{5}M_{\odot}\), \(\alpha=0.15\), \(\mu=0.15\), \(f=0.9\), \(a_{*}=0.98\).
to larger radii, resulting in an increase of the QPE periods when the accretion rate increases. Therefore, if the rebrightening of GSN 069 is caused by a repeating TDE, the period of the QPEs would be greatly increased by the sudden increase of the accretion rate, so the QPEs could hardly be observed in a short-duration campaign. The problem is that the period remains almost constant while the amplitude of the QPEs in XMM6 decays linearly with time at the beginning of the rebrightening (Miniutti et al., 2022), which is inconsistent with the disk instability model. We suggest that the flux in the fast-rising phase of the rebrightening may come from stream shocks at apocentre produced by the TDE, and that the disk instability model works only when the mass accretion rate has decreased to a critical value.
## Acknowledgements
We thank the reviewer for helpful comments. XP thanks Dr. Riccardo Arcodia, Erin Kara, Joheen Chakraborty and Margherita Giustini for providing the data in our figures. This work is supported by the NSFC (grants 12273089, 12073023, 12233007, 11833007, and 12147103), the science research grants from the China Manned Space Project with No. CMS-CSST-2021-A06, and the Fundamental Research Fund for Chinese Central Universities.
|
2306.14343
|
TCE: A Test-Based Approach to Measuring Calibration Error
|
This paper proposes a new metric to measure the calibration error of
probabilistic binary classifiers, called test-based calibration error (TCE).
TCE incorporates a novel loss function based on a statistical test to examine
the extent to which model predictions differ from probabilities estimated from
data. It offers (i) a clear interpretation, (ii) a consistent scale that is
unaffected by class imbalance, and (iii) an enhanced visual representation with
respect to the standard reliability diagram. In addition, we introduce an
optimality criterion for the binning procedure of calibration error metrics
based on a minimal estimation error of the empirical probabilities. We provide
a novel computational algorithm for optimal bins under bin-size constraints. We
demonstrate properties of TCE through a range of experiments, including
multiple real-world imbalanced datasets and ImageNet 1000.
|
Takuo Matsubara, Niek Tax, Richard Mudd, Ido Guy
|
2023-06-25T21:12:43Z
|
http://arxiv.org/abs/2306.14343v1
|
# TCE: A Test-Based Approach to Measuring Calibration Error
###### Abstract
This paper proposes a new metric to measure the calibration error of probabilistic binary classifiers, called _test-based calibration error_ (TCE). TCE incorporates a novel loss function based on a statistical test to examine the extent to which model predictions differ from probabilities estimated from data. It offers (i) a clear interpretation, (ii) a consistent scale that is unaffected by class imbalance, and (iii) an enhanced visual representation with respect to the standard reliability diagram. In addition, we introduce an optimality criterion for the binning procedure of calibration error metrics based on a minimal estimation error of the empirical probabilities. We provide a novel computational algorithm for optimal bins under bin-size constraints. We demonstrate properties of TCE through a range of experiments, including multiple real-world imbalanced datasets and ImageNet 1000.
## 1 Introduction
In recent years, it has become ubiquitous to deploy complex machine learning models in real-world production systems. Many of these systems rely on probabilistic classifiers that predict the probability that some target outcome occurs. For such systems, it is often crucial that their predictive probabilities are _well-calibrated_, meaning that the predictive probability accurately reflects the true frequency that the target outcome occurs. In some contexts, failures to achieve calibration can lead to negative consequences. In applications like medical diagnoses (Topol, 2019) and autonomous driving (Grigorescu et al., 2020), associated risks are often assessed based on model predictions and the consequences of a misguided risk evaluation can be severe. In online advertising auctions (Li et al., 2015), it is common to incorporate a prediction of the probability of some outcome of interest (e.g., a click on an advert) when calculating an advertiser's bid.
While a number of metrics--such as log-likelihood, user-specified scoring functions, and the area under the receiver operating characteristic (ROC) curve--are used to assess the quality of probabilistic classifiers, it is usually hard or even impossible to gauge whether predictions are well-calibrated from the values of these metrics. For assessment of calibration, it is typically necessary to use a metric that measures _calibration error_, that is, a deviation between model predictions and probabilities of target occurrences estimated from data. The importance of assessing calibration error has been long emphasised in machine learning (Nixon et al., 2019; Minderer et al., 2021) and in probabilistic forecasting more broadly (Dawid, 1982; Degroot and Fienberg, 1983).
However, existing metrics of calibration error have several drawbacks that in certain scenarios can mean that their values do not appropriately reflect true calibration performance. In particular, we will demonstrate that values of existing calibration error metrics have an inconsistent scale that is influenced by the target class proportion. In applications such as fraud detection (Abdallah et al., 2016; Tax et al., 2021) and advertising conversion prediction (Yang and Zhai, 2022), the prevalence, i.e., the proportion of instances belonging to the target class, is often very low. This leads to situations where one may be unable to identify whether the values of calibration error metrics are small due to good calibration performance or due to the low prevalence. This is also problematic for monitoring applications aimed at tracking the calibration performance of a model in a production system, where the prevalence can change over time (i.e., _prior probability shift_(Storkey et al., 2009)) and that makes it difficult to understand whether to attribute changes in the metric to an actual change in calibration performance or to the change in prevalence.
Furthermore, _binning_ of model predictions--an essential component of most calibration error metrics (Naeini et al., 2015)--is often based on heuristics and lacks clear design
principles. For calibration error metrics, empirical probabilities of target occurrences are typically estimated by clustering data into several subsets based on binning of the associated model predictions. The design of the binning scheme is a vital factor in the accurate estimation of the empirical probabilities, yet few principles guiding the design of binning schemes have emerged to date.
In this paper, we elaborate on the issues of existing calibration error metrics in Section 2. We establish a simple yet novel metric that counterbalances the issues in Section 3. Section 4 empirically demonstrates properties of the proposed metric by experiments based on various datasets. Related works are discussed in Section 5, followed by the conclusion in Section 6. This paper focuses on the methodological aspects of the proposed new metric for binary classification, while theoretical development is left for future research. Our contributions are summarised as follows:
#### Contributions
* Our primary contribution is a novel calibration error metric called _test-based calibration error_ (TCE). TCE is based on statistical hypothesis testing and is interpretable as a percentage of model predictions that deviate significantly from estimated empirical probabilities. TCE produces values in a normalised, comparable range \([0,100]\) regardless of the class prevalence.
* We propose an explanatory visual representation of TCE called the _test-based reliability diagram_. It carries more information than the standard reliability diagram and facilitates a better understanding of calibration performance (See Figure 1).
* We introduce an optimality criterion for bins under which optimal bins minimise an estimation error of the empirical probabilities. We then propose a novel algorithm to compute optimal bins approximately under the constraints of the minimum and maximum size of each bin.
## 2 Background
In this section, we introduce the definition of _calibration_ and recap one of the most common _calibration error_ metrics. We then outline several critical challenges of existing calibration error metrics. The basic notation used in this paper is introduced below.
Denote input and output spaces respectively by \(\mathcal{X}\) and \(\mathcal{Y}\). We focus on probabilistic binary classification, i.e. \(\mathcal{Y}=\{0,1\}\), in which a probabilistic classifier \(P_{\theta}:\mathcal{X}\rightarrow[0,1]\) models a conditional probability of \(Y=1\) given an input \(x\in\mathcal{X}\). The data \(\mathcal{D}:=\{x_{i},y_{i}\}_{i=1}^{N}\) are assumed to be i.i.d. realisations from a random variable \((X,Y)\sim\mathbb{P}\). To simplify notation, for any data subset \(\mathcal{S}\subseteq\mathcal{D}\), we denote by \(\mathcal{S}^{x}\) a set of all inputs \(x\) in \(\mathcal{S}\) and by \(\mathcal{S}^{y}\) a set of all outputs \(y\) in \(\mathcal{S}\). By "a set of bins" or simply "bins", we mean a set of arbitrary disjoint intervals whose union is the unit interval \([0,1]\). For example, a set \(\{\Delta_{b}\}_{b=1}^{2}\) of intervals \(\Delta_{1}=[0.0,0.4)\) and \(\Delta_{2}=[0.4,1.0]\) is a set of bins.
### Calibration Error
A probabilistic classifier \(P_{\theta}:\mathcal{X}\rightarrow[0,1]\) is said to be _calibrated_[1, 2] if
\[\mathbb{P}(Y=1\mid P_{\theta}(X)=Q)=Q \tag{1}\]
for all \(Q\in[0,1]\) s.t. the conditional probability is well-defined. Informally, this criterion implies that the model prediction coincides with the actual probability of \(Y=1\) for all inputs. Any deviation between the actual probabilities and the model predictions in eq. (1) is often referred to as _calibration error_, which quantifies to what degree the classifier \(P_{\theta}\) is calibrated. The empirical computation of such a deviation involves estimating conditional probability \(\mathbb{P}(Y=1|P_{\theta}(X)=Q)\) from data. For given bins \(\{\Delta_{b}\}_{b=1}^{B}\), define disjoint subsets \(\{\mathcal{D}_{b}\}_{b=1}^{B}\) of data \(\mathcal{D}\) by
\[\mathcal{D}_{b}:=\{(x_{i},y_{i})\in\mathcal{D}\mid P_{\theta}(x_{i})\in\Delta _{b}\}. \tag{2}\]
Simply put, \(\mathcal{D}_{b}\) is a subset of data whose model predictions have similar values. The conditional probability \(\mathbb{P}(Y=1\mid P_{\theta}(X)=Q)\) for any \(Q\in\Delta_{b}\) can then be estimated by the empirical mean of the labels in subset \(\mathcal{D}_{b}\):
\[\mathbb{P}(Y=1\mid P_{\theta}(X)=Q)\approx\widehat{P}_{b}:=\frac{1}{N_{b}} \sum_{y_{i}\in\mathcal{D}_{b}^{y}}y_{i} \tag{3}\]
where we denote by \(\widehat{P}_{b}\) the estimated conditional probability in \(\mathcal{D}_{b}\) and by \(N_{b}\) the sample size of \(\mathcal{D}_{b}\).
One of the most common metrics to measure calibration error is _expected calibration error_ (ECE) [11]. ECE uses equispaced bins \(\{\Delta_{b}\}_{b=1}^{B}\) over \([0,1]\) for a given number \(B\) and measures an absolute difference between the averaged model predictions and the estimated conditional probability \(\widehat{P}_{b}\) within each data subset \(\mathcal{D}_{b}\). The value of ECE is defined as
\[\text{ECE}:=\sum_{b=1}^{B}\frac{N_{b}}{N}\left|\widehat{P}_{b}-\frac{1}{N_{b}} \sum_{x_{i}\in\mathcal{D}_{b}^{x}}P_{\theta}(x_{i})\right|. \tag{4}\]
ECE has an associated practical visual representation known as the _reliability diagram_[1, 12], which aligns the averaged model prediction and the estimated conditional probability in each \(\mathcal{D}_{b}\) (see Figure 1). The reliability diagram is a powerful tool to intuitively grasp the deviation between the model and the estimated probability in ECE.
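For concreteness, a minimal NumPy sketch of ECE as defined in eq. (4), using \(B\) equispaced bins; the function name and interface are ours.

```
import numpy as np

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """ECE of eq. (4) with n_bins equispaced bins over [0, 1]."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    n = len(y_true)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = edges[b], edges[b + 1]
        # the last bin is closed on the right so that predictions equal to 1 are included
        mask = (y_prob >= lo) & ((y_prob < hi) if b < n_bins - 1 else (y_prob <= hi))
        n_b = mask.sum()
        if n_b == 0:
            continue
        p_hat = y_true[mask].mean()   # empirical probability in the bin
        q_bar = y_prob[mask].mean()   # averaged model prediction in the bin
        ece += (n_b / n) * abs(p_hat - q_bar)
    return ece
```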
### Challenges in Calibration Error
Calibration error metrics, such as ECE, are widely used in real-world applications. There nonetheless exist several
challenges that may cause a misassessment of calibration. These problems become evident especially when a distribution of model predictions \(\{P_{\theta}(x_{i})\}_{i=1}^{N}\) is not well-dispersed. This scenario often arises in imbalanced classification where model predictions tend to be severely skewed towards either \(0\) or \(1\). The following paragraphs illustrate challenges of existing calibration error metrics, which we aim to address.
Challenge 1 (Scale-Dependent Interpretation)In most calibration error metrics, the deviation between the model prediction and the estimated probability \(\widehat{P}_{b}\) in each \(\mathcal{D}_{b}\) is measured by the absolute difference as in eq. (4). However, the use of the absolute difference can result in values that have an inconsistent scale influenced by the class prevalence. To illustrate this problem, consider an estimated probability \(\widehat{P}_{b}\) and an averaged model prediction denoted \(\overline{Q}_{b}\) for some \(b\) in eq. (4). If \(\widehat{P}_{b}=0.50\) and \(\overline{Q}_{b}=0.49\), their absolute difference is \(0.01\). On the other hand, if \(\widehat{P}_{b}=0.01\) and \(\overline{Q}_{b}=0.0001\), their absolute difference is \(0.0099\). Despite the comparison under the absolute difference suggesting that the probability \(\overline{Q}_{b}=0.0001\) with respect to \(\widehat{P}_{b}=0.01\) in the latter case is better calibrated than in the former case, one may reasonably argue that the latter is not well-calibrated--or at least not comparable to the former--given the stark difference in the order of magnitude. Similarly to this illustration, the values of existing calibration metrics built on the absolute difference can be proportionally small whenever the scales of \(\widehat{P}_{b}\) and \(\overline{Q}_{b}\) are small. This issue makes it difficult to distinguish whether the metric values are low due to good calibration performance or due to the small scale of the probabilities as in imbalanced classification.
Challenge 2 (Lack of Normalised Range)The range of values of calibration error metrics built on absolute differences is not normalised. The range can vary depending on the choice of bins \(\{\Delta_{b}\}_{b=1}^{B}\). To illustrate this problem, consider a bin \(\Delta_{b}\) for some \(b\). If \(\Delta_{b}=[0.4,0.6]\), the absolute difference between \(\widehat{P}_{b}\) and \(\overline{Q}_{b}\) falls into a range \([0.0,0.6]\) because \(\widehat{P}_{b}\) is the estimated probability in \([0.0,1.0]\) and the averaged model prediction \(\overline{Q}_{b}\) in the bin \(\Delta_{b}\) takes the value within \(\Delta_{b}\). Similarly, a different choice of bin \(\Delta_{b}\) leads to a different range of the absolute difference. Consequently, the choice of bins \(\{\Delta_{b}\}_{b=1}^{B}\) impacts the range of the final value of calibration error metrics that are built on the absolute difference. To assure rigorous comparability of the final value of a calibration error metric, it is desirable to establish a measurement of the deviation whose value has a fixed, normalised range independent of the choice of bins.
Challenge 3 (Arbitrary Choice of Bins)An appropriate choice of bins is critical because it meaningfully impacts on final values of calibration error metrics. Equispaced bins \(\{\Delta_{b}\}_{b=1}^{B}\) over \([0,1]\) for a given number \(B\) are one of the most common choices of bins in practice, as used in ECE. However, equispaced bins can often cause a situation where a few particular bins contain the majority of the model predictions when they are not well-dispersed over \([0,1]\), as often happens in imbalanced classification. If some bin \(\Delta_{b}\) contains the majority of model predictions, the corresponding estimated probability \(\widehat{P}_{b}\) coincides approximately with the empirical mean of all labels. On the other hand, estimated probabilities of the bins other than \(\Delta_{b}\) become unreliable due to the small size of samples contained. A potential solution to this problem is to use bins that adapt based on the dispersion of model predictions. Nixon et al. [2019] proposed _adaptive calibration error_ (ACE) that computes the value of eq. (4) using bins \(\{\Delta_{b}\}_{b=1}^{B}\) based on \(B\)-quantiles of model predictions \(\{P_{\theta}(x_{i})\}_{i=1}^{N}\) for given \(B\). However, questions remain regarding the optimal number \(B\) of bins and the appropriate quantile to use for each bin. To the best of our knowledge, there is no established notion of what makes bins optimal, nor do clear design principles for bins exist.
## 3 Calibration Error Based on Test and Optimal Bins
We propose a new calibration error metric that offers a simple yet novel solution to the challenges outlined in Section 2.2. First, in Section 3.1, we present a general formulation of calibration error metrics that encompasses most metrics used in practice. This general formulation allows for a structured understanding of the design of calibration error metrics. In Section 3.2, we derive from the general formulation a new calibration error metric, called TCE, which incorporates a loss based on a statistical test to compare model predictions with estimated empirical probabilities. TCE produces a value that has a clear interpretation as a percentage of model predictions determined to deviate significantly from estimated empirical probabilities, which leads to a normalised range of possible values \([0,100]\) regardless of the choice of bins \(\{\Delta_{b}\}_{b=1}^{B}\). In Section 3.3, we consider an optimal criterion of bins \(\{\Delta_{b}\}_{b=1}^{B}\) from the perspective of minimising an estimation error of the empirical probabilities \(\{\widehat{P}_{b}\}_{b=1}^{B}\). We then develop a practical regularisation approach that ensures a minimum and maximum sample size in each subset \(\mathcal{D}_{b}\).
### General Calibration Error
The following definition presents an abstract formulation of calibration error metrics, which we call _general calibration error_ (GCE) for terminological convenience. Denote by \(2^{\mathcal{D}}\) a power set of \(\mathcal{D}\), i.e. a space of all subsets of \(\mathcal{D}\) and by \(\mathcal{M}\) a space of all probabilistic classifiers below.
Definition 1 (GCE): Let \(L:2^{\mathcal{D}}\times\mathcal{M}\rightarrow\mathbb{R}\) be a loss of any probabilistic classifier evaluated for any data subset. Let \(\mathcal{B}\) be a set of bins \(\{\Delta_{b}\}_{b=1}^{B}\) that define data subsets \(\{\mathcal{D}_{b}\}_{b=1}^{B}\) as in eq. (2). Let \(\|\cdot\|\) be a norm of a \(B\)-dimensional vector
space. For a given probabilistic classifier \(P_{\theta}:\mathcal{X}\rightarrow[0,1]\), define a scalar \(\text{GCE}_{b}\in\mathbb{R}\) for each \(b=1,\cdots,B\) by_
\[\text{GCE}_{b}:=L\left(\mathcal{D}_{b},P_{\theta}\right). \tag{5}\]
_Then, GCE of the probabilistic classifier \(P_{\theta}\) is defined by_
\[\text{GCE}=\|(\text{GCE}_{1},\cdots,\text{GCE}_{B})\|. \tag{6}\]
This formulation translates the problem of designing a calibration error metric into a problem of choosing the tuple \((L,\mathcal{B},\|\cdot\|)\). Most existing calibration error metrics used in practice can be derived by selecting an appropriate tuple of the loss \(L\), the bins \(\mathcal{B}\), and the norm \(\|\cdot\|\) in GCE. See Example 1 below for the case of ECE. It is also immediate to show that ACE can be recovered from GCE.
**Example 1**.: _Let \(\mathcal{B}\) be equispaced bins \(\{\Delta_{b}\}_{b=1}^{B}\) over \([0,1]\), let \(L\) be \(L(\mathcal{D}_{b},P_{\theta})=|\frac{1}{N_{b}}\sum_{y\in\mathcal{D}_{b}^{ \mathcal{F}}}y-\frac{1}{N_{b}}\sum_{x\in\mathcal{D}_{b}^{\mathcal{F}}}P_{ \theta}(x)|\), and let \(\|\cdot\|\) be a weighted 1-norm \(\|v\|=\sum_{b=1}^{B}\frac{N_{b}}{N}\times|v_{b}|\). The ECE corresponds to the GCE under this tuple._
We aim to choose the tuple \((L,\mathcal{B},\|\cdot\|)\) so that it addresses the aforementioned challenges in Section 2.2. Section 3.2 addresses a loss \(L\) based on a statistical test and presents the resulting TCE. Subsequently, Section 3.3 addresses a choice of bins \(\mathcal{B}\) that is obtained through optimisation to minimise an estimation error of the empirical probabilities \(\{\widehat{P}_{b}\}_{b=1}^{B}\). All norms \(\|\cdot\|\) are equivalent in finite dimensions, and hence we do not focus on any particular choice. As with ECE, we use the weighted 1-norm \(\|\cdot\|\) in Example 1 for TCE.
### Test-Based Calibration Errors
We present our main contribution, a new calibration error metric called TCE, that is derived from GCE by specifying a novel loss \(L\) based on a statistical test. Our proposed loss \(L\) summarises the percentage of model predictions that deviate significantly from the empirical probabilities in each subset \(\mathcal{D}_{b}\). We effectively test a null hypothesis "the probability of \(Y=1\) is equal to \(P_{\theta}(x)\)" at each \(x\in\mathcal{D}_{b}^{x}\) using the output data \(\mathcal{D}_{b}^{y}\). A rigorous formulation of this loss \(L\) is provided below, combined with the definition of the TCE. Note that the bins \(\{\Delta_{b}\}_{b=1}^{B}\) and the norm \(\|\cdot\|\) of TCE are arbitrary, while the weighted 1-norm is our default choice of \(\|\cdot\|\).
**Definition 2**.: **(TCE)** _Given a statistical test and its significance level \(\alpha\in[0,1]\), let \(R\) be a function of any observed dataset of random variable \(Y\in\{0,1\}\) and any probability \(Q\in[0,1]\), which returns \(1\) if a hypothesis \(P(Y=1)=Q\) is rejected based on the dataset and returns \(0\) otherwise. In Definition 1, let \(L\) be an average rejection percentage s.t._
\[L(\mathcal{D}_{b},P_{\theta})=100\times\frac{1}{N_{b}}\sum_{x\in\mathcal{D}_{ b}^{x}}R\left(\mathcal{D}_{b}^{y},P_{\theta}(x)\right). \tag{7}\]
_GCE in Definition 1 is then called TCE._
In contrast to existing metrics that examine the difference between averaged model predictions and empirical probabilities in each bin, TCE examines each prediction \(P_{\theta}(x)\) and summarises the rejection percentage in each bin. The procedure of TCE can be intuitively interpreted as follows.
**Remark 1**.: _Informally speaking, TCE examines whether each model prediction \(P_{\theta}(x)\) can be regarded as an outlier relative to the empirical probability of the corresponding data \(\mathcal{D}_{b}^{y}\), where the test in function \(R\) acts as a criterion for determining outliers. The level of model-calibration is then measured by the rate of outliers produced by the model._
In this paper, we use the Binomial test as the _de facto_ standard statistical test to define \(R\) in the TCE. TCE based on other tests, including Bayesian testing approaches, is an open direction for future research. Algorithm 1 summarises the computational procedure of TCE. There are multiple advantages of TCE as follows.
**Advantage 1 (Clear Interpretation)** The final value of TCE has a clear interpretation as a percentage of model predictions that are determined by the test of choice (here the Binomial test) to deviate significantly from estimated empirical probabilities. Because the value is a percentage, the range of the value is normalised to \([0,100]\).
**Advantage 2 (Consistent Scale)** The test evaluates the statistical deviation of data from a model prediction \(P_{\theta}(x)\) adaptively and appropriately for each scale of \(P_{\theta}(x)\) and data size \(N_{b}\). Informally, TCE is the number of relative outliers determined for each \(P_{\theta}(x)\) adaptively. This endows the value with a consistent scale robust to class imbalance.
**Advantage 3 (Enhanced Visualisation)** TCE leads to a new visual representation that shows the distribution of model predictions, and the proportion of model predictions that deviate significantly from an empirical probability in each bin. See Figure 1 for the description and comparison with the standard reliability diagram.
```
Input: data \(\mathcal{D}\), model \(P_{\theta}\), norm \(\|\cdot\|\), bins \(\{\Delta_{b}\}_{b=1}^{B}\), function \(R\) based on a chosen test and significance level
Output: \(\text{TCE}\in\mathbb{R}\)
for \(b=1,\ldots,B\) do
    \(\mathcal{D}_{b}\leftarrow\{(x_{i},y_{i})\in\mathcal{D}\mid P_{\theta}(x_{i})\in\Delta_{b}\}\)  \(\triangleright\) make subset
    \(\text{TCE}_{b}\gets 0\)
    for \(x_{i}\in\mathcal{D}_{b}^{x}\) do
        \(\text{TCE}_{b}\leftarrow\text{TCE}_{b}+R(\mathcal{D}_{b}^{y},P_{\theta}(x_{i}))\)  \(\triangleright\) test each prediction
    end for
    \(\text{TCE}_{b}\gets 100/N_{b}\times\text{TCE}_{b}\)
end for
\(\text{TCE}\leftarrow\|(\text{TCE}_{1},\ldots,\text{TCE}_{B})\|\)
```
**Algorithm 1** Computation of TCE
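A minimal Python sketch of Algorithm 1, assuming the Binomial test via `scipy.stats.binomtest`, the weighted 1-norm, and bin edges supplied by any binning scheme (e.g. quantiles or the PAVA-BC bins of Section 3.3); the function name and interface are illustrative.

```
import numpy as np
from scipy.stats import binomtest

def tce(y_true, y_prob, bin_edges, alpha=0.05):
    """Test-based calibration error (Algorithm 1) with a Binomial test and the weighted 1-norm."""
    y_true = np.asarray(y_true, dtype=int)
    y_prob = np.asarray(y_prob, dtype=float)
    n = len(y_true)
    tce_value = 0.0
    for b in range(len(bin_edges) - 1):
        lo, hi = bin_edges[b], bin_edges[b + 1]
        last = (b == len(bin_edges) - 2)
        mask = (y_prob >= lo) & ((y_prob <= hi) if last else (y_prob < hi))
        n_b = int(mask.sum())
        if n_b == 0:
            continue
        successes = int(y_true[mask].sum())
        # R(D_b^y, P_theta(x)): reject if the Binomial test deems P_theta(x)
        # inconsistent with the outcomes observed in the bin.
        rejections = sum(
            binomtest(successes, n_b, p=q).pvalue < alpha
            for q in y_prob[mask]
        )
        tce_b = 100.0 * rejections / n_b          # rejection percentage in the bin
        tce_value += (n_b / n) * tce_b            # weighted 1-norm aggregation
    return tce_value
```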
Our interest is in the aggregated rejection percentage of all the tests performed, and so multiple testing corrections--e.g., the Bonferroni correction, which offers a frequentist guarantee to control the familywise error rate--are not considered. If all the null hypotheses were simultaneously true, TCE would simply coincide with the false positive rate, which equals in expectation the type I error specified by the significance level of the test. A full discussion of when and how adjustments for multiple hypothesis tests should be made may be found in Bender and Lange (2001).
Given that TCE is based on a statistical testing procedure, it may be possible to apply ideas from power analysis to inform the desired sample size in each \(\mathcal{D}_{b}\). Such analysis may also benefit the algorithm in the next subsection to compute optimal bins under the bin-size constraints, providing insights on what bin-size should be used as the constraints. Finally, it is worth noting that TCE can be extended to multi-class classification. The following remark presents one straightforward approach to the extension.
**Remark 2**.: _Any calibration error metric defined for binary classification can be extended to multi-class classification by considering classwise-calibration (e.g. Kull et al., 2019), where the calibration error metric is applied for one-vs-rest classification of each class independently. A modification of TCE in multi-class classification settings can then be defined as an average of TCEs applied for one-vs-rest classification of each class._
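Following Remark 2, one possible classwise extension simply averages the binary metric over one-vs-rest problems; the sketch below reuses the hypothetical `tce` function above and assumes some binning routine `make_bins` (also a hypothetical name).

```
import numpy as np

def classwise_tce(y_true, prob_matrix, make_bins, alpha=0.05):
    """Average of binary TCEs over one-vs-rest problems, one per class (cf. Remark 2)."""
    prob_matrix = np.asarray(prob_matrix, dtype=float)
    n_classes = prob_matrix.shape[1]
    scores = []
    for c in range(n_classes):
        y_c = (np.asarray(y_true) == c).astype(int)   # one-vs-rest labels for class c
        p_c = prob_matrix[:, c]                        # predicted probability of class c
        scores.append(tce(y_c, p_c, make_bins(p_c, y_c), alpha=alpha))
    return float(np.mean(scores))
```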
### Optimal bins by monotonic regressor and bin-size constraints
It is a fundamental challenge to establish a practical and theoretically sound mechanism to design the bins used in calibration error metrics. Ideally designed bins provide accurate probability estimates \(\{\widehat{P}_{b}\}_{b=1}^{B}\) from data \(\mathcal{D}\) while keeping the size of each bin reasonable. To this end, we propose a novel algorithm to compute bins that aim to minimise an estimation error of the probability estimates \(\{\widehat{P}_{b}\}_{b=1}^{B}\) under constraints on the size of each bin.
Recently, Dimitriadis et al. (2021) pointed out that an existing quadratic programming algorithm, called pool-adjacent-violators algorithm (PAVA), can be directly applied to compute "optimal" bins in the context of obtaining a better reliability diagram. The bins are designed in a manner that minimises the _Brier score_(Brier, 1950) of resulting empirical probabilities by virtue of PAVA. Forging ahead with this observation, we introduce the following definition that makes explicit in what sense bins \(\{\Delta_{b}\}_{b=1}^{B}\) can be considered optimal given an arbitrary estimation error \(\mathrm{D}\) of the probability estimates \(\{\widehat{P}_{b}\}_{b=1}^{B}\) from data \(\mathcal{D}\).
**Definition 3**.: **(Optimal Bins)** _Let \(\Pi\) be a space of all sets of bins \(\{\Delta_{b}\}_{b=1}^{B}\) for any \(B\), with associated data subsets denoted by \(\{\mathcal{D}_{b}\}_{b=1}^{B}\) and probability estimates from \(\{\mathcal{D}_{b}^{b}\}_{b=1}^{B}\) denoted by \(\{\widehat{P}_{b}\}_{b=1}^{B}\). Let \(\mathrm{D}\) be any error function between an observed dataset of random variable \(Y\in\{0,1\}\) and a
Figure 1: Comparison of two visual representations both applied for a gradient boosting model trained on the _abalone_ dataset used in Section 4.2. (Left) A new visual representation, which we call the _test-based reliability diagram_. The central plot shows a violin plot of model predictions in each bin, whose estimated probability is presented by a red line. The bottom plot shows by grey bar the sample size of each bin and by red bar the percentage of model predictions that deviate significantly from the estimated probability in each bin. The right plot shows a histogram of all model predictions. (Right) The standard reliability diagram with the bin-size plot on the bottom and the histogram plot on the right added for comparison.
given probability \(Q\in[0,1]\). Any set of bins that satisfies_
\[\min_{\{\Delta_{b}\}_{b=1}^{B}\in\Pi}\sum_{b=1}^{B}W_{b}\times \mathrm{D}(\mathcal{D}_{b}^{y},\widehat{P}_{b})\\ \text{subject to }\widehat{P}_{1}\leq\cdots\leq\widehat{P}_{B} \tag{8}\]
_can be considered an optimal set of bins under the estimation error \(\mathrm{D}\), where \(W_{b}:=N_{b}/N\) is the weight associated with the error of subset \(\mathcal{D}_{b}^{y}\) of size \(N_{b}\)._
The monotonic constraint \(\widehat{P}_{1}\leq\cdots\leq\widehat{P}_{B}\) of the probability estimates \(\{\widehat{P}_{b}\}_{b=1}^{B}\) is a natural requirement because the choice of bins becomes trivial otherwise. For example, consider bins \(\{\Delta_{b}\}_{b=1}^{B}\) with \(B=N\) such that \(\Delta_{b}\) contains one single point \(y_{b}\) and the probability estimate \(\widehat{P}_{b}=y_{b}\) for each \(b\). This clearly achieves that \(\sum_{b=1}^{B}W_{b}\times\mathrm{D}(\mathcal{D}_{b}^{y},\widehat{P}_{b})= \frac{1}{N}\sum_{b=1}^{N}\mathrm{D}(\{y_{b}\},y_{b})=0\). Under the monotonic constraint, the choice of bins becomes non-trivial.
Under some choices of the estimation error \(\mathrm{D}\), the optimisation of eq.8 can be solved as a monotonic regression problem. Given an ordered dataset \(\{y_{i}\}_{i=1}^{N}\), a monotonic regression algorithm finds \(N\) monotonically increasing values \(\widehat{y}_{1}\leq\cdots\leq\widehat{y}_{N}\) that minimise some loss between \(\{\widehat{y}_{i}\}_{i=1}^{N}\) and \(\{y_{i}\}_{i=1}^{N}\). There exist algorithms for various losses, including the \(l_{p}\) loss, the Huber loss, and the Chebyshev loss (de Leeuw et al., 2009). PAVA solves a monotonic regression problem under the squared error \(\sum_{i=1}^{N}(\widehat{y}_{i}-y_{i})^{2}\). If we choose the error \(\mathrm{D}\) as the variance of each \(\mathcal{D}_{b}^{y}\), i.e.,
\[\mathrm{D}(\mathcal{D}_{b}^{y},\widehat{P}_{b})=\frac{1}{N_{b}}\sum_{i=1}^{N_{ b}}(y_{i}-\widehat{P}_{b})^{2} \tag{9}\]
the optimal set of bins under \(\mathrm{D}\) can be obtained using PAVA, which corresponds to the case of Dimitriadis et al. (2021). See Appendix A for the proof that the optimisation criterion of eq.8 is indeed minimised at bins obtained using PAVA. The approach using PAVA is a highly appealing solution to the design of bins \(\{\Delta_{b}\}_{b=1}^{B}\) because it achieves a fully-automated design of the bins based on the clear criterion of eq.8. However, such a fully-automated design can occasionally generate a bin that contains an excessively small or large number of data for the sake of minimising the aggregated estimation error over all \(\{\widehat{P}_{b}\}_{b=1}^{B}\). Imposing a certain regularisation on the minimum and maximum size of each \(\mathcal{D}_{b}\) can aid in keeping some baseline quality of the estimation of each individual \(\widehat{P}_{b}\).
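In practice, the unconstrained PAVA bins can be obtained with standard isotonic regression; the sketch below uses scikit-learn's `IsotonicRegression` (which implements PAVA under the squared error) on labels sorted by model prediction, and converts the constant blocks of the fit into bin edges in one simple way. Function names are ours, and this illustrates the unregularised case only; PAVA-BC with bin-size constraints is described next.

```
import numpy as np
from sklearn.isotonic import IsotonicRegression

def pava_bins(y_true, y_prob):
    """Bins from PAVA: maximal blocks of the constant isotonic fit define the subsets D_b."""
    order = np.argsort(y_prob)
    p_sorted = np.asarray(y_prob, dtype=float)[order]
    y_sorted = np.asarray(y_true, dtype=float)[order]
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    fitted = iso.fit_transform(p_sorted, y_sorted)    # piecewise-constant, monotone fit
    # Each maximal run of equal fitted values is one bin; P_hat_b is that fitted value.
    change = np.flatnonzero(np.diff(fitted) > 1e-12) + 1
    blocks = np.split(np.arange(len(fitted)), change)
    # One simple choice: place the bin boundary at the last prediction of each block.
    edges = [0.0] + [float(p_sorted[blk[-1]]) for blk in blocks[:-1]] + [1.0]
    p_hat = [float(fitted[blk[0]]) for blk in blocks]
    return edges, p_hat
```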
Therefore, we propose a modified version of PAVA that regularises based on the given minimum and maximum size of each subset \(\mathcal{D}_{b}^{y}\). Algorithm 2 summarises the full algorithm, which we call _PAVA with block constraints_ (PAVA-BC), followed by Algorithm 3, which summarises how to compute bins using PAVA-BC accordingly, where \(\text{Sort}(\mathcal{D},P_{\theta})\) in Algorithm 3 denotes any algorithm that sorts labels \(\{y_{i}\}_{i=1}^{N}\) in ascending order of the model predictions \(\{P_{\theta}(x_{i})\}_{i=1}^{N}\). By
Algorithm 3, we can obtain bins that satisfy the given minimum and maximum size constraints \(N_{\text{min}}\) and \(N_{\text{max}}\) in each \(\mathcal{D}_{b}\), while benefitting from the automated design of bins by PAVA. A set of bins based on PAVA can be recovered by replacing PAVA-BC with PAVA in Algorithm 3. In general, the introduction of the regularisation can cause mild violation of the monotonicity \(\widehat{P}_{1}\leq\dots\leq\widehat{P}_{B}\), meaning that there may exist a few values \(\widehat{P}_{b}\) that are smaller than \(\widehat{P}_{b-1}\). See Appendix B for examples where mild violation of the monotonicity by PAVA-BC did and did not occur. In practice, mild violation of the monotonicity is often a reasonable cost for achieving better properties of the bins. For example, Tibshirani et al. (2011) studied settings where the monotonicity is only "nearly" satisfied.
See Figure 2 for a comparison of the bins computed by three different approaches: PAVA, PAVA-BC, and binning based on \(10\)-quantiles. The bins produced by PAVA-BC interpolate between the optimal bins produced by PAVA and the well-sized bins produced by binning based on quantiles. This is further confirmed by Table 1 which shows the total estimation error in eq. (8) and the estimation error within each bin in eq. (9) for each approach. The total estimation error is minimised by PAVA, while an average of the estimation error within each bin is minimised by binning based on quantiles. In contrast, PAVA-BC takes a balance between the total and individual estimation error.
## 4 Empirical Evaluation
In this section, we demonstrate the properties of TCE via three experiments. The first experiment uses synthetic data to examine the properties of TCE under controlled class imbalance. The second experiment involves ten real-world datasets from the University of California Irvine (UCI) machine learning repository (Dua and Graff, 2017), where nine are designed as benchmark tasks of imbalanced classification, and one is a well-balanced classification task for comparison. In the second experiment, we also demonstrate that ECE and ACE may produce misleading assessments of calibration performance under class imbalance. TCE has the potential to reduce such misinterpretation risks. The final experiment uses the ImageNet1000 dataset to illustrate that TCE is applicable to large-scale settings. In all experiments, models are fitted to training data first and all calibration error metrics are computed using validation data. Source code to reproduce the experiments is available at [https://github.com/facebookresearch/tce](https://github.com/facebookresearch/tce).
We compute TCE with bins based on PAVA-BC unless otherwise stated. The minimum and maximum size of each bin for PAVA-BC are set to \(N/20\) and \(N/5\) for a given dataset size \(N\). Under these constraints, the number of bins based on PAVA-BC falls into a range between 5 and 20. In addition to ECE and ACE, we include the maximum calibration error (MCE) (Naeini et al., 2015) for comparison. MCE is defined by replacing the weighted 1-norm with the supremum norm over \(b=1,\dots,B\) in Example 1. We denote, by TCE(Q) and MCE(Q), TCE and MCE each with bins based on \(B\)-quantiles. For all metrics, \(B\)-equispaced bins and \(B\)-quantiles bins are computed with \(B=10\).
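For reference, the following is a minimal sketch (ours, on toy data) of the binned baselines: the weighted 1-norm error with \(B\) equispaced bins (ECE-style) and the same quantity with \(B\)-quantile bins, which is essentially ACE here since quantile bins carry near-equal weights.

```python
import numpy as np

def binned_error(p, y, edges):
    """Weighted 1-norm gap between mean label and mean prediction per bin."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    idx = np.clip(np.digitize(p, edges[1:-1]), 0, len(edges) - 2)  # bin index of each prediction
    err = 0.0
    for b in range(len(edges) - 1):
        mask = idx == b
        if mask.any():
            err += mask.mean() * abs(y[mask].mean() - p[mask].mean())
    return err

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 5000)
y = (rng.uniform(0, 1, 5000) < p).astype(int)       # perfectly calibrated toy predictions

B = 10
ece_like = binned_error(p, y, np.linspace(0, 1, B + 1))                    # equispaced bins
ace_like = binned_error(p, y, np.quantile(p, np.linspace(0, 1, B + 1)))    # quantile bins
print(ece_like, ace_like)
```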
### Synthetic Data with Controlled Class Imbalance
We first examine TCE using synthetic data from a simulation model considered in Vaicenavicius et al. (2019). The data are simulated from a Gaussian discriminant analysis model \((x,y)\sim P(x\mid y)P(y)\). The output \(y\in\{0,1\}\) is first sampled from a Bernoulli distribution \(P(y)\) with parameter \(\pi\) and the input \(x\in\mathbb{R}\) is then sampled from a Gaussian distribution \(P(x\mid y)=\mathcal{N}(m_{y},s_{y})\) with mean \(m_{y}\) and scale \(s_{y}\) dependent of \(y\). We set \(m_{y}=(2\times y-1)\) and \(s_{y}=2\), and change the parameter \(\pi\) for each setting below. By Bayes' theorem, the conditional probability of \(y\) given \(x\) corresponds to a logistic model: \(P(y\mid x)=1/(1+\exp(\beta_{0}+\beta_{1}\times x))\) where \(\beta_{0}=\log(\pi/(1-\pi))\) and \(\beta_{1}=4\). A logistic model is therefore capable of reproducing the probability \(P(y\mid x)\) of this synthetic data perfectly.
We consider two baseline cases of (i) well-balanced classification and (ii) imbalanced classification in this experiment. We train a logistic model for the training data simulated with the parameter \(\pi=0.5\) (i.e. 50% prevalence) in case (i) and
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **PAVA** & **PAVA-BC** & **Quantile** \\ \hline Total Error & 0.040 & 0.042 & 0.048 \\ Averaged Within-Bin Error & 0.132 & 0.077 & 0.047 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The total estimation error and an average of the estimation error within each bin for the bins in Figure 2.
Figure 2: Comparison of bins for a random forest model on the _satimage_ dataset used in Section 4.2 based on (top) PAVA, (middle) PAVA-BC, (bottom) binning based on \(10\)-quantiles. The dotted line represents the boundary of each bin and the grey bar represents the size of each bin.
with \(\pi=0.01\) (i.e. 1% prevalence) in case (ii). In each case (i) and (ii), we generate three different test datasets to create situations where the trained model is (a) well-calibrated, (b) over-calibrated, and (c) under-calibrated. We examine the performance of TCE under these scenarios. Test datasets for scenarios (a), (b), and (c) are generated from the simulation model with prevalences \(50\%\), \(40\%\), and \(60\%\) in case (i) and with prevalences \(1\%\), \(0\%\), and \(2\%\) in case (ii). We generate 20000 data points in total, of which 70% are training data and 30% are test data.
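The setup above can be reproduced with a short script; the following is a minimal sketch (ours, assuming numpy and scikit-learn rather than the exact code in the linked repository).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, prevalence):
    """Draw (x, y) from the Gaussian discriminant model: y ~ Bernoulli(pi), x | y ~ N(2y - 1, 2)."""
    y = rng.binomial(1, prevalence, size=n)
    x = rng.normal(loc=2 * y - 1, scale=2.0)
    return x.reshape(-1, 1), y

# Case (i): train at 50% prevalence, evaluate under scenario (b) "over-calibrated" at 40%.
x_train, y_train = simulate(14000, prevalence=0.5)
x_test, y_test = simulate(6000, prevalence=0.4)

model = LogisticRegression().fit(x_train, y_train)
p_test = model.predict_proba(x_test)[:, 1]   # predictions to be fed into TCE/ECE/ACE/MCE
```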
Table 2 shows the values of four calibration error metrics applied to the logistic regression model in each scenario. Table 2 demonstrates that all values of ECE and ACE in imbalanced case (ii) can be smaller than--or very close to--values for well-calibrated scenario (a) in well-balanced case (i). For example, the ECE value for case (ii)-(b) was smaller than that for case (i)-(a). In contrast, TCE provides values with a consistent scale in both well-balanced and imbalanced cases. More simulation studies of TCE with different hyperparameters are presented in Appendix C.1.
### Imbalanced UCI Datasets
Next, we compare calibration error metrics using real-world datasets in the regime of severe class imbalance. We use nine UCI datasets that were preprocessed by Lemaitre et al. (2017) as benchmark tasks of imbalanced classification. We also use one additional UCI dataset with a well-balanced prevalence for comparison. For each dataset, 70% of samples are used as training data and 30% of samples are kept as validation data. We train five different algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), gradient boosting (GB), and multi-layer perceptron (MLP). We evaluate the calibration performance of each model by five different calibration error metrics in the following tables. Tables 3 and 4 show results for the imbalanced datasets, _abalone_ and _webpage_ (Dua and Graff, 2017), respectively. Results for all the other datasets are presented in Appendix C.2. In Table 3, the best models ranked by TCE and ACE agree with each other, while ECE identifies RF as the best model. It can be observed from the reliability diagram of ECE for both datasets in Appendix C.2 that a large majority of model predictions are contained in a single bin of ECE. In such cases, ECE becomes essentially equivalent to a comparison of global averages of all labels and all model predictions. Table 4 demonstrates a situation where ECE and ACE risk misleading assessments of calibration performance. Several values of ECE and ACE in Table 4 are sufficiently small, by which one may conclude that it is reasonable to use a model with the smallest calibration error. However, the values of TCE indicate that no model has a good calibration performance. In fact, relatively large statistical deviations between model predictions and empirical probabilities can be observed from the test-based reliability diagram for the webpage dataset in Appendix C.2.
## 5 Related Work
Several calibration error metrics have been proposed, including the aforementioned ECE. MCE is a widely used variant of ECE that replaces the summation over \(b=1,\ldots,B\) in (4) with the supremum over \(b=1,\ldots,B\). [12] introduce a more general \(l_{p}\) calibration error, which includes both ECE and MCE. ACE replaces the equispaced bins in ECE with bins designed based on quantiles of model predictions, which prevents high concentration of data in one bin when data is imbalanced [13]. These calibration error metrics can be extended to multi-class classification [12]. Other than calibration error, scoring functions [11] are commonly used measurements to evaluate a probabilistic classifier. [12] reported a limitation of the Brier score for imbalanced classification, and proposed the _stratified_ Brier score that aggregates multiple Brier scores.
This paper designed a new calibration error metric based on a statistical test. While statistical tests have been used in the context of calibration, we are the first to incorporate a statistical test into the design of a calibration error metric. [12] performed a statistical test on whether ECE computed for synthetic data generated from predictive probabilities is significantly different from ECE computed for actual data. Similarly, [12] proposed a statistical test of the value of their calibration error metric built on kernel methods. In contrast to existing works which considered a test for final values of calibration error metrics, our approach incorporates a test into the metric itself.
While the use of binning is vital in the vast majority of calibration metrics, there are a few works on the _binning-free_ design of calibration error metrics. The main idea is to use a cumulative distribution function (CDF) of predictive probabilities, which can be estimated without binning, and evaluate how significantly it differs from an ideal CDF that occurs if the predictive probabilities are all well-calibrated. For example, [12] and Arrieta-Ibarra et al. [13] considered the Kolmogorov-Smirnov test for the empirical CDF, where [12] further proposed a spline interpolation to obtain a continuous approximation of the CDF. An approach proposed by [10] can also be regarded as binning-free. It uses a continuous CDF of the beta distribution produced by their calibration method, mentioned below, rather than the empirical CDF.
_Calibration methods_ refer to algorithms used to improve the calibration performance of a model \(P_{\theta}\). Usually, they learn some 'post-hoc' function \(\varphi:[0,1]\rightarrow[0,1]\) to be applied to each model prediction so that the new prediction \(\varphi(P_{\theta}(x))\) is better calibrated. Various calibration algorithms have been proposed in parallel to the development of calibration error metrics. Platt scaling uses a logistic function for the post-hoc function \(\varphi\) [13]. Alternatively, [14, 15] proposed to use a beta distribution in binary classification and a Dirichlet distribution in multi-class classification. Isotonic regression is a powerful non-parametric approach to find a monotonically increasing function \(\varphi\) that minimises the Brier score [1]. Finally, Bayesian Binning into Quantiles by [12] extends a classical histogram-based calibration [10] to an ensemble of histogram-based calibrations based on Bayesian model averaging.
## 6 Conclusion
In this paper, we proposed a new calibration error metric TCE that incorporates a novel loss function based on a statistical test. TCE has (i) a clear interpretation as a percentage of model predictions determined to deviate significantly from estimated empirical probabilities, (ii) a consistent scale that is robust to class imbalance, and (iii) an informative visual representation that facilitates a better understanding of calibration performance of probabilistic classifiers. We further introduced an optimality criterion of bins associated with a minimal estimation error of the empirical probabilities and a new algorithm to compute optimal bins approximately under the constraint of the size of each bin.
Our proposal opens up room for new research directions in the context of calibration. This paper focuses on the methodological development of TCE. There are various directions to investigate in terms of theoretical properties of TCE. These include the convergence properties of TCE in the limit of data size \(N\), understanding the minimum number of data points that should be contained in each subset \(\mathcal{D}_{b}\), and a rigorous theoretical analysis of PAVA-BC. By continuing to investigate these areas, we can refine and expand our understanding of the capabilities of TCE.
### Acknowledgements
The authors would like to thank Abbas Zaidi, Michael Gill, and Will Bullock for their useful feedback on early work of this paper. TM is supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **TCE** & **ECE** & **ACE** & **MCE** & **MCE(Q)** \\ \hline AlexNet & 42.74\% & 0.0070 & 0.0070 & 0.1496 & 0.0528 \\ VGG19 & 23.57\% & 0.0028 & 0.0028 & 0.2148 & 0.0247 \\ Res18 & 29.93\% & 0.0042 & 0.0042 & 0.2368 & 0.0350 \\ Res50 & 24.60\% & 0.0020 & 0.0018 & 0.1911 & 0.0152 \\ Res152 & 16.09\% & 0.0012 & 0.0013 & 0.1882 & 0.0102 \\ \hline
**Time (s)** & 71.78 & 0.4873 & 0.4221 & 0.0046 & 0.0063 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of five calibration error metrics for five different deep learning models on ImageNet1000 data.
# Visualizing Quantum Circuit Probability -- estimating computational action for quantum program synthesis

Bao Gia Bach, Akash Kundu, Tamal Acharya, Aritra Sarkar
###### Abstract
This research applies concepts from algorithmic probability to Boolean and quantum combinatorial logic circuits. A tutorial-style introduction to states and various notions of the complexity of states are presented. Thereafter, the probability of states in the circuit model of computation is defined. Classical and quantum gate sets are compared to select some characteristic sets. The reachability and expressibility in a space-time-bounded setting for these gate sets are enumerated and visualized. These results are studied in terms of computational resources, universality and quantum behavior. The article suggests how applications like geometric quantum machine learning, novel quantum algorithm synthesis and quantum artificial general intelligence can benefit by studying circuit probabilities.
\({}^{1}\)Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology, Viet Nam
\({}^{2}\)Joint Doctoral School, Silesian University of Technology, Gliwice, Poland
\({}^{3}\)Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Gliwice, Poland
\({}^{4}\)Independent Researcher, Bengaluru, India
\({}^{5}\)Quantum Machine Learning research group, Quantum Computing division, QuTech, The Netherlands
\({}^{6}\)Quantum Intelligence research team, Department of Quantum & Computer Engineering, Delft University of Technology, The Netherlands
\({}^{\underline{\text{s}}\underline{\text{s}}}\)[email protected]
**Keywords:** gate-based quantum computing, algorithmic probability, circuit complexity, reachability, expressibility
## 1 Introduction
Quantum computing has entered a technological readiness level where quantum processor platforms, albeit limited, are becoming accessible for experimentation. This rapid progress has encouraged researchers to study various real-world industrial/scientific applications [1] using quantum algorithms. The logical formulations of these algorithms are then processed by a quantum computing stack [2] of system abstractions into low-level quantum circuits for a gate-based quantum computing device. The so-called NISQ (noisy intermediate-scale quantum) era [3, 4] characterizes the limitations of current quantum processors in coherence time, gate errors, and qubit connectivity. This has led to explorations from the other end [5], in devising design strategies and finding use cases for this limited computing power to achieve a computational advantage. To better utilize these limited devices, it is imperative to understand the relations between quantum logic and physical resources. This motivates the research presented in this article.
Quantum computation lies at the intersection of quantum physics and computer science. It has allowed a rich exchange of concepts between these two fields. Specific to the interests of this research, (i) physical laws provide fundamental bounds to computation, while (ii) computation provides an information-theoretic explanation of many physical phenomena. The former was first explored in the context of thermodynamic limits [6], leading to the development of reversible computation [7, 8], and eventually to define the limits of quantum computation [9, 10]. Efforts of the latter come under the purview of digital physics. Some seminal contributions include cellular automaton [11] and constructor theory [12], informational axioms [13, 14] of quantum mechanics, a principle of stationary action for computing [15, 16], and tensor networks, among others. The work presented in this research is foundational and would have applications for both directions of this synergy. Our focus is on an empirical demonstration of the consequences of existing theoretical ideas for quantum computing. Specifically, we transport concepts from algorithmic information theory [17] (a sub-field of theoretical computer science and artificial intelligence [18]) to gate-based quantum computation [19].
In this work, we consider an enumeration of the space of quantum circuits (as QASM codes). This is a very small sample of the uncountable infinite quantum processes that can be defined on the Hilbert space of a given dimension. The subset of processes is based on (i) the native gate set (of a quantum processor), (ii) the maximum circuit width (total number of qubits) and, (iii) the bounds on the circuit depth (based on the decoherence time). We investigate how these circuits map a space of classical inputs to classical outputs (via
Z-axis measurements). Our formalism assumes the QASM/circuits are encoded using a discrete universal gate set. The space of quantum programs constructed in such a manner is enumerable and is thus formally countably infinite. However, in most practical implementations (with finite width and circuit depth), the set of meaningful computations is finite. Even for universal gate sets with arbitrary rotation angles, a finite number of control signal configurations in practical quantum computer implementations effectively discretizes the set of native gates. While any arbitrary unitary can be decomposed to arbitrary precision given a universal gate set (e.g., \(\{Rx(\theta_{x}),Ry(\theta_{y}),CX\}\)), given resource bounds (e.g., in lines of QASM codes before the system decoheres) the space of programs cannot map to any quantum process. This limits us from exploring functional physical processes that can be efficiently described in terms of quantum gates to be simulated on a quantum computer.
Since the first quantum algorithms were formulated in the 1990s, the discovery of new algorithms [20] has progressed steadily. However, quantum algorithm design involves quantum mechanical phenomena (e.g., superposition, entanglement), which is counter-intuitive to human experience. Thus, reasoning in terms of mathematical formalism has been a barrier to entry to develop more advanced quantum logic and is thus a bottleneck for broader adoption of quantum accelerated computing. There have been some proposals to remedy these issues via genetic programming [21] and circuit synthesis [22]. In this work, we carry forward this research direction via the principled approach [23] of algorithmic information theory. A quantum program synthesis framework would require understanding the space of quantum programs and their associated resources, to implement the program search/induction. As discussed in this article, the landscape of resource-bounded quantum circuits and their corresponding classical information processing capability, lay the groundwork towards this.
The rest of the article is organized as follows. In § 2, we describe how states are represented as symbols and transformations over these symbols, and how this affects their statistical and algorithmic complexities. § 3 discusses the subtleties of forming Boolean and quantum circuits from gate sets. In § 4, we present our implementation of the enumeration of state complexities using various gate sets. The results are visualized and analyzed. § 5 concludes the article with a discussion of various applications of this research.
## 2 States and complexities
The states of a system define its observable behavior. These can be encoded in various ways. A common way to encode them is by assigning a symbol for each unique/distinguishable state. Subsequent observations of the same system (or a larger system composed of systems of this size) are denoted by a string of this alphabet. A system with a single state is not very interesting, nothing really happens there. A very simple case of a slightly more interesting system is a coin, with two states - heads and tails. Coins can be fair or biased, can be tossed multiple times, or multiple coins can be tossed together/consecutively/conditionally. The outcome of a series of (or in parallel) coin tosses can be represented by a Boolean string. Given an ensemble of all Boolean strings of length \(n\), represented as \(\{0,1\}^{\otimes n}\), each string might be equally probable with \(1/2^{n}\). In a physical system, if each output is equally likely, the uniform distribution models that system, e.g. a communication line might need to transfer each encoding with equal probability. According to this, a fair coin tossed 10 times would assign the same probability, and the same element of surprise, to \(1111111011\) as to a specific permutation of 5 1s and 5 0s, e.g. \(0101100101\).
If everything is equally likely and unrelated to each other, it is a very boring grey world. Thankfully it is not so. We perceive structures around us. Why that is this way is hard to answer - but a likely explanation is that our human biological/technological sensing tools are limited. So instead of parsing the underlying randomness in its full spectrum, we perceive an emergent statistical structure. These structures allow us two additional ways of enhancing our representation of states. The complexity of states can be studied from these two perspectives - statistical and algorithmic.
### The statistical emergence of entropy
The first enhancement is based on relaxing the criterion of all states being 'equally likely'. We find that we have apparent favoritism towards \(0101100101\) being a more acceptable result of a series of fair coin tosses. This is based on our ignorance towards the micro-states of the permutations. We focus on the pattern that, for 10-bit strings, the total possible states with 9 1s are \(\binom{10}{9}=10\), while those with 5 1s and 5 0s are \(\binom{10}{5}=252\), similar to entropy in statistical thermodynamics. States with higher entropy are more in number and this flow towards an expected higher entropy state in the universe is what gives us our perception of time. In information theory, given a discrete random variable \(X\), which takes values in the alphabet \(\mathcal{X}\) and is distributed according to \(p:\mathcal{X}\rightarrow[0,1]\), the Shannon entropy [24] of the variable sampled from this ensemble is given by \(H(X)=-\sum_{x\in\mathcal{X}}p(x)\log p(x)\). This quantity denotes the average level of statistical information, or the surprise/uncertainty inherent to the variable's possible outcomes, and is the maximum for a uniform distribution. The way to optimally encode a biased set of percepts as information is the basis of code words like Huffman coding. The encoding is designed
to tune to the bit lengths of concepts by making the most used concepts more economical. To balance it, less used concepts become more costly than their native length. E.g. the probabilities \(p(00)=0.4\), \(p(01)=0.05\), \(p(10)=0.2\), \(p(11)=0.35\) are best encoded as \(00\rightarrow 0\), \(01\rightarrow 111\), \(10\rightarrow 110\), \(11\rightarrow 10\). Note that this code is better than the original only as long as the biased probability distribution is maintained (which in turn might be an artifact of sensing emergent macro-states). If instead all strings are equally probable, these coding schemes are more costly.
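The numbers in this example can be checked directly (our sketch): the expected code length of the quoted code words lies between the Shannon entropy of the distribution and the 2 bits of a fixed-length encoding.

```python
import math

p = {"00": 0.4, "01": 0.05, "10": 0.2, "11": 0.35}
code = {"00": "0", "01": "111", "10": "110", "11": "10"}  # the code words quoted above

entropy = -sum(q * math.log2(q) for q in p.values())        # about 1.74 bits/symbol
expected_len = sum(p[s] * len(code[s]) for s in p)          # 1.85 bits/symbol < 2 bits fixed-length
print(round(entropy, 3), expected_len)
```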
We do something similar with semantics in languages, e.g. instead of having new words for every single huge animal with a large trunk with a specific set of \(x,y,z,..\) features that we fail to distinguish, we call all of them 'an elephant', while those that we can distinguish, e.g. your pet cat, we give special names. Your friend might not be able to distinguish your cat from another, or to the eyes of a trained mahout, every elephant is uniquely identifiable - leading to the subjectivity of language. Similarly, using the scientific names of species or the full genetic code of an individual would be tedious for everyday use, however it is useful for biological classification or medical treatment. Thus, much the same way as Huffman coding, words in languages arise due to different ways of ignoring details and focusing on specific emergent semantics. The compression provided by a language forms the basis of comprehension, much the same way macro states (micro states preserving certain symmetries/features) lead to emergent physical laws.
### The algorithmic emergence of universality
The second enhancement is based on relaxing the criterion of all states being 'unrelated'. While a unique encoding for all percepts under consideration is good, we can compress and comprehend better if we can find relations among these symbols. For example, we can relate bit strings by their inverses, or arrange integers consecutively on a number line. Assigning symbols to relations is merely an attempt to minimize the number of code words itself by ignoring symbols for states that can now be described by some specific syntactic composite of symbols of other states and relations. To map to every percept, the total length of the encodings using these codes is not necessarily less than the original binary or Huffman coding. Thus, again, it is subjective when these relations will be beneficial instead of adding extra complexity to the encoding. The goal is not about being the most resource-efficient way for the full spectrum of percepts, but rather using a smaller set of symbols for a biased distribution or subset of percepts. This trade-off between generality and efficiency is the reason esoteric languages like MetaGolfScript [25] are banned from code golf contests, or the RISC and CISC architectures exist in tandem.
However, surprisingly, we find that some relations are so ubiquitous that they can map to all percepts (often even from an infinite set) with just the encoding of the relation and that of a starting percept. For example, the successor function (i.e. add 1) is represented by 1 and the number 0 is represented by 0. With this, any natural number can be represented by nesting the successor function, e.g., 1110 is 3. While the successor function is universal over the set of natural numbers given 0, the multiplication operation with the set of all prime numbers can also span the natural numbers. Such a set of inputs and transformations is called universal for the target output set. Some of these symbols (of states and relations) are so powerful that they can potentially represent an infinite set of states with a finite set of symbols and some compositional rules, e.g. any d-base numeral system with d symbols and a positional notation can represent any integer.
From an engineering point of view, having such a universal set of states and transformations helps in taming an infinitude of possibility with a finite number of these building blocks, e.g., a keyboard is made of letters instead of a key for every word in the dictionary. There are two subtleties to this enhancement. (i) Firstly, since we now have a finite number of block types to construct an infinite number of states, the length of the description using these blocks can potentially be infinite. If we put a bound on the number of blocks we can use, by combinatorial arguments, it bounds the number of states we can describe. States that require longer descriptions are not expressible. (ii) Secondly, while the original states represented an observable behavior, these new pseudo-states and transformations that we introduced to reduce our symbol set need not necessarily have intuitive standalone meaning. For some, it may; for example, the bit-flip operator can correspond to the action of toggling a switch. For others, it may not; for example, the letters of an alphabet do not have semantic meaning by themselves.
We are now equipped with a symbol set consisting of (i) some set of observed states and pseudo-states and (ii) a rich (universal) set of transforms to describe other observed percepts. As a digression, it is crucial to note that transformations can be represented as states in a higher dimension via channel-state duality [26], representing dynamics as statics. Now, can we choose an optimal encoding scheme subject to the probabilities of the various symbols being used to describe the physical phenomena (from the set of all macroscopic percepts)? The problem is that we do not know beforehand the ways in which the blocks will be used, i.e. the distribution of the ensemble. Imagine operating a Lego factory and deciding how many of each block to manufacture for customers' needs. Most often, due to lack of any other information, the encoding of the base set is chosen as the standard binary encoding (i.e., with the assumption that all blocks and initial percepts will be required with uniform probability), e.g. the ASCII code. This is called the opcode encoding of the instruction set architecture
in computers. In this scenario (of universal computation), it can be useful to study things from the other end, i.e., what will be the resources required to represent a specific percept. Resources are typically of two flavors: (i) the computational cost in terms of cycles (time) and memory (space), and (ii) the length of the description of the percept using the language.
The computational cost is studied in the field of computational complexity. Problems (and thereby, their solutions as a sequence of instructions based on symbols), are classified into different classes [27] based on the scaling behavior of time and space with the size of the problem. Some common ones are polynomial time (P), non-deterministic polynomial time (NP) and bounded-error quantum polynomial time (BQP).
The length of description quantifies the Kolmogorov complexity [28] or algorithmic entropy of the percept. It is defined as \(K_{U}(X)=\min_{p}\{\ell(p):U(p)=x\}\), where \(\ell\) denotes the length of the (prefix-free) program \(p\) on the encoding used by the universal Turing machine \(U\) that outputs \(x\). Though it depends on the choice of the building blocks and their encodings, the dependence is only up to an additive constant term (called the invariance theorem), which is the length of a cross-compiler to another language/automaton. Thus, it is useful to use Kolmogorov complexity to quantify the individual complexity of a string, irrespective of an ensemble. However, finding the exact value is uncomputable. There are many ways to approach it from the upper side (it is upper semi-computable), for example, via compression algorithms, minimum description length and the block decomposition method.
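As a small illustration of such an upper approximation (our sketch, using a general-purpose compressor rather than any specific method from the cited works): the compressed size of a string upper-bounds its algorithmic complexity up to the constants of the compressor.

```python
import os
import zlib

def k_upper_bound(s: bytes) -> int:
    """Crude upper bound on algorithmic complexity: length of the zlib-compressed string."""
    return len(zlib.compress(s, 9))

regular = b"01" * 500          # highly structured 1000-byte string
random_ = os.urandom(1000)     # typical (incompressible) 1000-byte string
print(k_upper_bound(regular), k_upper_bound(random_))
```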
So far we reviewed three different notions of complexity of states:
1. Statistical complexity: Shannon entropy on an ensemble of states (given its probability distribution)
2. Computational complexity: Space-time scaling behavior of a program to generate the state (given a language)
3. Algorithmic complexity: Length of the program to generate the state (given a language)
In this research, we are instead interested in the circuit complexity of a state. Circuit complexity is related to algorithmic complexity [29], which in turn is related to statistical [30] and computational complexities [31]. Computational complexities typically deal with asymptotic scaling behavior and provide lower bounds. Though families of circuits have a specific complexity class hierarchy (e.g., \(AC^{i}\), \(TC^{i}\), \(NC^{i}\)), it is not of much interest for this research. We will focus on circuits with bounded size (in both space and time). Similarly, the expected Kolmogorov complexity has been shown to correspond to the Shannon entropy [30], though this relation is not of immediate importance to this work. Kolmogorov complexity can be shown to be very similar to circuit complexity under certain considerations [29]. Another similar relation is that truth tables of functions with small circuit complexity have small Kolmogorov complexity. Counting arguments relating circuit, algorithmic and statistical complexities have been suggested in [15, 16] in terms of Lagrangian action. Our research is another step in this rather niche field of understanding observed states via different perspectives.
It is important to note that most research on algorithmic information theory has been in the context of universal automata, e.g. Turing machines, lambda calculus, cellular automata, etc. The size of the description depends on how expressive the symbols are for the transformations. What we described so far, i.e., transformations as a relation between two states, is typically the case in the language of circuits. Programs written in more abstract logical frameworks allow more powerful primitives, like universal and existential quantifiers in first-order or higher-order logic. Typically, a universal computation model demands a recursively enumerable language. In the Chomsky hierarchy, Turing machines are more powerful than linear-bounded automata, which are in turn more powerful than push-down automata and, in turn, finite-state machines (FSM). See [32] for a comparison of these for both classical and quantum computing models. However, for less powerful automata and language models, it is possible to derive corresponding notions [33] of algorithmic complexity. This is important as programs written in Turing-complete languages eventually get translated via the layers of the computing stack and get executed by logic circuits. These logic circuits are however a combination of sequential (allowing memory cells) and combinatorial logic, and can be used to simulate an FSM. Purely combinatorial logic (not to be confused with combinatory logic, which is universal) is of even lower power than an FSM: it is loopless and stateless, and thereby is a direct representation of the output state based on the input. It is important to note that program execution is typically clocked in both classical and quantum processors to prevent race conditions, even if the circuits are purely composed of combinatorial logic elements. Thus, resources of time and space can be defined in this setting even without tracking and accessing intermediate states. By borrowing notions from algorithmic information theory (as defined on functional programs), in this work, we study the effect of circuit complexity of Boolean/quantum combinatorial logic on state complexity.
## 3 Landscape of circuits
With this background of the measures of complexity, let us now first explore the landscape of Boolean circuits. The quantum circuit model is inspired by and is a generalization of the Boolean circuit model, so, it would be natural to start with a classical model and generalize it to the corresponding quantum formulation.
### Circuit probability of states
Algorithmic information is typically studied for classical functions (e.g. for \(\lambda\)-calculus) rather than for combinatorial Boolean logic circuits. We intend to study the latter. Let us consider the space of n-bit strings. Given a set of gates that form a Boolean circuit, we find that all outputs are not equally likely. This is because, while each {circuit, input} pair has only one output, there are many ways of generating the same outputs from multiple circuits. In fact, we can make our circuits arbitrarily big by dummy operations like identity or two consecutive NOT-gates.
Since there are many programs, to compare two strings, instead of finding the shortest circuit to output the string, we are interested in the probability of each circuit being generated. This is similar to the notion of the algorithmic probability [34] of the string, which is defined as \(M(X)=\sum_{p\,:\,U(p)=x}2^{-\ell(p)}\) when the prefix-free programs \(p\) on the universal automaton \(U\) are encoded in binary. The largest contribution to this term comes from the shortest program (i.e. the Kolmogorov complexity). This connection between complexity and probability can be expressed as: a string which has a short program has many alternate ways of generating it and is thus more probable to get generated by a universal automaton programmed randomly. Note that assigning a uniform random distribution of programs for generating the algorithmic probability, or the universal distribution over the entire set of strings, is not fully justified. In § 4.5 of [35] one of the authors proposed a more physically motivated 'nested algorithmic probabilities' that converges to constructors. In this work, we will start with a uniform distribution but will later generalize the implementation to allow any prior distribution. To distinguish the usual notion of algorithmic probability of a string on a universal automaton, \(M_{U}(X)\), from our case of the probability of an output string based on the distribution of equivalent circuits with varied space-time complexities, we denote our formulation of algorithmic probability as \(M_{circ}(X)\).
In the original setting, \(M_{U}(X)\) is uncomputable, as it requires running each possible program, some of which do not halt. However, it is lower semi-computable, and can be approximated given bounds on run-time. One proposal to approximate it is given in [36], by running every Turing machine in a particular enumeration and directly using the output distribution of halting Turing machines up to the bounded run-time. In the case of Boolean/quantum circuits, the run-time bounds are predetermined, and there is no halting problem. Thus, \(M_{circ}(s)\) for a state \(s\) can be approximated by the ratio of the number of circuits that generate the target state from the initial state \(s_{0}\) to the total number of circuits, as:
\[M_{circ}(s)\approx\frac{|C\in\{\mathtt{gateset},\mathtt{maxspace},\mathtt{ maxtime}\},s\gets C(s_{0})|}{|C\in\{\mathtt{gateset},\mathtt{maxspace},\mathtt{ maxtime}\}|} \tag{1}\]
This can be used to estimate the quantum circuit complexity using the coding theorem by extending the relation [37] between probability and complexity to circuits as, \(K_{circ}(s)=-\log M_{circ}(s)\).
### Boolean gate sets
In the Boolean circuit form of algorithmic probability, we will consider strings of n-bits, and the probabilities of each bit string getting generated from all possible Boolean circuits on all possible inputs. The main restriction (i.e., the output not being also uniformly random) comes from the fact that we do not have primitives (1-time step gates) for all possible Boolean functions. We typically use a universal gate set that can compile any other Boolean functions down to a larger number of gates from that set. Thus in our operational implementation, we need to choose a gate set for the empirical enumeration of the circuits.
Given \(v\) input variables with a symbol set of size \(s\), there are \(s^{v}\) possible combinations of these inputs. If there is a single output variable from the symbol set of size \(d\), the total number of possible functions [38] is \(d^{s^{v}}\).
* For 1-input Boolean algebra, i.e. when \(v=1\), \(s=2\), \(d=2\), the total number of functions is \(f=2^{2^{1}}=4\). These functions are \(\{0,1,A,\overline{A}\}\).
* For 2-input Boolean algebra, i.e. when \(v=2\), \(s=2\), \(d=2\), the total number of functions is \(f=2^{2^{2}}=16\). These are denoted by \(\{0,\,1,\,A,\,B,\,\overline{A},\,\overline{B},\,A\bullet B,\,A+B,\,\overline{A}\bullet\overline{B},\,\overline{A}+\overline{B},\,A\oplus B,\,\overline{A\oplus B},\,A\bullet\overline{B},\,\overline{A}\bullet B,\,A+\overline{B},\,\overline{A}+B\}\).
A functionally complete set of logical connectives or Boolean operators can be used to express all possible truth tables by combining members of the set into a Boolean expression. These sets can also express any Boolean SAT or 3-SAT formula. Some examples of such universal [39] sets are {NAND}, {NOR}, {NOT, AND}, {NOT, OR}. These gate sets are related to each other through the following equivalences, which are verified in the short sketch after this list:
* NOT(A) = NAND(A,A) = NOR(A,A)
* OR(A,B) = NAND(NAND(A,A),NAND(B,B)) = NOR(NOR(A,B),NOR(A,B)) = NOT(AND(NOT(A),NOT(B)))
* AND(A,B) = NAND(NAND(A,B),NAND(A,B)) = NOR(NOR(A,A),NOR(B,B)) = NOT(OR(NOT(A),NOT(B)))
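The sketch below (ours) checks each of the equivalences above over all truth-table rows.

```python
from itertools import product

NOT  = lambda a: 1 - a
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)

for a, b in product((0, 1), repeat=2):
    assert NOT(a) == NAND(a, a) == NOR(a, a)
    assert OR(a, b) == NAND(NAND(a, a), NAND(b, b)) \
                    == NOR(NOR(a, b), NOR(a, b)) == NOT(AND(NOT(a), NOT(b)))
    assert AND(a, b) == NAND(NAND(a, b), NAND(a, b)) \
                     == NOR(NOR(a, a), NOR(b, b)) == NOT(OR(NOT(a), NOT(b)))
print("all equivalences hold on every truth-table row")
```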
### Quantum gate sets
The classical formulation that maps the landscape of Boolean functions can now be generalized to include quantum gates and states. There is a 3-input single gate in quantum logic that is universal for classical computing,
the CCX gate (also called the Toffoli gate). It can simulate the NAND gate via CCX(A,B,1) = (A,B,NAND(A,B)). Classical computation is in general an irreversible process, thus the inputs cannot be recovered from the outputs. Quantum logic is based on unitary evolution and thus is reversible. Additionally, quantum computations allow quantum superposition and entanglement, which are not implied in reversible computation. The CCX gate can simulate the entire reversible computation by simulating a Fanout gate (or Copy gate) as CCX(A,1,0) = (A,1,A). Thus, both {NAND, Fanout} and {CCX} form universal gate sets for reversible computation. The CSWAP gate (also called the Fredkin gate) is another universal gate for reversible logic.
The generalization of reversible to quantum logic needs only one extra gate, the H gate (Hadamard). In principle, the real gate set composed of {CCX, H} is computationally universal [40]. However, it needs ancilla qubits to encode the real normalization factors and complex algebra to decompose [41] to arbitrary quantum unitary gates for a strong sense of universality. Also, it is important that the effect of the NOT gate (or, the X gate in quantum) cannot be simulated without assuming the availability of both \(|0\rangle\) and \(|1\rangle\) states. Since our enumeration of quantum programs will start will the qubits initialized to the all-zero state, we need to augment the gate set to {X, H, CCX} to reach all binary strings as output.
The principle of algorithmic probability should also hold in the quantum setting, i.e., a uniform distribution of all possible functions and all possible input states does not imply a uniform distribution of all possible output states on the Hilbert space. Nielsen's geometric quantum computing (GQC) approach [42] shows that finding optimal quantum circuits is essentially equivalent to finding the shortest path between two points in a certain curved Riemannian geometry. However, it is not possible to empirically visualize this, as we need to consider all possible input states and all possible unitary maps. Studying the landscape of program synthesis requires discretizing this space for the native gate set of the target quantum processor (or the quantum compiler). Also, the number of possible functions or processes in a quantum environment (even for a single qubit) is uncountably infinite. Thus, choosing a universal gate set gets more pronounced in the setting of quantum control.
In a way this is easy. It has been shown that if one can apply some Hamiltonian repeatedly to a few variables at a time one can in general effect any desired unitary time evolution on an arbitrarily large number of variables. As a result, almost any quantum logic gate with two or more inputs is computationally universal [43] in a way that copies of the gate can be wired together to effect any desired logic circuit, and to perform any desired unitary transformation on a set of quantum variables. We call this richer counterpart to its classical cousin [11], the ubiquity of quantum universality (UQU).
How many types of quantum gates in the gate set do we need to represent this richer set of quantum unitary operators, and how many of them do we need? Well, if we are provided with a parametric family of quantum operators, only a few types of such operators are sufficient. The quantum Shannon decomposition (QSD) [44] provides a theoretical lower bound and asymptotic optimality for an exact decomposition of quantum unitaries using the parametric family of gates {RY(\(\theta\)), RZ(\(\theta\)), CX}. It can be recursively applied to larger quantum circuits with the CX count scaling of \(O(4^{n})\).
GQC, UQU and QSD rely on an arbitrarily expressive set of gates. This is not very practical as quantum devices are manufactured and controlled to perform operations from a predefined dictionary. There is a subtle difference in using a finite set of operators with respect to the classical case. Instead of the classical setting of all \(d^{s^{v}}\) functions being represented perfectly by a sequence of gates from the universal gate set \(G\), in the quantum setting, the aim is to approximate all possible unitary operations with a sequence of gates from \(G\) with a bound on the approximation quality. This can be understood by thinking of representing all real numbers using digits of a specific numeral base. Of course there is a trade-off to taming this infinite space with a finite number of building blocks. Quantum Kolmogorov complexity (QKC) [45] is a measure of the information required to describe a quantum state. For any definition of quantum Kolmogorov complexity measuring the number of classical bits required to describe a pure quantum state, there exists a pure n-qubit state which requires exponentially many bits of description.
Nevertheless, the Solovay-Kitaev theorem (SKT) [46] allows an efficient classical algorithm for compiling an arbitrary single-qubit gate into a sequence of gates from a fixed and finite set. The algorithm, using a universal gate set [47] (e.g., {H, T, CX}), runs in time polylogarithmic in \(1/\epsilon\), and produces as output a sequence of polylogarithmically many quantum gates which approximates the desired quantum gate to an accuracy within \(\epsilon>0\). It can be generalized to apply to multi-qubit gates and to gates from \(SU(d)\).
In retrospect, there is no foundational reason known why GQC, UQU, QSD, QKC and SKT play out in Nature in this manner. Yet, eventually, this allows us to sufficiently parse and explore the vast Hilbert space using an arbitrary choice of a small set of building blocks. In the next section, we will present a formal formulation of our enumeration procedure, the results and their analysis.
## 4 Implementation
We first describe the implementation of the classical case. The problem is formulated as follows: given (i) \(n\) bits, \(b_{i}\in\{b_{0},b_{1},\ldots,b_{n-1}\}=B\), (ii) an initial state for each bit \(s_{0}(b_{i})\) (typically set to 0), (iii) a set of gates
\(g\in G\) (not necessarily universal), and, (iv) number of lines of QASM code \(L\); find the distribution of final states given each gate is applied with probability \(\frac{1}{|G|}\) at each \(l\in L\).
In the quantum case, the gate set is now defined as a set of unitary gates, while the initial state over a set of \(n\) qubits \(Q\) is defined as \(s_{0}(Q):=\sum_{j\in\{0,\ldots,2^{n}-1\}}\alpha_{j}\,|j\rangle\), such that \(\alpha_{j}\in\mathbb{C}\) and \(|j\rangle\) are eigenstates of the \(2^{n}\)-dimensional Hilbert space in the Z-basis.
### Gate sets
We consider the following gate sets:
1. {CCX} - This set is universal for classical and reversible logic, provided both the initial states \(|0\rangle\) and \(|1\rangle\) are available. It is not practical to provide all initial states without knowing how to create one from the other. Since all gate-based quantum algorithms start from the all-\(|0\rangle\) state and prepare the required initial state via gates, we will not consider this set for our enumeration.
2. {X,CCX} - This set is universal for classical and reversible logic by starting from the all-\(|0\rangle\) state.
3. {X,H,CCX} - This set is weakly universal under encoding and ancilla assumptions for quantum logic. The encoding, while universal, might not preserve the computational resource complexity benefits of quantum (i.e., in the same way classical computation can also encode all quantum computation using {NAND,Fanout}). Thus, we do not consider this set for our enumeration of the quantum case.
4. {H,S,CX} - The Clifford group is useful for quantum error correction. However, it is non-universal and can be efficiently simulated on classical logic [48]. The space of transforms on this set encodes error-correction codes and is thus useful to map.
5. {H,T} - This set is universal for single qubit quantum logic. However, we will consider the generalization to multi-qubit using an additional two-qubit gate in the set in the following case.
6. {H,T,CX} - This is universal for quantum logic.
7. {P(pi/4),RX(pi/2),CX} - The IBM native gate set is used to construct this gate set. The following relations establish the relation with the previous universal gate set: T = P(pi/4), X = RX(pi/2), and, H = \(e^{i\pi/2}\)XRz(pi/2)X = \(e^{i\pi/2}\)XTTTX. We will consider additional constraints like device connectivity to apply this technique to real quantum processors.
Thus, in our experiments, we map the algorithmic probability of the final states for the following gate sets: (i) {X,CCX}, (ii) {H,S,CX}, (iii) {H,T,CX}, and (iv) {P(pi/4),RX(pi/2),CX}.
### Metrics for evaluation
We are interested in evaluating these metrics for each of the gate sets:
* Expressibility: refers to the extent to which the Hilbert space can be covered by using an unbounded number of gates. It is not weighted by the probability as it is a characteristic of the encoding power of the gate set. We assign a 1 to a final state if it can be expressed by starting from the initial state and applying a sequence of gates from the gate set.
* Reachability: refers to a bounded form of expressibility. The length of the sequence of gates must be equal to or shorter than the specified bound. This corresponds to a physical implementation rather than the power of the gate set, and characterizes the computational complexity and thereby the decoherence time of the processor.
The expressibility is mapped primarily to understand if the reachability bound is under/over-specified. As the value of the circuit length \(L\) is gradually increased, any universal gate set will populate the full landscape of states in the expressibility criterion, and thereby remain without variation. It is at this limit, i.e. at the first instance of full expressibility, that the reachability is best understood. The other instance we are interested in is the infinite limit of \(L\), and its effect on the reachability distribution.
These experimental procedures and the comparative study of the results are presented in the following sections.
### Enumeration procedure
We set up the experiment by enumerating all possible QASM programs. For each gate \(g_{i}\in G\) in the gate set, the target number of qubits \(q(g_{i})\) is known. Thereafter, given \(n\) qubits, all possible permutations \(\mathcal{P}\) of applying the gate are enumerated in a list, i.e. \(\mathcal{P}^{n}_{q(g_{i})}\). The total possible options for each line of QASM is \(\sum_{G}\mathcal{P}^{n}_{q(g_{i})}\), and thus, the total possible QASM programs for \(L\) lines of code length are:
\[\bigg{[}\sum_{G}\mathcal{P}^{n}_{q(g_{i})}\bigg{]}^{L} \tag{2}\]
Our implementation is available at github.com/Advanced-Research-Centre/QCircScap
As an example, consider the gate set \(G=\{\texttt{X},\texttt{CCX}\}\), for \(n=4\) and \(L=3\). \(q(\texttt{X})=1\) and \(q(\texttt{CCX})=3\). Thus, \(\mathcal{P}^{4}_{q(\texttt{X})}=4\), and \(\mathcal{P}^{4}_{q(\texttt{CCX})}=24\). Note that even if exchanging the assignment of the two controls of the Toffoli gate has the same effect, this is a symmetry property of this gate and not in general true for 3-qubit unitaries. Thus, the description numbers (program ids) for these cases are treated as different computational paths. It can be appreciated that these two options of Toffoli gates would behave very differently in the presence of noise characteristics of individual qubits as well as other control constraints. The total options for each line of QASM is 28, and thus for length 3, the total number of programs is \(28^{3}=21952\). This is already a large number of quantum circuits to be simulated, for a small case, and gives a preview of how large the space of programs is.
By applying all possible cases, we obtain an array of size \(2^{n}\) that represents the available number of transitions from a specific state to another. The measurement basis (here, considered to be the default Z-basis) is crucial for this research. If we consider all possible initial states of bit-strings (Z-basis state preparations) of size \(n\), we obtain a \(2^{n}\times 2^{n}\) matrix. This exploration of other initial states helps us to understand the asymmetry of gates over bit values (e.g., a generalization of Toffoli gates with inverted control qubits is of 4 types: \(\texttt{CCX},\;\texttt{C}\overline{\texttt{C}}\texttt{X},\;\overline{\texttt{C}}\texttt{CX},\;\overline{\texttt{C}}\,\overline{\texttt{C}}\texttt{X}\)).
In the classical scenario (e.g. for \(\{\texttt{X},\texttt{CCX}\}\)), this corresponds to the statistics of the number of computational paths between these two states using arrangements of gates from the set, conforming to a specified length. For the quantum case, the statistics correspond to the sum of probabilities of the computational paths collapsing on measurement to the target state. Dividing the matrix by the total number of programs gives us the fixed-length algorithmic probability of the state on each row, conditioned on the initial state. This normalized \(2^{n}\times 2^{n}\) matrix is the reachability landscape. All non-zero values correspond to the states that are reachable by at least one route (i.e., at least one program exists to transform to that state). This gives us the Boolean \(2^{n}\times 2^{n}\) expressibility matrix.
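A minimal sketch (ours; the paper's own code is in the repository linked above) of this classical enumeration for \(\{\texttt{X},\texttt{CCX}\}\) on 4 qubits: the one-step transition-count matrix is composed to depth \(L\) and normalised by the total number of programs of Equation 2, giving the fixed-length \(M_{circ}\) of Equation 1 and its coding-theorem complexity estimate.

```python
import numpy as np
from itertools import permutations

def one_step_counts(n):
    """T[s, t] = number of single-gate placements from {X, CCX} mapping basis state s to t."""
    ops = [("X", (q,)) for q in range(n)]
    ops += [("CCX", p) for p in permutations(range(n), 3)]   # ordered (control, control, target)
    dim = 2 ** n
    T = np.zeros((dim, dim), dtype=np.int64)
    for s in range(dim):
        for name, qs in ops:
            if name == "X":
                t = s ^ (1 << qs[0])
            else:
                c1, c2, tgt = qs
                t = s ^ (1 << tgt) if (s >> c1) & 1 and (s >> c2) & 1 else s
            T[s, t] += 1
    return T, len(ops)

n, L = 4, 3
T, n_ops = one_step_counts(n)                     # 28 placements per time step
paths = np.linalg.matrix_power(T, L)              # computational paths of length exactly L
M_circ = paths / float(n_ops ** L)                # Equation 1: rows conditioned on the initial state
with np.errstate(divide="ignore"):
    K_circ = -np.log2(M_circ)                     # coding-theorem estimate of circuit complexity
print(n_ops ** L)                                 # 28**3 = 21952 programs, as in the example above
```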
### Results
To start our enumeration, we first plot the growth of the number of programs (i.e., Equation 2) with qubit count and circuit depth for various gate sets. We note that the trend is independent of the description of the gates in the gate set. The only information that matters is how many target qubits each gate in the set acts on. This gives us two classes among our chosen gate sets: (i) with one 1-qubit and one 3-qubit gate, (ii) with two 1-qubit and one 2-qubit gate. The result is plotted in Figure 1. We find that the permutations due to a 3-qubit gate grow much faster than for the other class.
Figure 1: Growth of the number of programs with qubit count and circuit depth for two types of gate sets: (i) \([1,3]\) qubits: \(\{\texttt{X},\texttt{CCX}\}\), (ii) \([1,1,2]\) qubits: \(\{\texttt{H},\texttt{S},\texttt{CX}\}\), \(\{\texttt{H},\texttt{T},\texttt{CX}\}\), \(\{\texttt{P}(\pi/4),\texttt{RX}(\pi/2),\texttt{CX}\}\)
The following figures visualize the expressibility (top row) and reachability (bottom row) for each gate set on 4 qubits with increasing depth (from 0 to 4 operations). The gate sets we consider are \(\{\mathtt{X},\mathtt{CCX}\}\) (Figure 2), \(\{\mathtt{H},\mathtt{S},\mathtt{CX}\}\) (Figure 3), \(\{\mathtt{H},\mathtt{T},\mathtt{CX}\}\) (Figure 4) and \(\{\mathtt{P}(\pi/4),\mathtt{RX}(\pi/2),\mathtt{CX}\}\) (Figure 5).
Figure 4: Expressibility and Reachability for gate set \(\{\mathtt{H},\mathtt{T},\mathtt{CX}\}\) on 4 qubits and of circuit depth from 0 to 3.
Figure 5: Expressibility and Reachability for gate set \(\{\mathtt{P}(\pi/4),\mathtt{RX}(\pi/2),\mathtt{CX}\}\) on 4 qubits and of circuit depth from 0 to 3.
### Analysis and discussion
Let us consider maxspace = 4 and the classical gate set, {X, CCX}. There are 28 distinct possibilities for each time step. The reachability statistics for the Z-basis states for time step \(0,1,2,3\) are:
\[R_{0}^{\{\mathtt{X},\mathtt{CCX}\}}=I_{16}\]

i.e., the \(16\times 16\) identity matrix: the only depth-0 program is the empty circuit, which leaves every Z-basis state unchanged. The reachability matrices for time steps 1, 2 and 3 are obtained in the same way by accumulating, over all depth-1, depth-2 and depth-3 programs, the transitions between Z-basis states.
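Because every gate in this set is a deterministic permutation of the basis states, the depth-\(l\) reachability matrix is simply the \(l\)-th matrix power of \(R_{1}\). A quick check, reusing the illustrative `reachability` sketch above (not part of the released implementation):

```python
R1 = reachability(4, [1, 3], 1)    # {X, CCX} on 4 qubits, a single time step
R2 = reachability(4, [1, 3], 2)
print(np.allclose(R2, R1 @ R1))    # True: for a deterministic gate set, R_l = (R_1)^l
```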
Next, we analyze the quantum universal gate set \(\{\mathtt{H},\;\mathtt{T},\;\mathtt{CX}\}\). We need to be careful that now
\[R_{l}^{G}\neq(R_{1}^{G})^{l}\qquad G=\{\mathtt{H},\mathtt{T},\mathtt{CX}\} \tag{5}\]
We find that the entries of \(R_{1}^{\{\mathtt{H},\mathtt{T},\mathtt{CX}\}}\) are no longer restricted to 0 and 1: fractional values such as 0.5 appear, since a single Hadamard splits the measurement probability of a basis state equally over two outcomes.
Using these reachability matrices, we can now calculate \(M_{circ}\) for the two device qubit-connectivity topologies. The circuit probabilities can thereafter be compared with each other, which gives us insight into which device is better in terms of reachability. The results, shown in Figure 9, indicate that for some transformations the L-topology is better, with a higher probability and thereby lower circuit complexity (i.e., the red entries in the middle difference plot), while for others the T-topology is better (i.e., the green entries for which \(M_{circ}\) for T is higher).
The Expressibility plots follow a fractal structure, which is effectively the trace of the subsystem, as:
\[E_{i}^{n}=\begin{bmatrix}A_{i}^{n}&B_{i}^{n}\\ B_{i}^{n}&A_{i}^{n}\end{bmatrix}=\begin{bmatrix}B_{i+1}^{n}&A_{i-1}^{n}\\ A_{i-1}^{n}&B_{i+1}^{n}\end{bmatrix},\quad A_{i}^{n}=E_{i}^{n-1},\quad B_{i}^{n }=E_{i-1}^{n-1} \tag{6}\]
Figure 8: Expressibility and Reachability for gate set \(\{\mathtt{P}(\pi/4),\mathtt{RX}(\pi/2),\mathtt{CX}\}\) on 5 qubits and of circuit depth from 0 to 3 on the IBM L-topology.
Figure 7: Expressibility and Reachability for gate set \(\{\mathtt{P}(\pi/4),\mathtt{RX}(\pi/2),\mathtt{CX}\}\) on 5 qubits and of circuit depth from 0 to 3 on the IBM T-topology.
Another important insight is that the reachability/expressibility analysis is independent of whether the gate set is strongly universal (e.g. {H, T, CX}) or not (e.g. {H, S, CX}). This is due to our focus on the existence of a path that maps between two states, without considering how much we can control and steer the system towards a specific path. To illustrate the point, a gate set of just {H} or {X} can map between any pair of states (weak universality) given sufficient depth, yet it clearly cannot approximate even a classical gate set like {NAND}, let alone all functions. To define universality in this framework, we regard functions as sets of transformations between states (e.g. a sum-of-products expression, a probability mass function, or a unitary matrix); a universal gate set is then one that can approximately represent any such transformation given sufficient depth. We leave further discussion of universal gate sets to our ongoing research as an extension of this work.
## 5 Applications
The application of this research is primarily twofold. On one hand, it is an exploration via enumeration of the characteristics of Hilbert space. A visual map of the structures presented in the results would aid in an intuitive understanding of the capabilities of quantum computation. This is an extension of similar projects in classical logic [51] and algorithmic information [52]. On a more pragmatic footing, this research finds applications for various use cases. We conclude this article with a brief description of how this research connects to these use cases.
### Geometric quantum machine learning
The landscape of quantum processes is of interest for both foundational and practical aspects of quantum information. On the foundational side, quantum complexity theory [53, 54], quantum resource theories [55], categorical quantum mechanics [56] and quantum formal logic [57] rely on the properties of this landscape. The transition to practical aspects is orchestrated by the geometric formulation of quantum computation [42, 58]. Recently, this has formed the basis for quantizing geometric deep learning [59]. These works have been conducted in the formalism of mathematical functions or quantum fields [60]. Circuit complexity is much less studied, and can bridge algorithmic complexity and computational complexity. By providing a perspective on the statistical/algorithmic complexity geometry of quantum logic circuits, our intention is to make these results tangible using quantum computational frameworks in the near future. On the other hand, operational distance measures between two quantum states/processes for specific use cases can be informed by these theoretical techniques.
### Novel quantum algorithm synthesis
Quantum algorithm design currently involves a careful manipulation of quantum information, harnessing quantum mechanical phenomena (e.g. superposition, entanglement, interference) to a computational advantage. This is generally counter-intuitive to human phenomenological experience, thus requiring considerable training and often serendipitous moments [61]. Though the discovery of new algorithms is an active research field [20], reasoning in terms of mathematical formalism has been a barrier to wider adoption of quantum-accelerated computing. Several proposals for the automation of quantum programming [21, 22, 23] have been put forward to remedy
Figure 9: \(M_{circ}\) and their comparison for gate set {P(\(\pi/4\)), RX(\(\pi/2\)), CX} on 5 qubits and of circuit depth from 0 to 3 on the IBM L-topology and T-topology.
these issues. To further expand the applicability of quantum algorithms, techniques from novelty search [62] and large language models [63] can be incorporated into these automation engines. Open-ended search in the space of quantum processes can greatly benefit from a characterization of this landscape, as presented in this work.
### Quantum artificial general intelligence
Among the more rigorous methods of developing general intelligence is an active formulation of Solomonoff's theory of inductive inference [34], called universal artificial intelligence [18]. Universal reinforcement learning models like AIXI and KSA are capable of rational decision-making or of modelling environmental dynamics, based on the shortest program that compresses past observations and maximizes a set reward function. These have been quantized both by using quantum algorithms (e.g., in the AIXI-q model [64]) and by applying them to quantum environments (e.g., in the QKSA model [65]).
Another crucial aspect of intelligence [66] is the understanding of cause-effect relations. Quantum acceleration of causal inference [67, 68, 69] can benefit from the knowledge of the probability distribution of causal oracles, a subset of quantum processes that embed specific properties of the problem. Besides causal inference, similar techniques can be applied to other statistical relational learning applications like probabilistic logic networks [70] and quantum variational algorithms.
Both the universal distribution and causal inference are intimately connected to the landscape of quantum programs. This landscape in turn depends on the choice of a specific gate set, as we saw in this research. Thereby, novelty seeking in the space of universal gate sets can meta-optimize quantum program synthesis for specific application algorithms. In our current research, we are exploring this direction of a second-order cybernetics of automated quantum operational theory, using the groundwork developed in this article.
## Acknowledgements
This project was initiated under the QIntern 2021 project "Reinforcement Learning Agent for Quantum Foundations". B.G.B., T.A., A.S. would like to thank the organizers of the program and QWorld Association. A.K. was partially supported by the Polish National Science Center (NCN) under the grant agreement 2019/33/B/ST6/02011. A.S. acknowledges funding from the Dutch Research Council (NWO) through the project "QuTech Part III Application-based research" (project no. 601.QT.001 Part III-C - NISQ).
## Author contributions
Conceptualization, A.S., methodology, A.S., A.K. and B.G.B.; software, B.G.B. and A.K.; writing-original draft preparation, A.S., A.K. and B.G.B.; visualization, A.K. and T.A.; supervision, A.S.
All authors have read and agreed to the published version of the manuscript.
|
2301.06938
|
The Universal Trust Machine: A survey on the Web3 path towards enabling
long term digital cooperation through decentralised trust
|
Since the dawn of human civilization, trust has been the core challenge of
social organization. Trust functions to reduce the effort spent in constantly
monitoring others' actions in order to verify their assertions, thus
facilitating cooperation by allowing groups to function with reduced
complexity. To date, in modern societies, large scale trust is almost
exclusively provided by large centralized institutions. Specifically in the
case of the Internet, Big Tech companies maintain the largest Internet
platforms where users can interact, transact and share information. Thus, they
control who can interact and conduct transactions through their monopoly of
online trust. However, as recent events have shown, allowing for-profit
corporations to act as gatekeepers to the online world comes with a litany of
problems. While so far ecosystems of trust on the Internet could only be
feasibly created by large institutions, Web3 proponents have a vision of the
Internet where trust is generated without centralised actors. They attempt to
do so by creating an ecosystem of trust constructed using decentralised
technology. This survey explores this elusive goal of Web3 to create a
"Universal Trust Machine", which in a true decentralised paradigm would be
owned by both nobody and everybody. In order to do so, we first motivate the
decades-old problem of generating trust without an intermediary by discussing
Robert Axelrod's research on the evolution of cooperation. Next, we present the
challenges that would have to be overcome in order to enable long term
cooperation. We proceed to present various reputation systems, all of which
present promising techniques for encouraging trustworthy behaviour. Then, we
discuss Distributed Ledger technologies whose secure transaction facilitating
and privacy preserving techniques promise to be a good complement to the
current limitations of vanilla reputation systems.
|
Rohan Madhwal, Johan Pouwelse
|
2023-01-17T15:01:31Z
|
http://arxiv.org/abs/2301.06938v1
|
# The Universal Trust Machine
###### Abstract
Since the dawn of human civilization, trust has been the core challenge of social organization. Trust functions to reduce the effort spent in constantly monitoring others' actions in order to verify their assertions, thus facilitating cooperation by allowing groups to function with reduced complexity. To date, in modern societies, large scale trust is almost exclusively provided by large centralized institutions. Specifically in the case of the Internet, Big Tech companies maintain the largest Internet platforms where users can interact, transact and share information. Thus, they control who can interact and conduct transactions through their monopoly of online trust. However, as recent events have shown, allowing for-profit corporations to harness so much power and act as gatekeepers to the online world comes with a litany of problems. While so far ecosystems of trust on the Internet could only be feasibly created by large institutions, Web3 proponents have a vision of the Internet where trust is generated without centralised actors. They attempt to do so by creating an ecosystem of trust constructed using decentralised technology. This survey explores this elusive goal of Web3 to create a "Universal Trust Machine", which in a true decentralised paradigm would be owned by both nobody and everybody. In order to do so, we first motivate the decades-old problem of generating trust without an intermediary by discussing Robert Axelrod's seminal research on the evolution of cooperation in the iterated prisoner's dilemma. Next, we present the infrastructural and social challenges that a hypothetical Universal Trust Machine would have to overcome in order to enable long term cooperation in a decentralised setting. We proceed to present various reputation systems, all of which present promising techniques for encouraging trustworthy behaviour in a decentralised network through indirect reciprocity. After this, we discuss the family of emerging Distributed Ledger technologies whose secure transaction facilitating and privacy preserving techniques promise to be a good complement to the current limitations of vanilla reputation systems. Finally, we conclude by discussing a future roadmap for creating the desired Universal Trust Machine.
## I Introduction
Humans in a society rely on trust in every stage of their life, in every action they perform. Children trust that their parents will nurture and guide them, adults trust that their family and loved ones won't deceive them. When crossing the street on a zebra crossing, we trust that motorists will obey the traffic laws, when buying items at the market, we trust in the quality of the goods being provided to us. Regardless of whether one believes that society is a function of divine order or of a social contract, trust between its members is the very fabric of its organising foundation. [1]
More generally, consider an agent, such as a human or a robot, who is required to use limited agency to navigate and take actions in a world with limited direct information available to it at any given moment. In such a world, trust is an important social heuristic that allows the agent to make wagers on the predictive benevolence of other agents. [2]
Political scientist Russell Hardin defines trust as "encapsulated interest", since it facilitates peaceful and stable social relations that form the basis of collective behavior and productive cooperation. Thomas Hobbes, considered by many to be one of the founders of modern political philosophy, argues that the natural state of humans is nasty and brutish; however, trust helps to convert that into something peaceful and efficient. In his book "A Treatise of Human Nature", Enlightenment philosopher David Hume discusses the importance of trust to the functioning of a society. According to sociologist and philosopher Niklas Luhmann, trust effectively reduces complexity and risks, allowing for coordination with increased performance. [3] This is easy to understand intuitively since trusting individuals and groups reduces the effort one would spend in constantly monitoring the actions of others in order to verify their assertions. It is easy to conclude that a society without a notion of trust would find it hard to function effectively, or to exist at all. [4]
The growth of human civilization from small-scale hunter-gatherer societies to thriving economies of nation states is testament to the benefits provided by the growth of trust and cooperation inside societies. However, history reminds us that the requirement of trust for facilitating cooperation also leads to the growth of large centralized institutions since these institutions historically provided the best defense in economic transactions against the untrustworthy. [5]
While trust might be fundamental to cooperation in a society, underlying every social transaction is the desire to further one's personal gain by abusing the trust of an unsuspecting opponent and defecting against the expected trustworthy action. [1] For example, in a transaction where a merchant pre-pays a farmer for their produce at the end of the year, the farmer may be tempted to keep the payment and not provide the promised crops, or provide crops of a lower quality than was agreed upon.
According to Margaret Levi, "good defenses make good neighbors". Hence, the need for such defenses in economic transactions necessitated institutional bases of reaching agreement and resolving disputes that might result from them. Institutions that were able to provide third party enforcement in a transaction were hence able to ensure personal security and the security of the transaction. Thus, they were able to encourage cooperation and grow immensely as a result of their importance in doing so. [5]
However, allowing profit driven institutions to amass so much power comes with its own set of problems. The financial crisis of 2008, which was primarily attributed to the failure of trusted institutions such as banks and other financial intermediaries, has led to a growing distrust in such institutions. [6] This was most notably witnessed by the recent growth of blockchain technology and the adoption of cryptocurrencies such as Bitcoin and Ethereum as decentralised alternatives to large financial institutions.
The Internet is the most remarkable addition to how social capital can be built in the world. Collaborative work performed on the Internet is continuously changing how humans think about social interaction. To understand how trust is built on the Internet, it is worth considering the similarities and differences between trust on the Internet and trust in general.
Since users on the Internet often possess virtually no knowledge about each other, all they can rely on is the immediate record of the other party's behaviour in past interactions with them to decide whether they can be trusted. However, this inability to directly judge different providers of services on the Internet is not very different from the general inability to directly judge the quality of services that are required in the real world, such as doctors or lawyers. Similar to the real world, providers of service on the Internet need to care not only about their current interaction, but also the result of the interaction on their future reputation. Hence, building and maintaining one's reputation by acting in a trustworthy manner is a requirement both in the real world and on the Internet. [7]
On the other hand, the most notable difference between the two is caused by the Internet's unique capacity to allow collaboration and interactions at a global level. Take for example the case of buying an item from a local store: while doing so, trust is generally not an issue and most often all that matters is the perceptible quality and price of the goods being provided in the store. However, buying an item from a seller on an online marketplace like eBay requires a markedly different level of trust to allow the transaction to occur, since, in addition to the quality and price of the advertised goods, the buyer also requires some guarantee of reliable behaviour from the seller. Given two online sellers that sell the exact same item at the same price, a buyer would prefer the seller that has a large number of reviews/testimonials. Therefore, even in simple transactions, due to the global scale of the Internet, the risk of fraud is substantial and hence additional methods of generating trust are required. [7]
Even though the Internet was built on distributed protocols, large scale cooperation was consolidated around a few centralised services where social trust was created and enforced by large profit driven institutions. [8] Specifically, in two key functions of the web, web-publishing and discovery of content, technological institutions such as Google, Meta and Twitter slowly became curators and gatekeepers for the information being published on the Internet and people who were allowed to interact with it. As a result of this, the platforms accrued the power to control and own a large share of the information published and consumed on the Internet.
Recently however, abuses of information and communication technology by such institutions for surveillance, spreading of disinformation and coercion of the public have come to light. Notable examples include Google's deepening involvement with Egypt's repressive government and Twitter enabling the Chinese government to promote disinformation on the repression of Uighurs. [9]
Such propensity of Big Tech organisations to abuse their ecosystems of trust for their own profit through privacy violations and misinformation is leading to a shift in the general attitude towards large centralised information platforms. The presence of large centralised authorities or platform owners to maintain and enforce trust in sociotechnical systems is increasingly being viewed more as a hindrance rather than a help. [9]
A growing alternative to the existing model of the platform driven Internet is the idea of Web3 which is motivated by the idea of using decentralised technologies such as blockchain. It is hard to exactly define Web3 since there is a lack of consensus even among researchers on what the idea of Web3 means. In section IV we attempt to clearly define what Web3 refers to in the context of the paper. On a high level, Web3 can be thought of as an ecosystem of applications which aims to generate trust purely through decentralised technology and mathematical primitives. We posit that one of the aims of Web3 is to produce a "Universal Trust Machine", a machine that is able to produce trust in any ecosystem, enabling long term cooperation. Thus, eliminating the need for profit driven organisations and allowing for the creation of a "commons" [10] where everybody is free to publish, read, react, and interact with content.
However, as shown in section V, fostering cooperation in a community with the presence of bad actors is not a trivial problem. In a centralised system, it is possible to govern in an ad-hoc manner, altering rules of the system as new problems and trust issues arise. This is obviously not possible in decentralised systems since no one single party can instruct everyone how to act. Therefore, all rules of interactions
among the independently acting, self-interested parties must be explicitly and clearly defined before any interactions occur. Further, these rules should reasonably incentivise cooperation and disincentivise cheating/undesirable behaviour to foster long-term cooperation.
This problem of cooperation has been studied in the field of game theory and analysing studies in this field could help motivate how to develop systems where the best course of actions for neighbours is to cooperate for mutual good.
A plethora of research also exists on models and mathematical primitives for generating trust in decentralised systems; most notably, reputation systems have gained prominence as a way to create safe and trustable communities in decentralised networks. [11]
This survey attempts to explore such mechanisms for generating trust in Web3. In section II we discuss some principles in the work of Evolution of Cooperation which help motivate how long term cooperation could come about naturally. Next, in section III we attempt to define what decentralised networks are and how the decentralised movement came about. In section IV we explain the motivation behind Web3 and the technologies associated with it. After this, we discuss problems one faces when designing a decentralised system which fosters long term cooperation in section V. In section VI, we discuss reputation systems for decentralised systems and present some promising systems in literature and the techniques they utilise for generating trust. We proceed to discuss the limitations of reputation systems and present Distributed Ledger Technologies in section VII which potentially remove a lot of the discussed limitations. Finally, we conclude with a future roadmap for the construction of a Universal Trust Machine.
## II Evolution of cooperation
The history of humanity is one filled with conflict, destruction and war. The pursuit of peaceful cooperation is more than just a hippie dream, it has attracted a great deal of research across multiple fields. We believe that the goal of Web3 and the desired "Universal Trust Machine" is to build a digital utopia where such peaceful cooperation can occur and persist over a long-term time period.
One of the foundational works investigating how cooperation can emerge and persist without a third party is "The Evolution of Cooperation", a 1984 book written by political scientist Robert Axelrod which expanded upon the highly influential paper he co-authored with evolutionary biologist W.D. Hamilton [12]. The book's central question is "Under what conditions will cooperation emerge in a world of egoists without central authority?".
Axelrod held two computer simulation tournaments where multiple strategies for playing an iterated two-player Prisoner's Dilemma game were solicited from professionals across multiple disciplines. The Prisoner's Dilemma is a popular game analyzed in game theory where two rational agents are faced with a dilemma: they are arrested by the police and have to individually decide whether to betray the other to the police (defect) or stay silent (cooperate). The dilemma was originally framed by Merrill Flood and Melvin Dresher in 1950. A key requirement of the game is that \(t>r>p>s\) and \(2\times r>t+s\), where \(t\), \(r\), \(p\) and \(s\) represent the payoffs for the different outcomes of the game. If both players choose to stay silent, i.e. they cooperate with each other, they are each awarded \(r\); on the other hand, if both players defect, they are each awarded \(p\). If one player stays silent while the other defects, the player who defects is rewarded \(t\) while the player who chose to stay silent is paid \(s\). Fig. 1 demonstrates this payoff matrix visually. Hence, although the decision to collectively stay silent is overall the most optimal, individually the best decision is to defect.
Further, in an iterated Prisoner's Dilemma game there is a probability \(w\) that two players will interact in the next round. [13] Contestants who submitted algorithms to play the tournament accrued points in each round according to the shown payoff matrix by playing against other strategies. The tournament consisted of five iterated prisoner's dilemma games in total, with each game consisting of 200 rounds.
The Darwinian theory of evolution would suggest that the most selfish strategy would perform the best and while indeed, in a single iteration defecting is always the best strategy, in the iterated Prisoner's Dilemma the strategy that ended up performing the best in both rounds was a simple "Tit For Tat" strategy. As the name suggests, this strategy was based on the concept of direct reciprocity, the next move of an agent following the strategy is determined by the last move of the opposing agent, if the opposing agent cooperated, the agent following Tit For Tat would cooperate too and vice versa.
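These dynamics are easy to reproduce. Below is a minimal, illustrative Python sketch of an iterated Prisoner's Dilemma pitting Tit For Tat against an always-defecting strategy, using the payoff values \(t=5\), \(r=3\), \(p=1\), \(s=0\) commonly associated with Axelrod's tournaments; it is not a reconstruction of the original tournament software.

```python
# Payoffs: t (temptation) > r (reward) > p (punishment) > s (sucker), with 2r > t + s.
T, R, P, S = 5, 3, 1, 0

def payoff(a, b):
    """Return (payoff_a, payoff_b) for one round; 'C' = cooperate, 'D' = defect."""
    table = {('C', 'C'): (R, R), ('D', 'D'): (P, P),
             ('C', 'D'): (S, T), ('D', 'C'): (T, S)}
    return table[(a, b)]

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = payoff(move_a, move_b)
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))   # Tit For Tat only loses the first round: (199, 204)
```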
Based on the results of the tournament, Axelrod identified four characteristics that he believed led Tit For Tat to perform the best of all strategies:
1. **Niceness** By being nice, Tit For Tat can benefit from long term mutual cooperation with other strategies that are also nice. However, it is important to note that niceness alone would lead to exploitation from other strategies who are not nice
2. **Forgiveness** Strategies that are not forgiving are doomed to be locked into mutual destruction after a single defection from an opponent. Tit For Tat allows an opponent to start cooperating again after defecting initially which makes it forgiving
3. **Retaliation** As pointed out earlier, niceness alone leads to exploitation by uncooperative strategies. By retaliating when the other strategy doesn't cooperate as expected, Tit For Tat avoids being exploited by such strategies
4. **Certainty** By being easy to understand, Tit For Tat makes it easy for other strategies to understand what it's doing thus allowing them to come to a mutually beneficial strategy much faster
Axelrod's analysis thus provides an interesting set of prescriptions for designing strategies for nodes on a decentralised network. Keeping in mind that not all interactions need to be zero-sum, and that it may be possible for all parties to benefit in the long term by cooperating and not being the first to defect, seems to be a good principle, suggesting that cooperation could indeed grow organically in a pool of egoistic nodes. However, being too nice also has its downsides, and any effective strategy should be quick to retaliate to prevent exploitation. Finally, keeping it simple seems to be effective advice, since otherwise the strategy might risk confusing potentially cooperative neighbours.
Further, there are lessons for designers of Web3 applications, the most important being to maintain a large "shadow of the future", i.e. a sufficiently large \(w\) which guarantees that nodes interact with each other more durably and frequently, so they have time to develop a mutually cooperative strategy; nodes are more likely to defect if the probability of meeting another node again is low. This can be done in many ways, including using spatiotemporal structures, e.g. clustering of small groups in space [14]
However, there are limitations to Axelrod's results when considering a strategy to use as a "Universal Trust Machine":
1. **Assumptions are too simplified** Not all real word interactions are as simple as an Iterated Prisoner's Dilemma game. Often participants can communicate with each other and hence collaboration through other means may be a better strategy. Further, it may not be possible for real-world participants to necessarily perceive credible threat, or respond to it rapidly and accurately. Finally, interactions between peers are often one-time transactions, in this case, when dealing with a peer using a Tit For Tat policy, there is no incentive to behave in a trustworthy manner.
2. **Results may not hold in some populations** In his 2000 paper "Twenty Years on: The Evolution of Cooperation Revisited", Hoffman [15] showed that Axelrod's tournament was sensitive to the initial population composition and the potential for strategies to make mistakes. Under different initial compositions and assumptions, other strategies were shown to perform better than Tit For Tat.
3. **Does not consider indirect reciprocity** While direct reciprocity is a powerful mechanism, it relies on repeated encounters between individuals. However, this is too simplistic an assumption for modelling human interactions, where exchanges are often asymmetric and fleeting. Indirect reciprocity is more representative of real human exchanges, where we help people even if they've never directly helped us before, based on some indirect exchange and a desire to increase our reputation in society. [16] For example, a large-scale experiment on the prevention of blackouts found that permanent house owners (as opposed to temporary renters) and people residing in apartments were more likely to participate in a demand response program to prevent blackouts when others would know their behavior and identity. This is because they were more likely to consider the indirect costs and benefits to their reputation, since they are more likely to have future interactions with others in the living area. [17]
## III Decentralisation and Decentralised Networks
Before considering more contemporary solutions to the problem of enabling long-term cooperation, it is important to clarify what it means to create "decentralised" trust, what the aims of the Web3 movement are and to identify the main problems that a Universal Trust Machine should solve in order to be considered successful. In this section we discuss what decentralisation is, then in the following two sections we proceed with a similar discussion of Web3 and the inherent problems in creating decentralised trust.
Decentralisation is not a novel concept and has been prevalent in research even outside the sciences. In the social sciences, it boasts a 200 year history and has been a popular concept across multiple disciplines. Examples include concepts such as subsidiarity, democracy, liberty and equality in political science, systems theory and self determination in management and decision science, fiscal decentralisation in economics. [18]
In technology, the concepts of technological decentralisation have been evolving for over half a century. [18] A popular example of a decentralised IT movement is the open source software movement, which represents a radical retake on copyright law and involves developing and sharing software in a decentralised and collaborative way, relying on peer review and community production.

Fig. 1: A typical payoff matrix of a 2 player prisoner's dilemma [14]
The importance and success of this movement is demonstrated by the domination of multiple areas of software by open source projects. Popular examples are the open source Apache projects, which dominate the market for server software over commercial alternatives from Microsoft, Sun and others, and the Linux operating system, which has seen popular use embedded in a range of devices from mobile phones and recording devices to large scale servers in data centers. [19]
The concept of a "decentralised network" was first coined by Paul Baran, one of the inventors of packet switching. In general, networks can be built from two components: "star" or centralized, and "grid"/"mesh" or distributed. In a star/centralized network, all nodes are connected to a single node; hence, each participant needs to go through a central component to interact with any other. In a distributed network, on the other hand, there is no such central node and nodes can communicate with each other without going through a centralised point. In practice, a combination of these components is used to form a network; Baran called such a mixed network "decentralised" because there was no single, central point of failure. [20] Fig. 2 demonstrates these networks visually.
In contemporary literature, the term _decentralised network_ is used to refer to networks where the technology, content and infrastructure of the network are controlled by participants and contributors rather than large central platforms. This control is manifested in various ways, such as participants controlling parts of the infrastructure like servers and routers, collaborators owning data in their own private data silos which are queried by the network during discovery, and participants possessing the autonomy to decide the operational details of the network, what content needs to be publicised, what needs to be deleted, etc. [8] In this context, a popular example of a centralised network would be Twitter, which owns all the content that users publish on it, while an example of a decentralised network is Tribler, a peer-to-peer file sharing system that improves upon the BitTorrent protocol, enabling users to share content with keyword search, and boasts a reputation-management system to encourage collaboration. [21]
Over the past decade, decentralised networks have received reinvigorated interest due to the rise in popularity of cryptocurrencies such as Bitcoin and Ethereum. In his whitepaper proposing Bitcoin, Satoshi Nakamoto proposed a novel decentralised peer-to-peer network protocol which facilitates an electronic payment system. [22] The popularity of these cryptocurrencies has also resulted in an explosion of blockchain and decentralised technologies and projects. Proponents and developers of these technologies wish to see a shift in the publishing and discovery of content and information over the Internet away from a few profit-driven Big Tech corporations and into the hands of the users who generate them, guaranteeing the privacy of their data and also ensuring that everyone has a fair and equal voice.
For example, "78 days", a collaborative project between the Starling Lab and Reuters uses decentralised ledgers to preserve historical data important to humanity. The goal of the project is to curb misinformation. It achieves so by ensuring the integrity and authenticity of the information as it captured and stored using a system called Content Authenticity Initiative. It also uses a storage system built on blockchain called Filecoin that requires data providers to prove that they are holding the authentic data and not a tampered version. Most importantly it ensures that the contributors of the information have a way to maintain their creation of the content through the records stored with the data. [23]
## IV Web3 - Decentralised Web Platforms
The term "Web2.0" was first coined by Tim O'Reilly in 2007 to describe an Internet where platforms enabled users to publish, consume and interact with content, and with each other. [24] It was supposed to expand upon the first iteration of the Internet or "Web1.0" which largely consisted of static pages meant only to display information. So while "Web1.0" was "the read web", "Web2.0" aimed to be the "the read-write web" (coined by Richard McManus in 2003).
Critics of Web2.0, such as the inventor of the World Wide Web, Tim Berners-Lee, feel that Web2.0 failed to achieve the vision of the Internet as a secure, decentralised exchange of public and private data, with users' data increasingly stored in corporate data silos. Instead, they want users to own their own data in order to guarantee its security. [25]
The term "Web3.0" was coined by Polkadot and Ethereum co-founder Gavin Wood in 2014, he used it to describe an Internet that is decentralised, open and transparent. [26]
The current Web3 movement aims to transform the platform oriented Web2.0 Internet into a decentralised web ecosystem which: 1) avoids monopoly of content discovery and propagation by large centralised actors 2) prevents the spread of misinformation and fake news 3) provides its users the ability to create, exchange and react to information in a secure, private and free manner 4) supports immersive web development [18]
Liu et al [27] define Web3 as a movement which, agnostic of any specific overarching applications or underlying infrastructures, will usher in "an era of computing where the critical computing of applications is verifiable"; that is, an application that conforms to the idea of Web3 is one where all stakeholders are able to verify the execution of the application based on predetermined terms without the presence of an intermediary.
Packy McCormick defines Web3 as "the Internet owned by the builders and users, orchestrated with tokens". [28] Defining Web3 with its key property being user ownership is a common approach taken by a majority of research papers on the topic. Hence, Web3 is positioned as the "read, write, own" web. While Web2.0 was a frontend revolution that allowed users to create and interact with created content online, Web3 is instead a backend revolution which aims to change how the created content is stored. Instead of keeping data on centralised
data silos, Web3 aims to provide data storage to users in a distributed manner, such that users can own and monetise the content they create. Thus, it aims for the disintermediation of existing parties, such as large Big Tech companies, in data governance. [29]
Finally, in addition to personal ownership of data, many Web3 proponents also believe in the concept of _"Self-Sovereign Identity"_ i.e. that identity holders on the Internet should also be owners of their identities. Centralised identity solutions require holding many plastic cards and username/passwords, leaving individuals with little control of their identity and prone to privacy theft. A Web3 with _Self-Sovereign Identity_ would allow users to have a persistent, transparent identity which they can control fully e.g. decide which platforms have access to their identity and what information they can view. [30]
## V Threats to long term cooperation
In order to enable the dream of Web3, it is fundamental to be able to create a commons with communities of users interacting with each other through decentralised networks, free to read, publish and interact with content. However, two broad classes of threats make creating long term cooperation in decentralised networks a non-trivial task: Social and Infrastructural threats. In the following sections we briefly cover these threats and establish why they pose a problem to cooperation.
### _Infrastructural Threats_
In section I, we motivated why trust is fundamental to achieving cooperation inside communities. Since in a Web3 application based on a decentralised network there are no third parties for enforcing trust, before using a service to cooperate with other nodes in the network, users look for assurance that the other party can be trusted. This is especially true for applications that depend on blockchain technology, where the immutable nature of transactions makes it incredibly hard to punish bad actors. [31] Therefore, in addition to the social problems, there are also several infrastructural problems stemming from the presence of bad actors who wish to abuse the trust of their neighbours for their own benefit, and these make achieving long term cooperation in a decentralised network a non-trivial task. A system that is to achieve the stated dreams of Web3 should be able to tackle these problems effectively; below is a brief description of a few of them:
#### V-A1 Sybil Attack
In a distributed network, if an entity can control a large number of nodes and hence obtain a large number of node identifiers, they can use this dominance of identities to control the network and undermine the mechanisms of the network which results in a network with less robustness and freedom. Such an attack is often referred to in literature as a _Sybil Attack_, where a _Sybil_ is the fake identity of an entity. [32]
The Sybil Attack was first mentioned by Douceur in [33]. In this paper, Douceur argues that only a central authority can prevent a Sybil Attack under realistic assumptions of resource distribution and coordination.
While the intuitive solution to making a network robust against a Sybil Attack seems to be to make it expensive to create new identities in the network, doing so increases the social cost of the network by making it hard for new users to join it.
In the context of reputation systems in decentralised networks, a colluding group of malicious nodes could also increase the reputation of its nodes by itself and hence threaten the integrity of the network.
Fig. 2: a) Centralised network b) Decentralised network c) Distributed network
Cardinal architectural insight from Baran’s 1964 paper [20]
#### V-A2 Free riding
In order to encourage successful long term cooperation, it is important that enough peers are providing sufficient resources for the system to become large and truly useful. In the absence of a third party monitoring each user, it is possible that some users stop contributing and only consume resources being generated by other users. _Free riders_ are peers that eagerly consume resources without reciprocating any in return. It is easy to see how free riders diminish the quality of service for other peers, but more importantly, by making contributing peers feel exploited, they disincentivise cooperation in the system and thus threaten the existence of the whole system, especially systems that are predicated on the foundation of sharing.
However, in the context of a decentralised network the most important problem created by free riding is that if only a few users are providing resources, they end up acting as centralised servers; this threatens the security of the network and defeats the very goal of the Web3 application.
Gnutella is a popular peer-to-peer file sharing platform which allows users private access to information. In their paper "Free Riding on Gnutella", Eytan Adar and Bernardo A. Huberman [34] showed that 70% of Gnutella users were not sharing any files and nearly 50% of responses for file discovery were being returned by the top 1% of sharing hosts.
Similarly, Locher et al [35] were able to create "BitThief", a free riding BitTorrent agent that was able to achieve high download rates even without seeding any data in return. They were also able to demonstrate that sharing communities which originally intended to promote cooperation among peers ultimately provide many incentives to cheat.
#### V-A3 Pollution Attack
In 2005, Liang et al [36] showed that it was possible for an attacker in a decentralised network to corrupt certain targeted content, rendering it unusable and then making it available to the network in a large quantity. Since users on the network are unable to distinguish between the polluted and the original content through content discovery alone, users download the polluted content and further share it with other peers, resulting in the polluted content spreading through the network.
In their analysis of the FastTrack peer to peer sharing system, it was found that as many as 50%-80% of copies of popular content were polluted.
#### V-A4 Index Poisoning
Often resource sharing in decentralised networks is conducted through indices, which allow users to conveniently discover the location of their desired content. Depending on the architecture of the system, the index could be distributed over a fraction of the file sharing nodes (as in FastTrack) or over all the nodes.
In an Index Poisoning attack an attacker inserts bogus records into the index, for example, by inserting random identifiers that do not correspond to any address into the index. This way, when a user attempts to download a file they are unable to locate its content, leading to them finally abandoning the search. [37]
While the Pollution attack described earlier requires the attacker to obtain high-bandwidth to make sufficient versions of the corrupted copies available in the network, the Index Poisoning attack is easier in that it requires less resources to pull off.
#### V-A5 Slandering
Under the Sybil Attack, we discussed that it may be possible for a colluding group of malicious nodes to engage in _self promotion_ to increase their own reputation in a reputation-based decentralised system. On the other hand, it may also be possible for a group to coordinate to reduce the reputation of a victim; such an attack is called _slandering_ [38].
#### V-A6 White Washing
Nodes that have accrued a bad reputation by acting in an undesired manner can "clean" it through _white washing_, thereby avoiding the negative effects of having a bad reputation [38].
#### V-A7 Denial of service
Colluding nodes can work together to block the functioning of a decentralised system, preventing other peers from utilizing its services.
### _Social Threats_
As seen by recent events, the rise of populist movements stands to be the biggest threat to the state of democracy worldwide. Many observers, especially journalists have suggested that the rise and spread of these movements has been massively aided through social media. [39] While social media can be a powerful tool for spreading information, when left unregulated, it can also lead to multiple social issues which greatly threaten long term cooperation. Some of these issues are:
#### V-B1 Echo Chambers and Polarisation
"Echo Chambers" are used to describe the mechanism by which people on sociotechnical platforms are exposed to large or exclusively pro-attitudinal communication. Such grouping of like minded people on social networks ('homophily') is believed to arise from preferential connection to like minded individuals when creating/breaking bonds and also from peer influence which results in connected individuals growing more similar. [40] The presence of an Echo Chamber could support populist messages that support rejection of expertise and reasoned debate among different views and lead to the emphasis of popularity of people or ideas over substance of their views. Therefore, Echo Chambers can lead to an insulation of users from the truth and even more perniciously, to be exposed to fake news.
In their study on Echo Chambers in the context of COVID-19 discussions on Twitter, Jiang et al [41] found strong evidence of political echo chambers on the topic on both ends of the political spectrum, but particularly so in the right-winged community. They found that tweets by right leaning users were almost exclusively retweeted by users who were also right leaning. Further, from random walk simulations, it was found that information in right leaning bubbles rarely travelled out of that bubble, forming a "small, yet intense political bubble". In another study on Climate Change discussions on Twitter, Williams et al "found a high degree of polarisation in attitudes, consistent with self selection bias" [40]
Studies have suggested that echo chambers could lead to
polarisation of users and thus to users retreating into like-minded networks [42], which creates segmentation in networks and thus poses a large challenge to long term cooperation.
#### V-B2 Inequality and Social Divide
While the idea of a digital democracy is appealing, it is hampered by findings of socioeconomic inequality which prevent usage of the platforms by certain strata of society. Beyond inability to access platforms, it is possible that members of society lack the skills to express their views or consume information that is being shared by other members. [43]
A lack of participation by different members of society could lead to the propagation of biased views or misinformation against the underrepresented members. Thus, it constitutes a credible threat to long term cooperation.
However, diffusion theories predict inequality at the outset of any innovation, which narrows as time progresses and adoption spreads.
## VI Reputation Systems
As motivated at the end of section II, instead of only relying on direct reciprocation in decentralised systems, we can allow users that help each other out to establish a good reputation, which can be used to reward them in some other way. After all, this is more representative of real social interactions: while we are interested in how people interact with us, we are also interested in the actions of others, which we learn about from social channels such as gossip. In taking actions, we don't only take into account our direct experiences but also experiences we've learnt about from indirect sources. Similarly, when choosing to assist someone, we also consider how it affects our reputation in society.
Although animals possess simple mechanisms for indirect reciprocity, only humans engage in complex reputation systems. [16] This seems to be because such systems require a substantive cognitive load, not only does it require a memory of all transactions but also requires the ability to monitor the dynamically changing social network of the group. Hence, the strategies required to succeed in indirect reciprocity are also understandably a lot more complex than the simple Tit For Tat strategy that succeeds in direct reciprocity.
In their paper on reputation systems, Resnick et al [11] define a reputation system as one that "collects, distributes, and aggregates feedback about participants' past behavior... these systems help people decide whom to trust, encourage trustworthy behavior, and deter participation by those who are unskilled or dishonest."
As mentioned before, users on decentralised networks look for some form of assurance that their transactions on the network will be successful. The reputation of a user in reputation systems serves as a "shadow of the future" to each transaction, creating an expectation for what a user can expect when dealing with another user.
Consider the example of one of the first reputation systems, eBay's "Feedback Forum": after a transaction is completed, the buyer and seller can rate each other (1, 0 or -1) and leave comments. A participant on eBay accumulates such points over time, which are displayed next to their screen name. A buyer can view a seller's points and the comments left by other users to create a "shadow of the future" for the transaction they can expect to have if they buy an item from the seller. Many other online forums and marketplaces such as Amazon and Stack Overflow rely on similar reputation systems.
According to Resnick, a reputation system must meet three challenges: [44]
1. Provide information that should allow users to distinguish between trustworthy and non trustworthy users,
2. Encourage users to be trustworthy, and
3. Discourage participation from users who aren't
In addition to the above, a successful reputation system should also be able to avoid the issues mentioned in Section V.
The following are a few notable reputation systems which attempt to accomplish the objectives stated above:
### _PageRank_
One of the most widely known reputation systems in the world is Google's PageRank. PageRank determines a rough estimate of the relative importance of a website by computing a ranking for every web page. The underlying assumption of PageRank is that a website that is more important is more likely to receive links from other websites than a website that is less important. PageRank is an interesting example of a Reputation Mechanism since while it may not be the exclusive algorithm used by Google, it has inspired many other reputation algorithms.
The calculation of PageRank of a website can be simplified to the below equation:
\[\sum\frac{PageRank\ of\ Inbound\ Link}{Number\ of\ Outgoing\ Links\ on\ that\ Page} \tag{1}\]
Hence, if a website \(a\) with a high PageRank has a link to another website \(b\), website \(b\) will receive a large boost to its PageRank. However, the contribution of \(a\)'s PageRank to \(b\)'s PageRank will be reduced if \(a\) has a lot of outgoing links, this is ensured by dividing the contribution of each inbound link by the number of outgoing links on that page.
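A minimal sketch of this simplified ranking (Equation 1), iterated to a fixed point on a toy three-page web, is shown below. It assumes every page has at least one outgoing link, and it omits the damping factor and dangling-node handling that production PageRank adds; the page names are of course illustrative.

```python
def pagerank(links, iterations=50):
    """links: dict mapping each page to the list of pages it links to (Equation 1)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: 0.0 for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # each inbound link contributes the linking page's rank,
                # divided by that page's number of outgoing links
                new_rank[target] += rank[page] / len(outgoing)
        rank = new_rank
    return rank

web = {'a': ['b', 'c'], 'b': ['c'], 'c': ['a']}
print(pagerank(web))   # converges to roughly a=0.4, b=0.2, c=0.4: 'b' ranks lowest
```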
Through this simple idea, Google was able to very successfully rank websites in terms of relevance. The idea was so revolutionary that PageRank is still used in Google today (along with 200 other more complex algorithms). [45] However, PageRank relies on a Trusted Oracle model which requires a centralised service; dependency on such an oracle to provide reputation introduces points of failure and does not scale well. Further, the original version of PageRank is susceptible to Sybil attacks. [46]
### _WikiTrust_
_WikiTrust_[47] is the reputation system used for one of the largest collaborative applications known to mankind: the writing of articles on Wikipedia. It is a content-driven reputation system, that is, it relies on automated analysis of the content generated by the user and the collaboration process to derive the reputation of the user, rather than explicit feedback
provided by users on other users. It is possible to use such a reputation system since the application it caters to is entirely content driven.
The goals of _WikiTrust_ are to incentivise lasting, meaningful contributions from users, help increase the quality of content being produced, spot vandals and to offer users an indicator of the quality of the content they are consuming. To achieve these goals, WikiTrust maintains different reputations for users and the content they create.
If a user makes a contribution that is meaningful and its content is preserved in future edits, they gain reputation; on the other hand, if their contributions are wholly or partially undone by future edits, they lose reputation. Content starts with no reputation; if it is revised by users with high reputation, it gains reputation. On the other hand, if the text is disturbed by too many edits, indicating that the content may not be trustworthy, it loses reputation.
In order to estimate how much of each contribution is preserved or removed, as required for the above, WikiTrust relies on an edit distance function \(d(r,r^{\prime})\), which is computed based on how many words have been deleted, inserted, replaced or displaced in the edit that led from \(r\) to \(r^{\prime}\). Relying on such a distance function allows the reputation system to be language independent. Finally, the value of an edit is calculated using the function:
\[q(b|a,c)=\frac{d(a,c)-d(b,c)}{d(a,b)} \tag{2}\]
Where \(b\) is the edit being evaluated, \(a\) is the revision before the edit and \(c\) is the revision after it. \(q(b|a,c)\) outputs a value between -1 and +1; it equals -1 if \(a=c\), implying that \(b\) was entirely reverted, and it equals +1 if the change from \(a\) to \(b\) was entirely preserved. However, a limitation of this approach is that, since it requires subsequent revisions, it is unable to judge newly created revisions.
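The following sketch illustrates Equation (2) under the simplifying assumption that \(d(r,r^{\prime})\) is a plain word-level Levenshtein distance; WikiTrust's actual distance function additionally accounts for displaced blocks of text.

```python
def edit_distance(x, y):
    """Word-level Levenshtein distance, a simplified stand-in for d(r, r')."""
    a, b = x.split(), y.split()
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete a word
                           cur[j - 1] + 1,              # insert a word
                           prev[j - 1] + (wa != wb)))   # replace a word
        prev = cur
    return prev[-1]

def edit_quality(a, b, c):
    """q(b | a, c) from Equation (2); assumes b actually differs from a."""
    return (edit_distance(a, c) - edit_distance(b, c)) / edit_distance(a, b)

before = "the cat sat on the mat"
edit = "the black cat sat on the mat"
after = "the black cat sat on a mat"
print(edit_quality(before, edit, after))   # +1.0: the inserted word survived
```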
WikiTrust only considers non-negative reputation values, and new users are assigned a reputation very close to 0; this ensures that vandals cannot whitewash themselves, since their new identities would have a reputation similar to that of their vandal identity. Also, due to the content-driven nature of the system, creating Sybils is harder than in a system where identities can simply be used to promote each other.
### _EigenTrust_
While the reputation systems listed so far possess many interesting properties, both of them require a centralised "oracle" which acts as an intermediary for all nodes in the network, aggregating and providing trust values when a node requests them. Such an oracle is antithetical to the design of a decentralised application.
EigenTrust [48] could be described as a distributed version of PageRank. The algorithm allows the calculation of a unique _global trust vector_ \(\overrightarrow{t}\), whose entry for each peer \(i\) reflects the experience of all peers in the network with peer \(i\). \(\overrightarrow{t}\) thus provides every peer with a trust value for each other peer in the network, which it can consult to establish how much another peer can be trusted, ensuring that it only conducts transactions with trustworthy peers. Further, the algorithm has mechanisms to ensure that a malicious group of cooperating peers cannot lie for their own benefit.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Reputation Mechanism** & **Trust Function** & **Decentralised** & **Sybil Tolerant** \\ \hline
PageRank & Sum of inbound links' PageRank divided by their outgoing link counts (Eq. 1) & \(\times\) & \(\times\) \\ \hline
WikiTrust & Content-driven edit quality \(q(b|a,c)\) (Eq. 2) & \(\times\) & Partial \\ \hline
EigenTrust & Left principal eigenvector of the normalised local trust matrix (Eq. 4) & \(\checkmark\) & Partial \\ \hline
HonestPeer & EigenTrust update with a dynamically selected honest peer (Eqs. 5-7) & \(\checkmark\) & Partial \\ \hline
BarterCast & Max-flow over the local interaction graph (Eq. 8) & \(\checkmark\) & \(\times\) \\ \hline
PeerTrust & Credibility- and context-weighted feedback (Eq. 9) & \(\checkmark\) & \(\times\) \\ \hline
MeritRank & Feedback graph with relative, transitivity and connectivity bounds (Eq. 10) & \(\checkmark\) & \(\checkmark\) \\ \hline
\end{tabular}
\end{table} TABLE I: Overview of mentioned Reputation Mechanisms
Similar to eBay's reputation system, the system requires each peer to rate another peer after conducting a transaction with them. This results in the creation of local trust values, where \(s_{ij}\) reflects how much peer \(i\) trusts peer \(j\) based on its transactions with them. It is suggested that one way of calculating \(s_{ij}\) is:
\[s_{ij}=sat(i,j)-unsat(i,j) \tag{3}\]
Where \(sat(i,j)\) and \(unsat(i,j)\) represent the number of satisfactory and unsatisfactory transactions that \(i\) had with \(j\), respectively. These localised trust values \(s_{ij}\) are further normalised to produce \(c_{ij}\), keeping the trust values between 0 and 1 and ensuring that malicious peers cannot simply assign arbitrarily high local trust values to other malicious peers and low values to honest peers, thereby abusing the system.
The local trust values are then aggregated in each peer to produce \(t\) as below:
\[t_{ik}=\sum_{j}c_{ij}c_{jk} \tag{4}\]
This is equivalent to \(i\) asking its acquaintances how much they trust their acquaintances. However, so far \(t\) only reflects the experience of \(i\) and its acquaintances. This process needs to be repeated in order to also reflect the experience of \(i\)'s acquaintances' acquaintances, and so on. The authors of the paper prove that the final trust vector \(\overrightarrow{t_{i}}\) will converge to the same vector \(\overrightarrow{t}\) for every peer \(i\) in the network, namely the left principal eigenvector of the matrix \([c_{ij}]\).
\(\overrightarrow{t}\) is calculated in a distributed manner. The authors show that in a network where the number of active peers is small, this can be done relatively efficiently, since each peer has a limited number of transactions. In a network with a large number of peers, the algorithm can still be performed efficiently by limiting the number of local trust values \(c_{ij}\) that each peer can report. Further, a decay factor \(a\) can be used to reduce the influence of malicious cooperating peers, ensuring that the same group of nodes vouching for each other does not weigh as heavily in the resulting trust values as a diverse group.
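To make the aggregation concrete, the sketch below simulates the computation centrally (the real algorithm runs it in a distributed fashion); the example matrix, the decay value and the variable names are illustrative assumptions of ours.

```python
import numpy as np

def eigentrust(S, pretrusted, a=0.2, iterations=30):
    """Compute the global trust vector t from raw local trust values.

    S[i, j] holds s_ij (e.g. sat - unsat); each row is normalised into c_ij,
    and t is iterated towards the left principal eigenvector of [c_ij],
    mixed with the pre-trusted distribution p through the decay factor a.
    """
    S = np.maximum(np.asarray(S, dtype=float), 0.0)
    C = S / S.sum(axis=1, keepdims=True)       # normalised local trust c_ij
    p = np.zeros(len(S))
    p[pretrusted] = 1.0 / len(pretrusted)      # prior trust on pre-trusted peers
    t = p.copy()
    for _ in range(iterations):
        t = (1 - a) * C.T @ t + a * p          # ask acquaintances, damped by a
    return t

# Three peers; peer 0 is pre-trusted, peer 2 has served few satisfactory transactions.
S = [[0, 8, 1],
     [6, 0, 1],
     [2, 2, 0]]
print(eigentrust(S, pretrusted=[0]))
```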
The main challenge facing the design of distributed reputation systems is the aggregation of local trust values into global trust values. EigenTrust translates the notion of _transitive trust_ (if a peer trusts another peer, it also trusts that peer's trusted network) into a distributed algorithm which can run efficiently without congesting the network.
While EigenTrust is a powerful method, it requires the presence of a prior notion of trust, i.e. a group of peers that are known to be trustworthy. The authors suggest that this could be the first few peers that join the network, since the designers and early users of a P2P network are less likely to want to cheat in a network that they helped create. However, this assumption is a significant disadvantage of this Reputation Mechanism.
### _HonestPeer_
EigenTrust's assumption of a static group of trusted peers marginalises other peers, resulting in them being ranked much lower despite them potentially being honest. It also leads to potential poisoning vulnerabilities since if a trusted peer downloads a poisoned file from a malicious peer, it could result in the network also downloading the file. Finally, relying on a selected group of nodes comes with a lot of the same problems as relying on a single central entity.
HonestPeer [49] is an enhanced version of EigenTrust which tackles this problem by giving peers with a high reputation value a role in calculating the global reputation of other peers. Hence, instead of solely relying on a static group of peers, the algorithm selects a group of highly trusted peers dynamically, making it more robust and less centralised. While several improvements have been suggested to the original EigenTrust algorithm, HonestPeer is notable because it is able to reduce the algorithm's dependency on pre-trusted peers without sacrificing the simplicity of the algorithm, an important requirement for effective cooperation strategies as seen in section II.
The implementation follows the same approach of calculating trust values for each peer as EigenTrust. The trust values calculated in each run are used to find an honest peer \(h\) for each node, where:
\[t_{h}^{k}=\max_{i}{(t_{i}^{k})} \tag{5}\]
This honest peer \(h\) is then used in calculating the proliferation parameter \(a\), where:
\[a=\begin{cases}t_{h}^{k}&\text{if }t_{h}^{k}>0.5\\ 1-t_{h}^{k}&\text{if }t_{h}^{k}\leq 0.5\end{cases} \tag{6}\]
Based on this, the current reputation of peer \(i\) is then calculated as:
\[t_{i}^{k+1}=\begin{cases}a\times p_{i}+(1-a)\times\sum_{x=1}^{x=n}c_{xi}t_{x}^ {k}&\text{if }h\in P\\ (1-a)\times p_{i}+a\times\sum_{x=1}^{x=n}c_{xi}t_{x}^{k}&\text{if }h\notin P \end{cases} \tag{7}\]
Where \(P\) is the group of pre-trusted peers. As a result of this modification, the influence of the pre-trusted peers is high if \(h\) is one of them; otherwise, their effect on the reputation is marginalised. Through simulation in a P2P file-sharing network, the paper's authors were able to demonstrate that HonestPeer reduced the percentage of invalid files and increased the success rate of good file downloads. Further, by making \(h\) dynamically replaceable, the algorithm also ends up more scalable, which was likewise demonstrated in simulation.
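A minimal sketch of one HonestPeer iteration following Equations (5)-(7) is given below; the example matrix, the starting vector and the number of iterations are our own illustrative choices.

```python
import numpy as np

def honestpeer_step(C, t, p, pretrusted):
    """One HonestPeer update; C[x, i] is c_xi, p the pre-trusted distribution."""
    h = int(np.argmax(t))                          # Eq. (5): most trusted peer
    a = t[h] if t[h] > 0.5 else 1.0 - t[h]         # Eq. (6): proliferation parameter
    aggregated = C.T @ t                           # sum over x of c_xi * t_x
    if h in pretrusted:                            # Eq. (7)
        return a * p + (1 - a) * aggregated
    return (1 - a) * p + a * aggregated

C = np.array([[0.0, 0.9, 0.1],
              [0.8, 0.0, 0.2],
              [0.5, 0.5, 0.0]])
p = np.array([1.0, 0.0, 0.0])                      # peer 0 is pre-trusted
t = np.full(3, 1.0 / 3.0)                          # start from uniform trust
for _ in range(20):
    t = honestpeer_step(C, t, p, pretrusted={0})
print(t)
```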
### _BarterCast_
BarterCast [50] is a lightweight, fully distributed reputation system that aims to prevent _lazy freeriding_ in P2P file-sharing systems. Lazy freeriding in this context is differentiated from _die-hard freeriding_, which consists of nodes employing sophisticated methods to subvert the reputation system. The authors of the paper argue that only a small fraction of users in any application actually use sophisticated measures and that it is therefore prudent to create an efficient reputation system that can at least prevent lazy freeriding.
To establish the local subjective reputation of peer \(j\) at node \(i\), \(i\) builds a local representation of the network, which is used to derive the reputation of \(j\) based on: a) the direct experience of \(i\) with \(j\), and b) information about \(j\) that \(i\) receives from other peers. The local representation of the network is constructed as a directed graph in which an edge between two nodes exists if there is a direct interaction between them. After the local graph has been constructed, a _maxflow_ algorithm is run on it: given a graph \(G\) whose capacity \(c(i,j)\) represents the _total number of bytes_ transferred from one peer to another, the flow \(f(i,j)\) is a measure of the "indirect service" received by \(i\) from \(j\) in the network. The subjective reputation \(R_{i}(j)\) of peer \(j\) at node \(i\) is then calculated as:
\[R_{i}(j)=\frac{\arctan(maxflow(j,i)-maxflow(i,j))}{\pi/2} \tag{8}\]
\(R_{i}(j)\) calculated this way has a value between -1 and 1. The _arctan_ function is used in the equation to ensure that even modest contributions by new peers can result in a significant reputation. In order to ensure practical efficiency, the implementation only regards paths with a maximum length of two. The paper's authors claim this to be a reasonable assumption given the _small-world effect_, where 98% of peers exchange data directly or with a common third party.
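The sketch below evaluates Equation (8) on a toy local view of the network, using the networkx library for the max-flow computation; the peers and byte counts are made up for illustration.

```python
import math
import networkx as nx

def bartercast_reputation(G, i, j):
    """Subjective reputation R_i(j) of peer j at node i, following Equation (8)."""
    up = nx.maximum_flow_value(G, j, i)     # indirect service i received from j
    down = nx.maximum_flow_value(G, i, j)   # indirect service j received from i
    return math.atan(up - down) / (math.pi / 2)

# Node i's local view: edge capacities are total bytes transferred.
G = nx.DiGraph()
G.add_edge("j", "k", capacity=500)   # j uploaded 500 bytes to k
G.add_edge("k", "i", capacity=400)   # k uploaded 400 bytes to i
G.add_edge("i", "j", capacity=50)    # i uploaded 50 bytes to j
print(bartercast_reputation(G, "i", "j"))   # close to +1: j is a net contributor
```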
The local representation of the network is built in two ways: a) using the private history of peer \(i\), where in an entry \((j,up,down)\), \(up\) is the number of bytes \(i\) has uploaded to \(j\) and \(down\) the number of bytes \(i\) has downloaded from \(j\); and b) using an exchange of private histories between peers through messages.
The reputation calculated using BarterCast can then be used to prioritise upload bandwidth to peers with a high reputation or to rate limit peers with a low reputation.
### _PeerTrust_
Like all the reputation systems mentioned above, PeerTrust [51] evaluates a node's trustworthiness by taking into consideration the feedback a peer has obtained from other peers. However, in addition to simply considering the feedback, PeerTrust also takes into account certain other factors:
1. **Credibility of Feedback** It may be possible for a peer to lie in their feedback due to malicious motives. Therefore, the credibility of a node is taken into account when deciding how much to value their feedback of another node.
2. **Transaction Context Factor** Not all transactions are equal. For example, in an application that involves transactions between users, transactions of greater economic value should influence a user's trust more than transactions of small value; otherwise a user could behave in a trustworthy manner in many small transactions, cheat in one large transaction, and still end up with a positive reputation. Beyond this, just because a node can provide good services in a certain context does not necessarily imply that it can provide comparable services in a completely different context; for example, a node that provides good information about tourism should not automatically be trusted to provide equally good medical advice. Therefore, the context of the transaction is made a factor when calculating trust.
3. **Community Context Factor** In order to deal with community specific issues and vulnerabilities such as lack of incentive to provide feedback and collaboration of malicious peers to manipulate their trust scores, community contexts are added as a factor when calculating trust.
Hence, the trust value \(T(u)\) of a node \(u\) is defined as:
\[T(u)=\alpha\times\sum_{i=1}^{I(u)}S(u,i)\times Cr(p(u,i))\times TF(u,i)+\beta \times CF(u) \tag{9}\]
Where,
* \(I(u)\) denotes the total number of transactions of \(u\) with all other nodes in a recent time window
* \(p(u,i)\) denotes the other node that participated in peer \(u\)'s \(i\)th transaction
* \(S(u,i)\) is the normalised amount of satisfaction that \(u\) received from node \(p(u,i)\) in the \(i\)th transaction
* \(Cr(v)\) is the credibility of the feedback submitted by \(v\)
* \(TF(u,i)\) is the adaptive transaction context factor for node \(u\)'s \(i\)th transaction
* \(CF(u)\) is the adaptive community context factor for node \(u\)
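Putting these factors together, Equation (9) can be illustrated with the sketch below; the weights \(\alpha\) and \(\beta\), the credibility values and the transaction history are illustrative assumptions of ours, not values from the paper.

```python
def peertrust(transactions, credibility, alpha=0.9, beta=0.1, community_factor=0.0):
    """Trust value T(u) of a single node u following Equation (9).

    `transactions` lists (satisfaction, counterparty, context_factor) tuples for
    u's recent transactions; `credibility` maps a counterparty v to Cr(v).
    """
    feedback = sum(s * credibility[v] * tf for s, v, tf in transactions)
    return alpha * feedback + beta * community_factor

# u had three recent transactions; the high-value one (TF = 1.0) went badly.
history = [(1.0, "a", 0.2), (1.0, "b", 0.3), (0.0, "c", 1.0)]
print(peertrust(history, credibility={"a": 0.9, "b": 0.7, "c": 0.8}))
```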
### _MeritRank_
_MeritRank_[52] uses a merit-based tokenomics model which aims to bound the benefits of Sybil attacks instead of preventing them altogether. The system is based on the assumption that peers observe and evaluate each other's contributions, similar to the reputation system used on eBay. Each peer's evaluation is stored in a personal ledger and modelled in a feedback graph, where the feedback to each user is modelled as a special token value that accumulates over time. It is also assumed that each peer is able to discover the feedback graph, for example through a gossip protocol. MeritRank achieves its Sybil tolerance by imposing the following constraints on how reputation can be gained inside the feedback graph:
1. **Relative Feedback** This constraint bounds how much feedback a single entity can provide to another entity by the degree of the entity, i.e. the size of the set of its neighbours. This constraint assists in limiting a single entity from creating multiple parallel Sybils.
2. **Transitivity \(\alpha\) decay** This constraint limits the ability of an entity to create a serial Sybil attack by terminating random walks in the feedback graph with a probability \(\alpha\).
3. **Connectivity \(\beta\) decay** Sybil attack edges in a feedback graph are often bridges, i.e. their removal creates two separate components. This constraint introduces a punishment for a node being in a separate component.
A trust graph modelled using MeritRank's constraints will satisfy:
\[\lim_{|S|\rightarrow\infty}\frac{w^{+}(\sigma_{s})}{w^{-}(\sigma_{s})}\leq c \tag{10}\]
where, \(w^{+}(\sigma_{s})\) is the profit gained by the Sybil Attack \(\sigma_{s}\), \(w^{-}(\sigma_{s})\) is the cost of the Sybil attack, \(S\) is the set of Sybils and \(c\) is some constant value such that \(c>0\). Thus MeritRank is able to provide a reputation system with feedback which is Sybil tolerant.
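The toy sketch below illustrates only the transitivity (\(\alpha\)) decay constraint: reputation is estimated by random walks over the feedback graph that terminate with probability \(\alpha\) at every hop, so identities reachable only through long serial-Sybil chains accumulate little reputation. It is a simplification of ours, not MeritRank's actual scoring algorithm.

```python
import random

def decayed_walk_scores(edges, seed, alpha=0.3, walks=20000):
    """Score nodes by random walks from `seed` that stop with probability alpha per hop.

    `edges` maps a node to the nodes it has given positive feedback to.
    """
    visits = {}
    for _ in range(walks):
        node = seed
        while True:
            successors = edges.get(node, [])
            if not successors or random.random() < alpha:
                break
            node = random.choice(successors)
            visits[node] = visits.get(node, 0) + 1
    total = sum(visits.values()) or 1
    return {n: v / total for n, v in sorted(visits.items())}

# "s1" and "s2" form a serial Sybil chain hanging off node "b".
feedback = {"seed": ["a", "b"], "a": ["b"], "b": ["s1"], "s1": ["s2"]}
print(decayed_walk_scores(feedback, "seed"))
```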
Table I summarises the reputation systems discussed so far. The survey of reputation systems above is not an exhaustive list; for a more comprehensive treatment of the subject, the reader is referred to the following surveys: [53, 54, 55, 56]. However, the reputation systems stated above provide a decent summary of how indirect reciprocity can be used to create trust in a decentralised network while also tackling many of the trust issues inherent to the decentralised setting.
While indirect reciprocity through reputation systems is a powerful tool for tackling the infrastructural threats in section V, such as Sybil attacks and whitewashing, reputation systems in and of themselves cannot serve as a "Universal Trust Machine" due to the following limitations:
1. **Not Privacy Preserving** While a lot of reputation systems presented so far may be confidentiality preserving, they are not privacy preserving, i.e. they do not prevent the discovery of users who contributed to a reputation rating. For example, if a node goes offline between two reputation queries, the difference in the aggregated reputation score across the two queries can reveal the user's contribution. [57]
2. **Do not provide a mechanism to carry out transactions** While reputation systems are great for applications such as P2P file exchange, enabling cooperation in other domains such as e-commerce and IoT requires mechanisms beyond simple provision of trust such as the ability to carry out transactions.
3. **Lack of Flexibility** Guaranteeing trust in the real-world is often not as simple as simply identifying trustable users, centralised institutions provide bespoke functionality such as financial contracts and escrows in order to enable cooperation between users who want to perform transactions with each other. In section VII, we show how Smart contracts can allow for such functionality in the distributed world.
4. **Requires all entities to remain online** Most of the decentralised solutions suggested above suffer from the problem of requiring trusted entities to always be online and connectable in order to enable network discovery and furthermore, have a valid address where they can be contacted. This is too large an assumption in a lot of domains such as IoT where nodes are constantly going offline.
5. **Do not solve social threats** It is important to note that reputation systems only tackle infrastructural threats. Due to their limited scope, they are unable to tackle the social threats listed in section V.
In the next section, we present Distributed Ledger Technology (DLT), a technology that offers a solution to a lot of the limitations listed above.
## VII Distributed Ledger Technology
Any technology that leverages a ledger in order to store data distributed across multiple nodes is referred to as a DLT. The recent rise in popularity of Bitcoin has led to the prominence of Blockchain technology and of multiple other technologies that leverage Blockchain, such as Ethereum, Hyperledger, Cardano etc. However, Blockchain is not the only DLT; some other examples of DLTs include the Tangle, Hashgraph and Sidechain. [58] presents an extensive comparison of these technologies.
At their core, DLTs are data structures that allow recording of transactions and functions for their manipulation. Generally, all DLTs are based on three well-known foundational technologies: [58]
1. **Public Key Cryptography** Since DLTs operate in insecure, distributed environments, Public Key Cryptography allows for the establishment of secure digital identities and communications between nodes. The digital identities also allow for the enforcement of ownership of resources in the network and thus helps facilitate transactions.
2. **Distributed Peer to Peer Network** Operating in a distributed network allows for a highly scalable network without a single point of failure (as in a centralised network).
3. **Consensus Mechanism** The presence of such a mechanism allows for all nodes in the network to converge on a single version of global truth without a trusted intermediary.
DLTs are similar to reputation systems in that often, their main goal is to allow interactions between users that do not trust each other without a trusted third party. [59] By design, DLTs offer a high degree of transparency, traceability and security which allows them to offer security, privacy and trustworthiness inside a diverse set of applications. Below is a survey of four DLT technologies which helps to demonstrate the benefits provided by them:
### _Bitcoin_
Bitcoin [22] is a cryptocurrency that leverages Blockchain technology to offer a distributed and immutable ledger that stores transaction history. The core technology consists of a linked list of blocks that are connected together, with each block referencing the previous block in the chain. Transactions
are continuously being appended in blocks to the chain and are visible to all participants in the network.
Bitcoin uses Proof of Work (PoW), a form of cryptographic proof, as its consensus mechanism; it allows a party to prove to other nodes in the network that a specific amount of computational power was expended. The nodes in the network are able to verify this expenditure with minimal effort. The purpose of using PoW as a consensus mechanism is to deter manipulation of data in the ledger by imposing an infeasible energy and hardware requirement for doing so, thus guaranteeing the security of the ledger and allowing nodes in the distributed network to conduct transactions with each other.
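As a toy illustration of the asymmetry PoW relies on, the sketch below searches for a nonce whose SHA-256 digest has a fixed number of leading zeros; real Bitcoin mining uses double-SHA-256 over block headers against a 256-bit target, so the details here are simplified.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce whose hash starts with `difficulty` hex zeros (costly)."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(block_data, nonce, difficulty=4):
    """Verification needs only a single hash evaluation (cheap)."""
    digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("alice pays bob 1 BTC")
print(nonce, digest, verify("alice pays bob 1 BTC", nonce))
```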
However, multiple papers have criticised the inefficiency of PoW and Bitcoin [60, 61], notably the requirement of expending enormous amounts of electricity through expensive mining equipment. Further, transactions on the network incur large fees, which have to be paid to miners, and long confirmation times, making Bitcoin a poor choice for applications that require a large number of transactions, quickly.
### _Ethereum_
Similar to Bitcoin, Ethereum [62] is based on Blockchain technology; however, as of 2022, Ethereum differs from Bitcoin in that it uses Proof of Stake (PoS) as a consensus mechanism instead of PoW. In PoS, the node with the highest stake, and not the highest computing power, obtains the right to book-keeping, where the stake is a reflection of a node's ownership of a specific amount of currency. [63] Therefore, it solves the problem of wasted computing power present in PoW to a certain extent and can reduce the time required to reach consensus. However, a drawback of a stake-based consensus mechanism is that it can result in centralisation in extreme cases.
Ethereum is also different from Bitcoin in that it is a programmable blockchain platform; using Smart Contracts, users on the Ethereum Platform can not only perform simple transactions, but can also create complex transactions.
The term _Smart Contract_ was coined by N. Szabo, who defined it as a "_computerized transaction protocol that executes the terms of a contract_" [64]. In Ethereum, a smart contract represents a deterministic, Turing-complete program which consists of a collection of code (functions) and data (state) that is deployed to the Ethereum network and runs as programmed. Smart Contracts allow users on the network to define complex rules for facilitating interactions. In Ethereum, Smart Contracts are programmed using the Solidity programming language.
However, it is worth noting that Smart Contracts in Ethereum suffer from certain limitations:
1. **Vulnerability Prone** Smart Contract code is prone to vulnerabilities which can be costly when exploited. In a famous example, $50 million was stolen from The DAO in an attack that exploited a reentrancy vulnerability. A trivial error is bound to be exploited by hackers, though this limitation is true for all open source code in general.
2. **Expensive** Code in a Smart Contract running on the Ethereum network is significantly costlier per instruction than the same code running on a typical cloud server.
Further, even though Ethereum uses PoS, it still suffers from high transaction fees similar to Bitcoin.
### _Iota_
IOTA is a cryptocurrency meant for the IoT industry. IOTA is different from the technologies listed so far in that it does not use Blockchain as its underlying ledger and instead uses the _Tangle_ [65], a directed acyclic graph (DAG), for storing transactions. The Tangle is both a decentralised data storage architecture and a consensus protocol, where each node in its DAG represents a transaction and the connections between nodes represent validations of transactions.
Unlike Bitcoin and Ethereum, IOTA removes the dichotomy between transaction miner and validator, instead requiring users who are adding transactions to the network to validate other transactions. Thus, in practice, in order to add a transaction to the network, a user needs to choose and validate two other transactions, using Hashcash as the PoW algorithm with a lowered difficulty. The rationale behind using PoW here is to prevent spam transactions. Through this mechanism, IOTA is able to reduce or eliminate transaction costs and allow a much higher transactions-per-second count than Bitcoin and Ethereum. Further, since it relies on network users to validate transactions, the validation time of submitted transactions decreases as the number of users in the network increases, rendering it much more scalable than both Bitcoin and Ethereum.
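The sketch below is a heavily simplified illustration of the approve-two-transactions-plus-lightweight-PoW pattern described above; real IOTA tip selection, transaction validation and the Hashcash parameters are considerably more involved.

```python
import hashlib
import random

def attach_transaction(tangle, data, difficulty=3):
    """Approve two tips and attach a new transaction after a small proof-of-work.

    `tangle` maps a transaction id to the list of transactions it approves.
    """
    approved = {p for parents in tangle.values() for p in parents}
    tips = [tx for tx in tangle if tx not in approved] or list(tangle)
    parents = random.sample(tips, 2) if len(tips) >= 2 else (tips * 2)[:2]
    nonce = 0
    while True:                                    # Hashcash-style spam deterrent
        tx_id = hashlib.sha256(f"{data}|{parents}|{nonce}".encode()).hexdigest()
        if tx_id.startswith("0" * difficulty):
            break
        nonce += 1
    tangle[tx_id] = parents
    return tx_id

tangle = {"genesis": []}
for i in range(5):
    attach_transaction(tangle, f"tx {i}")
print(len(tangle), "transactions in the tangle")
```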
However, as of the time of writing this paper, IOTA relies on a centralised coordinator to assist in adding transactions to the network and, further, it does not support smart contracts (though this is planned to change in a future release). Moreover, since the technology is relatively new, it is still experimental and may contain security threats. [66]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Year** & **DLT** & **Ledger** & **Low Fees** & **High Scalability** & **Smart Contracts** & **Sybil Resistance** \\ \hline
2008 & Bitcoin & Global Blockchain & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline
2014 & Ethereum & Global Blockchain & \(\times\) & \(\times\) & \(\checkmark\) & \(\times\) \\ \hline
2018 & IOTA & Tangle & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\times\) \\ \hline
2018 & TrustChain & Locally Stored Linked Blockchains & \(\checkmark\) & \(\checkmark\) & \(\times\) & \(\checkmark\) \\ \hline \end{tabular}
\end{table} TABLE II: Overview of mentioned DLTs
### _TrustChain_
A large drawback of the solutions listed so far is that none of them tackle the infrastructural threats presented in section V; most notably, they are not Sybil resistant. Hence, while they are useful in isolation for facilitating transactions, they cannot be used for the use case of creating trust. TrustChain [67] is unique in this sense, since it is a Sybil-resistant, scalable blockchain. TrustChain relies on _NetFlow_, a Reputation Mechanism, to calculate reputation using an interaction graph and the max-flow algorithm, thus allowing it to be Sybil resistant.
In TrustChain, each node is responsible for storing its own Blockchain and hence its own transaction history. In addition to containing a single transaction's data, each block in the Blockchain also contains two references: one to the last block in its creator's own Blockchain and another to the last block in the transacting party's Blockchain. While this mechanism does not prevent double spending, it allows its detection, since all blocks must have two outgoing and two incoming links. Further, the additional pointer to the counterparty's chain makes it hard to remove or alter blocks in one's chain, resulting in the Blockchains being tamper-proof. TrustChain blocks are also exchanged between nodes using gossiping mechanisms and hence replicated network-wide, allowing the network to be resilient against nodes going offline.
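A toy sketch of the double back-reference idea is shown below; the field names and hashing choices are ours for illustration and do not follow TrustChain's actual block format.

```python
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transaction, own_chain, partner_chain):
    """Build a block that points to both parties' latest blocks."""
    return {
        "transaction": transaction,
        "prev_own": block_hash(own_chain[-1]) if own_chain else None,
        "prev_partner": block_hash(partner_chain[-1]) if partner_chain else None,
    }

alice_chain, bob_chain = [], []
block = make_block({"from": "alice", "to": "bob", "mb": 100}, alice_chain, bob_chain)
alice_chain.append(block)
bob_chain.append(block)    # both parties record the agreed block on their own chains
print(block_hash(block))
```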
TrustChain's local storage also allows it to be highly scalable, since there is no global consensus mechanism; this also removes the need for transaction fees and leads to a large number of transactions per second.
However, the lack of a global Blockchain, and hence of a currency earned by validators in the network, could also be seen as a drawback, since a large reason for the growth of popular cryptocurrencies is that their users are invested in them and possess a vested interest in their success. Further, unlike Ethereum, TrustChain does not possess a mechanism for creating complex contracts and hence lacks flexibility in trust generation.
Table II provides an overview of the DLTs covered in this section. [57] provides an extensive survey of how DLTs like Blockchain are being used in Distributed Trust and Reputation Management systems.
## VIII Other Mechanisms
Besides direct reciprocity and indirect reciprocity, there are also other mechanisms that should be considered when understanding how cooperation could evolve in a decentralised network.
### _Network Reciprocity_
While the analysis so far relies on a well-mixed population, in reality the spatial structures of social connections are not well mixed; instead, certain groups interact with each other more often than others. In such a setting, network clusters of cooperators who help each other out may form, resulting in "Network Reciprocity", which is a generalisation of "Spatial Reciprocity". [16]
In their paper "The WebEngine - A Fully Integrated, Decentralised Web Search Engine", Mario M. Kubek and Herwig Unger [68] suggest the idea of constructing "content overlay networks". This involves creating social graphs with nearby and distant neighbours, where nearby neighbours are neighbours that share similar content.
### _Machine Learning based reputation systems_
[69] suggests a reputation system that utilizes SVMs in order to establish the trustworthiness of nodes in a decentralised network and demonstrates its effectiveness.
## IX Future Roadmap
Reputation systems are a powerful mechanism for preventing infrastructural threats to long term cooperation such as Sybil Attacks, however, in isolation, they possess multiple limitations that prevent them from providing a decentralised trusted ecosystem. These limitations allow Distributed Ledger Technologies to be a strong complement to reputation systems as seen in TrustChain.
However, none of the solutions in the existing literature for generating trust attempt to tackle Social Issues inherent in decentralised systems. While in a centralised system, it is possible to solve these issues in an ad-hoc manner, decentralised systems require all rules of the system to be explicitly stated upfront and hence a "Universal Trust Machine" would have to tackle these issues in order to create an ecosystem that can rival a centralised system.
## X Conclusion
This survey presents the progress of Web3 on its road towards creating a "Universal Trust Machine", a hypothetical ecosystem where decentralised trust is generated without the aid of any trusted third parties. In order to do so, we first motivate the problem of creating cooperation in a sea of adversaries by discussing Robert Axelrod's research on the Evolution of Cooperation. Next, we clarify terminology by presenting context on the concepts of _Decentralisation_ and _Web3_. We then present issues that such a machine would have to tackle in order to foster long term cooperation. Finally, we present contemporary technologies such as reputation systems and Distributed Ledger Technologies which in conjunction could be used to construct the desired machine.
|
2303.10923
|
Dynamic Object Removal for Effective Slam
|
This research paper focuses on the problem of dynamic objects and their
impact on effective motion planning and localization. The paper proposes a
two-step process to address this challenge, which involves finding the dynamic
objects in the scene using a Flow-based method and then using a deep Video
inpainting algorithm to remove them. The study aims to test the validity of
this approach by comparing it with baseline results using two state-of-the-art
SLAM algorithms, ORB-SLAM2 and LSD, and understanding the impact of dynamic
objects and the corresponding trade-offs. The proposed approach does not
require any significant modifications to the baseline SLAM algorithms, and
therefore, the computational effort required remains unchanged. The paper
presents a detailed analysis of the results obtained and concludes that the
proposed method is effective in removing dynamic objects from the scene,
leading to improved SLAM performance.
|
Phani Krishna Uppala, Abhishek Bamotra, Raj Kolamuri
|
2023-03-20T07:47:36Z
|
http://arxiv.org/abs/2303.10923v1
|
# Dynamic Object Removal for Effective SLAM
###### Abstract
This research paper focuses on the problem of dynamic objects and their impact on effective motion planning and localization. The paper proposes a two-step process to address this challenge, which involves finding the dynamic objects in the scene using a Flow-based method and then using a deep Video inpainting algorithm to remove them. The study aims to test the validity of this approach by comparing it with baseline results using two state-of-the-art SLAM algorithms, ORB-SLAM2 and LSD, and understanding the impact of dynamic objects and the corresponding trade-offs. The proposed approach does not require any significant modifications to the baseline SLAM algorithms, and therefore, the computational effort required remains unchanged. The paper presents a detailed analysis of the results obtained and concludes that the proposed method is effective in removing dynamic objects from the scene, leading to improved SLAM performance.
Phani Krishna Uppala, Abhishek Bamotra\({}^{\dagger}\), Raj Kolamuri\({}^{\dagger}\)
\({}^{\dagger}\)College of Engineering
SLAM, Image Inpainting, Dynamic Object detection.
## 1 Introduction
Since dynamic objects pose a challenge to effective motion planning and localization, we study the extent of this challenge and explore potential solutions. This solution works by removing the dynamic objects from the scene. Towards this goal, we follow a two-step process: (i) first, we find the dynamic objects in the scene using a flow-based method; (ii) following that, we use a deep video inpainting [1, 2, 3] algorithm to remove the dynamic actors and objects from the scene. To test this method's validity, we use two state-of-the-art SLAM algorithms, ORB-SLAM2 and LSD-SLAM. Comparing our approach with baseline results, we seek to understand the impact of dynamic objects and the corresponding trade-offs.
Our approach transforms the data without any significant modifications to the baseline SLAM algorithms; hence the computational effort required is unchanged. Deep learning based approaches have established new state of the art in various tasks [4, 5, 6], and we use these recent advancements in deep learning based network architectures in our pipeline.
## 2 Literature Review
### Inpainting
Inpainting has wide-ranging applications, the most well known among them being photo editing, re-targeting, object removal, etc. In recent years a wide variety of inpainting techniques have been explored. Generative approaches like deep image prior [7] use convolutional neural networks to reconstruct the image by masking out the inpainted part from the image. The feature reuse of the conv nets [8] forces the inpainted part of the image to be close to the rest of the original image. Inpainting approaches have also incorporated attention to improve the network's focus on the area to be inpainted [5, 9, 10]. Recently, Nvidia has offered inpainting as a web service in which non-technical users can directly upload an image and draw lines to mark the part to be inpainted. This clearly shows the production-level performance that can be achieved using state-of-the-art inpainting techniques. We want to exploit these developments in inpainting to study the effects of dynamic objects on localization and mapping problems.
Category-specific inpainting methods have also been developed [11]; these can be especially useful for the removal of vehicles from the image, as they form a majority of the dynamic objects.
As the datasets for SLAM may not contain the necessary supervision required for training an inpainting algorithm, we also researched the unsupervised approaches for inpainting [1]. These unsupervised approaches alleviate the requirement of training data and are usually trained in an adversarial manner.
Since video data has temporal information, using video rather than single images for inpainting provides a much richer set of cues, especially temporal cues that are completely missing in image-level inpainting [1]. These video approaches use flow-based information to recover the missing portions.
### Orb-Slam
ORB-SLAM [12] is considered a state-of-the-art algorithm in the field of visual SLAM using monocular cameras. It can handle all the major tasks of SLAM: tracking, mapping, re-localization, and loop closing. It runs in real-time, in small as well as large environments, and both in indoor and outdoor environments. The following are some of the distinctive
features of this approach:
* It uses ORB, a 256-bit feature descriptor, in its analysis. ORB strikes the right balance between real-time processing speed and the rotational invariance required in the subsequent parts of the analysis.
* It can process the data in real-time while accomplishing the tasks of tracking, local mapping, and loop closing in parallel.
* A covisibility graph is used to record, for pairs of observed frames, how many map points they share.
* A bag of words place recognition module is first created out of offline data and is used to perform loop detection and relocalization tasks.
* It also has the distinctive feature of automatic map initialization without human intervention by selecting one out of either planar homography or non-planar fundamental matrix calculation heuristically.
* In the thread of tracking, first ORB features are extracted, then the pose of the camera is calculated either from the previous frame using a motion model or using global relocalization techniques, and a local map is then constructed and revised if certain conditions are met.
* In the thread of local mapping, new keyframes are created and inserted, and map points are created by triangulation techniques and then added to the map after bundle adjustment optimization.
* In the thread of loop closing, the keyframes are put through several tests to check if they qualify for loop closing; similarity transformation is carried on the selected candidates to calculate the error being accumulated, and then loop fusion is carried out along with essential graph optimization technique to obtain better results.
* The proposed algorithm is evaluated on 27 sequences from the most popular datasets, and its effectiveness in regular SLAM tasks is proved.
### Orbslam2
ORB-SLAM2 [13] builds upon ORB-SLAM by introducing new functionalities and works with stereo and RGBD cameras. Several changes have been suggested to the original monocular-based ORB-SLAM to incorporate and exploit stereo/depth information.
* Monocular, close and far stereo keypoints: Initially, stereo and RGBD key points are considered and then classified into close and far keypoints based upon thresholding depth information.
* Close keypoints can be safely triangulated from one frame, and it takes multiple frames to triangulate far points. Monocular keypoints, however, are defined at the points where depth information cannot be obtained both from stereo or RGBD.
* This paper uses depth information from stereo or RGBD cameras to create a keyframe right from the first frame itself and use all stereo points to create an initial map.
* The obtained stereo keypoints are added in bundle adjustment for optimizing the camera pose in the tracking thread.
* During loop closing, unlike monocular slam, the scale is now observable in ORBSLAM 2, and this information can help in geometric validation and pose-graph optimization without dealing with scale-drift.
* During Keyframe Insertion, the information of distinction between close and far stereo points is used to introduce another new condition for keyframe insertion, which is basically keeping a threshold for insertion based on the number of close and far keypoints.
* A new kind of localization is introduced, which tried to find matches between ORB in the current frame and 3D points created in the previous frame from the stereo/depth information making the localization of camera robust to unmapped regions.
* The proposed algorithm delivers good results on 29 popular datasets with reduced translation RMSE. The authors also claim that this algorithm is the best SLAM solution for the KITTI visual odometry benchmark.
### Lsd-Slam
* LSD-SLAM[14] is another state-of-the-art algorithm in the stream of visual SLAM that also uses monocular cameras.
* It uses direct image intensities instead of feature descriptors to perform tracking and mapping operations.
* The camera's pose is estimated using direct image alignment, and semi-dense depth maps are used to estimate 3D geometry.
* A pose-graph of keyframes approach is used to build scale-drift corrected, large-scale maps and loop-closures.
## 3 Theory/method
We propose an integrated framework for the stated objective of dynamic object removal for effective SLAM. First, we generate masks of the dynamic moving objects in each frame using the method of Yang et al. [15]. We then use an inpainting algorithm to remove these dynamic objects from the scene. We test the effect of this removal by comparing the results obtained on standard SLAM systems, namely ORB-SLAM2 and LSD-SLAM, with and without dynamic objects.
### Dynamic Object Detection
As an alternative to ORB feature-based localization of dynamic objects, we used Unsupervised Moving Object Detection via Contextual Information Separation [15]. This is an optical flow-based method that works with non-static camera motion. When we ran the algorithm on the benchmarking dataset for video object segmentation, the DAVIS dataset [16], we were able to replicate the qualitative and quantitative results reported in [15]. As this is a recent CVPR '19 publication, we made changes to the official code and approach to get results on the KITTI dataset. These modifications were required for two main reasons. First, the data augmentations proposed by this approach are lacking for the KITTI dataset; we resolved this by adding our own custom implementation. Second, the official implementation assumes ground truth availability for the dataset, which is not the case for KITTI; hence, we reformulated the testing graph without the ground truth dependency by modifying the existing code. The results are provided in section 6.
Based on the results of the dynamic object detection algorithm, we took the generated masks for the KITTI dataset and performed inpainting using Deep Video Inpainting [1]. The results of the inpainting are shown in section 6 as well.
### Dynamic Object Removal: InPainting
As a way to remove the dynamic objects from the input sensor data, we did a thorough literature survey on various inpainting algorithms and presented them in section 2.1. Of these algorithms, we tested the two most promising approaches.
* We tried two approaches: 1) an image-based inpainting approach, 'Deep image prior', and 2) a video-based inpainting approach, 'Deep video inpainting'.
* Upon evaluation of these two algorithms, the time taken by deep image prior using a single 1080Ti GPU is considerably higher than that of deep video inpainting. Also, as the input modality of KITTI and TUM RGBD is video, we use deep video inpainting from here on.
* We independently tested our inpainting results on a standard inpainting benchmarking dataset(Davis) by inpainting over the segmentation masks.
* For dynamic object removal, we faced the technical challenge that ORB features are too sparse to localize the dynamic objects.
* We explored different approaches to resolve this and localize dynamic objects, and ultimately used Contextual Information Separation [15]. This approach is elaborated in the previous subsection.
### ORB SLAM 2 and LSD SLAM
We did an in-depth literature review of ORB-SLAM2 and LSD-SLAM; a brief overview of these papers is given in the Literature Review (section 2). We performed baseline implementations of both ORB-SLAM2 and LSD-SLAM and obtained visualizations for each. We ran ORB-SLAM2 on the TUM-RGBD and KITTI datasets and LSD-SLAM on the TUM-RGBD and TUM room sequence datasets.
## 4 Datasets
We performed our experiments on the KITTI [17] and TUM-RGBD [18] datasets. We used the same datasets and the same algorithmic settings for comparison across algorithms, thus performing controlled and fair experiments on all the datasets. The various datasets in our pipeline are described below.
### Davis 2016
Densely Annotated Video Segmentation (DAVIS) [19] is a video object segmentation dataset released as part of the DAVIS challenge. We used the most common variant, DAVIS 2016, for both our inpainting and dynamic object segmentation tasks. This public benchmarking dataset contains dynamic scenes with complex camera movements, making it realistic, and it provides rich annotation in the form of pixel-wise segmentation masks. Our trained inpainting models [1] used the aforementioned segmentation masks. Following [1], we used the annotations available in this dataset to independently test the inpainting model before using the results for SLAM. These results are reported in the results section. We followed the same approach of independent evaluation for dynamic object segmentation and present those results as well.
### KITTI dataset
The odometry benchmark from the KITTI dataset consists of 22 stereo sequences, saved in lossless png format. It provides 11 sequences (00-10) with ground truth trajectories, and we use those for our project. The data is collected from a car driven around a residential area, with accurate ground truth from GPS and a Velodyne laser scanner. This is a very challenging dataset for monocular vision due to fast rotations, areas with much foliage (which make data association more difficult), and relatively high car speed, with the sequences recorded at 10 fps. [17]
### Tum-Rgbd dataset
We also test our results with the same parameters on the TUM-RGBD data. The dataset contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory of the sensor. The data was recorded at sensor resolution (640x480) and at full frame rate (30 Hz). The ground-truth trajectory is provided with the dataset and was obtained from a high-accuracy motion-capture system with eight high-speed tracking cameras operating at high frequency (100 Hz). The data also comes with accelerometer data recorded by the Kinect sensor. [18]
## 5 Code: Technical implementation
Since the odometry split of the KITTI dataset is without segmentation, existing dynamic object detection approaches do not benchmark on this dataset [4], and the approach we used [4] is no exception. To work around this, we implemented our own dataloader in the format expected by [4] and combined it with the existing dynamic object detection code. Along with the above challenge, the KITTI odometry split does not have ground truth corresponding to dynamic objects. To address this, we re-implemented the testing graph by reusing parts of the existing code. The inpainting algorithm, deep video inpainting, uses a deep network for effective inpainting. Towards this code setup, we used GPU packages, including the CUDA toolkit, PyTorch, and gcc5, and used 4 Nvidia 1080Ti GPUs for inference. Similarly, our dynamic object segmentation approach is a neural network based model as well. Towards setting up this codebase, we installed GPU-compatible TensorFlow, Keras, the CUDA toolkit, and cuDNN, and used 2 Nvidia 1080Ti GPUs for inference.
## 6 Results
### Dynamic object detection
The results of the dynamic object detection algorithm are shown in Fig. 1. From the results, we can see that the algorithm, based on optical flow of features, achieves good results, as can be visually confirmed by the examples shown. However, there can be failure cases, as shown in the last example of Fig. 1. In that example, there is no dynamic object in the scene, yet the algorithm still produces a mask. Such issues are observed when the car is taking a turn or suddenly changes its direction.
### Inpainting
The inpainting result for the benchmarking DAVIS dataset is shown in Fig. 2; the algorithm mentioned before produces very good inpainting results. We applied the same learned model to the KITTI dataset to remove the dynamic objects detected by the algorithm and were able to achieve good object removal results. In some cases there are artifacts that result in distorted images, but most of the time it works well. This is because the original model we are using to perform the inpainting is trained on RGB images, whereas the KITTI odometry dataset we are using is grayscale. Some examples are shown in Fig. 3. Following previous inpainting work, we measured the performance on DAVIS using the FID score, an Inception-based score that measures how realistic the generated images are.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Network** & **FID Score** \\ \hline VINet (agg + T.C.) (in use) & **0.0046** \\ \hline \end{tabular}
\end{table}
Table 1: FID score for Davis dataset of 20 videos
Figure 1: Results for dynamic object detection on the Kitti dataset. Images on the left show the original images. Images on the right show the corresponding original images with detected mask overlay.
Figure 2: Inpainting result on the benchmarking video object segmentation data, Davis dataset. Fig. 2 A) Shows the original image from which the cyclist is removed. Fig. 2 B) Shows the mask for the object to remove. Fig. 2 C) Showcases the inpainting result after removing the object.
### Orb-Slam2
The results for ORB-SLAM2 on the KITTI grayscale odometry sequence 05 with loop closure are shown in Fig. 8. The result for TUM-RGBD frb1xyz using ORB-SLAM2 is shown in Fig. 9. Reported results are estimated using the code and resources provided through the official implementation. These results for both datasets look promising qualitatively as well.
### Lsd-Slam
After struggling with many issues caused by the ROS implementation, we were able to get results for LSD-SLAM on LSD-SLAM's TUM room sequence (Fig. 10) and also on TUM-RGBD frb1xyz (Fig. 11). However, we faced numerous compatibility issues when integrating our pipeline into LSD-SLAM, especially since the SLAM system needs older versions of ROS and Linux. This made it very challenging to work with and integrate. We hence made a trade-off to complete the major objectives of the project instead of spending weeks sorting out these issues.
Figure 4: Results for inpainting on the TUM-RGBD dataset using the mask generated from the dynamic object detection algorithm. The images in the first row show the original image with a red bounding box for the dynamic object to be removed. The images in the bottom row show the inpainted results for the corresponding images.
Figure 5: Absolute pose error color map KITTI 05 sequence without dynamic object for ORB-SLAM2
Figure 3: Results for inpainting on the KITTI dataset using the mask generated from the dynamic object detection algorithm. Images on the left show the original images with the green as mask for the dynamic object found in the frame. Images on the right show the corresponding inpainted version of the original images.
Figure 6: Absolute pose error with respect to the translation for KITTI 05 sequence without dynamic object for ORB-SLAM2
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Sequence** & **APE (Baseline)** & **RPE (Baseline)** & **APE** & **RPE** \\ \hline
00 & 2.669527 & 0.301004 & 2.706672 & 0.302460 \\ \hline
01 & 5.843165 & 0.763168 & 6.505854 & 0.795611 \\ \hline
02 & 1.856006 & 0.024530 & 2.445231 & 0.037193 \\ \hline
03 & 3.059412 & 0.529403 & 2.987005 & 0.528847 \\ \hline
04 & 2.370536 & 0.397730 & 2.360059 & 0.414609 \\ \hline
05 & 1.564191 & 0.016376 & 1.574255 & 0.018023 \\ \hline
06 & 1.981206 & 0.239740 & 2.000956 & 0.250131 \\ \hline
07 & 1.726958 & 0.030662 & 1.798538 & 0.037644 \\ \hline \end{tabular}
\end{table}
Table 2: RMSE Absolute Pose Error and Relative Pose Error for baseline (with dynamic objects) vs without dynamic objects for KITTI Sequences 00-07 with ORB-SLAM2
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Sequence** & **APE (Baseline)** & **RPE (Baseline)** & **APE** & **RPE** \\ \hline
Sitting\_Pry & 3.258594 & 0.012794 & 3.241519 & 0.010996 \\ \hline
Sitting\_Static & 3.477002 & 0.005230 & 3.477901 & 0.005188 \\ \hline
Sitting\_halfsphere & 2.693894 & 0.007857 & 2.910348 & 0.008991 \\ \hline
Walking\_halfsphere & 3.126076 & 0.025338 & 2.997350 & 0.022527 \\ \hline
Walking\_xyz & 3.685020 & 0.053205 & 3.490609 & 0.050013 \\ \hline
Walking\_pry & 3.536281 & 0.051050 & 3.486296 & 0.041311 \\ \hline \end{tabular}
\end{table}
Table 3: RMSE Absolute Pose Error (APE) and Relative Pose Error (RPE) for baseline (with dynamic objects) vs without dynamic objects for TUM-RGBD dataset with ORB-SLAM2
Figure 8: Result demonstrating loop closure for sequence 05 from Kitti dataset using ORB SLAM2.
Figure 7: Relative pose error with respect to the translation for KITTI 05 sequence without dynamic object for ORB-SLAM2
Figure 9: Resultant trajectory from the TUM-RGBD frb1xyz using ORB-SLAM2. Left image shows the camera image with orb features overlay and right image shows depth reconstruction and resultant trajectory.
### Error Metrics
Table 2 summarizes the error metrics, namely Absolute Pose Error (APE) and Relative Pose Error (RPE), with and without dynamic object removal on the KITTI dataset, for sequences 00-07. As can be seen from the table, removing dynamic objects did not change the error metrics significantly. We believe this is because the relative number of frames containing dynamic objects is much smaller than in the TUM-RGBD dataset; for example, the ratio is approximately 80:2800 frames, which is too low for our pipeline to have a noticeable effect. These errors are visualized in Fig. 5, 6 and 7.
Table 3 summarizes the same error metrics, Absolute Pose Error (APE) and Relative Pose Error (RPE), with and without dynamic object removal for the ORB-SLAM2 system on the TUM-RGBD dynamic sequences. It can be observed from the table that both APE and RPE generally improve. However, in cases like the Sitting_halfsphere sequence, the error increases. Inspecting the data, we found that this sequence shows a person sitting almost still (with only minute movements) while the camera moves vigorously. We believe this degraded the dynamic object detection, producing a noisy mask and resulting in a higher error.
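For clarity, a simplified, translation-only sketch of how these RMSE values can be computed is shown below; it assumes the estimated trajectory has already been aligned to the ground truth (e.g. with a Umeyama fit), and the full metrics additionally account for rotation.

```python
import numpy as np


def ape_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """RMSE of the Absolute Pose Error over translation.

    gt_xyz, est_xyz: (N, 3) ground-truth and aligned estimated positions.
    """
    err = np.linalg.norm(gt_xyz - est_xyz, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))


def rpe_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray, delta: int = 1) -> float:
    """RMSE of the Relative Pose Error over translation for a frame offset delta."""
    gt_rel = gt_xyz[delta:] - gt_xyz[:-delta]
    est_rel = est_xyz[delta:] - est_xyz[:-delta]
    err = np.linalg.norm(gt_rel - est_rel, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```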
### Results against Expectations
Initially, we planned to use ORB features for dynamic object detection and removal. However, these features proved difficult to work with because of their sparsity, so we explored optical-flow-based methods for dynamic object detection and removal (inpainting). We saw that removing dynamic objects from the scene yielded only minimal improvements in error metrics such as RMSE and RPE. We also observed that the proposed method is more effective on datasets that contain more dynamic objects. This is intuitive, because our method only removes dynamic objects and does not address other aspects of the scene.
## 7 Discussion
### Sync with timeline
As promised in our proposal, we were able to achieve the majority of the stated objectives. The specific objectives and the details of their implementation are listed below.
#### 7.1.1 ORB SLAM2 Implementation
We completed the baseline implementation of ORB-SLAM2 on the TUM-RGBD and KITTI datasets according to schedule.
#### 7.1.2 LSD SLAM Implementation
We completed the baseline implementation of LSD-SLAM on TUM-RGBD and the TUM room sequence, according to schedule.
#### 7.1.3 Dynamic Object Detection
Although we faced challenges with our initially proposed approach to dynamic object detection, we quickly explored other ways to localize dynamic objects. Through a thorough literature review of this subfield and a few trials, we identified a dynamic object detection method that best fits our task [15]. Using the method from Unsupervised Moving Object Detection via Contextual Information Separation [15], we were able to run our experiments and obtain both qualitative and quantitative results.
#### 7.1.4 Inpainting
We successfully tested two inpainting algorithms, Deep Image Prior [7] and Deep Video Inpainting [1]. We were also able to evaluate both qualitatively and quantitatively in combination with the dynamic object detection.
#### 7.1.5 Dynamic Object Removal on ORB SLAM2
We integrated the dynamic object removal into the ORB-SLAM2 system on various datasets and compared how the error metrics change when the dynamic objects are removed from the scene.
Figure 11: Result for LSD-SLAM on the TUM-RGBD frb1xyz sequence. The image on the right shows the generated point cloud and the trajectory in green.
Figure 10: Result for LSD-SLAM on room sequence from TUM data is shown. Image on the right shows the point cloud generated and the trajectory in green.
#### 7.1.6 Dynamic Object Removal on LSD SLAM
There were multiple challenges in implementing LSD-SLAM on recent versions of Ubuntu and ROS (LSD-SLAM being a ROS-dependent system). Even though we solved many of them, integrating our developed framework posed several complex challenges, and we decided not to pursue it further, weighing the time investment against reaching the project objectives.
### Technical Challenges
#### 7.2.1 ORB-SLAM2-Implementation
For ORB-SLAM2, we used the official implementation by the authors available on GitHub. We downloaded the source code, installed the prerequisite packages, namely Eigen3, OpenCV, Pangolin (used for visualization), DBoW2 and g2o (third party), and set up a C++11 or C++0x compiler. We faced some issues with aligned and usleep while building with CMake; to fix them, we added #include <unistd.h> to the System.h file under the include folder of the ORB-SLAM2 master directory. The packages then built successfully, and we were able to obtain our results, as shown in the Results section.
#### 7.2.2 LSD-SLAM-Implementation
The LSD-SLAM implementation provided by TUM is ROS-dependent and targets ROS Indigo or ROS Fuerte on older versions of Ubuntu such as 12.04 and 14.04, whereas all our systems run Ubuntu 18.04 and ROS Melodic. We first tried to install it on ROS Melodic itself with the rosmake build system, following guidelines from others on GitHub. We faced many dependency version issues, and even once the build started without errors, it kept compiling for nearly five hours without completing. We then tried another implementation of LSD-SLAM that does not use ROS, as well as a Docker-based implementation, but after following all the documented steps we got numerous errors during the make phase, many of them related to the g2o package; even after reinstalling g2o, multiple errors persisted. In the end, we had to install and set up ROS Indigo on a fresh Ubuntu 14.04 machine, on which we were finally able to run the LSD-SLAM implementation for two of the standard sequences. However, the issues lingered and multiplied when we tried to integrate our entire framework, so in the interest of time we focused on other facets of the project.
#### 7.2.3 Inpainting
Inpainting approaches take a binary mask of the region to be inpainted. Since dynamic objects need to be inpainted for our task, a binary mask marking the dynamic objects is required for each frame; using these per-frame masks, inpainting can be performed. However, no such mask data is available for dynamic objects in either the TUM-RGBD or the KITTI dataset. To resolve this, we explored approaches for localizing dynamic objects, obtained the masks, and then used them successfully for inpainting.
#### 7.2.4 Dynamic object detection
Initially, we planned to use optical flow and our ORB features to generate a mask for our inpainting algorithm. After implementing the baseline for ORB SLAM2, we realized that the ORB features are quite sparse on the dynamic objects. There was no method to triangulate or find a convex hull over these dynamic objects. Therefore, we had to explore some other approaches to deal with this problem.
We started with a literature review of existing approaches that localize dynamic objects. Most of these approaches assume a static camera or minimal camera motion, which is not the case for the datasets we are dealing with. To overcome this, we used a recent CVPR'19 work targeted at video object segmentation and repurposed it to solve our problem.
We could not perform a quantitative validation of the dynamic object detection, as ground-truth labels are not available; instead, we validated the results by human visual inspection.
## 8 Conclusion
We integrated dynamic object removal from the scene into ORB-SLAM2 on the TUM-RGBD and KITTI datasets. We showed how the evaluation parameters RMSE and Relative Pose Error (RPE) are affected by dynamic object removal. However, due to the multiple technical challenges involved, we could not fully integrate dynamic object removal into the LSD-SLAM system. Overall, we observed that removing dynamic objects helps, and that sequences with a higher number of dynamic objects are more sensitive to their removal than those with fewer objects. We also observed that dynamic object removal yielded only minimal improvement in the error metrics, although it did not degrade them.
One possible extension of our work is to examine specifically how the separate tracking, mapping, and localization threads are affected by dynamic object removal. Our approach could also be tested on other state-of-the-art SLAM algorithms, especially those that use dense features; it would be interesting to see how the various error metrics are affected when dynamic objects are removed.
|
2310.19181
|
From Chatbots to PhishBots? -- Preventing Phishing scams created using
ChatGPT, Google Bard and Claude
|
The advanced capabilities of Large Language Models (LLMs) have made them
invaluable across various applications, from conversational agents and content
creation to data analysis, research, and innovation. However, their
effectiveness and accessibility also render them susceptible to abuse for
generating malicious content, including phishing attacks. This study explores
the potential of using four popular commercially available LLMs, i.e., ChatGPT
(GPT 3.5 Turbo), GPT 4, Claude, and Bard, to generate functional phishing
attacks using a series of malicious prompts. We discover that these LLMs can
generate both phishing websites and emails that can convincingly imitate
well-known brands and also deploy a range of evasive tactics that are used to
elude detection mechanisms employed by anti-phishing systems. These attacks can
be generated using unmodified or "vanilla" versions of these LLMs without
requiring any prior adversarial exploits such as jailbreaking. We evaluate the
performance of the LLMs towards generating these attacks and find that they can
also be utilized to create malicious prompts that, in turn, can be fed back to
the model to generate phishing scams - thus massively reducing the
prompt-engineering effort required by attackers to scale these threats. As a
countermeasure, we build a BERT-based automated detection tool that can be used
for the early detection of malicious prompts to prevent LLMs from generating
phishing content. Our model is transferable across all four commercial LLMs,
attaining an average accuracy of 96% for phishing website prompts and 94% for
phishing email prompts. We also disclose the vulnerabilities to the concerned
LLMs, with Google acknowledging it as a severe issue. Our detection model is
available for use at Hugging Face, as well as a ChatGPT Actions plugin.
|
Sayak Saha Roy, Poojitha Thota, Krishna Vamsi Naragam, Shirin Nilizadeh
|
2023-10-29T22:52:40Z
|
http://arxiv.org/abs/2310.19181v2
|
From Chatbots to PhishBots? - Preventing Phishing scams created using ChatGPT, Google Bard and Claude
###### Abstract
The advanced capabilities of Large Language Models (LLMs) have made them invaluable across various applications, from conversational agents and content creation to data analysis, research, and innovation. However, their effectiveness and accessibility also render them susceptible to abuse for generating malicious content, including phishing attacks. This study explores the potential of using four popular commercially available LLMs - ChatGPT (GPT 3.5 Turbo), GPT 4, Claude and Bard to generate functional phishing attacks using a series of malicious prompts. We discover that these LLMs can generate both phishing emails and websites that can convincingly imitate well-known brands, and also deploy a range of evasive tactics for the latter to elude detection mechanisms employed by anti-phishing systems. Notably, these attacks can be generated using unmodified, or "vanilla," versions of these LLMs, without requiring any prior adversarial exploits such as jailbreaking. As a countermeasure, we build a BERT based automated detection tool that can be used for the early detection of malicious prompts to prevent LLMs from generating phishing content attaining an accuracy of 97% for phishing website prompts, and 94% for phishing email prompts.
## 1 Introduction
In recent years, Large Language Models (LLMs) have heralded a transformative era in natural language processing, effortlessly producing responses that closely emulate human-like conversation across an increasingly diverse array of subjects. LLMs have been utilized for various applications such as content creation for marketing [94], troubleshooting in software development [49], and providing resources for digital learning [17, 86], to name a few.
The vast utility of LLMs has also caught the attention of malicious actors aiming to exploit their capabilities for social engineering scams, including phishing attacks. While these models are designed with safeguards to identify and reject potentially harmful or misleading prompts [69, 76], some attackers have skillfully bypassed these protective measures. This has led to the generation of malevolent content, including deceptive emails [54, 57, 47], fraudulent investment and romantic schemes [98], and even malware creations [25, 92]. Moreover, underground hacker forums are rife with discussions centered around manipulating LLMs for more advanced malicious endeavors [12], thus further encouraging newer attackers to adopt LLMs for their purposes.
Although open-source LLMs can be modified to produce malicious content, deploying local models demands significant hardware, time, and technical expertise [58]. In contrast, commercially available LLMs like ChatGPT, Claude, and Bard are readily accessible to the public at no cost. These models are not only more convenient to access, but they're also backed by superior architectures that are proprietary [33] and/or too resource-intensive for an individual to operate locally. In this landscape, our work aims to explore the extent to which commercially available LLMs can be leveraged for generating phishing attacks. Phishing attacks, once created, are disseminated widely through several online channels, with email being the most common form of transmission [93]. Attackers craft emails that imitate a popular organization or familiar personality, with attempts to incentivize or intimidate the potential victim into clicking on a website link [20, 35, 37]. The link, which is the phishing website, is used as a medium to collect sensitive information (such as bank details, account credentials, and Social Security numbers) from the victim, which is then transmitted back to the attacker, who can then utilize it for nefarious purposes [11]. The potential damage of phishing attacks is enormous, with reported financial losses of $52 million during the last year alone [34]. As a countermeasure, _anti-phishing measures_ - both commercial solutions [62, 6] and open-source implementations [80, 4] - continuously strive to take these attacks down quickly [73]. However, attackers constantly innovate, employing various techniques to evade detection [73, 108], enabling attacks to remain active [75].
Creating phishing attacks demands both effort and advanced technical knowledge [10]. Over time, savvy users have
learned to recognize telltale signs of fake emails and websites, including grammar errors, subpar design, and shoddy execution [8]. To circumvent these telltale signs, attackers employ phishing kits [95]--automated tools that craft these malicious attacks with little manual intervention. Anti-phishing strategies often zero in on these kits because detecting one can dismantle numerous attacks stemming from the same source [16, 43, 74]. However, Large Language Models (LLMs) present an innovative alternative, leveraging natural language processing. LLMs have already demonstrated prowess in generating source code across various programming languages [109, 64]. This means attackers could potentially prompt LLMs to craft phishing websites and emails and then use this content to orchestrate and unleash their attacks.
The paper is structured as follows: In Section 3.1, we start by determining the general threat model for generating phishing attacks using commercial LLMs. We then focus on generating phishing websites using LLMs in Section 4.2. Recognizing that these tools are adept at denying prompts with overt malicious intent, we craft a framework that provides multiple seemingly benign prompt sentences, either combined as a single prompt or given sequentially; together, the final output of these prompts can result in creating phishing websites. In Section 4.3 and Section 4.4, we test the capabilities of the LLMs at generating both regular and seven widely recognized evasive phishing attack vectors by manually designing malicious prompts. In Section 4.5, we investigate the recursive nature of LLMs in generating phishing content, illustrating how they can be repurposed to churn out an increasing array of phishing prompts. In a cyclic manner, feeding these prompts back into the LLM results in generating the source code of the phishing website. We assess the utility of these automated prompts in creating convincing phishing websites across all LLMs, judging them on both appearance and functionality.
We then shift our attention to generating phishing emails using these LLMs in Section 5. Using the recursive nature of LLMs for generating prompts, as described above, we generate prompts inspired by live phishing emails sourced from APWG eCrimeX [14]. In a manner akin to our analysis of phishing websites, we also compare the proficiency of the LLMs in generating phishing emails using several text generation metrics in Section 5.1. Finally, in Section 6 we design a machine learning model that can be used to detect malicious prompts in real time, thus preventing the LLMs from generating such phishing content. We primarily focus on the early detection of phishing prompts, so that the LLM can stop the user from providing further prompts once phishing intention is detected.
The primary contributions of our work are:
1. We evaluate and compare how ChatGPT 3.5 Turbo, GPT 4, Claude, and Bard can be leveraged to produce both conventional and evasive phishing attacks, including large-scale phishing email campaigns. Our investigation reveals the potential for attackers to manipulate prompts which not only allows evasion of the content moderation mechanisms of these tools but also enables the LLMs to generate malicious prompts. These prompts can then be further exploited to create phishing attacks that are not only visually and functionally convincing but also as resistant to anti-phishing detection measures as those crafted by humans.
2. We curate the first dataset of malicious prompts specifically crafted to produce phishing websites and emails using Large Language Models. This includes 1,255 individual phishing-related prompts, which cover regular as well as seven evasive phishing strategies and 2,109 phishing email prompts.
3. We design a machine-learning model aimed at the early detection of phishing website and email prompts to deter the LLM from generating malicious content. Our model achieves an accuracy of 96% for phishing website prompt detection and 94% for phishing email detection.
4. Our model can be tested on Hugging Face at [https://huggingface.co/phishbot/Istitphish](https://huggingface.co/phishbot/Istitphish), where users can try out different prompts to check whether they could be used to create phishing websites or emails with commercial large language models.
## 2 Related work
**Applications of Commercial LLMs discussed in Research:** LLMs have been widely used across different disciplines. Several studies have delved into ChatGPT's content moderation capabilities, e.g., for subtle hate speech detection across different languages [27], for discerning genuine news from misinformation [21], and for responding to common health myths, such as those surrounding vaccinations [29]. In addition to ChatGPT, other commercial LLMs like Claude [13], LLaMA [97], and Bard [40] have emerged, and these models have been utilized and evaluated for their suitability across different domains. Recent works like ChatDoctor [107] and PMC-LLaMA [105] fine-tuned LLaMA on real-world patient-doctor interactions to improve the model's ability to understand patient inquiries and provide effective advice. LLMs have also been evaluated for software testing, e.g., for predicting code coverage without execution [99].
**Misuse of Large Language Models:** Despite the innovations and benefits of commercial LLMs, there are significant concerns surrounding their misuse. Specifically, ChatGPT has been misused to produce malicious content with jailbreaking prompt attacks [60][91]. Prompt Injection is another type of attack which seems to be prevalent with ChatGPT [66], which can lead to full compromise of the model [41]. Other types
of prompt injection include code injection, which uses instruction following capability of an LLM like ChatGPT [51]. Investigations by Gupta et al. [42] and Derner et al. [30] have unveiled vulnerabilities in ChatGPT that can be harnessed to generate malware. Another study [28] emphasizes ChatGPT's potential role in propagating misinformation, leading to the alarming rise of an "AI-driven infodemic." Our work focuses on generating phishing scams, not only using ChatGPT but also three other popular commercial LLMs.
**Detection of Phishing Attacks:** Over the years, many researchers have focused on devising effective strategies to understand and counteract phishing attacks. Initially, traditional machine learning algorithms laid the groundwork for detecting these attacks, e.g., by extracting TF-IDF features from text and training a random forest classifier [22, 46]. Recent works treat phishing email and spam detection as a text classification task and utilize pre-trained language models, such as BERT [32], to detect phishing emails [82, 55] and spam [88, 81]. Some works also showed that BERT and its variants like DistilBERT [90] and RoBERTa [67] can be fine-tuned on an SMS spam dataset and perform well at detecting SMS spam. A couple of works have also utilized pre-trained language models for detecting phishing websites from their URLs [103, 44]. However, our approach focuses on a more preventive strategy: instead of concentrating on detecting malicious content after its generation, our main objective is to obstruct the generation of harmful code by the LLMs. We aim to examine and filter the prompts, hindering the creation of malicious content before it starts.
## 3 Methodology
### Threat model
Our threat model for attackers generating phishing scams using commercial LLMs is illustrated in Figure 1. Attackers utilize commercially available LLMs by submitting multiple prompts to craft a comprehensive phishing attack comprised of a phishing email and its corresponding phishing website. The phishing email's aim is to impersonate a reputable brand or organization while also devising text that, through prevalent phishing strategies like generating confusion or urgency, persuades users to engage with an external link. Concurrently, the associated phishing website is conceptualized to achieve several objectives. Firstly, it aims to closely mimic the aesthetic and functional elements of a well-recognized organization's platform. Secondly, it utilizes regular and evasive tactics to deceive users into sharing sensitive information. Lastly, it integrates mechanisms that ensure the seamless transmission of collected data back to the attacker. After the LLM generates the phishing content, the attacker hosts the phishing site on a chosen domain, embeds the site's link within the phishing email, and then shares the deceptive email with their targets.
The adoption of LLMs to create these phishing scams presents attackers with a slew of advantages. LLMs not only allow for the rapid and large-scale generation of phishing content, but also their user-friendly nature ensures accessibility to a wide range of attackers, irrespective of their technical prowess. This inclusivity enables even the less tech-savvy to employ intricate evasion methods, such as text encoding, browser fingerprinting, and clickjacking.
### Prompt design and replication
Asking these commercial LLMs to directly generate a phishing attack, or using any similar language indicating malicious intention, triggers a content filter warning, as illustrated in Figure 2. Thus, to subvert this for phishing website generation, we show that it is possible for attackers to design prompts that subtly instruct the model to produce _seemingly benign functional objects_ containing the source code (HTML, CSS, JS scripts) for regular and _seven_ evasive phishing attacks. When assembled, these objects can seamlessly constitute a phishing attack, concealing the underlying malicious intent. Manually designing such prompts can be a meticulous and time-consuming process, thereby necessitating an investigation into how attackers can exploit these models to manufacture prompts efficiently. We find that manually crafted prompts can subsequently be fed into the models to create more such prompts automatically. For phishing emails, on the other hand, we utilize a sample of phishing emails from APWG's eCrimeX database [14], asking the model to generate prompts that can be used to generate the same emails.
### Effectiveness of generated content
We explored the proficiency of commercial LLMs in generating both phishing websites and emails. To assess phishing websites, we began with a brief case study on the effort necessary to craft prompts manually. These prompts are designed to guide each of the four commercial LLMs in producing functional phishing websites with their respective attack vectors. While manual prompt generation is insightful, the potential for scalable attacks hinges on automatically created prompts. Thus, we conducted a qualitative evaluation of the quality of websites produced by such automated prompts. To further gauge the efficacy of these LLM-generated attacks, we contrasted the reactions of popular anti-phishing blocklists to traditional phishing attacks and those generated by LLMs, focusing on coverage and detection speed. For assessing phishing emails, we employed four text generation metrics: BLEU, Rouge, Topic Coherence, and Perplexity. Using these metrics, we compared the email text generated by each commercial LLM model to the original human-crafted versions.
### Automated detection of phishing prompts
After assessing the potential exploitation of commercial LLMs in generating phishing scams at scale, we designed a machine learning-based detection model to prevent LLMs from producing such malicious content. To build our ground truth, we manually labeled prompts associated with phishing website generation. To explore the best detection method, we tested our finetuned model using three different approaches: individual prompt detection, entire collection detection, and prompt subsets detection. In all these approaches, we finetuned a pre-trained RoBERTa [67] using a groundtruth dataset with individual prompts and tested its capability across individual prompts, entire collections, and prompt subsets. For phishing email detection, we combined malicious emails from eCrimeX [14] with benign samples from the Enron dataset [56].
## 4 Generation of phishing websites
This section identifies how commercial LLMs can be used to generate both regular and evasive phishing websites. These attacks, as described in Table 1, range from both client-side and server-side attacks and those obfuscating content from the users' perspective, as well as automated anti-phishing crawlers. The motivation behind implementing these attacks is to cover a diverse range of phishing websites that have been detected and studied in the literature. By investigating the capability of the LLMs to generate these attacks, we aim to demonstrate its potential impact on the security landscape and raise awareness among security researchers and practitioners.
### Choosing the attacks
To offer an expansive exploration into the potential of Large Language Models (LLMs) in generating phishing threats, we meticulously selected a diverse range of phishing attack types that range from both client-side and server-side attacks and those obfuscating content from the perspective of users as well as automated anti-phishing crawlers. Table 1 illustrates the eight phishing attacks covered in this study.
### Structure of the prompts
As illustrated in Figure 2, commercial LLMs refuse to comply when directly asked to generate a phishing attack due to its built-in abuse detection model. Our goal is to identify how an attacker can engineer prompts so that they do not indicate malicious intention, allowing the LLM to generate functional components that can be assembled to create phishing websites. Our prompts have four primary _functional components:_
**Design object:** Firstly, the LLM is asked to create a design that was _inspired_ by a targeted organization (instead of imitating it). LLMs can create design style sheets that are very similar to the target website, often using external design frameworks to add additional functionality (such as making the site responsive [7] using frameworks such as Bootstrap [19] and Foundation [38]). Website layout assets such as icons and images are also automatically linked from external resources.
**Credential-stealing object:** Emulation of the website design can be followed by generating relevant credential-taking objects such as input fields, login buttons, input forms, etc.
**Exploit generation object:** The LLM can be asked to implement a functionality based on the evasive exploit. For example, for a Text encoding exploit [39, 101], the prompt asks to encode all readable website code in ASCII. For a reCAPTCHA exploit, the prompt can ask to create a multi-stage attack, where the first page contains the reCAPTCHA challenge, which leads to the second page, which contains the credential-taking objects.
**Credential transfer object**: Finally, the LLM can be asked to create essential JS functions or PHP scripts to send the credentials entered on the phishing websites to the attacker
Figure 1: Threat model to generate phishing scams using commercial LLMs
Figure 2: Claude refuses to generate output for a prompt implying phishing intention
by using email, sending it to an attacker-owned remote server or storing it in a back-end database.
These _functional_ instructions can be written together as a single prompt or as a sequence of prompts - one after the other. Using this method, we show that an attacker is able to successfully generate both regular and evasive phishing attacks. The prompts are brand-agnostic, i.e., they can be used to target any brand or organization. Figure 3 illustrates this framework can be utilized to generate the phishing website.
### Constructing the prompts
We examined the number of iterative prompts required by three independent coders (two graduate students and one undergraduate student in Computer Science) to create each of the phishing attacks described in Table 1. The coders possessed varying levels of technical proficiency in Computer Security: Coder 1 specialized in the field, Coder 2 had good experience, and Coder 3 had some familiarity through academic coursework. Table 2 presents the average number of prompts required across the three coders to generate the phishing functionality (attacks) across all four LLM models. Each coder created their own set of prompts for designing the website layout and for transmitting the stolen credentials back to the attacker, which they reused for multiple attacks.
\begin{table}
\begin{tabular}{c|l|l} \hline
**Attack No.** & **Attack Type** & **Attack Description** \\ \hline
1 & Regular phishing attacks & Phishing attacks that incorporate login fields directly within the websites to steal users’ credentials. [9, 11, 100, 17] \\ \hline
2 & ReCAPTCHA attacks & An attack that presents a fake login page with a reCAPTCHA challenge to capture credentials [18, 31, 52, 53, 72, 83] \\ \hline
3 & QR Code attacks & An attacker shares a website containing a QR code that leads to a phishing website [50, 70, 87, 102] \\ \hline
4 & Browser-in-the-Browser attacks & A deceptive pop-up mimics a web browser inside the actual browser to obtain sensitive user data [71]. \\ \hline
5 & iFrame injection/Clickjacking & Attackers use iFrames to load a malicious website inside a legitimate one [15, 85, 96]. \\ \hline
6 & Exploiting DOM classifiers & Phishing websites designed to avoid detection by specific anti-phishing classifiers [61]. \\ \hline
7 & Polymorphic URL & Attacks that generate a new URL each time the website is accessed [24, 59]. \\ \hline
8 & Text encoding exploit & Text in the credential fields is encoded such that it is not recognizable from the website’s source code [101, 39]. \\ \hline \end{tabular}
\end{table}
Table 1: Summary of Phishing Attack Types
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline
**Attacks** & **GPT 3.5** & **GPT 4** & **Claude** & **Bard** \\ \hline
Design & 9 & 8.33 & 8 & 9 \\
Credential transfer & +2 & +1.33 & +2 & +4 \\
Captcha phishing & +3 & +2.33 & +2 & +5 \\
QR Code phishing & +3 & +2 & +3 & +6 \\
Browser fingerprinting & +2 & +1.33 & +2 & +5 \\
DOM Features & +4 & +3.33 & +4 & +7 \\
Clickjacking & +5 & +4 & +5 & +8 \\
Browser-in-the-Browser & +6 & +5.67 & +6 & +9 \\
Punycode & +2 & +1.67 & +2 & +4 \\
Polymorphic URLs & +3 & +2.33 & +3 & +5 \\ \hline \end{tabular}
\end{table}
Table 2: Average prompts required by the coders to generate phishing attacks using different commercial LLM models.
Figure 3: Breaking down the prompt into _functional objects_ to trick LLMs into generating the attack
### Observation of prompt generated attacks
The models were able to generate all phishing attacks successfully, albeit with varying degrees of effort required on a model-to-model basis. **Regular phishing attacks** could be created, which comprised both designing the layout and generating the credential-stealing and credential-transfer objects. For the former, both GPT 3.5 and Bard required an average of 9 prompts, while GPT-4 and Claude required 8.33 and 8 prompts, respectively; Bard required the most prompts (4) for generating credential-transfer objects.
For **ReCAPTCHA evasive attacks**, the models were able to generate a benign webpage featuring a ReCAPTCHA challenge that would lead to another regular phishing website. Claude outperformed the other models in this area, requiring only two additional prompts on top of the regular phishing design, beating out GPT 4 (2.33) and GPT 3.5T (3). Bard, on the other hand, required five additional prompts. The situation was similar to **QR Code phishing attacks**. All models generated a QR code that embedded the URL for a regular phishing attack via the QRServer API. These attacks pose a challenge for anti-phishing crawlers since the malicious URL is hidden within the QR code [50, 102, 70]. Figure 4 illustrates an example of Claude generating a QR-Code phishing attack.
**Browser-in-the-Browser attacks (BiTB)** could be emulated by exploiting single sign-on (SSO) systems and creating deceptive pop-ups that mimic genuine web browser windows. All models notably struggled with this attack, averaging nearly six prompts for GPT 4, GPT 3.5T, and Claude, while Bard required an average of 9 additional prompts to be able to construct the attack. This trend was further identified for **click-jacking attacks** as well. An example of GPT 4 generating a BiTB attack is illustrated in Figure 5. However, all models ensured that the iFrame object adhered to the same-origin policy to avoid triggering anti-cross-site scripting measures.
For **attacks that exploited Document Object Model (DOM) classifiers**, specifically those that can circumvent features evaluated by Google's phishing page filter, Bard again underperformed compared to the other models, requiring up to seven additional prompts. The models had a comparatively easier time with **Polymorphic URLs**, which use server-side PHP scripts to append random strings at the end of the URL. Additionally, text encoding exploits were carried out by obfuscating text in the source code, making it difficult for text-detection algorithms to identify malicious intentions. Lastly, we created **browser fingerprinting attacks** that only render the phishing page for users visiting through specific agents or IP ranges, thereby evading detection by anti-phishing bots. Figure 6 provides a snippet of a browser fingerprinting attack generated by Bard [3]. In our assessment, the three coders demonstrated comparable effort levels in creating phishing websites across all models, with Bard standing out as an exception, where coders had to intervene more often to generate the attack. Although the capability of all models to generate such attacks does not directly speak to the quality of the individual attacks (which we explore later), it underscores the potential exploitability of these LLMs in phishing website creation. We also found that all coders, regardless of their expertise in Computer Security, demonstrated similar performance when generating exploit prompts. This observation may suggest that crafting phishing attacks using ChatGPT does not necessitate extensive security knowledge, although it is important to note that all coders were technically proficient. Since prompt creation can be labor-intensive, we further explore the feasibility of leveraging the LLM to produce prompts, aiming to streamline the process autonomously.
### Automating prompt generation
As evident from Table 2, the majority of the prompts generated for a particular attack were dedicated to designing the layout of the phishing websites. However, manually designing these prompts can be time-consuming. As shown in Figure 7, we found that attackers can instead input their handcrafted prompts into the LLMs and ask the LLMs to generate similar kinds of prompts; the LLMs can then rapidly generate an extensive array of prompts. Subsequently, these
Figure 4: Initial landing page generated by Claude which contains a QR code created automatically using _QRServer API_. Scanning the QR code leads to a different AT&T phishing page (Also designed by Claude).
Figure 5: An example of a Browser in the Browser attack generated by GPT 4. Here clicking on the ‘Login with Amazon’ button leads to the rogue popup imitating the design and URL of the real Amazon login page.
prompts, when reintroduced to the LLM, can produce the corresponding phishing attack source code.
**Evaluating effectiveness of LLM generated phishing websites:** To assess the capabilities of the commercial LLMs in creating phishing websites, we examined the outputs generated when these models were fed prompts they themselves had produced. Our method involved three independent coders who scrutinized each generated phishing attempt based on two principal criteria. First, the appearance criterion gauged how closely and convincingly the content resembled the intended target, both in the phishing website and email. This was quantified using a 5-point Likert scale known as the _Website Appearance Scale (WAS)_, with each level's attributes detailed in Table 3. Conversely, the _Functionality criterion_ delved into the LLM's adeptness at encompassing every functionality that was provided in the prompt and was calculated by a binary variable--assigning a score only if the website incorporated every requested functionality.
In total, the coders reviewed 80 samples for each of the four LLMs, with 10 samples for each type of attack. The final WAS score for each website was the average of the individual coder scores, and the distribution of these scores across models is illustrated in Figure 8. We find that GPT-4 consistently stands out in performance, producing sites that closely resemble the original. Approximately half of GPT-4's samples scored above an average WAS of 4. In contrast, Chat-GPT 3.5T and Claude required nearly 90% of their samples to reach this mark, indicating that the median performance of GPT-4 is significantly higher. Conversely, 80% of Bard's samples scored around 2.8 or lower, which implies that only its top 20% of outputs achieved or surpassed this score. Thus, GPT-4 not only excels in average performance but also has consistently high-quality results. ChatGPT 3.5T and Claude fall into the middle range, producing satisfactory phishing websites. However, Bard predominantly performs at a lower tier, with only a small portion of its outputs reaching higher score ranges. All models, when assessed for functional components, as illustrated in Table 4, excelled in creating standard phishing attacks. GPT-4 and Claude achieved success in every sample. This trend persisted for ReCAPTCHA and QR-based attacks, except in the case of Bard, which managed successful outcomes in only six scenarios for each type. Bard's capability was notably limited across all evasive attacks, particularly evident in the _Browser Attacks_ category where it only succeeded with two samples. Other models also faced hurdles with Browser Attacks but still outpaced Bard. The models found Clickjacking attacks (Attack 5) challenging as well. Despite these challenges, GPT-3.5T, GPT-4, and Claude showed strong performance against various other evasive attacks. Evaluating under the WAS metric, GPT-4 shone as the top performer, closely trailed by GPT-3.5T and Claude. In contrast, Bard's difficulties in producing functional components and its lower WAS scores indicate that it might not be the ideal model for designing phishing websites, unlike its counterparts.
### Anti-phishing effectiveness
To further assess the effectiveness of LLM-generated phishing attacks, we compared how well anti-phishing tools detect them relative to human-constructed phishing websites. To do so, we selected the 160 websites produced by the Large Language Models (LLMs) with the highest average WAS and functionality scores. Our decision to focus on these high-scoring websites stemmed from the assumption that attackers would likely deploy sites that both look appealing and operate effectively. We deployed these websites on Hostinger [2], a popular web hosting provider. It is important to highlight that we strictly refrained from capturing any data from interactions on these dummy sites. Moreover, these sites were terminated shortly after our experiment concluded. For the _human-generated phishing websites_, we manually extracted 140 designs from APWG eCrimeX, ensuring a balanced representation with 20 samples for each
\begin{table}
\begin{tabular}{l|c c c c} \hline
**Attack/Model** & **ChatGPT 3.5** & **GPT 4** & **Claude** & **Bard** \\ \hline
Regular phishing attack & 9/10 & 10/10 & 10/10 & 8/10 \\
ReCAPTCHA attacks & 8/10 & 10/10 & 9/10 & 6/10 \\
QR Code attacks & 10/10 & 9/10 & 9/10 & 6/10 \\
Exploiting DOM classifiers & 7/10 & 10/10 & 8/10 & 4/10 \\
Frame injection/Clickjacking & 6/10 & 8/10 & 5/10 & 4/10 \\
Browser-in-the-Browser attack & 6/10 & 8/10 & 6/10 & 2/10 \\
Polymorphic URL & 9/10 & 8/10 & 8/10 & 6/10 \\
Text encoding exploit & 10/10 & 9/10 & 9/10 & 5/10 \\ \hline \end{tabular}
\end{table}
Table 4: Functionality scores across models and attacks
Figure 6: Sample of Server-side script generated by Bard to evade crawling by Google Safe Browsing
\begin{table}
\begin{tabular}{l|l} \hline
**WAS** & **Description** \\ \hline
1 & Hardly resembles the desired appearance. Fundamental elements like color scheme, layout, and typography are completely off. \\ \hline
2 & Some minor similarities. The basic structure might be present, but many details are off. \\ \hline
3 & Moderate resemblance. Discrepancies in details, alignment, or consistency. \\ \hline
4 & Very close to desired appearance. Minor tweaks needed. \\ \hline
5 & Almost indistinguishable from the desired appearance. Practically perfect. \\ \hline \end{tabular}
\end{table}
Table 3: Website Appearance Scale (WAS) Descriptions
attack category. Recognizing the elusive nature of Browser-in-the-Browser attacks and their rare presence in blocklists, we directly constructed 20 of these attacks. This brought our count of human-generated phishing sites to 160. Like the LLM-produced sites, these were made harmless, ensuring they could not collect or forward data.
After setting up these dummy phishing sites, both LLM- and human-generated, we reported them to APWG eCrimeX [14], Google Safe Browsing [3], and PhishTank [4]. Many anti-phishing tools depend on these repositories to identify emerging phishing threats [73]. Upon reporting, we monitored their anti-phishing detection rate by periodically scanning the URLs with VirusTotal [5] every hour. VirusTotal is an online tool that aggregates detection scores from 80 distinct anti-phishing tools, which gave us a comprehensive view of the detection breadth. We measured the detection scores of the websites for up to seven days or until they were removed by the hosting provider.
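A minimal sketch of this hourly polling is given below. It assumes the VirusTotal v3 REST API (the URL identifier encoded as unpadded URL-safe base64, an x-apikey header, and a last_analysis_stats field in the response); these details should be checked against the current API documentation.

```python
import base64
import time

import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # placeholder


def detection_count(url: str) -> int:
    """Number of engines currently flagging the URL as malicious (assumed v3 schema)."""
    url_id = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/urls/{url_id}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0)


def poll_hourly(urls, hours=7 * 24):
    """Scan each monitored URL once per hour, as done in our experiment."""
    for _ in range(hours):
        print({u: detection_count(u) for u in urls})
        time.sleep(3600)
```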
Figure 9 provides a comparative analysis of the average detection score for each attack for both LLM- and human-generated sites. We find that the detection scores of the two did not vary significantly, indicating that the LLM-generated phishing attacks were, on average, just as resilient, if not more so. To further solidify our findings, we also conducted a paired t-test, which revealed that the difference in detection scores between the two categories was not statistically significant (p=0.305). Thus, our findings further confirm the potential for scaling phishing attacks using the recursive approach of generating phishing websites from prompts that the LLM itself generated.
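The statistical comparison reduces to a paired test over matched detection scores; a minimal sketch with placeholder numbers (not our measured values) is shown below.

```python
from scipy import stats

# Placeholder detection scores, one pair per matched attack category:
# human-crafted vs. LLM-generated phishing sites.
human_scores = [6.1, 5.4, 4.8, 7.0, 3.2, 5.9, 4.4, 6.3]
llm_scores = [5.8, 5.6, 4.5, 6.8, 3.0, 6.1, 4.2, 6.0]

t_stat, p_value = stats.ttest_rel(human_scores, llm_scores)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value well above 0.05 (as in our experiment, p = 0.305) indicates that
# the difference in detection scores is not statistically significant.
```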
## 5 Phishing email generation
Phishing websites are usually distributed by attackers using emails [48], and thus, we dedicate this section to studying how an attacker can generate phishing emails using the commercial LLM models. Our method to generate these emails is similar to generating phishing attacks using LLM-generated prompts in Section 4.5, where we ask GPT-4 to design prompts using some human-created phishing emails. These prompts are then fed back to the LLMs to design an email that entices users to sign up for a service or provide sensitive information. To generate the email prompts, we collected 2,109 phishing emails from the APWG eCrimeX feed [14]. This feed combines phishing emails reported by various brands and cybersecurity specialists. These emails encompassed several attack vectors, including banking scams, account credential fraud, fake job offers, etc. Figure 15 illustrates the distribution of the attack vectors. To ensure the quality and authenticity of our dataset, we randomly selected 100 emails for manual inspection. Notably, we found no evidence of misclassification within this subset. Parallelly, we extracted the same number of benign emails from the established Enron dataset [56]. The phishing and benign emails were then provided to GPT-4, which was tasked with formulating prompts needed for replicating the
Figure 8: Cumulative Distribution of Average Website Appearance Scale for each model (n=80 per model).
Figure 7: LLMs can generate malicious prompts that can be provided back to the LLM to generate phishing websites.
Figure 9: Average detection scores for each attack type, comparing Human and LLM generated phishing attacks.
original emails. To further validate the accuracy of the generated prompts, we manually assessed 100 phishing prompts alongside 100 benign ones and found that GPT-4 had a perfect score for generating such prompts. We then introduced these prompts to different LLMs, GPT-3.5T, GPT-4, Claude, and Bard, to analyze their respective outputs. An example of a phishing email generated by Claude can be viewed in Figure 11.
### Evaluation of LLM-generated emails
The complexity of LLM-generated phishing websites required manual evaluation in Section 4.4. On the other hand, email generation, being a more conventional domain of text generation tasks, provides the opportunity for algorithmic evaluation. We compared the phishing emails generated by the LLMs (using the prompts that they themselves had generated) with the human-constructed phishing emails from eCrimeX. We employed four popular metrics utilized for text generation tasks: BLEU [84], Rouge [63], Perplexity [36], and Topic Coherence [89] to measure and compare the performance of the LLMs in generating phishing email text. A short description of the metrics is provided in Section 8.4 in the _Appendix_.
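As a concrete illustration, the sketch below computes two of these metrics, BLEU and ROUGE-1, for a generated email against its human-written reference using the nltk and rouge-score packages; the perplexity and topic-coherence computations (typically obtained with a language model and a topic model, respectively) are omitted, and the example strings are purely illustrative.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu
from rouge_score import rouge_scorer

reference = "your account has been suspended please verify your details"    # human-written email (illustrative)
candidate = "your account was suspended kindly verify your account details"  # LLM-generated email (illustrative)

# BLEU over token lists, smoothed because the texts are short.
bleu = sentence_bleu(
    [reference.split()], candidate.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-1 F-measure (unigram overlap).
scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
rouge1 = scorer.score(reference, candidate)["rouge1"].fmeasure

print(f"BLEU = {bleu:.2f}, ROUGE-1 = {rouge1:.2f}")
```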
As illustrated in Table 5, we find that GPT-4 outperforms the other models across all metrics, showcasing the highest BLEU (0.54), Rouge-1 (0.68), and Topic Coherence (0.72) scores, and the lowest Perplexity (15). Claude closely follows, with competitive scores in all metrics, demonstrating its effective balance in generating coherent and contextually appropriate emails. GPT 3.5T exhibits moderate performance, with BLEU and Topic Coherence scores lagging behind GPT-4 and Claude but exceeding Bard; its Rouge-1 score is only slightly behind Claude and GPT-4, indicating its competency in information retention. Bard presents slightly lower metrics than the rest but still shows proficiency, unlike its earlier performance at generating phishing websites. In summary, all LLMs, despite exhibiting varying competencies, appear to be proficient in generating phishing emails.
## 6 Phishing Prompts Detection
Findings from the previous sections indicate that commercial LLMs can be utilized for generating phishing websites using malicious prompts. Thus, there is a need for the swift detection of these prompts to safeguard the integrity and security of these models. To address this issue, we propose a framework, as illustrated in Figure 12, for detecting phishing prompts with three different detection schemes: we examine the prompts individually, as an entire collection, and as subsets of prompts to accommodate real-time scenarios. For each detection scheme, we explain the groundtruth creation and model performance at each stage, along with the rationale behind transitioning across the different detection schemes.
### Data Collection
As illustrated in Section 4.5, a series or collection of prompts can be automatically generated using these LLMs, which could result in code capable of creating a phishing website. Among the chatbots we investigated, we found that ChatGPT was the only LLM with an API that facilitates data collection for our purposes. Due to this limitation, we chose only the OpenAI API [79] to proceed with data collection. Two models, GPT-3.5T [77] and GPT-4 [78], were used to generate
Figure 11: Email generated by Claude with prompt generated in Figure 10 as input.
\begin{table}
\begin{tabular}{l|c c c c} \hline Model & BLEU & Rouge-1 & Perplexity & Topic Coherence \\ \hline GPT 3.5T & 0.47 & 0.60 & 22 & 0.63 \\ GPT 4 & 0.54 & 0.68 & 15 & 0.72 \\ Claude & 0.51 & 0.65 & 18 & 0.69 \\ Bard & 0.46 & 0.58 & 20 & 0.62 \\ \hline \end{tabular}
\end{table}
Table 5: Effectiveness of LLM-generated emails (n=2,109)
Figure 10: Example of a prompt generated by GPT 4 to replicate the phishing email provided in the input. (Email message is truncated for brevity)
these prompt collections. We focused on generating prompt collections that incorporate all potential attacks, thus enhancing the model's capability to efficiently detect prompts related to any attack type listed in Table 1. With the prompt generation method outlined in Section 4.5, we generated 117 prompt collections using GPT-3.5 and 141 prompt collections using GPT-4. From the collections generated, we observed that the average number of prompts within each unique collection is approximately 9.27. To obtain a balanced dataset in terms of collections, we also generated 258 benign prompt collections using the OpenAI API, applying the same method from Section 4.5 with GPT-3.5 and GPT-4 but using benign inputs.
### Codebook Creation
To train our models using a groundtruth dataset, Coder 1 and Coder 2 utilized an open-coding technique. They manually labeled 2,392 prompts, sourced from GPT-3.5 and GPT-4, as either "Phishing" or "Benign." Given the large size of the dataset, Coder 1 began by randomly selecting 40 prompts from each of the eight attack categories in order to discern the underlying themes needed to develop a detailed codebook. The codebook classifies elements as "Phishing" or "Benign," contingent upon the inherent risk and intent related to phishing activities, and provides descriptions and examples alongside each categorization for clarity. Notably, the codebook emphasizes several techniques with a malicious inclination often associated with phishing: for instance, "Data Redirection" and "URL Randomization" were marked as "Phishing," whereas legitimate web design elements like "Typography and Font" were labeled "Benign."
Both coders utilized this codebook to label the entire dataset. Initially, Coder 1 identified 29 unique themes. The first pass on the dataset yielded a Cohen's Kappa inter-rater reliability score of 0.71, signifying substantial agreement between the coders. While resolving their disagreements, the coders identified six additional themes, expanding the codebook to 35 features. All disagreements between the coders were successfully resolved. We provide our codebook in Table 10 in the Appendix.
### Common Groundtruth Creation
To create a common groundtruth dataset for all the detection schemes, we initially extracted the prompts from each prompt collection and stored them as individual prompts. Upon inspecting these prompts, we frequently observed extraneous elements such as bullet points, numerical values, and descriptors like step-1 or prompt-1. As these elements were irrelevant to the core content of the prompts, we removed them while preserving the fundamental sentences in the prompts. We stored each prompt with attributes such as collection number and prompt number, to preserve the order of prompts within a collection, and a version attribute to specify which GPT model generated it. Leveraging the codebook, two independent coders manually assigned labels to each prompt across all the prompt collections. Each prompt was labeled either as _malicious_ or _benign_. This process resulted in the labeling of nearly 2,392 prompts in total, of which 1,255 were labeled as malicious and 1,137 as benign. Notably, not all prompts in a phishing prompt collection are malicious, and this data labeling helped us to identify them.
We combined these prompt collections with additional benign prompt collections. In a similar fashion, we extracted benign prompts from the benign prompt collections and labeled all of them as benign. This resulted in 1,986 benign prompts across benign prompt collections.
#### 6.3.1 Analysis of Annotated Prompts
We generated heatmaps to visualize human annotators' evaluations of prompts generated by GPT-3.5T in Figure 12(a) and GPT-4 in Figure 12(b). In each, the x-axis represents prompt numbers, indicating the prompt position within a collection, while the y-axis corresponds to 8 different attacks listed in Table 1. The color gradient represents the average label the annotators assign to each prompt position in a collection. Darker colors indicate more prompts in this position are labeled as
Figure 12: Framework showing three Detection Schemes
malicious and lighter colors indicate fewer prompts in this position are labeled as malicious.
These two heatmaps show the distribution of malicious prompts in our phishing collections. We observe that in some attack types generated by GPT-3.5 (Figure 12(a)), such as attacks 1-4, most of the prompts in collections are labeled as _malicious_, whereas in other attacks, including attacks 5-8, only a small portion of prompts in each prompt collection are labeled as _malicious_, and they tend to appear close to the end of the collection. Interestingly, however, Figure 12(b) shows a more uniform distribution of malicious prompts in collections generated by GPT-4, as in most attack types the malicious prompts can appear in almost any position. We observe two exceptions, attacks 5 and 6, where many prompts and positions are labeled as benign.
### Individual Prompt Detection
The first step towards tackling the challenge of detection involves categorizing an individual prompt as either malicious or benign. To achieve this, we designed a binary classifier using pre-trained language models.
**Groundtruth for Individual Prompt Detection:** We selected both malicious and benign prompt collections, ensuring each individual prompt is labeled, from our common groundtruth dataset. Upon merging the two collections, we had a total of 1,255 malicious prompts and 3,123 benign prompts. Recognizing this imbalance in the distribution of prompts, we opted to exclude some benign prompt collections. In total, we considered 50 benign prompt collections along with 258 malicious prompt collections. We split this dataset into 70% for training, 20% for testing, and 10% for validation, while maintaining the balance mentioned above.
**Model Selection and Experiments:** We acknowledge the effectiveness of traditional ML algorithms, such as Naive Bayes [26] and SVM [68] in similar domains. However, these algorithms often demand large datasets with a substantial number of features to perform optimally. In our case, we have constraints of limited data and a lack of extensive features, which steers us towards selecting pre-trained language models for accomplishing this task.
Moreover, pre-trained language models like BERT [32], RoBERTa [67], etc., are trained on vast amounts of data, giving them a broad understanding of language, which is crucial in detecting nuanced and occasionally hidden malicious intent in prompts. Several families of pre-trained language models are available, such as BERT-based and generative pre-trained models. Generative models are unidirectional and more suitable for tasks that involve generating text. On the other hand, BERT-based models are bidirectional, allowing them to take into account both left and right context when making predictions. This feature makes them more suitable for text classification tasks.
Based on these advantages, we experiment with BERT-based models, including BERT [32], DistilBERT [90], RoBERTa [67], Electra [23], DeBERTa [45] and XLNET [106]. Each model has its own advantages and disadvantages, which we consider along with their performance metrics. A brief description of different models, and their details related to size and parameters, are provided in Section 8.3 in the _Appendix_.
**Training Details:** We used pre-trained versions of all the listed models from the Hugging Face Transformers library [104]. We fine-tuned these models on our groundtruth dataset for 10 epochs with a batch size of 16. We used the AdamW optimizer, and the learning rate was set to 2e-5. The maximum sequence length was set to 512. We fine-tuned these models on a V100 GPU and used the last model checkpoint for evaluation. To obtain embeddings for input sequences, we used each model's respective tokenizer.
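For concreteness, the sketch below shows how such a fine-tuning run can be set up with the Hugging Face Transformers Trainer API using the hyperparameters listed above. The CSV file names and the column names ("text", "label") are illustrative assumptions, not the exact data layout used in this work.

```python
# Minimal fine-tuning sketch for the individual prompt classifier.
# File names and column names are hypothetical placeholders.
from transformers import (RobertaTokenizerFast, RobertaForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Prompts stored as CSV with a "text" column and a binary "label" column
data = load_dataset("csv", data_files={"train": "prompts_train.csv",
                                       "validation": "prompts_val.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="roberta-prompt-detector",
                         num_train_epochs=10,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)

Trainer(model=model, args=args,
        train_dataset=data["train"],
        eval_dataset=data["validation"]).train()
```

The same recipe applies to the other models listed in Table 6 by loading the corresponding checkpoints through the AutoTokenizer and AutoModelForSequenceClassification classes.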
**Performance Evaluation:** To select the best model, we scrutinize metrics such as average F1 score, Accuracy, Precision, and Recall. Furthermore, we compute the Total Time for predicting 100 samples and the Median Prediction Time across 100 samples. Given our objective of deploying the model in real-time scenarios, where users submit prompts at high speed, these metrics are necessary for evaluation. Table 6 shows the performance of the models on our test set. We observe that RoBERTa shows slightly better performance, with an average F1 score of 0.95. Although there are lighter models such as DistilBERT and ELECTRA, which have slightly lower
Figure 13: Heatmaps showing distribution of malicious prompts in collections
median prediction times compared to RoBERTa, we noticed that their F1 scores are slightly lower, hovering around 0.93. Considering RoBERTa's powerful training approach and best performance across all the models, we select RoBERTa as our final model for individual prompt detection.
**Challenges with Individual prompt classification:** There are several scenarios where individual prompt classification might not be sufficient. For example, an individual prompt might not provide complete information about the user's intent. _Adaptive attackers_ may engage in deep conversations with the models to effectively accomplish their task. Scenarios may also appear where individual prompts look benign, but the entire conversation can lead to malicious outcomes. Depending solely on an individual prompt classifier in such cases might offer a leeway for malicious users to elude detection. Such scenarios strongly demand a solid detection mechanism that goes beyond analyzing individual prompts. To achieve this, we perform classification on the whole collection of prompts, using the classifier trained on individual prompts.
that despite the model being trained on individual prompts, it exhibits a strong performance in identifying malicious nature within the subsets of prompts.
After evaluating all the outcomes from the three detection schemes, it is evident that the model effectively categorizes entire collections and prompt subsets. However, due to the practical challenges associated with processing entire collections in real time, prompt subset detection emerges as the best choice for early and efficient real-time detection.
### Detecting Phishing email prompts
To automatically detect phishing email generation prompts, we utilized the RoBERTa architecture and trained it on the sample of 2,109 phishing prompts that were generated by GPT-4 from the eCrimeX phishing dataset and 2,109 benign email prompts generated by the same model from the Enron dataset, partitioning the dataset into a 70:30 Train:Test split. The performance of our model is illustrated in Table 9. The model achieved an accuracy of 94%, with precision standing at 95%. Overall, these metrics highlight the model's robust capability in the early detection of prompts that attempt to generate phishing emails using LLMs.
## 7 Conclusion
### Implications
Our research indicates that readily available commercial LLMs can effectively generate phishing websites and emails. These LLMs are not only capable of being manually directed to initiate attacks but can also autonomously generate phishing prompts. These AI-created prompts are adept at producing phishing content that can evade current anti-phishing tools as effectively as human-generated content. Moreover, phishing emails derived from LLM-generated prompts can mimic authentic human phishing attempts with high accuracy. The potential misuse of LLMs for phishing poses a serious threat, as attackers can refine and reuse a small set of prompts to create a vast number of sophisticated phishing attacks. However, we have developed a machine learning model that can detect these malicious prompts early on, which is crucial in preempting the production of harmful content by LLMs. Our model, which demonstrates strong performance in identifying phishing prompts for both websites and emails, could be integrated with LLMs as a third-party plugin.
### Ethics and Data Sharing
Since ChatGPT 3.5T and 4 were used to generate the phishing prompts, we have disclosed such prompts to their developer, OpenAI [79], and we plan to publicly disclose them after OpenAI's mandatory 90-day period of vulnerability disclosure [1]. We also disclosed the vulnerabilities to the developers of Claude and Bard, i.e., Anthropic and Google, and are awaiting their feedback. Meanwhile, our model can be tested on Hugging Face at [https://huggingface.co/phishbot/Istitphish](https://huggingface.co/phishbot/Istitphish), where users can try out different prompts to check whether they have phishing intention towards creating malicious websites or emails. Our dataset and framework are also available upon request.
|
2306.12732
|
Fermi surface reconstruction due to the orthorhombic distortion in Dirac
semimetal YbMnSb$_2$
|
Dirac semi-metal with magnetic atoms as constituents delivers an interesting
platform to investigate the interplay of Fermi surface (FS) topology, electron
correlation, and magnetism. One such family of semi-metal is YbMn$Pn_2$ ($Pn$ =
Sb, Bi), which is being actively studied due to the intertwined spin and charge
degrees of freedom. In this Letter, we investigate the relationship between the
magnetic/crystal structures and FS topology of YbMnSb$_2$ using single crystal
x-ray diffraction, neutron scattering, magnetic susceptibility,
magnetotransport measurement and complementary DFT calculation. Contrary to
previous reports, the x-ray and neutron diffraction reveal that YbMnSb$_2$
crystallizes in an orthorhombic $Pnma$ structure with notable anti-phase
displacement of the magnetic Mn ions that increases in magnitude upon cooling.
First principles DFT calculation reveals a reduced Brillouin zone and more
anisotropic FS of YbMnSb$_2$ compared to YbMnBi$_2$ as a result of the
orthorhombicity. Moreover, the hole type carrier density drops by two orders of
magnitude as YbMnSb$_2$ orders antiferromagnetically indicating band folding in
magnetic ordered state. In addition, the Landau level fan diagram yields a
non-trivial nature of the SdH quantum oscillation frequency arising from the
Dirac-like Fermi pocket. These results imply that YbMnSb$_2$ is an ideal
platform to explore the interplay of subtle lattice distortion, magnetic order,
and topological transport arising from relativistic quasiparticles.
|
Dilip Bhoi, Feng Ye, Hanming Ma, Xiaoling Shen, Arvind Maurya, Shusuke Kasamatsu, Takahiro Misawa, Kazuyoshi Yoshimi, Taro Nakajima, Masaaki Matsuda, Yoshiya Uwatoko
|
2023-06-22T08:17:48Z
|
http://arxiv.org/abs/2306.12732v1
|
# Fermi surface reconstruction due to the orthorhombic distortion in Dirac semimetal YbMnSb\({}_{2}\)
###### Abstract
Dirac semi-metal with magnetic atoms as constituents delivers an interesting platform to investigate the interplay of Fermi surface (FS) topology, electron correlation, and magnetism. One such family of semi-metal is YbMn\(Pn_{2}\) (\(Pn\) = Sb, Bi), which is being actively studied due to the intertwined spin and charge degrees of freedom. In this Letter, we investigate the relationship between the magnetic/crystal structures and FS topology of YbMnSb\({}_{2}\) using single crystal x-ray diffraction, neutron scattering, magnetic susceptibility, magnetotransport measurement and complementary DFT calculation. Contrary to previous reports, the x-ray and neutron diffraction reveal that YbMnSb\({}_{2}\) crystallizes in an orthorhombic \(Pnma\) structure with notable anti-phase displacement of the magnetic Mn ions that increases in magnitude upon cooling. First principles DFT calculation reveals a reduced Brillouin zone and more anisotropic FS of YbMnSb\({}_{2}\) compared to YbMnBi\({}_{2}\) as a result of the orthorhombicity. Moreover, the hole type carrier density drops by two orders of magnitude as YbMnSb\({}_{2}\) orders antiferromagnetically, indicating band folding in the magnetic ordered state. In addition, the Landau level fan diagram yields a non-trivial nature of the SdH quantum oscillation frequency arising from the Dirac-like Fermi pocket. These results imply that YbMnSb\({}_{2}\) is an ideal platform to explore the interplay of subtle lattice distortion, magnetic order, and topological transport arising from relativistic quasiparticles.
Magnetic Dirac/Weyl semimetals deliver a promising platform, where the novel coupling between magnons and relativistic fermions could be exploited to manipulate the quantum transport phenomena using various parameters like chemical substitution, pressure, strain, etc. In this context, the collinear antiferromagnetic (AFM) ternary \(A\)Mn\(Pn_{2}\) (where \(A\) = rare earth elements like Eu, Yb or alkali earth elements like Ca, Sr, Ba; \(Pn\) = pnictides Sb or Bi) have attracted increasing attention due to the presence of anisotropic Dirac cones close to the Fermi level, \(E_{\rm F}\)[1; 2; 3; 4; 5; 6; 1; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16].
The 112-type \(A\)Mn\(Pn_{2}\) consists of a stacking of two-dimensional (2D) \(Pn\) conduction layers, \(A\)-layers, and insulating Mn\(Pn_{4}\) layers as shown in Fig.1(a). In Mn\(Pn_{4}\) layers, each Mn atom is surrounded by four \(Pn\) atoms forming a tetrahedron, whereas 2D-\(Pn\) layers are responsible mostly for the exotic properties like quantum magnetoresistance [1; 2; 3; 4; 5; 6] and bulk quantum Hall effect [17; 18; 16]. Band structure calculations revealed that the electronic density of states at \(E_{\rm F}\) is primarily composed of the \(Pn\)-\(p_{x/y}\) and Mn-\(d\) orbitals, suggesting a close relationship between the Mn moment direction and underlying electronic structure [19; 20; 21; 22; 23]. Interestingly, these calculations further predict that the FM component arising from a canting of Mn moments breaks the time-reversal symmetry, thus playing a vital role in producing different topological states depending on the Mn moment direction [19; 20; 21; 22; 23; 24].
Among the 112 materials, YbMn\(Pn_{2}\) are particularly unique due to the coupling between magnetism and Dirac quasiparticles [25; 26; 27; 28], unusual interlayer quantum coherent transport [1; 5], and promising attributes required for energy conversion technology like large thermoelectric power [32; 33] and giant anomalous Nernst effect [19]. Band structure calculations [1; 20; 23; 30; 32] identified YbMn\(Pn_{2}\) as nodal line semimetals, where the Fermi surface (FS) consists of two Dirac-like bands and a heavy 3D-parabolic band. Although these results point to a similar FS topology for the two compounds, experiments indicate differences between their FSs. Quantum oscillation studies [1; 6] in YbMnBi\({}_{2}\) detected two frequencies with Dirac-like dispersion and a large carrier density (\(\sim 10^{21}\) cm\({}^{-3}\)) comparable to other Weyl semimetals like Cd\({}_{3}\)As\({}_{2}\) and NbP. In contrast, the observation of a single quantum oscillation frequency [1; 5] and a two orders of magnitude smaller carrier density [1; 5; 32] in YbMnSb\({}_{2}\) is difficult to reconcile with existing theoretical results.
In this Letter, we have used neutron and x-ray diffraction to characterize the proper magnetic and crystal structures of YbMnSb\({}_{2}\) and investigate the correlation between the crystal/magnetic structures and FS topology via magnetic susceptibility, magnetotransport measurements and first principles calculation. We have identified that YbMnSb\({}_{2}\) crystallizes in an orthorhombic \(Pnma\) structure with notable anti-phase displacement between the neighboring layers of the magnetic Mn ions. Band structure calculation, utilizing the newly obtained structural parameters, yields a reduced Brillouin zone (BZ) and more anisotropic FS topology of YbMnSb\({}_{2}\) than the
Bi-based sister compound.
Earlier studies [1; 5; 6; 27; 28; 30; 32; 33] reported that YbMnSb\({}_{2}\) has a layered tetragonal \(P4/nmm\) structure [Fig. 1(a)]. However, the neutron scattering studies of a YbMnSb\({}_{2}\) single crystal taken at \(T\) = 300 K clearly show the presence of notable reflections which should be absent in the tetragonal \(P4/nmm\) space group [6]. The origin of those forbidden reflections was not identified, partly due to the limited observable peaks. To reveal the proper crystal structure, we use the white beam neutron diffractometer Corelli covering a large volume in reciprocal space [35]. Figs. 1(c) and 1(d) illustrate the contour map of the neutron scattering pattern obtained at \(T\) = 390 K and \(T\) = 100 K. Note that Bragg reflections both in the (\(H\),\(K\),0) and (\(H\),0,\(L\)) planes are observed due to the twinning of the crystal. Reflections at the (\(2n+1\),0,\(m\)) positions, which correspond to (integer, 0, half-integer \(L\)) in the \(P4/nmm\) cell, show considerable intensities and clearly indicate a doubling of the \(c\)-axis parameter of the \(P4/nmm\) cell. In addition, a close examination of the reflection conditions shows differences between equivalent reflections of the tetragonal space group, implying a lowering of the crystal structure symmetry, which is further confirmed by the single crystal x-ray diffraction. Both neutron and x-ray diffraction data can only be modeled and refined using an orthorhombic \(Pnma\) structure (space group No. 62) instead of a tetragonal \(P4/nmm\) structure. For the analysis of structural refinements see the supplemental material (SM) Fig. S1 and Fig. S2 [36]. In the revised structure, Mn, Yb, and Sb ions are located at the \(4c\) sites, all showing notable displacement along the \(c\)-axis [Fig. 1(b)]. The magnitude of the displacement increases as the system is cooled and leads to an enhancement of the characteristic (\(2n+1\),0,\(m\)) reflections.
After establishing the crystal structure, we now discuss the spin arrangement in YbMnSb\({}_{2}\). Fig. 2(a) shows the
Figure 1: (a) The reported tetragonal \(P4/nmm\) crystal structure of YbMnSb\({}_{2}\) as in Ref. [1; 5; 6; 27; 28; 30; 32; 33]. (b) The orthorhombic \(Pnma\) structure reported in this work. Arrows show the displacements of each atom along the \(a\)-axis of the tetragonal \(P4/nmm\) structure. The blue-shaded regions illustrate the Sb\({}_{1}\) layers. Blue rectangles highlight the unit cell. The contour plot of the neutron scattering intensity pattern of YbMnSb\({}_{2}\) at (c) \(T\) = 390 K and (d) \(T\) = 100 K in the (\(H\),\(K\),0) and (\(H\),0,\(L\)) planes using the lattice parameters (21.58 Å, 4.3 Å, 4.3 Å).
Figure 2: Temperature dependence of (a) the (0,1,0) reflection peak as a magnetic order parameter, (b) zero-field cooled magnetization measured at 3 T, and (c) heat capacity, \(C_{p}\), showing the AFM ordering temperature. Arrows indicate AFM transition temperature. (c) shows background subtracted \(C_{p}\) near the AFM transition. The yellow solid line in (a) represents a fit \(I=I_{0}(1-T/T_{\rm N})^{2\beta}\) to the data. \(\beta\sim 0.24\) indicates a quasi 2D nature of magnetism. (d) The refined spin structure of the YbMnSb\({}_{2}\). The spin carries dominant \(a\)-axis component forming a \(C\)-type magnetic structure with finite \(c\)-axis component forming FM sheets coupled antiferromagnetically between the layers. (e) The extracted canted FM moment along the \(bc\) plane and the \(a\)-axis after subtracting the magnetic contribution from FM impurity and AFM order.
temperature dependence of the peak intensity of the \((0,1,0)\) magnetic reflection; it decreases sharply upon warming and becomes a featureless background above \(T_{\rm N}\sim\)350 K, consistent with the transition determined from the magnetization and heat capacity [Figs. 2(b)-2(c)]. This implies a spin structure with a unit cell identical to the nuclear one and moments predominantly perpendicular to the basal plane. For magnetic ions located at the 4\(c\) site with propagation wavevector \(q_{m}=(0,0,0)\), there are eight compatible magnetic space groups. Half of them can be excluded since the spin moments in those configurations are constrained within the basal plane, which contradicts the bulk magnetization data. For the remaining magnetic space groups \(Pn^{\prime}m^{\prime}a^{\prime}\), \(Pnm^{\prime}a^{\prime}\), \(Pn^{\prime}m^{\prime}a\), and \(Pnm^{\prime}a\), the refinement reveals that the magnetic space group \(Pn^{\prime}m^{\prime}a^{\prime}\) provides the most satisfactory description of the diffraction data [Fig. 2(d)]. The Mn spin direction lies along the longest crystal axis, the \(a\)-axis, with a size of \(\sim 3.17(3)\)\(\mu_{\rm B}\) in a collinear \(C\)-type AFM arrangement. Although the size of the estimated magnetic moment is similar to that in a previous report [6], it is smaller than the expected value for a fully ordered Mn\({}^{2+}\) (5 \(\mu_{\rm B}\)).
Spin canting, which is allowed from the magnetic space group, could be present since finite intensities were observed at \((2n+1,0,0)\). However, our polarized neutron experiments performed at room temperature revealed that a majority of the intensities originate from the nuclear component [Fig.S3 in SM[36]]. To estimate the canted moments accurately, magnetization measurements were performed. In Fig. 2(e), we plot the contribution of the canted FM moment to the magnetization after subtracting the contribution of the FM impurity and the AFM ordered state from the magnetization as described in the SM [36]. The maximum moment of \(\sim\)0.001 \(\mu_{B}\) is comparable with previous reports in YbMnSb\({}_{2}\)[6] and YbMnBi\({}_{2}\)[19] with a canting angle \(\theta\sim\) 0.018\({}^{\circ}\). This indicates a negligible canting of Mn moment away from the \(a\)-axis.
The orthorhombic \(Pnma\) space group enforces a zig-zag arrangement of the Sb atoms [as in Fig. 1(b)] along the \(b\)-axis leading to a distorted Sb\({}_{1}\) layer similar to (Ca/Sr/Ba/Eu)MnSb\({}_{2}\)[9; 13; 17; 18; 37]. Although the in-plane orthorhombicity \((b-c)/c\sim\) 0.31% in YbMnSb\({}_{2}\) is several times smaller than that in \(A\)MnSb\({}_{2}\) materials, it is sufficient to drive an anisotropic FS compared to a tetragonal structure. In Fig. 3(a), we show the band structure for YbMnSb\({}_{2}\) with the collinear AFM arrangement of Mn spins and the \(Pnma\) space group, calculated using density functional theory (see SM for calculation details [36]) without considering spin-orbit coupling (SOC). The low energy band structure consists of heavier regular bands near the \(\Gamma\)-point and a linearly dispersing Dirac-like band at the \(X\) point. The former arises from Mn \(d\)-orbitals and Sb \(p\)-orbitals, whereas the latter mainly originates from the Sb \(p\)-orbitals. When SOC is taken into account, it has little effect on the bands near the \(\Gamma\) point but dramatically increases the gap size at the \(X\) point [Fig. S5 in SM [36]]. Figs. 3(b) and 3(c) compare the BZ of YbMnSb\({}_{2}\) in the orthorhombic and tetragonal structures, respectively. The FS in the undistorted tetragonal phase consists of two Dirac-like pockets, one electron-like near the \(X\) points and another hole-like along the \(\Gamma\)-\(M\) line, and a large 3D hole pocket at the \(\Gamma\) point, in good agreement with several previous reports [19; 20; 30; 32]. However, due to the in-plane orthorhombicity, the FS no longer exhibits the \(C_{4}\) rotational symmetry. Moreover, the hole pocket along the \(\Gamma\)-\(M\) direction becomes gapped and the 3D hole pocket at \(\Gamma\) stretches along the \(\Gamma\)-\(X\) direction.
Fig. 4(a) shows the in-plane resistivity \(\rho_{xx}\) and the Hall coefficient \(R_{H}\) at 7 T in the temperature range 2 K to 390 K. With decreasing temperature, \(\rho_{xx}\) remains nearly flat down to \(T_{\rm N}\) and decreases sharply below \(T_{\rm N}\). \(R_{\rm H}\) increases by an order of magnitude from 2.3\(\times\)10\({}^{-8}\) m\({}^{3}\)/C at 390 K to 2.43\(\times\)10\({}^{-7}\) m\({}^{3}\)/C at 250 K and decreases slightly on further cooling. The estimated carrier concentration \(n_{\rm H}=|1/R_{\rm H}e|\sim\) 2.19 \(\times\)10\({}^{19}\) cm\({}^{-3}\) at 2 K is comparable with previous reports [1; 5], but two orders of magnitude smaller than in YbMnBi\({}_{2}\)[6]. To obtain more insight, we measured the magnetic field dependence of \(\rho_{xx}\) [Fig. 4(b)] and the Hall resistivity \(\rho_{xy}\) [Fig. 4(c)]
Figure 3: (a) Momentum-dependent electronic structure of YbMnSb\({}_{2}\) in the orthorhombic \(Pnma\) space group with collinear AFM arrangement of Mn spins. Color represents the orbital contribution of each atom type as shown in the inset. Brillouin zone of YbMnSb\({}_{2}\) for (b) orthorhombic and (c) tetragonal structure showing the FS. For tetragonal structure \(E_{\rm F}\) is shifted by -50 meV. Due to doubling of unit cell volume, BZ volume in orthorhombic structure becomes half of tetragonal phase.
at representative temperatures across the transition. At low temperatures, the MR follows a quadratic behavior in the low-field region and saturates at higher fields. For temperatures above 200 K, the MR increases quadratically over the whole field region, implying that multiple bands at the FS contribute to the charge transport. Consistent with previous results [1; 5], \(\rho_{xy}\) remains positive, revealing that holes are the dominant charge carriers. \(\rho_{xy}\) follows a linear increase up to 7 T for 300 K \(<\) T \(<\) 390 K but exhibits a concave upward increase below 250 K. Such a nonlinear \(\rho_{xy}(B)\) indicates that a relatively small number of highly mobile electron-like carriers contribute to the transport as temperature decreases.
The magnetic field dependence of \(\rho_{xy}\) in a multi-band system is determined by the interplay of the concentrations and mobilities of the individual carriers. Hence, we analyzed the corresponding \(\rho_{xx}(B)\) and \(\rho_{xy}(B)\) employing the semiclassical two-band model as described in the SM [36]. Figs. 4(d)-4(e) show the thermal evolution of the electron (hole) concentration \(n_{e}(n_{h})\) and electron (hole) mobility \(\mu_{e}(\mu_{h})\) extracted from the two-band model. The concentration of the hole carriers, \(n_{h}\), is an order of magnitude larger than that of the electron-type carriers, \(n_{e}\), while the mobility of the electron-like carriers, \(\mu_{e}\), is twice that of \(\mu_{h}\). Surprisingly, \(n_{e}\) and \(\mu_{e}\) do not show strong temperature dependence. In contrast, \(n_{h}\) and \(\mu_{h}\) display dramatic temperature dependence, as they presumably originate from the parabolic bands near the \(\Gamma\) point [Fig. 3(b)]. Across the magnetic transition, \(n_{h}\) falls and \(\mu_{h}\) rises by two orders of magnitude, suggesting that the hole pocket is partially gapped due to band folding as YbMnSb\({}_{2}\) transitions from the PM to the AFM ordered state.
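For reference, the standard semiclassical two-band expressions used in this kind of analysis can be written compactly as below. This is a sketch only; the exact parameterization and sign conventions adopted in the SM may differ, and the parameter values in the example are purely illustrative.

```python
import numpy as np

def two_band_resistivity(B, n_e, n_h, mu_e, mu_h, e=1.602e-19):
    """Semiclassical two-band model: longitudinal and Hall resistivity versus field.

    B in tesla, carrier densities in m^-3, mobilities in m^2 V^-1 s^-1.
    """
    s_xx = e * (n_e * mu_e / (1 + (mu_e * B)**2) + n_h * mu_h / (1 + (mu_h * B)**2))
    s_xy = e * B * (n_h * mu_h**2 / (1 + (mu_h * B)**2) - n_e * mu_e**2 / (1 + (mu_e * B)**2))
    rho_xx = s_xx / (s_xx**2 + s_xy**2)   # tensor inversion of the conductivity
    rho_xy = s_xy / (s_xx**2 + s_xy**2)
    return rho_xx, rho_xy

# Illustrative parameters only: a few mobile electrons plus many slower holes
B = np.linspace(0.0, 7.0, 200)
rho_xx, rho_xy = two_band_resistivity(B, n_e=1e24, n_h=1e25, mu_e=0.2, mu_h=0.1)
```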
To further deduce several important physical parameters related to the FS, we measured \(\rho_{xx}\) of another piece of crystal \(S\#2\) up to 16 T, as shown in Fig. 4(f). As the magnetic field exceeds 6 T, prominent Shubnikov-de Haas (SdH) quantum oscillations are detected. Fig. 4(g) presents the background subtracted \(\Delta\rho_{xx}\) versus \(1/B\), showing the quantum oscillations at different temperatures up to 20 K. The corresponding fast Fourier transformation (FFT) reveals a primary frequency at \(f_{\alpha}\simeq\) 70 T [Fig. 4(h)], in good agreement with previous dHvA and SdH studies of YbMnSb\({}_{2}\)[1]. The frequency of a quantum oscillation is related to the cross-section area \(S_{F}\) of the FS perpendicular to the applied \(B\) direction via the Onsager relation, \(S_{F}=(2\pi^{2}/\phi_{0})F\), where \(\phi_{0}\) is the single magnetic flux quantum. Using this relation, \(S_{F}\) for \(f_{\alpha}\) is estimated as 0.007 Å\({}^{-2}\), representing a tiny FS cross-sectional area of only 0.3% of the BZ area \((2\pi/b)\times(2\pi/c)\) = 2.12 Å\({}^{-2}\).
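As a quick numerical cross-check of these numbers (a sketch; it assumes \(\phi_{0}=h/2e\) for the flux quantum in the Onsager relation above and takes the in-plane lattice parameters \(b\approx c\approx 4.3\) Å quoted in the caption of Fig. 1):

```python
import numpy as np

h, e = 6.626e-34, 1.602e-19        # Planck constant (J s), elementary charge (C)
phi0 = h / (2 * e)                 # magnetic flux quantum (Wb)

F = 70.0                           # SdH frequency f_alpha (T)
S_F = 2 * np.pi**2 * F / phi0      # Onsager relation, in m^-2
S_F_A2 = S_F * 1e-20               # convert to inverse square Angstrom

b = c = 4.3e-10                    # in-plane lattice parameters (m)
S_BZ = (2 * np.pi / b) * (2 * np.pi / c) * 1e-20

print(f"S_F ~ {S_F_A2:.4f} 1/A^2")                  # ~0.007
print(f"S_F / S_BZ ~ {100 * S_F_A2 / S_BZ:.2f} %")  # ~0.3 %
```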
We also analyzed the SdH oscillations quantitatively using the Lifshitz-Kosevich (LK) formula [41; 42], which predicts the oscillatory component of \(\rho_{xx}\), \(\Delta\rho\), as
\[\frac{\Delta\rho}{\rho(0)}\simeq\frac{5}{2}\left(\frac{\mu_{0}H}{2F}\right)^{ \frac{1}{2}}R_{T}(T)R_{D}(T_{D})\cos\left[2\pi\left(\frac{F}{B}-\varphi\right) \right], \tag{1}\]
where \(\rho(0)\) is the resistivity \(\rho_{xx}\) at \(B=0\). The cosine term contains a phase factor \(\varphi=\frac{1}{2}-\frac{\phi_{B}}{2\pi}-\delta\), in which \(\phi_{B}\) is the Berry's phase and \(\delta\) is related to FS curvature. \(\delta=0\) for a smooth 2D cylinder, whereas \(\delta=\pm 1/8\) for a 3D FS. In Eq.(1), the Landau level broadening and electron scattering result in two major damping factors, namely, the temperature damping factor \(R_{T}(T)\) and the Dingle factor \(R_{D}(T_{D})\), respectively:
\[R_{T}(T)=\frac{2\pi^{2}k_{B}Tm^{*}/(\hbar eB)}{\sinh\left(2\pi^{2}k_{B}Tm^{*}/(\hbar eB)\right)} \tag{2}\]
and
\[R_{D}(T_{D})=\exp\left(-\frac{2\pi^{2}k_{B}T_{D}m^{*}}{\hbar eB}\right), \tag{3}\]
which are determined by the cyclotron effective mass \(m^{*}\) and the Dingle temperature \(T_{D}\). Fitting Eq.(2) to the thermal damping of the FFT amplitude of \(f_{\alpha}\) [inset of Fig. 4(h)] yields \(m^{*}\sim 0.12m_{e}\), similar to the previous reports [1; 5].
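A minimal fitting sketch of this step is given below; the amplitude values and the effective field of the FFT window are hypothetical placeholders, used only to show how \(m^{*}\) is extracted from the temperature damping factor.

```python
import numpy as np
from scipy.optimize import curve_fit

kB, hbar, e, m_e = 1.381e-23, 1.055e-34, 1.602e-19, 9.109e-31
B_eff = 10.0   # effective field of the FFT window (T); illustrative choice

def R_T(T, A0, m_ratio):
    """Thermal damping factor X/sinh(X) with X = 2 pi^2 kB T m*/(hbar e B)."""
    X = 2 * np.pi**2 * kB * T * m_ratio * m_e / (hbar * e * B_eff)
    return A0 * X / np.sinh(X)

# Hypothetical FFT amplitudes of f_alpha at a few temperatures (arbitrary units)
T_data = np.array([2.0, 5.0, 10.0, 15.0, 20.0])
A_data = np.array([1.00, 0.90, 0.64, 0.39, 0.21])

popt, _ = curve_fit(R_T, T_data, A_data, p0=[1.0, 0.1])
print(f"fitted m* ~ {popt[1]:.2f} m_e")
```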
To identify the topological nature of \(f_{\alpha}\), the Landau level (LL) fan diagram is employed by plotting the \(\Delta\rho_{xx}\) maxima in SdH oscillations against their associated LL index, \(n\), in Fig. 4(i). The \(x\)-intercept of a linear fit to these data provides the accrued \(\phi_{B}\) when the carrier completes one cyclotron orbit, via the relation, \(\varphi=1/2-\phi_{B}/2\pi-\delta\). We assume \(\delta=0\), as previous SdH oscillation studies have established 2D nature of \(f_{\alpha}\)[1; 5]. \(\varphi=0\) for a topologically nontrivial Berry's phase of \(\pi\), while a trivial Berry's phase of 0 results in a \(\varphi=1/2\). A \(\varphi=0.001(4)\) in the present study indicates that the Fermi pocket giving rise to the SdH oscillations is consistent with having a topological origin and Dirac-like dispersion.
In summary, we have used neutron and x-ray single crystal diffraction, magnetic susceptibility, and magnetotransport measurements together with complementary band structure calculation to investigate the crystal and magnetic structure as well as the FS topology of YbMnSb\({}_{2}\). Both the x-ray and neutron diffraction unambiguously reveal that YbMnSb\({}_{2}\) crystallizes in an orthorhombic \(Pnma\) structure. Band structure calculation revealed a reduced BZ and more anisotropic FS of YbMnSb\({}_{2}\) compared to YbMnBi\({}_{2}\) because of in-plane orthorhombicity. The FS of YbMnSb\({}_{2}\) consisting of an anisotropic heavier regular band and a linearly dispersing Dirac-like band no longer exhibits the \(C_{4}\) rotational symmetry as in undistorted YbMnBi\({}_{2}\). Analysis of SdH quantum oscillation reveals a non-trivial nature of tiny Fermi pocket consistent with Dirac-like energy-momentum dispersion.
**Acknowledgment:** We are thankful to S. Nagasaki, D. Hamane, T. Miyake and T. Masuda for technical help during the experiments. We also gratefully acknowledge fruitful discussions with M. Tokunaga, K. Matsuyabashi and P. Shahi. This work was financially supported by the JSPS KAKENHI Grant number JP19H00648. A portion of this research used resources at SNS, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory. The polarized neutron scattering experiment at JRR-3 was carried out under proposal No. 22401. The Fermi surfaces and band structure figures were respectively plotted using FermiSurfer [43] and pymatgen [44].
|
2303.07287
|
Tight Non-asymptotic Inference via Sub-Gaussian Intrinsic Moment Norm
|
In non-asymptotic learning, variance-type parameters of sub-Gaussian
distributions are of paramount importance. However, directly estimating these
parameters using the empirical moment generating function (MGF) is infeasible.
To address this, we suggest using the sub-Gaussian intrinsic moment norm
[Buldygin and Kozachenko (2000), Theorem 1.3] achieved by maximizing a sequence
of normalized moments. Significantly, the suggested norm can not only
reconstruct the exponential moment bounds of MGFs but also provide tighter
sub-Gaussian concentration inequalities. In practice, we provide an intuitive
method for assessing whether data with a finite sample size is sub-Gaussian,
utilizing the sub-Gaussian plot. The intrinsic moment norm can be robustly
estimated via a simple plug-in approach. Our theoretical findings are also
applicable to reinforcement learning, including the multi-armed bandit
scenario.
|
Huiming Zhang, Haoyu Wei, Guang Cheng
|
2023-03-13T17:03:19Z
|
http://arxiv.org/abs/2303.07287v2
|
# Tight Non-asymptotic Inference via Sub-Gaussian Intrinsic Moment Norm
###### Abstract
In non-asymptotic statistical inferences, variance-type parameters of sub-Gaussian distributions play a crucial role. However, direct estimation of these parameters based on the empirical moment generating function (MGF) is infeasible. To this end, we recommend using a sub-Gaussian intrinsic moment norm [1], Theorem 1.3] through maximizing a series of normalized moments. Importantly, the recommended norm can not only recover the exponential moment bounds for the corresponding MGFs, but also lead to tighter Hoeffding's sub-Gaussian concentration inequalities. In practice, we propose an intuitive way of checking sub-Gaussian data with a finite sample size by the sub-Gaussian plot. Intrinsic moment norm can be robustly estimated via a simple plug-in approach. Our theoretical results are applied to non-asymptotic analysis, including the multi-armed bandit.
## 1 Introduction
With the advancement of machine learning techniques, computer scientists have become more interested in establishing rigorous error bounds for desired learning procedures, especially those with finite sample validity (Wainwright, 2019; Zhang & Chen, 2021; Yang et al., 2020). In specific settings, statisticians, econometricians, engineers and physicists have developed non-asymptotic inferences to quantify uncertainty in data; see Romano & Wolf (2000); Chassang (2009); Arlot et al. (2010); Yang et al. (2020); Horowitz & Lee (2020); Armstrong & Kolesar (2021); Zheng & Cheng (2021); Lucas et al. (2008); Owhadi et al. (2013); Wang (2020). Therefore, concentration-based statistical inference has received a considerable amount of attention, especially for bounded data (Romano & Wolf, 2000; Auer et al., 2002; Hao et al., 2019; Wang et al., 2021; Shiu, 2022) and Gaussian data (Arlot et al., 2010; Duy & Takeuchi, 2022; Bettache et al., 2021; Feng et al., 2021). For example, Hoeffding's inequality can be applied to construct non-asymptotic confidence intervals based on bounded data1.
Footnote 1: Recently, Phan et al. (2021) obtained a sharper result than Hoeffding’s inequality for bounded data.
However, in reality, it may be hard to know the support of data or its underlying distribution. In this case, misusing Hoeffding's inequality (Hoeffding, 1963) for unbounded data will result in a notably loose confidence interval (CI); see Appendix A.1. Hence, it is a common practice to assume that data follow sub-Gaussian distribution (Kahane, 1960). By the Chernoff inequality2, we have \(\mathrm{P}(X\geq t)\leq\inf_{s>0}\big{\{}\exp\{-st\}\mathrm{E}\exp\{sX\}\big{\}}, \ \forall\,t\geq 0\). Hence, tightness of a confidence interval relies on how we upper bound the moment generating function (MGF) \(\mathrm{E}\exp\{sX\}\) for all \(s>0\). This can be further translated into the following optimal variance proxy of sub-Gaussian distribution.
Footnote 2: For simplicity, we consider centered random variable (r.v.) with zero mean throughout the paper for all sub-Gaussian r.v..
**Definition 1**.: _A r.v. \(X\) is sub-Gaussian (sub-G) with a variance proxy \(\sigma^{2}\) [denoted as \(X\sim\mathrm{subG}(\sigma^{2})\)] if its MGF satisfies \(\mathrm{E}\exp(tX)\leq\exp(\sigma^{2}t^{2}/2)\) for all \(t\in\mathbb{R}\). The sub-Gaussian parameter \(\sigma_{\text{opt}}(X)\) is defined by the optimal
variance proxy (Chow, 1966):_
\[\sigma^{2}_{opt}(X):=\inf\left\{\sigma^{2}>0:\mathrm{E}\exp(tX)\leq\exp\{\sigma^{ 2}t^{2}/2\},\quad\forall\,t\in\mathbb{R}\right\}=2\sup\nolimits_{t\in\mathbb{R} }t^{-2}\mathrm{log}[\mathrm{E}\exp(tX)]. \tag{1}\]
Note that \(\sigma^{2}_{opt}(X)\geq\mathrm{Var}\,X\); see (14) in Appendix A.2. When \(\sigma^{2}_{opt}(X)=\mathrm{Var}\,X\), it is called strict sub-Gaussianity (Arbel et al., 2020). Based on Theorem 1.5 in Buldygin & Kozachenko (2000), we have
\[\mathrm{P}\left(X\geq t\right)\leq\exp\left\{-\frac{t^{2}}{2\sigma^{2}_{opt}( X)}\right\},\ \ \mathrm{P}\Big{(}\big{|}\sum_{i=1}^{n}X_{i}|\geq t\Big{)}\leq 2\exp\left\{- \frac{t^{2}}{2\sum_{i=1}^{n}\sigma^{2}_{opt}(X_{i})}\right\}. \tag{2}\]
for independent sub-G r.v.s \(X\) and \(\{X_{i}\}_{i=1}^{n}\). The above inequality (2) provides the tightest upper bound over the form \(\mathrm{P}(X>t)\leq\exp(-Ct^{2})\) (or \(\mathrm{P}(\big{|}\sum_{i=1}^{n}X_{i}|>t)\leq\exp(-Ct^{2})\)) for some positive constant \(C\) via Chernoff inequality.
Given \(\{X_{i}\}_{i=1}^{n}\overset{\mathrm{i.i.d.}}{\sim}\mathrm{subG}(\sigma^{2}_{opt}(X))\), a straightforward application of (2) gives a non-asymptotic \(100(1-\alpha)\%\) CI
\[\mathrm{E}X=0\in[\overline{X}_{n}\pm\sigma_{opt}(X)\sqrt{2n^{-1}\log(2/\alpha) }]. \tag{3}\]
A naive plug-in estimate3 of \(\sigma^{2}_{opt}(X):=2\sup\nolimits_{t\in\mathbb{R}}t^{-2}\mathrm{log}[ \mathrm{E}\exp(tX)]\)(Arbel et al., 2020) is
Footnote 3: We point out that a conservative and inconsistent estimator \(2\inf_{t\in\mathbb{R}}\log(n^{-1}\sum_{i=1}^{n}\exp(tX_{i}))/t^{2}\) was proposed in statistical physics literature (Wang, 2020).
\[\widehat{\sigma}^{2}_{opt}(X):=2\sup\nolimits_{t\in\mathbb{R}}t^{-2}\mathrm{ log}[n^{-1}\Sigma_{i=1}^{n}\exp(tX_{i})]. \tag{4}\]
However, two weaknesses of (4) substantially hinder its application: (i) the optimization result is unstable due to the possible non-convexity of the objective function; (ii) exponentially large \(n\) is required to ensure the variance term \(\mathrm{Var}(n^{-1}\sum_{i=1}^{n}\exp(tX_{i}))\) not to explode when \(t\) is large. In Section 3, we present some simulation evidence.
On the other hand, we are aware of other forms of variance-type parameters. For instance, van der Vaart & Wellner (1996) introduced the Orlicz norm \(\|X\|_{w_{2}}:=\inf\{c>0:\mathrm{E}\exp\{|X|^{2}/c^{2}\}\leq 2\}\), frequently used in empirical process theory. Additionally, Vershynin (2010) suggested a norm based on the scale of moments, \(\|X\|_{\psi_{2}}:=\max_{k\geq 2}k^{-1/2}(\mathrm{E}|X|^{k})^{1/k}\), as on Page 6 of Buldygin & Kozachenko (2000). However, as shown in Table 1 and Appendix A.2.1, both types of norm fail to deliver sharp probability bounds even for strict sub-G distributions, such as the standard Gaussian distribution and the symmetric beta distribution.
### Contributions
In light of the above discussion, we advocate the use of the intrinsic moment norm in Definition 2 for the construction of tight non-asymptotic CIs. There are two specific reasons: (i) it approximately recovers the tight inequalities (2); (ii) it can be estimated easily (in closed form) and robustly.
The following Definition 2 is from Page 6 and Theorem 1.3 of Buldygin & Kozachenko (2000).
**Definition 2** (Intrinsic moment norm).: \(\|X\|_{G}:=\max_{k\geq 1}\big{[}\frac{2^{k}k!}{(2k)!}\mathrm{E}X^{2k}\big{]}^{1/(2k)}=\max_{k\geq 1}\big{[}\frac{1}{(2k-1)!!}\mathrm{E}X^{2k}\big{]}^{1/(2k)}.\)
From the sub-G characterization (see Theorem 2.6 in Wainwright (2019)), \(\|X\|_{G}<\infty\) if and only if \(\sigma_{opt}(X)<\infty\) for any zero-mean r.v. \(X\). Hence, a finite intrinsic moment norm of a r.v. \(X\) ensures sub-Gaussianity (satisfying Definition 1).
Our contributions in this paper can be summarized as follows.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \(\|\cdot\|_{*}\)-norm & sharp tail for \(\mathrm{P}(|X|\geq t)\) & sharp MGF bound & half length of \((1-\delta)\)-CI & easy to estimate \\ \hline \(\sigma_{opt}(X)\) & Yes \([2\exp\{-\frac{t^{2}}{2}/\sigma^{2}_{opt}(X)\}]\) & Yes \([\exp\{\sigma^{2}_{opt}(X)\frac{t^{2}}{2}\}]\) & \(\sqrt{2\log(2/\delta)}\sigma_{opt}(X)\) & No \\ \hline \(\|X\|_{w_{2}}\) & Yes \([2\exp\{-\frac{t^{2}}{2}/(\frac{\|X\|_{w_{2}}}{\sqrt{2}})^{2}]\}]\) & No \([\exp\{(\sqrt{5/2}\|X\|_{w_{2}})^{2}\frac{t^{2}}{2}\}]\) & \(\sqrt{2\log(2/\delta)}\sqrt{5/2}\|X\|_{w_{2}}\) & No \\ \hline \(\|X\|_{\psi_{2}}\) & No \([2\exp\{-\frac{t^{2}}{2}/(2\|X\|_{w_{2}}^{2})\}]\) & No \([\exp\{(\sqrt{10}e\|X\|_{\psi_{2}})^{2}\frac{t^{2}}{2}\}]\) & \(\sqrt{2\log(2/\delta)}\sqrt{10e}\|X\|_{\psi_{2}}\) & Yes \\ \hline \(\|X\|_{G}\) (Def. 2) & Yes \([2\exp\{-\frac{t^{2}}{2}/\|X\|_{G}^{2}\}]\) & Yes \([\exp\{\|X\|_{G}^{2}\frac{t^{2}}{2}\}]\) & \(\sqrt{2\log(2/\delta)}\|X\|_{G}\) & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of sub-Gaussian norms \(\|\cdot\|_{*}\) for centralized and symmetric \(X\).
1. By \(\left\|X\right\|_{G}\), we achieve a sharper Hoeffding-type inequality for asymmetric distributions; see Theorem 2(b).
2. Compared to the normal approximation based on Berry-Esseen (B-E) bounds, our results are more applicable to data of extremely small sample size. We illustrate Bernoulli observations with the comparison of two types of CIs based on the B-E-corrected CLT and Hoeffding's inequality in Figure 1; see Appendix A for details.
3. A novel method called _sub-Gaussian plot_ is proposed for checking whether the unbounded data are sub-Gaussian. We introduce plug-in and robust plug-in estimators for \(\left\|X\right\|_{G}\), and establish finite sample theories.
4. Finally, we employ the intrinsic moment norm estimation in the non-asymptotic inference for a bandit problem: Bootstrapped UCB-algorithm for multi-armed bandits. This algorithm is shown to achieve feasible error bounds and competitive cumulative regret on unbounded sub-Gaussian data.
## 2 sub-Gaussian plot and testing
Before estimating \(\left\|X\right\|_{G}\), the first step is to verify \(X\) is indeed sub-G given its i.i.d. copies \(\{X_{i}\}_{i=1}^{n}\). Corollary 7.2 (b) in Zhang and Chen (2021) shows for r.v.s \(X_{i}\sim\mathrm{subG}(\sigma_{opt}^{2}(X))\) (without independence assumption)
\[\mathrm{P}(\max_{1\leq i\leq j}X_{i}\leq\sigma_{opt}(X)\sqrt{2(\log j+t)})\geq 1 -\exp\{-t\}, \tag{5}\]
which implies \(\max_{1\leq i\leq j}X_{i}=O_{\mathrm{P}}(\sqrt{\log j})\). Moreover, we will show the above rate is indeed sharp for a class of unbound sub-G r.v.s characterized by the _lower intrinsic moment norm_ below.
**Definition 3** (Lower intrinsic moment norm).: _The lower intrinsic moment norm for a sub-G \(X\) is defined as_
\[\left\|X\right\|_{\tilde{G}}:=\min_{k\geq 1}\{[(2k-1)!!]^{-1}\mathrm{E}X^{2k} \}^{1/(2k)}.\]
By the method in Theorem 1 of Zhang and Zhou (2020), we obtain the following tight rate result with a lower bound.
**Theorem 1**.: _(a). If \(\left\|X\right\|_{\tilde{G}}>0\) for i.i.d. symmetric sub-G r.v.s \(\{X_{i}\}_{i=1}^{n}\sim X\), then with probability at least \(1-\delta\)_
\[\frac{\left\|X\right\|_{\tilde{G}}/\left\|X\right\|_{G}}{2\sqrt{2\left\|X \right\|_{G}^{2}/\left\|X\right\|_{\tilde{G}}^{2}}-1}\sqrt{\log n-\log C^{-2} (X)-\log\log\left(\frac{2}{\delta}\right)}\leq\max_{1\leq i\leq n}\frac{X_{i }}{\left\|X\right\|_{G}}\leq\sqrt{2[\log n+\log\left(\frac{2}{\delta}\right) ]},\]
_where \(C(X)<1\) is a constant defined in Lemma 1 below; (b) if \(X\) is a bounded variable, then \(\left\|X\right\|_{\tilde{G}}=0\)._
The upper bound follows from the proof of (5) similarly. The proof of lower bound relies on the sharp reverse Chernoff inequality from Paley-Zygmund inequality (see Paley and Zygmund (1932)).
**Lemma 1** (A reverse Chernoff inequality).: _Suppose \(\left\|X\right\|_{\tilde{G}}>0\) for a symmetric sub-G r.v. \(X\). For \(t>0\), then_
\[\mathrm{P}(X\geq t)\geq C^{2}(X)\exp\{-4[2\|X\|_{G}^{2}/\|X\|_{\tilde{G}}^{4}- \|X\|_{\tilde{G}}^{-2}]t^{2}\},\]
Figure 1: CIs via Hoeffding’s inequality (red line) and B-E-corrected CLT (blue line). It describes a deficiency of B-E-corrected CLT under small sample, and it suggests that a simple Hoeffding’s inequality can even perform better.
_where \(C(X):=\left(\frac{\|X\|_{G}^{2}}{4\|X\|_{G}^{2}-\|X\|_{G}^{2}}\right)\left(\frac{4 \|X\|_{G}^{2}-2\|X\|_{G}^{2}}{4\|X\|_{G}^{2}-\|X\|_{G}^{2}}\right)^{2[2\|X\|_{G} ^{2}/\|X\|_{G}^{2}-1]}\in(0,1)\)._
Theorem 1 of Zhang and Zhou (2020) does not optimize the constant in Paley-Zygmund inequality. In contrast, our Lemma 1 has an optimal constant; see Appendix C for details.
**Sub-Gaussian plot under unbounded assumption4**. By Theorem 1, we propose a novel _sub-Gaussian plot_ to check whether i.i.d. data \(\{X_{i}\}_{i=1}^{n}\) follow a sub-G distribution. Suppose that for each \(j\), \(\{X_{i}^{*}\}_{i=1}^{n}\) are independently sampled from the empirical distribution \(\mathbb{F}_{n}(x)=\frac{1}{n}\sum_{i=1}^{n}1(X_{i}\leq x)\) of \(\{X_{i}\}_{i=1}^{n}\). Specifically, we plot the order statistics \(\{\max_{1\leq i\leq j}X_{i}^{*}\}_{j=1}^{n}\) in the plane, where the \(x\) axis represents \(\sqrt{\log j+1}\) and the \(y\) axis the value of \(\max_{1\leq i\leq j}X_{i}^{*}\). We then check whether those points have a linear tendency at the boundary: the closer they are to a straight-line tendency, the more we can trust that the data are sub-Gaussian.
Footnote 4: Sub-G plot can only be applied to data with enough samples. When \(n\) is very small, there is not enough information to suggest unbounded trends. We roughly treat the data as bounded r.v. for a very small \(n\), and there is no need to use a sub-G plot in this case.
Figure 2 shows the _sub-Gaussian plot_ of \(N(0,1)\) and \(\operatorname{Exp}(1)\). It can be seen that the sub-Gaussian plot of \(N(0,1)\) shows a linear tendency at the boundary, while that of \(\operatorname{Exp}(1)\) shows a quadratic tendency at the boundary. For the quadratic tendency, we note that if \(\{X_{i}\}_{i=1}^{n}\) have heavier tails such as sub-exponentiality, then \(\max_{1\leq i\leq j}X_{i}=O_{\mathrm{P}}(\log j)\) instead of the order \(O(\sqrt{\log j})\); see Corollary 7.3 in Zhang and Chen (2021).
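A minimal sketch of one simple variant of this diagnostic is given below: it draws a single bootstrap resample from the empirical distribution and plots its running maximum against \(\sqrt{\log j+1}\) (the standard normal sample is only a placeholder for the observed data).

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.standard_normal(1000)            # placeholder for the observed sample

# Resample from the empirical distribution and track the running maximum
X_star = rng.choice(X, size=X.size, replace=True)
running_max = np.maximum.accumulate(X_star)
j = np.arange(1, X.size + 1)

plt.scatter(np.sqrt(np.log(j) + 1), running_max, s=5)
plt.xlabel(r"$\sqrt{\log j + 1}$")
plt.ylabel(r"$\max_{1\le i\le j} X_i^*$")
plt.title("sub-Gaussian plot: a linear upper boundary suggests sub-Gaussian tails")
plt.show()
```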
## 3 Finite Sample Properties of Intrinsic Moment Norm
In this section, we characterize two important properties of the intrinsic moment norm that are used in constructing non-asymptotic confidence intervals.
### Basic Properties
Lemma 2 below establishes that the intrinsic moment norm is estimable.
**Lemma 2**.: _For sub-G \(X\), we have \(\operatorname*{arg\,max}_{m\in 2\mathbb{N}}\left[\frac{\operatorname{E}X^{m}}{(m-1)!!}\right]^{1/m}<\infty\), where \(2\mathbb{N}:=\{2,4,\cdots\}\) is the even number set._
Lemma 2 ensures that for any sub-Gaussian variable \(X\), its intrinsic moment norm can be computed as
\[\|X\|_{G}:=\max_{m\in 2\mathbb{N}}\left[\frac{\operatorname{E}X^{m}}{(m-1)!!}\right]^{1/m}=\max_{1\leq k\leq k_{X}}\left[\frac{\operatorname{E}X^{2k}}{(2k-1)!!}\right]^{1/(2k)}\text{ with some finite }k_{X}<\infty.\]
This is an important property that other norms may not have. The norm \(\|X\|_{\psi_{2}}:=\max_{k\geq 2}k^{-1/2}(\operatorname{E}|X|^{k})^{1/k}\) for Gaussian \(X\) achieves its optimum only at \(k=\infty\); see Example 3 in Appendix A.2.1. As for \(\sigma_{opt}^{2}(X):=2\sup_{t\in\mathbb{R}}\frac{\log\operatorname{E}\operatorname{exp}(tX)}{t^{2}}\), it is unclear whether its value is achieved at a finite \(t\). Note that if \(k_{X}=1\), one has \(\|X\|_{G}^{2}=\operatorname{Var}(X)\).
Figure 2: sub-Gaussian plot of the standard Gaussian and the standard exponential distribution for \(n=1000\). Left: the two dotted lines indicate that the points fall in a triangular region with high probability. Right: the points in the exponential case approximately lie in a curved triangular region with a quadratic trend.
Next, we present an example in calculating the values of \(k_{X}\). Denote \(\,\mathrm{Exp}(1)|_{[0,M]}\) as the truncated standard exponential distribution on \([0,M]\) with the density as \(f(x)=\frac{e^{-x}}{\int_{0}^{M}e^{-x}\,dx}1_{\{x\in[0,M]\}}\).
**Example 1**.: _a. \(X\sim U[-a,a]\), \(k_{X}=1\) for any \(a\in\mathbb{R}\); b. \(X\sim\mathrm{Exp}(1)|_{[0,2.75]}-\mathrm{E}\,\mathrm{Exp}(1)|_{[0,2.75]}\), \(k_{X}=2\); \(c\). \(X\sim\mathrm{Exp}(1)|_{[0,3]}-\mathrm{E}\,\mathrm{Exp}(1)|_{[0,3]}\); \(k_{X}=3\). Indeed, for any fixed \(k_{0}\in\mathbb{N}\), we can construct a truncated exponential r.v. \(X:=\mathrm{Exp}(1)|_{[0,M]}\) such that \(k_{X}=k_{0}\) by properly adjusting the truncation level \(M\)._
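These values can be verified numerically; a small sketch is given below, assuming the variable in Example 1 is the truncated exponential centered at its mean and computing the per-\(k\) normalized moments by direct numerical integration (the reported \(k_{X}\) values are those claimed in Example 1).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import factorial2

def truncated_exp_kX(M, k_max=10):
    """Locate the maximizing k for X = Exp(1)|[0,M] - E Exp(1)|[0,M] (Example 1)."""
    Z = 1 - np.exp(-M)                                    # normalizing constant
    mean = quad(lambda x: x * np.exp(-x) / Z, 0, M)[0]
    norms = []
    for k in range(1, k_max + 1):
        m2k = quad(lambda x: (x - mean)**(2 * k) * np.exp(-x) / Z, 0, M)[0]
        norms.append((m2k / factorial2(2 * k - 1)) ** (1 / (2 * k)))
    return int(np.argmax(norms)) + 1

# Example 1 reports k_X = 2 for M = 2.75 and k_X = 3 for M = 3
print(truncated_exp_kX(2.75), truncated_exp_kX(3.0))
```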
### Concentration for summation
In what follows, we will show another property of \(\|X\|_{G}\) that it recovers nearly tight MGF bounds in Definition 1. More powerfully, it enables us to derive the sub-G Hoeffding's inequality (2).
**Theorem 2**.: _Suppose that \(\{X_{i}\}_{i=1}^{n}\) are independent r.v.s with \(\max_{i\in[n]}\|X_{i}\|_{G}<\infty\). We have_
(a). _If \(X_{i}\) is symmetric about zero, then \(\mathrm{Eexp}\{tX_{i}\}\leq\exp\{t^{2}\left\|X_{i}\right\|_{G}^{2}/2\}\) for any \(t\in\mathbb{R}\), and_
\[\mathrm{P}\left(\left|\sum_{i=1}^{n}X_{i}\right|\geq s\right)\leq 2\exp\{-s^{2}/ [2\sum_{i=1}^{n}\left\|X_{i}\right\|_{G}^{2}]\},\quad s\geq 0.\]
(b). _If \(X_{i}\) is not symmetric, then \(\mathrm{Eexp}\{tX_{i}\}\leq\exp\{(17/12)t^{2}\left\|X_{i}\right\|_{G}^{2}/2\}\) for any \(t\in\mathbb{R}\), and_
\[\mathrm{P}\left(\left|\sum_{i=1}^{n}X_{i}\right|\geq s\right)\leq 2\exp\{-(12/ 17)s^{2}/[2\sum_{i=1}^{n}\left\|X_{i}\right\|_{G}^{2}]\},\qquad s\geq 0.\]
Theorem 2(a) is an existing result, given in Theorem 2.6 of Wainwright (2019). For Theorem 2(b), we obtain \(\sqrt{17/12}\approx 1.19\), while Lemma 1.5 in Buldygin & Kozachenko (2000) obtained \(\mathrm{Eexp}\{tX_{i}\}\leq\exp\left\{\frac{t^{2}}{2}(\sqrt[4]{3.1}\left\|X_{i}\right\|_{G})^{2}\right\}\) for \(t\in\mathbb{R}\) with \(\sqrt[4]{3.1}\approx 1.32\). Essentially, \(\sqrt{17/12}>1\) appears for asymmetric variables, since \(\left\|\cdot\right\|_{G}\) is defined by comparison with a Gaussian variable \(G\), which is symmetric. A technical reason for this improvement is that \(\left\|\cdot\right\|_{G}\) does not need Stirling's approximation to attain a sharper MGF bound when expanding the exponential function by Taylor's formula. To show the tightness of Theorem 2(b), in Figure 4 we give some comparisons of \(\sigma_{opt}(X)\), \(\sqrt{17/12}\|X\|_{G}\), \(\sqrt{2e}\|X\|_{\psi_{2}}\), \(\|X\|_{w_{2}}/\sqrt{2}\) and \(\sqrt{\mathrm{Var}\,X}\) in terms of the confidence lengths in Table 1, when \(X\) is Bernoulli or beta distributed.
Figure 4: \(\mathrm{Beta}(\alpha,\beta)\)
Figure 5: The half length of \(1-\delta\) confidence interval with different norms. The results are divided by \(\sqrt{2\log(2/\delta)}\) to eliminate the affect of \(\delta\).
## 4 Estimation of the intrinsic moment norm
A first thought to estimate \(\|X\|_{G}\) is by the plug-in approach. Although \(k_{X}\) is proven to be finite in Lemma 2, its (possibly large) exact value is still unknown in practice. Instead, we use a non-decreasing _index sequence_\(\{\kappa_{n}\}\) to replace \(k_{X}\) in the estimation. Hence, we suggest a plug-in feasible estimator
\[\widehat{\|X\|}_{G}=\max_{1\leq k\leq\kappa_{n}}\left[\frac{1}{(2k-1)!!}\frac{ 1}{n}\sum_{i=1}^{n}X_{i}^{2k}\right]^{1/(2k)}. \tag{6}\]
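In code, this plug-in estimator is a few lines (a sketch using numpy and scipy; the choice of \(\kappa_{n}\) is up to the user, e.g. \(\lceil\log n\rceil\) as used in the simulations below):

```python
import numpy as np
from scipy.special import factorial2

def intrinsic_norm_plugin(x, kappa_n):
    """Plug-in estimator (6): max over k of [mean(X^{2k}) / (2k-1)!!]^{1/(2k)}."""
    x = np.asarray(x, dtype=float)
    return max((np.mean(x ** (2 * k)) / factorial2(2 * k - 1)) ** (1 / (2 * k))
               for k in range(1, kappa_n + 1))
```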
Deriving the non-asymptotic property of \(\widehat{\|X\|}_{G}\) is not an easy task: the maximum point \(\hat{k}(\kappa_{n}):=\arg\max_{1\leq k\leq\kappa_{n}}\left[\frac{1}{(2k-1)!!}\frac{1}{n}\sum_{i=1}^{n}X_{i}^{2k}\right]^{1/(2k)}\) will change with the sample size \(n\) even when \(\kappa_{n}\) is fixed.
To resolve this, we first examine the oracle estimator defined as \(\widetilde{\|X\|}_{G}=\left[\frac{1}{(2k_{X}-1)!!}\frac{1}{n}\sum_{i=1}^{n}X_{i}^{2k_{X}}\right]^{1/2k_{X}}\). Here, based on the Orlicz norm \(\|Y\|_{\psi_{\theta}}:=\inf\{t>0:\mathrm{E}\exp\{|Y|^{\theta}/t^{\theta}\}\leq 2\}\) of a sub-Weibull r.v. \(Y\) with \(\theta>0\) (Hao et al., 2019; Zhang & Wei, 2022), we present the non-asymptotic concentration of \(\widetilde{\|X\|}_{G}\) around its true value \(\|X\|_{G}\).
**Proposition 1**.: _Suppose \(\{X_{i}\}_{i=1}^{n}\stackrel{{\mathrm{i.i.d.}}}{{\sim}}X\) and \(X\) satisfies \(\|X\|_{\psi_{1/k_{X}}}<\infty\), then for any \(t>0\),_
\[\mathrm{P}\bigg{(}\Big{|}\widetilde{\|X\|}_{G}^{2k_{X}}-\|X\|_{G}^{2k_{X}} \Big{|}\leq 2e\|X\|_{\psi_{1/k_{X}}}C(k_{X}^{-1})\left\{\sqrt{\frac{t}{n}}+ \gamma^{2k_{X}}A(k_{X}^{-1})\frac{t^{k_{X}}}{n}\right\}\bigg{)}\geq 1-2e^{-t},\]
_where the constant \(\gamma\approx 1.78\), and the constant functions \(C(\cdot)\) and \(A(\cdot)\) are defined in Appendix C._
The exponential-moment condition \(\|X\|_{\psi_{1/k_{X}}}<\infty\) is too strong for the error bound of \(\widetilde{\|X\|}_{G}^{2k_{X}}-\|X\|_{G}^{2k_{X}}\) in Proposition 1, although the bound enjoys the exponentially decaying probability \(1-2\mathrm{exp}(-t)\).
Beyond the direct plug-in estimator, we resort to the median-of-means (MOM, Page 244 in Nemirovskij & Yudin (1983)) as a robust plug-in estimator of the intrinsic moment norm. Let \(m\) and \(b\) be positive integers such that \(n=mb\) and let \(B_{1},\ldots,B_{b}\) be a partition of \([n]\) into blocks of equal cardinality \(m\). For any \(s\in[b]\), let \(\mathrm{P}_{m}^{B_{s}}X=m^{-1}\sum_{i\in B_{s}}X_{i}\) for independent data \(\{X_{i}\}_{i=1}^{n}\). The MOM version of the intrinsic moment norm estimator is defined as
\[\widehat{\|X\|}_{b,G}:=\max_{1\leq k\leq\kappa_{n}}\operatorname*{med}_{s\in[ b]}\left\{\left[[(2k-1)!!]^{-1}\mathrm{P}_{m}^{B_{s}}X^{2k}\right]^{1/(2k)} \right\}. \tag{7}\]
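A corresponding sketch of the MOM estimator (again with numpy/scipy; the random equal-size partition and the illustrative call below are choices of this sketch, not prescriptions from the theory):

```python
import numpy as np
from scipy.special import factorial2

def intrinsic_norm_mom(x, kappa_n, b, seed=0):
    """MOM estimator (7): for each k, take the median over b equal-size blocks."""
    x = np.asarray(x, dtype=float)
    blocks = np.array_split(np.random.default_rng(seed).permutation(x), b)
    vals = []
    for k in range(1, kappa_n + 1):
        per_block = [(np.mean(blk ** (2 * k)) / factorial2(2 * k - 1)) ** (1 / (2 * k))
                     for blk in blocks]
        vals.append(np.median(per_block))
    return max(vals)

# Illustrative call: standard normal data, kappa_n = ceil(log n), b = 5 blocks
x = np.random.default_rng(1).standard_normal(1000)
print(intrinsic_norm_mom(x, kappa_n=int(np.ceil(np.log(x.size))), b=5))
```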
As stated in Proposition 1, the naive plug-in estimator \(\widehat{\|X\|}_{G}=\widehat{\|X\|}_{1,G}\) is not robust. MOM estimators (7) with \(b\gg 1\) have two merits: (a) they only need finite moment conditions, yet exponential concentration bounds are still achieved; (b) they permit some outliers in the data. Non-asymptotic inference requires bounding \(\|X\|_{G}\) by a feasible estimator \(\widehat{\|X\|}_{b,G}\) up to sharp constants. Next, we establish a high-probability upper bound for the estimated norm under the following inlier-outlier (\(I\cup O\)) assumptions.
* (M.1) Suppose that the data \(\{X_{i}\}_{i=1}^{n}\) contains \(n-n_{o}\) inliers \(\{X_{i}\}_{i\in I}\) drawn i.i.d. according to a target distribution, and there are no distributional assumptions on \(n_{o}\) outliers \(\{X_{i}\}_{i\in O}\);
* (M.2) \(b=b_{O}+b_{S}\), where \(b_{O}\) is the number of blocks _containing at least one outlier_ and \(b_{S}\) is the number of _sane blocks containing no outliers_. Let \(\varepsilon:=n_{o}/n\) be the fraction of the outliers and \(\frac{n_{o}}{b}<\frac{1}{2}\). Assume there exists a fraction function \(\eta(\varepsilon)\in(0,1]\) for the sane blocks such that \(b_{S}\geq\eta(\varepsilon)b\).
To obtain error bounds in the presence of outliers, (M.2) specifies the fraction of uncontaminated blocks; see Laforgue et al. (2021). Define \(\underline{g}_{k,m}(\sigma_{k})\) and \(\bar{g}_{k,m}(\sigma_{k})\) as the sequences, for any \(m\in\mathbb{N}\) and \(1\leq k\leq\kappa_{n}\):
\[\overline{g}_{k,m}(\sigma_{k}):=1-\left[\mathrm{E}X^{2k}/(2k-1)!!\right]^{-1/(2k)}\max_{1\leq k\leq\kappa_{n}}\left[-2[m/\eta(\varepsilon)]^{-1/2}\sigma_{k}^{k}/(\mathrm{E}X^{2k})+\mathrm{E}X^{2k}/(2k-1)!!\right]^{1/(2k)},\]
with \(\underline{g}_{k,m}(\sigma_{k})\) defined analogously.
**Theorem 3** (Finite sample guaranteed coverage).: _Suppose \(\sqrt{\mathrm{Var}X^{2k}}\leq\sigma_{k}^{k}\) for a sequence \(\{\sigma_{k}\}_{k=1}^{\kappa_{n}}\), we have_
\[\mathrm{P}\left\{\|X\|_{G}\leq[1-\max_{1\leq k\leq\kappa_{n}}\overline{g}_{k,m}(\sigma_{k})]^{-1}\widehat{\|X\|}_{b,G}\right\}>1-\kappa_{n}\cdot e^{-2b\eta(\varepsilon)(1-\frac{3}{4\eta(\varepsilon)})^{2}};\]
_and \(\mathrm{P}\{\|X\|_{G}\geq[1+\max_{1\leq k\leq\kappa_{n}}\underline{g}_{k,m}(\sigma_{k})]^{-1}\widehat{\|X\|}_{b,G}\}>1-\kappa_{n}\cdot e^{-2b\eta(\varepsilon)(1-\frac{3}{4\eta(\varepsilon)})^{2}}\) for \(\kappa_{n}\geq k_{X}\) under (M.1-M.2)._
Theorem 3 ensures the concentration of the estimator \(\widehat{\|X\|}_{b,G}\) when \(\kappa_{n}\geq k_{X}\), given a sufficiently large sample. If \(\eta(\varepsilon)=1\) with \(\varepsilon=0\), then the data are i.i.d. with no outliers, and the outlier assumptions (M.1-M.2) can be dropped in Theorem 3. When the data is an i.i.d. Gaussian vector, Proposition 4.1 in Auer et al. (2002) also gives a high-probability estimated upper bound for the \(\ell_{p}\)-norm of the vector of Gaussian standard deviations; our result instead concerns the intrinsic moment norm.
In practice, the block number \(b\) can be chosen adaptively via Lepski's method (Depersin & Lecue, 2022). To guarantee the high-probability events in Theorem 3, the index sequence \(\kappa_{n}\) should not be too large for a fixed \(b\); a larger \(\kappa_{n}\) requires a larger number of blocks \(B_{1},\ldots,B_{b}\). In the simulations we will see that an index sequence \(\kappa_{n}\) increasing at a slow rate leads to good performance.
Finally, we compare our two estimators (6) and (7), as well as the estimator (4), in Figure 6. We consider a standard Gaussian and a Rademacher-distributed \(X\); in both cases \(\|X\|_{G}^{2}=\sigma_{opt}^{2}(X)=\mathrm{Var}(X)=1\). Figure 6 shows the performance of the three estimators for sample sizes from \(n=10\) to \(1000\), with \(\kappa_{n}\) chosen as \(\lceil\log n\rceil\). For the MOM method, we use five blocks in this simple setting. In more complex cases, one can use Lepski's method to choose \(b\) (see Page & Grunewalder (2021)), at the price of considerable extra computation. From Figure 6, the MOM estimator performs best, while the naive estimator (4) performs worst. For high-quality data of extremely small sample size, the leave-one-out Hodges-Lehmann method (Rousseeuw & Verboven, 2002) can be applied for further numerical improvement; see Appendix B for details.
Figure 6: DE denotes the naive plug-in estimator (6), MOM denotes the MOM estimator (7), and OP denotes the naive plug-in estimator (4) of the optimal variance proxy.
## 5 Application to the Multi-armed Bandit Problem
In the multi-armed bandit (MAB) problem, a player chooses between \(K\) different slot machines (a \(K\)-armed bandit), each with an unknown random reward \(\{Y_{k}\}_{k=1}^{K}\subseteq\mathbb{R}\), where the realizations of a fixed arm \(k\) are independent and share the same distribution. Further, we assume the rewards are sub-Gaussian, i.e.
\[\|Y_{k}-\mu_{k}\|_{G}<\infty,\qquad k\in[K]. \tag{9}\]
Our goal is to find the best arm, say \(Y_{\ast}\), with the largest expected reward by pulling arms. In each round \(t\in[T]\), the player pulls an arm (an action) \(A_{t}\in[K]\). Conditioning on \(\{A_{t}=k\}\), we define the observed rewards \(\{Y_{k,t}\}_{t\in[T]}\stackrel{{\text{i.i.d.}}}{{\sim}}P_{k}\). The goal of exploration in MAB is to minimize the cumulative regret after \(T\) steps, \(\operatorname{Reg}_{T}(Y,A):=\sum_{t=1}^{T}(\mu_{t^{\ast}}-\mu_{A_{t}})\), which measures the performance of an exploration strategy \(\{A_{t}\}_{t\in[T]}\): the smaller \(\operatorname{Reg}_{T}(Y,A)\), the better the exploration. Without loss of generality, we assume \(t^{\ast}=1\). We seek to evaluate the expected regret from the decomposition (see Lemma 4.5 in Lattimore & Szepesvari (2020)),
\[\operatorname{Reg}_{T}:=\operatorname{E}\operatorname{Reg}_{T}(Y,A)=\sum_{k= 1}^{K}\Delta_{k}\mathrm{E}\Bigl{[}\sum_{t=1}^{T}1\left\{A_{t}=k\right\}\Bigr{]}, \tag{10}\]
where \(\mathrm{E}\) is taken over the randomness of the player's actions \(\{A_{t}\}_{t\in[T]}\), and \(\Delta_{k}=\mu_{1}-\mu_{k}\) is the sub-optimality gap for arm \(k\in[K]\setminus\{1\}\). An upper bound on \(\operatorname{Reg}_{T}\) is called problem-independent if it depends on the distribution of the data but not on the gaps \(\Delta_{k}\).
For each iteration \(t\), let \(T_{k}(t):=\operatorname{card}\{1\leq\tau\leq t:A_{\tau}=k\}\) be the number of pulls of arm \(k\) up to time \(t\), and define \(\overline{Y}_{T_{k}(t)}:=\frac{1}{T_{k}(t)}\sum_{\tau\leq t,A_{\tau}=k}Y_{k,\tau}\) as the running average of the rewards of arm \(k\) at time \(t\). Suppose we obtain a \(100(1-\delta)\%\) CI \(\left[\overline{Y}_{T_{k}(t)}-c_{k}(t),\overline{Y}_{T_{k}(t)}+c_{k}(t)\right]\) for \(\mu_{k}\) from a tight concentration inequality. We may then optimistically take the reward of arm \(k\) to be \(\overline{Y}_{T_{k}(t)}+c_{k}(t)\) and play the arm \(A_{t}=k\) maximizing this quantity, hoping to maximize the reward with high probability for finite \(t\). This is the upper confidence bound (UCB; Auer et al., 2002) algorithm, and many works based on it have appeared recently; for example, Hao et al. (2019) use a bootstrap method with a second-order correction to give an algorithm with explicit regret bounds for sub-Gaussian rewards. However, many existing algorithms involve unknown norms of the random rewards and are therefore infeasible in practice; for instance, the algorithm of Hao et al. (2019) requires the unknown Orlicz norm of \(\overline{Y}_{k}-\mu_{k}\). Our Theorem 4 below provides an example with an explicit and feasible regret bound.
Fortunately, our estimator resolves this issue. Suppose that \(Y_{k}-\mu_{k}\) is symmetric around zero; by the one-sided version of Theorem 2, condition (9) implies that for all \(k\) and all \(t\), \(\mathrm{P}(\overline{Y}_{T_{k}(t)}>\mu_{k}+\|Y_{k}-\mu_{k}\|_{G}\sqrt{\frac{2 }{T_{k}(t)}\log\frac{1}{\delta}})\leq\delta\). Let the sub-sample size \(m_{k}\) and block number \(b_{k}\) be positive integers such that \(T_{k}(t)=m_{k}b_{k}\) for the MOM estimator \(\|\widehat{Y_{k}-\mu_{k}}\|_{b_{k},G}\) of Section 3. Theorem 3 (a) guarantees that the true norms can be replaced by the MOM-estimated norms, so that \(\mathrm{P}(\overline{Y}_{T_{k}(t)}\leq\mu_{k}+\frac{\|\widehat{Y_{k}-\mu_{k}} \|_{b_{k},G}}{1-o(1)}\sqrt{\frac{2}{T_{k}(t)}\log\frac{1}{\delta}})\geq 1- \delta-k_{Y_{k}}\cdot\exp(-b_{k}/8)\) if \(\eta(\varepsilon)=1\) with \(\varepsilon=0\).
If the UCB algorithm is applied correctly, then for finite \(T_{k}(t)\) we will pull the best arm with high probability.
In practice, we typically have little prior knowledge about the data. As a flexible way of uncertainty quantification, the multiplier bootstrap (Arlot et al., 2010) mimics the non-asymptotic properties of the target statistic by reweighting the summands of the centralized empirical mean. The multiplier bootstrapped quantile for the i.i.d. observations \(\mathbf{Y}_{n}:=\{Y_{i}\}_{i=1}^{n}\) is the \((1-\alpha)\)-quantile of the distribution of \(n^{-1}\!\sum_{i=1}^{n}w_{i}(Y_{i}-\overline{Y}_{n})\), defined as
\[q_{\alpha}(\mathbf{Y}_{n}-\overline{Y}_{n},\mathbf{w}):=\inf\{x\in\mathbb{R}\,| \,\mathrm{P}_{w}(n^{-1}\!\sum_{i=1}^{n}\!w_{i}(Y_{i}-\overline{Y}_{n})>x)\leq \alpha\},\]
where \(\mathbf{w}:=\{w_{i}\}_{i=1}^{n}\) are bootstrap random weights independent of \(\mathbf{Y}_{n}\). We let \(\widehat{\varphi}_{G}(\mathbf{Y}_{n})\) denote any statistic satisfying \(\mathrm{P}_{\mathbf{Y}_{n}}(|\overline{Y}_{n}-\mathrm{E}Y_{1}|\geq\widehat{ \varphi}_{G}(\mathbf{Y}_{n}))\leq\alpha\).
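A minimal Monte Carlo sketch of this bootstrapped quantile with Rademacher weights follows; the number of bootstrap draws is an arbitrary choice and the function name is illustrative.

```python
import numpy as np

def bootstrap_quantile(y, alpha, n_boot=2000, rng=None):
    """Rademacher multiplier-bootstrap quantile: approximates the
    (1 - alpha)-quantile of n^{-1} sum_i w_i (Y_i - mean(Y)) over random
    sign weights w_i, using n_boot Monte Carlo draws."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, dtype=float)
    centered = y - y.mean()
    w = rng.choice([-1.0, 1.0], size=(n_boot, y.size))   # Rademacher weights
    stats = (w @ centered) / y.size
    return float(np.quantile(stats, 1.0 - alpha))

rng = np.random.default_rng(2)
print(bootstrap_quantile(rng.standard_normal(200), alpha=0.05, rng=rng))
```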
Motivated by Hao et al. (2019), we design Algorithm 1 based on estimated UCBs. It guarantees a relatively small regret by combining the bootstrapped threshold \(q_{\alpha/2}(\mathbf{Y}_{T_{k}(t)}-\overline{Y}_{T_{k}(t)},\mathbf{w})\) with a concentration-based second-order correction \(\widehat{\varphi}_{G}(\mathbf{Y}_{T_{k}(t)})\) specified in (11) below.
In the following regret bounds, we assume the mean reward \(\mu_{k}\) of the \(k\)-th arm is known. In practice, it can be replaced by a robust estimator, and the results for the MOM estimator carry over.
```
Require: \(\widehat{\varphi}_{G}(\mathbf{Y}_{T_{k}(t)})\) is given by (11).
for \(t=1,\ldots,K\) do
    Pull each arm once to initialize the algorithm.
end for
for \(t=K+1,\ldots,T\) do
    Set a confidence level \(\alpha\in(0,1)\).
    Calculate the bootstrapped quantile \(q_{\alpha/2}(\mathbf{Y}_{T_{k}(t)}-\overline{Y}_{T_{k}(t)},\mathbf{w})\) with Rademacher bootstrap weights \(\mathbf{w}\) independent of the data.
    Pull the arm \(A_{t}=\operatorname*{argmax}_{k\in[K]}\operatorname{UCB}_{k}(t):=\operatorname*{argmax}_{k\in[K]}\big{(}\overline{Y}_{T_{k}(t)}+q_{\alpha/2}(\mathbf{Y}_{T_{k}(t)}-\overline{Y}_{T_{k}(t)},\mathbf{w})+\sqrt{\frac{2\log(4/\alpha)}{T_{k}(t)}}\widehat{\varphi}_{G}(\mathbf{Y}_{T_{k}(t)})\big{)}\).
    Receive reward \(Y_{A_{t}}\).
end for
```
**Algorithm 1** Bootstrapped UCB
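To make the procedure concrete, the Python sketch below implements the spirit of Algorithm 1 with a Rademacher-weight bootstrap quantile and a MOM-norm-based correction. The correction term, the default block number, and the helper names are simplified placeholders rather than the exact scaling of (11) and the block choices of Appendix C.

```python
import numpy as np

def mom_norm(x, b=5, kappa=3):
    """Median-of-means estimate of the intrinsic moment norm (cf. (7))."""
    x = np.asarray(x, dtype=float)
    m = max(x.size // b, 1)
    blocks = x[: m * b].reshape(-1, m)
    best = 0.0
    for k in range(1, kappa + 1):
        dfact = np.prod(np.arange(1, 2 * k, 2))          # (2k-1)!!
        best = max(best, float(np.median(
            (np.mean(blocks ** (2 * k), axis=1) / dfact) ** (1.0 / (2 * k)))))
    return best

def boot_quantile(y, alpha, n_boot=500, rng=None):
    """Rademacher multiplier-bootstrap (1 - alpha)-quantile of the centred mean."""
    rng = np.random.default_rng() if rng is None else rng
    c = y - y.mean()
    stats = (rng.choice([-1.0, 1.0], (n_boot, y.size)) @ c) / y.size
    return float(np.quantile(stats, 1.0 - alpha))

def bootstrapped_ucb(reward_fns, T, rng=None):
    """Sketch of a bootstrapped UCB loop; reward_fns[k](rng) draws one reward of arm k.

    The second-order correction below is a simplified stand-in for (11),
    using the MOM norm of the centred rewards observed so far.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = len(reward_fns)
    alpha = 4.0 / T ** 2
    rewards = [[reward_fns[k](rng)] for k in range(K)]    # pull each arm once
    for _ in range(K, T):
        ucb = np.empty(K)
        for k in range(K):
            y = np.asarray(rewards[k], dtype=float)
            corr = np.sqrt(2 * np.log(4 / alpha) / y.size) * mom_norm(y - y.mean())
            ucb[k] = y.mean() + boot_quantile(y, alpha / 2, rng=rng) + corr
        a = int(np.argmax(ucb))
        rewards[a].append(reward_fns[a](rng))
    return rewards

# Example: two Gaussian arms with means 0.0 and 0.5.
rng = np.random.default_rng(3)
arms = [lambda r: r.normal(0.0, 1.0), lambda r: r.normal(0.5, 1.0)]
pulls = bootstrapped_ucb(arms, T=200, rng=rng)
print([len(v) for v in pulls])    # the better arm should be pulled more often
```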
**Theorem 4**.: _Consider a \(K\)-armed sub-G bandit under (9) and suppose that \(Y_{k}-\mu_{k}\) is symmetric around zero. For any round \(T\), under the moment conditions of Theorem 3, choose \(\widehat{\varphi}_{G}(\mathbf{Y}_{T_{k}(t)})\) as_
\[\widehat{\varphi}_{G}(\mathbf{Y}_{T_{k}(t)})=\frac{\sqrt{2\log(4/\alpha)}}{T_{ k}^{1/2}(t)-1}\|\widehat{Y_{k}-\mu_{k}}\|_{b_{k},G} \tag{11}\]
_a re-scaled version of the MOM estimator \(\widehat{\|Y_{k}-\mu_{k}\|_{b_{k},G}}\), with block number \(b_{k}\) satisfying the moment assumptions C[UCB1] and C[UCB2] in Appendix C. Fix the confidence level \(\alpha=4/T^{2}\). If the player pulls arms \(A_{t}\in[K]\) according to Algorithm 1, then the problem-dependent regret of Algorithm 1 is bounded by_
\[\operatorname{Reg}_{T}\leq 16(2+\sqrt{2})^{2}\max_{k\in[K]}\|Y_{k}-\mu_{k}\|_{G} ^{2}\log T\sum_{k=2}^{K}\Delta_{k}^{-1}+(4T^{-1}+2T^{-25-16\sqrt{2}}+8)\sum_ {k=2}^{K}\Delta_{k}\text{,}\]
_where \(\Delta_{k}\) is the sub-optimality gap. Moreover, letting \(\mu_{1}^{*}:=\max_{k_{1}\in[K]}\mu_{k_{1}}-\min_{k_{2}\in[K]}\mu_{k_{2}}\) be the range of the mean rewards, the problem-independent regret satisfies_
\[\operatorname{Reg}_{T}\leq 8(2+\sqrt{2})\max_{k\in[K]}\|Y_{k}-\mu_{k}\|_{G} \sqrt{TK\log T}+(4T^{-1}+2T^{-25-16\sqrt{2}}+8)K\mu_{1}^{*}\text{.}\]
From Theorem 4, the regret of our method achieves the minimax rate \(\log T\) in the problem-dependent case and \(\sqrt{KT}\) in the problem-independent case (see Tao et al. (2022)), so Algorithm 1 can be regarded as an optimal algorithm. Compared with the traditional vanilla UCB, we also improve the constant: when \(Y_{k}\sim N(\mu_{k},1)\), the constant factor in the regret bound of Auer et al. (2002) is \(256\), which is larger than our \(16(2+\sqrt{2})^{2}\).
When the UCB involves unknown sub-G parameters, Theorem 4 is the first to study a feasible UCB algorithm with plug-in estimation of the sub-G parameter. Many previous UCB algorithms based on non-asymptotic inference assume that the sub-G parameter is a preset constant; see the algorithm in Hao et al. (2019) for instance.
Next, we give a simulation for Theorem 4 in two sub-G cases to verify the performance of the estimated norms. Similar to Hao et al. (2019) and Wu et al. (2022), we design the three methods as follows:
1. Use our method \(\widehat{\varphi}(\mathbf{Y}_{T_{k}(t)})\) with _Estimated Norm_ in Theorem 4;
2. Use the _asymptotic naive_ \(\widetilde{\varphi}(\mathbf{Y}_{T_{k}(t)})\) satisfying \(\operatorname{P}\big{(}|\overline{Y}_{T_{k}(t)}-\mu_{k}|\leq\widetilde{ \varphi}(\mathbf{Y}_{T_{k}(t)})\big{)}\to 1-\alpha\) by the CLT, i.e. \(\widetilde{\varphi}(\mathbf{Y}_{T_{k}(t)})=\widehat{\sigma}_{k}\Phi^{-1}(1- \alpha/2)/\sqrt{T_{k}(t)}\) with \(\widehat{\sigma}_{k}=\sqrt{\frac{1}{T_{k}(t)}\sum_{\tau\leq t,A_{\tau}=k}(Y_{k,\tau}-\overline{Y}_{T_{k}(t)})^{2}}\) the estimated standard deviation;
3. Regard all the unbounded rewards as bounded r.v.s and use Hoeffding's inequality (i.e. _wrongly apply Hoeffding's inequality_) to construct \(\varphi\), i.e. \(\hat{\varphi}(\mathbf{Y}_{T_{k}(t)})=\big{[}\max\{\mathbf{Y}_{T_{k}(t)}\}- \min\{\mathbf{Y}_{T_{k}(t)}\}\big{]}\sqrt{\frac{\log(2/\alpha)}{2T_{k}(t)}}\).
Our detailed MAB simulation is set up as follows. In each case, the number of arms is \(K=10\), and the mean reward \(\mu_{k}\) of the \(k\)-th arm is drawn independently from \(\mathrm{Exp}(1)\). We consider two types of unbounded reward distributions: EG1, Gaussian \(N(\mu_{k},1)\); EG2, mixture Gaussian \(p_{k}\times N(2\mu_{k},1)+(1-p_{k})\times N\left(\frac{1-2p_{k}}{1-p_{k}}\mu_{k},1\right)\) with \(p_{k}~{}\sim~{}U(0,\frac{1}{2})\). We also use Thompson Sampling (Agrawal & Goyal, 2017) with Gaussian reward and prior distributions, tuning the prior parameters over \(\{2^{4-k}\}_{k=1}^{6}\) and reporting its best performance, as a strong baseline in this simulation.
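As a small illustration of this reward setup, the snippet below draws rewards from EG1 and EG2 with the parameters described above; the seed and generator choices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 10
mu = rng.exponential(1.0, size=K)          # mean rewards mu_k ~ Exp(1)
p = rng.uniform(0.0, 0.5, size=K)          # mixture weights p_k ~ U(0, 1/2) for EG2

def draw_eg1(k, rng):
    """EG1: Gaussian reward N(mu_k, 1)."""
    return rng.normal(mu[k], 1.0)

def draw_eg2(k, rng):
    """EG2: p_k * N(2 mu_k, 1) + (1 - p_k) * N((1 - 2 p_k)/(1 - p_k) * mu_k, 1).

    Both components have unit variance and the mixture mean equals mu_k.
    """
    if rng.random() < p[k]:
        return rng.normal(2.0 * mu[k], 1.0)
    return rng.normal((1.0 - 2.0 * p[k]) / (1.0 - p[k]) * mu[k], 1.0)

print(draw_eg1(0, rng), draw_eg2(0, rng))
```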
EG1 and EG2 are both sub-Gaussian rewards. In the simulation, \(\mu_{k}\) may be unbounded, which complicates the problem. The simulation results are shown in Figure 7. They show that our method outperforms the other two methods under unbounded sub-Gaussian rewards and is even comparable to Thompson Sampling when sufficient correct prior knowledge is available. Furthermore, the smallest standard deviation of our method demonstrates the strong robustness of the estimated-norm approach.
#### Acknowledgments
The research of H. Zhang was supported in part by National Natural Science Foundation of China (Grant 12101630). The research of G. Cheng was supported in part by NSF - SCALE MoDL (2134209).
The authors thank Prof. Hongjun Li for the early discussions, as well as Dr. Ning Zhang for valuable suggestions. The early version of this manuscript was submitted to ICLR 2023 on September 22, 2022; see [https://openreview.net/forum?id=c9QTkDGJ_cB](https://openreview.net/forum?id=c9QTkDGJ_cB).
|
2307.15374
|
Leveraging Optical Communication Fiber and AI for Distributed Water Pipe
Leak Detection
|
Detecting leaks in water networks is a costly challenge. This article
introduces a practical solution: the integration of optical network with water
networks for efficient leak detection. Our approach uses a fiber-optic cable to
measure vibrations, enabling accurate leak identification and localization by
an intelligent algorithm. We also propose a method to access leak severity for
prioritized repairs. Our solution detects even small leaks with flow rates as
low as 0.027 L/s. It offers a cost-effective way to improve leak detection,
enhance water management, and increase operational efficiency.
|
Huan Wu, Huan-Feng Duan, Wallace W. L. Lai, Kun Zhu, Xin Cheng, Hao Yin, Bin Zhou, Chun-Cheung Lai, Chao Lu, Xiaoli Ding
|
2023-07-28T07:46:20Z
|
http://arxiv.org/abs/2307.15374v1
|
# Leveraging Optical Communication Fiber and AI for Distributed Water Pipe Leak Detection
###### Abstract
Detecting leaks in water networks is a costly challenge. This article introduces a practical solution: the integration of optical networks with water networks for efficient leak detection. Our approach uses a fiber-optic cable to measure vibrations, enabling accurate leak identification and localization by an intelligent algorithm. We also propose a method to assess leak severity for prioritized repairs. Our solution detects even small leaks with flow rates as low as 0.027 L/s. It offers a cost-effective way to improve leak detection, enhance water management, and increase operational efficiency.
Water distribution networks (WDNs) are essential infrastructure for providing fresh water to communities, but detecting leaks in WDNs is challenging and costly. In this article, we propose a novel solution that combines an optical network and a WDN for distributed water pipe leak detection. Our approach involves using a standard outdoor fiber-optic cable for distributed vibration measurement along a 40-meter water pipe. To accurately identify and locate leaks, we introduce a leak identification algorithm based on 3D-convolutional neural networks (3D-CNNs) that considers temporal, spectral, and spatial information. Additionally, we propose a leak quantification method that can help prioritize repairs based on the severity of the leak. We evaluate our scheme under different conditions and find that it can detect leak flow rates as low as 0.027 L/s with a location accuracy of within 3 meters and a quantification accuracy of over 85%. Our proposed method offers a cost-effective and value-added solution for designing optical networks and WDNs in new development areas.
## I Introduction
Pipe leakage is a persistent problem amid the water crisis faced by many countries. Typically, 20-30% of the water in the pipes is lost through leaks [1]. Solving this critical water loss problem is challenging for the following reasons. Firstly, water pipes are geographically spread over vast distances; for example, in Hong Kong over 8,270 km of pressurized freshwater pipes are in service, whereas the inspection length of most leak detection technologies is very limited. Secondly, most water pipes are buried underground and relatively inaccessible; a leak can go undetected for many years until it is discovered or develops into a significant burst. Lastly, water pipes with different materials, diameters, pressures, flow conditions, leak geometries, and sizes behave uniquely, making it difficult to determine leak signatures.
Leak detection technologies are key to reducing water loss. Table I provides an overview of the current technologies for water pipe leak detection at both industrial and research stages [2, 3, 4]. In industry, leak locating methods like leak noise correlator, noise logger, and Smart Ball are initially utilized to narrow down the leak area. Subsequently, pin-point technologies such as listening devices, ground penetration radar (GPR), and infrared thermography are followed to determine the exact leak position. Additionally, other technologies such as transient-based technique that exploits hydraulic behavior and time-domain reflectometry (TDR), which measures the reflection coefficient, have been developed. Optical fiber sensors (OFSs) that use fiber-optic cables as sensing elements to detect physical, chemical or biological properties of the surrounding environment are becoming increasingly popular in the water industry. This is due to their ability to perform long-term monitoring, transmit data at high speeds, and operate without the need for a local power supply. OFSs can be broadly classified into two categories: point sensors and distributed sensors. Point OFSs include wavelength demodulation sensors utilizing fiber Bragg gratings (FBGs) and phase-demodulation sensors based on fiber interferometers. These sensors can directly measure strain and temperature or be designed as pressure sensors, accelerometers, hydrophones, or flow meters. They can be multiplexed to form a network with complex interrogation schemes, and their inspection length mainly depends on the interrogation method. Distributed optical fiber sensors (DOFSs), in contrast, can turn a standard optical communication fiber into hundreds even thousands of sensors with a single interrogator. These sensors can be implemented in the frequency domain or time domain. Rayleigh based optical frequency-domain reflectometry (OFDR) is ideal for pin-pointing leaks, as it can measure strain/temperature in millimeter resolution over ten meters. Conversely, interrogators based on optical time-domain reflectometer (OTDR) have much longer sensing capability and are suitable for locating leaks. Raman based distributed temperature sensor (DTS) and Brillouin based distributed temperature/strain sensor (DTSS) can measure temperature along several tens of kilometers water pipe with meter level resolution. However, leaks from freshwater pipe may not introduce obvious temperature anomalies. For pressurized freshwater pipe leak detection, Rayleigh based distributed acoustic sensor (DAS) shows the most significant potential. In this study, we present a novel approach to detect water pipe leaks by combining the optical networks and water distribution network (WDNs). Our proposed method aims to be non-invasive to the WDN's normal operation and user-friendly for water industry practitioners. Different from previous leak detection method based on DAS [5, 6], we install the fiber-optic cable on the outside surface of the water pipe to address concerns about water safety and sensing sensitivity. Additionally, we investigate the leak detection capability of the technology by testing different pipe
flow rates and leak flow rates. To enhance the intelligence and autonomy of leak detection, we leverage deep learning techniques to process the densely distributed time-position vibration signals collected from the DAS. Our study seeks to contribute to the development of efficient and effective leak detection methods for WDNs and provides a new way of designing optical networks in a cost-effective and value-added manner.
The integration of optical networks and WDNs comprises four layers, as depicted in Figure 1. The perception layer involves the use of optical communication fiber to sense the water pipe. It must be installed in a way that captures vibrations induced by leaks from pipes that are either buried underground or installed in utility tunnels. The network layer consists of both the optical network and the WDN. These point-to-multipoint networks provide interconnection services or freshwater to customers' premises. Passive optical networks (PONs) are commonly used to bring optical fiber cabling from the central office (CO) to the end users, with a transmission distance limit of 20 km, which can be extended to over 50 km with the Super-PON [7]. If dark fiber is available, it can be used for sensing. Otherwise, communication and sensing can function independently in the same fiber, without interference, based on wavelength division multiplexing technology [8]. The WDN is divided into district metered areas (DMAs), each covering between 500 and 3,000 properties, to manage the pressure and ensure the reliability of the water supply. The signal processing layer involves the use of supervisory control and data acquisition (SCADA) systems, commonly used in WDNs for gathering, aggregating, and processing data from pressure and flow rate sensors and controllers. The sensing data from the optical network should be integrated into the SCADA system. Finally, the application layer involves specific applications for the water industry, such as a geographical information system (GIS), to determine whether remedial actions are needed. The design and implementation of these four layers pose various challenges at different stages and will involve private companies and government departments. This article focuses on the feasibility and performance of the technical aspects of the proposed scheme, mainly in the perception and data processing layers.
Table I: Overview of current water pipe leak detection technologies at the industrial and research stages, comparing measurand, technology, inspection length, long-term monitoring capability, inline versus non-intrusive deployment, leak level quantification, need for a power supply near the sensor, and approximate hardware cost (USD).
## II Method and Experiment
The designed pilot experiment setup is illustrated in Figure. 2. The testbed includes a recirculating water pipe, a DAS system, and a leak detection algorithm.
### _Recirculating water pipe testbed_
The experiment was conducted at The Hong Kong Polytechnic University's Hydraulics Laboratory using a 60 m pipe with a 50 mm internal diameter and a 3.6 mm thick galvanized iron wall. Water was supplied from a reservoir by a 3-phase induction motor pump set at 2890 rpm. The pipe flow was varied between 0.4 L/s and 1.8 L/s. Five pressure gauges were installed on the pipe to measure the pressure, and an ultrasonic flowmeter was installed downstream of the pipe. A GYFTY fiber-optic cable with a non-metallic strength member and grease-filled, polyethylene-sheathed outdoor layers was attached to the outside pipe surface with nylon straps and duct tape at 1 m intervals. The cable, with an outer diameter of 8 mm and a weight of 85 kg/km, contains four 9/125 \(\mu m\) single-mode optical fibers, with one core used for sensing. The cable was not mounted on the beginning and end sections of the water pipe to avoid interference from the water inlet and outlet. The sensing length of the cable was 40 m. To simulate different leak states, an artificial leak was installed at the front section of the pipe, and five end caps with drilled orifice diameters of 4.63 mm, 3.72 mm, 3.36 mm, 2.15 mm, and 1.23 mm were used to simulate different leak levels. The relationship between the pipe position and the DAS channel number was determined by tap-tests.
### _Principle of using fiber-optic cable for water pipe leak detection_
Two types of vibrations can occur along a pipe: internal flow-induced pipe vibrations and leak-induced vibrations. Internal flow-induced pipe vibrations are caused by the interaction between water molecules and the pipe wall, and their strength is proportional to the square of the flow rate [9]. Leak-induced vibrations, on the other hand, are created when pressurized water pipes develop leaks and high-speed water jets shoot out through the leak orifice, causing friction and cavitation-induced vibrations. Leaks are detected by identifying the signature of the continuous leak-induced disturbance to the surroundings. A fiber-optic cable can sense these vibrations because the phase change of the Rayleigh backscattering is linearly related to the axial strain change induced by external vibration. Localization is based on the time-of-flight measurement principle of DAS. The system used in this work is an in-house built heterodyne-detection DAS based on phase-sensitive OTDR technology, as described in [10].
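As a small illustration of the time-of-flight principle, the sketch below converts a round-trip backscatter delay into a position along the fiber; the group refractive index is a typical assumed value for silica fiber, not a parameter reported in this work.

```python
# Mapping a round-trip delay of the Rayleigh backscatter to a position along
# the fiber (time-of-flight principle used by OTDR/DAS).
C_VACUUM = 2.99792458e8      # speed of light in vacuum, m/s
N_GROUP = 1.468              # assumed group refractive index of the fiber

def fiber_position(round_trip_delay_s: float) -> float:
    """Distance (m) along the fiber corresponding to a round-trip delay."""
    return C_VACUUM * round_trip_delay_s / (2.0 * N_GROUP)

print(fiber_position(1e-6))   # roughly 102 m for a 1 microsecond round trip
```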
### _Dataset collection_
The proposed system's performance was evaluated using a dataset, described in Table II (a). The water pipe system was first measured with two flow rates (0.427 L/s and 1.80 L/s) for 28 minutes each without any leaks. Next, steady-state leaks were acquired for 14 minutes per case with the artificial leak valve turned on and different end caps in place. The leak flow velocity was measured by the volumetric method. The system defined three levels of leaks based on the ratio of leak flow rate to pipe flow rate: excessive leak (over 15%), significant leak (5-15%), and small leak (below 5%). The Reynolds number (Re.)
Figure 2: Testbed illustration of the proposed leak detection scheme.
was calculated to describe the flow status in the pipes. This dimensionless quantity provides insight into the relative importance of the inertial forces to the viscous forces for a given flow condition [11]. Laminar flow occurs when Re. is smaller than 2000 and the water travels in a direction parallel to the pipe axis. Turbulent eddies in all directions are imposed on the axial flow when Re. is larger than 3500, causing significant additional vibration. Detecting and locating small leaks despite large flow rates is a challenging task in leak detection systems. Therefore, we conducted the experiments under harsh conditions where turbulent pipe flows were fully developed for all cases. The total cable length was 100 m, with 40 m mounted along the water pipe and 60 m used for connection. The spatial resolution was set to 2 m and channel spacing was 0.8 m and the sampling frequency was 10 kHz in DAS. The data were collected on 20, and 21 Nov 2021. The raw data comprise over 100 GB with a total duration of 182 minutes.
### _Leak detection algorithm_
In recent years, deep learning has outperformed traditional signal processing methods in a variety of fields. In the WDN domain, deep learning has been also explored for tasks such as demand forecasting, leak detection and localization, and water quality anomaly detection [12]. Convolutional neural networks (CNNs) have emerged as the most widely used deep learning method for leak detection and localization, using either flow/pressure data or acoustic/vibration data [12]. Since our data have spectral, temporal, and spatial dimensions due to the distributed sensing mechanism, we propose using 3D-CNNs, which jointly utilize these three dimensions for signal processing. To verify the effectiveness of leveraging inter-spatial channels by 3D-CNNs, we also trained 2D-CNNs with similar network architecture and input data for comparison.
**Pre-processing:** Traditionally, leak detection relied on experienced operators who used mechanical listening sticks to discern leak noises. To replicate this practice with automatic algorithms, we use the Mel-spectrogram, which mimics the human auditory system, as the input feature representation. The pre-processing steps, depicted in Figure 2, are as follows: (1) continuous DAS time-series data are divided into 5-second segments for each spatial channel; (2) after Z-score normalization, each 5-second signal clip is transformed into a Mel-spectrogram with 128 bands covering the frequency range 0-5 kHz, of which the first 90 bands are used; (3) a window size of 204.8 ms and a hop length of 51.2 ms are applied; (4) Z-score feature scaling is performed on each Mel-spectrogram to standardize the features; and (5) Mel-spectrograms from several neighboring positions are stacked to form a 3D cube.
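A minimal sketch of this pre-processing chain is shown below; at the 10 kHz sampling rate, the 204.8 ms window and 51.2 ms hop correspond to 2048- and 512-sample frames. The use of librosa and the helper names are implementation choices assumed here, not part of the original pipeline.

```python
import numpy as np
import librosa

FS = 10_000                 # DAS sampling frequency, Hz
N_FFT, HOP = 2048, 512      # 204.8 ms window, 51.2 ms hop at 10 kHz

def zscore(a, axis=None):
    return (a - a.mean(axis=axis, keepdims=True)) / (a.std(axis=axis, keepdims=True) + 1e-12)

def preprocess_channel(signal_5s):
    """One 5-second DAS channel -> (90, ~98) Mel-spectrogram feature map."""
    x = zscore(np.asarray(signal_5s, dtype=float))
    mel = librosa.feature.melspectrogram(y=x, sr=FS, n_fft=N_FFT,
                                         hop_length=HOP, n_mels=128)
    mel = mel[:90]                 # keep the first 90 Mel bands
    return zscore(mel)             # feature scaling per spectrogram

def build_cube(channels_5s, z=5):
    """Stack Mel-spectrograms of z neighbouring spatial channels into a 3D cube."""
    assert len(channels_5s) == z
    return np.stack([preprocess_channel(c) for c in channels_5s], axis=-1)

cube = build_cube([np.random.randn(5 * FS) for _ in range(5)])
print(cube.shape)    # approximately (90, 98, 5)
```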
**Network architecture:** The networks are designed to predict the leak probability of each 3D Mel-spectrogram cube. The input dimensions are \(90\times 98\times Z\), where \(Z\) is the number of spatial channels of the input cube and \(Z=3,5,7,9\). The networks have four 3D convolutional (Conv) layers interleaved with four max-pooling and batch normalization (MP+BN) operations. The kernel size and filter number of each layer are given in Table III. The rectified linear unit (ReLU) is used as the non-linear activation function. The outputs of FC3 are mapped to the two class labels, leak and non-leak, and the softmax function provides a probabilistic output.
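A sketch of such a network in Keras follows; the filter counts, kernel sizes, and fully connected layer widths are placeholders, since the exact values are listed in the paper's Table III, which is not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_3d_cnn(z=5, filters=(16, 32, 64, 64), kernel=(3, 3, 3)):
    """Four Conv3D blocks with max-pooling and batch normalization,
    followed by fully connected layers and a two-class softmax head."""
    reg = regularizers.l2(0.003)                      # L2 penalty from the text
    inputs = tf.keras.Input(shape=(90, 98, z, 1))
    x = inputs
    for f in filters:                                 # filter counts are assumed
        x = layers.Conv3D(f, kernel, padding="same", activation="relu",
                          kernel_regularizer=reg)(x)
        x = layers.MaxPooling3D(pool_size=(2, 2, 1))(x)
        x = layers.BatchNormalization()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)        # FC layer widths are assumed
    x = layers.Dense(32, activation="relu")(x)
    outputs = layers.Dense(2, activation="softmax")(x)  # leak vs. non-leak
    model = tf.keras.Model(inputs, outputs)
    # With one-hot labels, categorical cross-entropy on the two-class softmax
    # matches the binary cross-entropy objective described in the text.
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_3d_cnn().summary()
```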
**Training:** In each case, 75% of the data in Table II (a) are used for training and 25% are used for testing. For non-leak training data, signals from 7 positions including flange joint (1 position), elbow (2 positions), and straight pipe (4 positions) are chosen.
For the leak training data, signals at the artificial leak positions are used. During training, the model minimizes the binary cross-entropy via the Adam optimizer with exponential learning-rate decay. A mini-batch size of 128 is used, and L2-norm regularizers with a penalty of 0.003 are applied after each convolutional layer to reduce overfitting. The model is trained for 100 epochs with an early stopping criterion. The networks are implemented in Python using the TensorFlow v2.4.1 library.
**Testing:** The test data include both non-leak and leak signals from all positions and the predictions for each 3D cube are used to create leak probability maps. The threshold for determining a leak is set at 0.9.
## III Results and Discussions
### _Leak identification and localization_
To evaluate the leak identification performance, we used the true positive rate (TPR) and false alarm rate (FAR). TPR is the ratio of correctly identified leaks over all true leaks, while FAR is the ratio of non-leaks incorrectly identified as leaks over all true non-leaks. The leak identification results are summarized in Table II (b). With the same spatial range \(Z\), 3D-CNNs outperform 2D-CNNs in most cases for both FAR and TPR, demonstrating that the inter-channel features provide useful neighboring information and thus improve identification performance [13]. The additional spatial information along the fiber contributes to the results in the form of a majority vote, making the system less vulnerable to environmental noise; the 3D-CNN is therefore more suitable for processing the time-position correlated signal. Among the 3D-CNNs, \(Z=5\), which corresponds to a 4 m spatial range, shows the best overall performance. The TPR ranges from 60.8% to 100% and is mainly determined by the ratio of leak flow to pipe flow. High TPRs are generally achieved at high ratios, since the leak-induced vibrations dominate around the leak position, whereas low TPRs are observed for small leak-induced vibrations because of the background pipe flow noise. The FARs are 0.24% and 1.70% for pipe flows of 0.427 L/s and 1.800 L/s, respectively. Intuitively, a higher pipe flow causes more severe background noise and results in more false alarms.
After the leaks are identified, the leak location should be pinpointed for remedial action. Unlike traditional sparse sensor systems that require specific features, leak localization in the proposed scheme can be determined directly from the leak probability map. The leak probability maps along the fiber and over time, predicted by the 3D-CNN with \(Z=5\), are shown for six conditions in Figure 3(a). The median value of the leak probability along the 40 m cable over 210 seconds is plotted on top of each map. Some false alarms are scattered around the maps in cases 10 and 11; these can, however, be eliminated by considering data over a longer time, as shown in the median probability plots (on top of each probability map). External perturbations may introduce noise, but most of it does not last long, whereas the leak-induced vibrations continuously generate signals. This characteristic greatly helps distinguish leaks from environmental noise. As shown in Figure 3(a) for cases 1, 3, 6, and 9, although some false negative predictions cause discontinuity, especially for significant and small leaks, the leak position can still be located after calculating the long-term median probability. A leak flow rate as low as 0.027 L/s is successfully located. The location error is quantified using a threshold of 0.9 to determine the spatial range of the leak, with the middle position taken as the leak center; it is summarized in the last column of Table II(b) and lies between 0.12 and 2.8 m. Under excessive leak conditions, the leak-induced vibrations affect a wide range of the pipe, causing higher uncertainty in leak localization, while under significant and small leak conditions the affected spatial range is relatively limited.

Figure 3: (a) Leak probability maps of six cases, (b) linear correlation of the Re. ratio of leak flow and pipe flow, (c) truth table of leak level quantification.
### _Leak level quantification_
Leak level quantification is as important as leak identification and localization because it provides crucial information for repair prioritization. The extent of a leak's impact on the pipe is influenced by its flow rate relative to the pipe's flow rate, as shown in Figure 3(a). A correlation between the leak-affected range and the ratio of leak flow Re. to pipe flow Re. is established by linear regression on the training data, as plotted in Figure 3(b). The fit produced an \(R^{2}\) value of 0.999, confirming a strong correlation between the leak-affected range and the Re. ratio. To calculate the leak flow rate, we introduce the orifice equation, which describes the conversion of pressure energy to kinetic energy at a water pipe leak [14]. Therefore, by combining the relation between the leak-affected range and the Re. ratio with the orifice equation, we can predict both the leak diameter and the leak flow rate. At the prediction stage, we calculate the mean leak-affected range over a duration of 30 seconds for a given test dataset. As shown in Figure 3(c), we evaluate the performance of the leak quantification method by classifying the predicted leak levels on a three-level scale, achieving a classification accuracy of over 85%.
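The quantification step can be sketched as follows: fit a line between the Re. ratio and the leak-affected range on training data, invert it at prediction time, and plug the result into the orifice equation. The training numbers, discharge coefficient, and pressure value below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Training: linear fit of the leak-affected range (m) versus the Re. ratio
# (leak flow over pipe flow). The numbers below are illustrative placeholders.
re_ratio_train = np.array([0.02, 0.05, 0.10, 0.20, 0.30])
affected_range_train = np.array([2.0, 4.5, 8.8, 17.1, 25.6])
slope, intercept = np.polyfit(re_ratio_train, affected_range_train, 1)

def re_ratio_from_range(affected_range_m):
    """Invert the fitted line to recover the Re. ratio from an observed range."""
    return (affected_range_m - intercept) / slope

def orifice_flow(d_orifice_m, delta_p_pa, rho=1000.0, cd=0.61):
    """Orifice equation Q = Cd * A * sqrt(2 * dP / rho); Cd = 0.61 is a typical value."""
    area = np.pi * (d_orifice_m / 2.0) ** 2
    return cd * area * np.sqrt(2.0 * delta_p_pa / rho)

# Example: a 30 s mean affected range of 6 m, and a 3.36 mm orifice at 2 bar.
print(re_ratio_from_range(6.0), orifice_flow(3.36e-3, 2.0e5))
```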
### _Discussions_
**Optical network and WDN integration**: The transmission range for Super-PON can reach 50 km, which is comparable with the sensing length of DAS. The PON and WDN architectures are typically implemented in a point-to-multipoint topology, and to sense multipath simultaneously with one interrogator in a cost-effective way, multipath DAS based on frequency division multiplexing can be adopted [15]. Further research with large-scale experiments will be conducted at Q-Leak underground water mains leak detection training center and Anderson Road Quarry Site in Hong Kong to explore the challenges and opportunities of integrating optical networks and WDNs.
**Fiber-optic cable deployment and repair**: Deploying and repairing fiber-optic cables is crucial in the proposed scheme as the fiber itself is the sensor and the deployment method can significantly impact its performance and durability. Further research should focus on developing innovative installation techniques and materials that can withstand harsh environments and reduce the damage from external factors. Repair issues of fiber-optic cable should also be considered in future research. While integrating optical networks and WDNs can reduce the cost of civil works, it also increases the risk of physical damage to the cable. Although cables are usually designed to withstand some physical stress, pipe bursts and water hammer are potential risks that could cause damage. To mitigate these risks, protective casing or avoiding high-risk areas could be considered.
**Leak detection algorithm**: In real-world WDNs, vibrations induced by leaks can be much more complex and dynamic, with varying pipe materials, sizes, as well as different flow rates and pressures. Supervised learning with large datasets and powerful computer resources, has proven effective in classifying leak and non-leak cases with labelled data. However, to fully realize the potential of deep learning in leak detection, several challenges related to data and algorithmic development need to be addressed. The high cost of manual data labelling is a significant obstacle to developing accurate and reliable deep learning models. Semi-supervised learning with a small amount of labelled data and unsupervised learning with unlabelled data may be solutions to reduce labelling cost. Another challenge is the lack of open datasets and shared models, which can hamper the adoption of this technology in the water industry. Close collaboration between researchers and the water industry can help overcome these challenges and facilitate the adoption of deep learning for leak detection in WDNs. In addition, the use of hybrid techniques, combining different sensing modalities could further enhance the accuracy and reliability of leak detection and localization in WDNs.
## IV Conclusions
The proposed approach has demonstrated the potential of utilizing data from an optical communication cable installed along a water pipe and using 3D-CNN for automatic leak detection. In the experiment, a 40-meter-long water pipe and an optical communication cable with a DAS interrogator were used to validate the technique's efficacy. Our proposed method successfully detected a leak flow rate as low as 0.027 L/s with a location accuracy of 3 meters and quantification accuracy of over 85%. The utilization of optical fiber sensing for water pipe leak detection has significant practical applications in the water industry, providing a new solution to a pressing issue of reducing water loss from aging pipes and declining freshwater resources. The ability to identify, locate, and quantify leaks can lead to substantial cost savings for water companies and improve overall water sustainability. Furthermore, the integration of optical fiber sensing and deep learning algorithms also opens new avenues for innovation in the field of leak detection in water industry.
## Acknowledgements
This research was funded by the Research Grants Council of Hong Kong under project numbers 15209919, 152164/18E, 152007/19E, 152233/19E, 15200719 and The Hong Kong Polytechnic University (YW3G, ZVGB, BBWB, postdoc matching fund). The authors are grateful to Mr. Victor Lo for his valuable insights on leak detection and to Mr. Kwok Hing
Leung for his support in conducting the experiments. Additionally, the authors acknowledge the PolyU University research facility in big data analytics for providing the computing resources.
|
2306.02521
|
Connecting Proof Theory and Knowledge Representation: Sequent Calculi
and the Chase with Existential Rules
|
Chase algorithms are indispensable in the domain of knowledge base querying,
which enable the extraction of implicit knowledge from a given database via
applications of rules from a given ontology. Such algorithms have proved
beneficial in identifying logical languages which admit decidable query
entailment. Within the discipline of proof theory, sequent calculi have been
used to write and design proof-search algorithms to identify decidable classes
of logics. In this paper, we show that the chase mechanism in the context of
existential rules is in essence the same as proof-search in an extension of
Gentzen's sequent calculus for first-order logic. Moreover, we show that
proof-search generates universal models of knowledge bases, a feature also
exhibited by the chase. Thus, we formally connect a central tool for
establishing decidability proof-theoretically with a central decidability tool
in the context of knowledge representation.
|
Tim S. Lyon, Piotr Ostropolski-Nalewaja
|
2023-06-05T01:10:23Z
|
http://arxiv.org/abs/2306.02521v1
|
# Connecting Proof Theory and Knowledge Representation: Sequent Calculi and the Chase with Existential Rules
###### Abstract
Chase algorithms are indispensable in the domain of knowledge base querying, which enable the extraction of implicit knowledge from a given database via applications of rules from a given ontology. Such algorithms have proved beneficial in identifying logical languages which admit decidable query entailment. Within the discipline of proof theory, sequent calculi have been used to write and design proof-search algorithms to identify decidable classes of logics. In this paper, we show that the chase mechanism in the context of existential rules is in essence the same as proof-search in an extension of Gentzen's sequent calculus for first-order logic. Moreover, we show that proof-search generates universal models of knowledge bases, a feature also exhibited by the chase. Thus, we formally connect a central tool for establishing decidability proof-theoretically with a central decidability tool in the context of knowledge representation.
## 1 Introduction
**Existential Rules and the Chase.** The formalism of existential rules is a significant sub-discipline within the field of knowledge representation, offering insightful results within the domain of ontology-based query answering (Baget et al., 2009), data exchange and integration (Fagin et al., 2005), and serving a central role in the study of generic decidability criteria (Feller et al., 2023).1 Ontology-based query answering is one of the principal problems studied within the context of existential rules, and asks if a query is logically entailed by a given knowledge base (KB) \(\mathcal{K}\,=\,(\mathcal{D},\mathcal{R})\), where \(\mathcal{D}\) is a database and \(\mathcal{R}\) is a finite set of existential rules (Baget et al., 2011). Databases generally consist of positive atomic facts such as \(Female(Marie)\) or \(Mother(Zuza,Marie)\), while existential rules--which are first-order formulae of the form \(\forall\mathbf{x}\mathbf{y}\beta(\mathbf{x},\mathbf{y})\to\exists\mathbf{z}\alpha(\mathbf{y},\mathbf{z})\) with \(\beta\) and \(\alpha\) conjunctions of atoms--are used to encode a logical theory or ontology that permits the extraction of implicit knowledge from the encompassing KB.
Footnote 1: Existential rules are also referred to as _tuple-generating dependencies_ (Abiteboul, Hull, and Vianu, 1995), _conceptual graph rules_ (Salvat and Mugnier, 1996), Datalog\({}^{\pm}\) (Gottlob, 2009), and \(\forall\exists\)_-rules_ (Baget et al., 2011) in the literature.
The primary tool for studying query answering within this setting is the so-called _chase_, an algorithm that iteratively saturates a given database under applications of existential rules (Beeri and Vardi, 1984). The chase is useful in that it generates a _universal model_ satisfying exactly those queries entailed by a KB, and thus, allows for the reduction of query entailment to query checking over the constructed universal model (Deutsch, Nash, and Remmel, 2008). In this paper, we show how the chase corresponds to proof-search in an extension of Gentzen's sequent calculus, establishing a connection between a central tool in the theory of existential rules with the primary decidability tool in proof theory.
**Sequent Calculi and Proof-Search.** Since its introduction, Gentzen's sequent formalism (Gentzen, 1935; Gentzen, 1935) has become one of the preferred proof-theoretic frameworks for the creation and study of proof calculi. A sequent is an object of the form \(\Gamma\,\vdash\,\Delta\) such that \(\Gamma\) and \(\Delta\) are finite (multi)sets of logical formulae, and a sequent calculus is a set of inference rules that operate over such. Sequent systems, and generalizations thereof, have proved beneficial in establishing (meta)logical properties with a diverse number of applications, being used to write decision algorithms (Dyckhoff, 1992; Slaney, 1997), to calculate interpolants (Maehara, 1960; Lyon et al., 2020), and have even been applied in knowledge integration scenarios (Lyon and Gomez Alvarez, 2022).
It is well-known that _geometric implications_, i.e. first-order formulae of the form \(\forall\mathbf{x}(\varphi\to\exists\mathbf{y}_{1}\psi_{1}\vee\dots\vee\exists \mathbf{y}_{n}\psi_{n})\) with \(\varphi\) and \(\psi_{i}\) conjunctions of atoms, can be converted into inference rules in a sequent calculus (Simpson, 1994, p. 24). Since such formulae subsume the class of existential rules, we may leverage this insight to extend Gentzen's sequent calculus for first-order logic with such rules to carry out existential rule reasoning. When we do so, we find that sequent systems mimic existential rule reasoning and proof-search (described below) simulates the chase.
Proof-search is the central means by which decidability is obtained with a sequent calculus, and usually operates by applying the inference rules of a sequent calculus bottom-up on an input sequent with the goal of constructing a proof thereof. If a proof of the input is found, the input is confirmed to be valid, and if a proof of the input is not found, a counter-model can typically be extracted witnessing the invalidity of the input. We make the novel observation that counter-models extracted from proof-search (in the context
of existential rules) are universal, being homomorphically equivalent to the universal model generated by the chase.
**Contributions.** Our contributions in this paper are as follows: (1) We establish a strong connection between tools in the domain of existential rules with that of proof theory; in particular, we show how to transform derivations with existential rules into sequent calculus proofs and vice versa. (2) We establish a correspondence between the chase and sequent-based proof-search, and (3) we recognize that proof-search, like the chase, generates universal models for knowledge bases, which is a novel, previously unknown insight regarding the capability of sequent systems.
**Organization.** The preliminaries are located in Section 2. In Section 3, we present the sequent calculus framework and write a proof-search algorithm that simulates the chase. Correspondences between existential rule reasoning and sequent-based reasoning are explicated in Section 4, and in Section 5, we conclude and discuss future research. We note that most proofs have been deferred to the appendix.
## 2 Preliminaries and Existential Rules
**Formulae and Syntax.** We let \(\mathbf{C}\) and \(\mathbf{V}\) be two disjoint denumerable sets of _constants_ and _variables_. We use \(a,b,c,\ldots\) to denote constants and \(x,y,z,\ldots\) to denote variables. We define the set of _terms_ to be \(\mathbf{T}=\mathbf{C}\cup\mathbf{V}\), and we denote terms by \(t\) and annotated versions thereof. Moreover, we let \(\mathbf{P}=\{p,q,r,\ldots\}\) be a denumerable set of _predicates_ containing denumerably many predicates of each arity \(n\in\mathbb{N}\), and use \(ar(p)=n\) to denote that \(p\in\mathbf{P}\) is of arity \(n\). An _atom_ is a formula of the form \(p(t_{1},\ldots,t_{n})\) such that \(t_{1},\ldots,t_{n}\in\mathbf{T}\) and \(ar(p)=n\). We will often write atoms as \(p(\mathbf{t})\) with \(\mathbf{t}=t_{1},\ldots,t_{n}\). The _first-order language_ \(\mathcal{L}\) is defined via the following grammar in Backus-Naur form:
\[\varphi:=p(\mathbf{t})\mid\neg\varphi\mid\varphi\land\varphi\mid\exists x\varphi\]
such that \(p\in\mathbf{P}\), \(\mathbf{t}\in\mathbf{T}\), and \(x\in\mathbf{V}\). We use \(\varphi\), \(\psi\), \(\chi\), \(\ldots\) to denote _formulae_ from \(\mathcal{L}\), and define \(\varphi\lor\psi:=\neg(\neg\varphi\land\neg\psi)\), \(\varphi\to\psi:=\neg\varphi\lor\psi\), and \(\forall x\varphi:=\neg\exists x\neg\varphi\). The occurrence of a variable is _free_ in a formula \(\varphi\) when it does not occur within the scope of a quantifier. We let \(\varphi(t/x)\) represent the formula obtained by substituting the term \(t\) for every free occurrence of the variable \(x\) in \(\varphi\). We use \(\Gamma,\Delta,\Sigma,\ldots\) to denote sets of formulae from \(\mathcal{L}\), let \(\mathbf{V}(\Gamma)\) denote the set of free variables in the formulae of \(\Gamma\), and let \(\mathbf{T}(\Gamma)\) denote the set of free variables and constants occurring in the formulae of \(\Gamma\). We let \(i\in[n]\) represent \(1\leq i\leq n\), and define a _ground atom_ to be an atom \(p(t_{1},\ldots,t_{n})\) such that for each \(i\in[n]\), \(t_{i}\in\mathbf{C}\). An _instance_\(\mathcal{I}\) is defined to be a (potentially infinite) set of atoms, and a _database_\(\mathcal{D}\) is defined to be a finite set of ground atoms. We let \(\top\) be a special unary predicate and define \(\mathcal{I}^{\top}=\mathcal{I}\cup\{\top(c)\mid c\in\mathbf{C}\}\). An instance \(\mathcal{I}\) is referred to as an _interpretation iff_\(\mathcal{I}^{\top}=\mathcal{I}\).
**Substitutions.** A _substitution_\(\sigma\) is defined to be a partial function over \(\mathbf{T}\). A _homomorphism_ from an instance \(\mathcal{I}\) to an instance \(\mathcal{J}\) is a substitution \(\pi\) from the terms of \(\mathcal{I}\) to the terms of \(\mathcal{J}\) such that (1) if \(p(t_{1},\ldots,t_{n})\in\mathcal{I}\), then \(p(\pi(t_{1}),\ldots,\pi(t_{n}))\in\mathcal{J}\), and (2) \(\pi(a)=a\), for each \(a\in\mathbf{C}\). We say that an instance \(\mathcal{I}\)_homomorphically maps_ into an instance \(\mathcal{J}\)_iff_ a homomorphism exists from \(\mathcal{I}\) to \(\mathcal{J}\). Two instances \(\mathcal{I}\) and \(\mathcal{J}\) are defined to be _homomorphically equivalent_, written \(\mathcal{I}\equiv\mathcal{J}\), _iff_ each instance can be homomorphically mapped into the other. An \(\mathcal{I}\)-_assignment_ is defined to be a substitution \(\mu\) such that (1) \(\mu(x)\in\mathbf{T}(\mathcal{I})\), for each \(x\in\mathbf{V}\), and (2) \(\mu(a)=a\), for each \(a\in\mathbf{C}\). For an \(\mathcal{I}\)-assignment \(\mu\), we let \(\mu(\varphi)\) denote the formula obtained by replacing each free variable of \(\varphi\) with its value under \(\mu\), and we let \(\mu[\mathbf{t}/\mathbf{x}]\) be the same as \(\mu\), but where the variables \(\mathbf{x}\) are respectively mapped to \(\mathbf{t}\in\mathbf{T}\).
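As a small Python sketch of these notions, the snippet below represents atoms as (predicate, terms) pairs and searches for a homomorphism by brute force; the convention that variables start with '?' and the function names are assumptions made for illustration.

```python
from itertools import product

def is_variable(term):
    """Convention assumed here: variables start with '?', constants do not."""
    return term.startswith("?")

def terms_of(instance):
    return {t for (_, args) in instance for t in args}

def homomorphism(source, target):
    """Brute-force search for a homomorphism from instance `source` to `target`.

    A homomorphism maps the variables of `source` to terms of `target`,
    fixes constants, and preserves all atoms.  Returns a mapping dict or None.
    """
    vars_src = sorted(t for t in terms_of(source) if is_variable(t))
    candidates = sorted(terms_of(target))
    for choice in product(candidates, repeat=len(vars_src)):
        sigma = dict(zip(vars_src, choice))
        image = {(p, tuple(sigma.get(t, t) for t in args)) for (p, args) in source}
        if image <= target:
            return sigma
    return None

I = {("Mother", ("?x", "Marie"))}
J = {("Mother", ("Zuza", "Marie")), ("Female", ("Marie",))}
print(homomorphism(I, J))    # {'?x': 'Zuza'}
```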
**Models and Satisfaction.** Given an interpretation \(\mathcal{I}\) and an \(\mathcal{I}\)-assignment \(\mu\), we recursively define satisfaction \(\models\) as:
(1) \(\mathcal{I},\mu\models p(t_{1},\ldots,t_{n})\)_iff_\(p(\mu(t_{1}),\ldots,\mu(t_{n}))\in\mathcal{I}\);
(2) \(\mathcal{I},\mu\models\neg\varphi\)_iff_\(\mathcal{I},\mu\not\models\varphi\);
(3) \(\mathcal{I},\mu\models\varphi\land\psi\)_iff_\(\mathcal{I},\mu\models\varphi\) and \(\mathcal{I},\mu\models\psi\);
(4) \(\mathcal{I},\mu\models\exists x\varphi\)_iff_ there exists \(t\in\mathbf{T}(\mathcal{I})\) such that \(\mathcal{I},\mu[t/x]\models\varphi\).
We say that \(\mathcal{I}\) is a _model_ of \(\Gamma\) and write \(\mathcal{I}\models\Gamma\)_iff_ for every \(\varphi\in\Gamma\) and \(\mathcal{I}\)-assignment \(\mu\), we have \(\mathcal{I},\mu\models\varphi\). We define an instance \(\mathcal{I}\) to be a _universal model_ of \(\Gamma\)_iff_ for any model \(\mathcal{J}\) of \(\Gamma\) there exists a homomorphism from \(\mathcal{I}\) to \(\mathcal{J}\).
**Existential Rules.** An _existential rule_ is a first-order formula \(\rho=\forall\mathbf{x}\mathbf{y}\ \beta(\mathbf{x},\mathbf{y})\to\exists\mathbf{z}\ \alpha(\mathbf{y},\mathbf{z})\) such that \(\beta(\mathbf{x},\mathbf{y})=\mathrm{body}(\rho)\) (called the body) and \(\alpha(\mathbf{y},\mathbf{z})=\mathrm{head}(\rho)\) (called the head) are conjunctions of atoms over constants and the variables \(\mathbf{x},\mathbf{y}\) and \(\mathbf{y},\mathbf{z}\), respectively. We call a finite set \(\mathcal{R}\) of existential rules a _rule set_. We define \(\Gamma\) to be \(\mathcal{R}\)_-valid iff_ for every interpretation \(\mathcal{I}\), if \(\mathcal{I}\models\mathcal{R}\), then \(\mathcal{I}\models\Gamma\).
**Derivations and the Chase.** We say that an existential rule \(\rho\) is _applicable_ to an instance \(\mathcal{I}\)_iff_ there exists an \(\mathcal{I}\)-assignment \(\mu\) such that \(\mu(\beta(\mathbf{x},\mathbf{y}))\subseteq\mathcal{I}\), and when this is the case, we say that \(\tau=(\rho,\mu)\) is a _trigger_ in \(\mathcal{I}\). Given a trigger \(\tau=(\rho,\mu)\) in \(\mathcal{I}\) we define an _application_ of the trigger \(\tau\) to the instance \(\mathcal{I}\) to be the instance \(\tau(\mathcal{I})=\mathcal{I}\cup\alpha(\mu(\mathbf{y}),\mathbf{z})\) where \(\mathbf{z}\) is a tuple of fresh variables. We define a _chase derivation_\((\mathcal{I}_{i},\tau_{i})_{i\in[n]}\) to be a sequence \((\mathcal{I}_{1},\tau_{1}),\ldots,(\mathcal{I}_{n},\tau_{n}),(\mathcal{I}_{n+1},\emptyset)\) such that for every \(i\in[n]\), \(\tau_{i}\) is a trigger in \(\mathcal{I}_{i}\) and \(\tau_{i}(\mathcal{I}_{i})=\mathcal{I}_{i+1}\). For an instance \(\mathcal{I}\) and a rule set \(\mathcal{R}\), we define the _one-step chase_ to be:
\[\mathbf{Ch}_{1}(\mathcal{I},\mathcal{R})=\bigcup_{\tau\text{ is a trigger in }\mathcal{I}}\tau(\mathcal{I}).\]
We further set \(\mathbf{Ch}_{0}(\mathcal{I},\mathcal{R})=\mathcal{I}\), \(\mathbf{Ch}_{n+1}(\mathcal{I},\mathcal{R})=\mathbf{Ch}_{1}(\mathbf{Ch}_{n}(\mathcal{I},\mathcal{R}),\mathcal{R})\), and \(\mathbf{Ch}_{\infty}(\mathcal{I},\mathcal{R})=\bigcup_{n\in\mathbb{N}}\mathbf{Ch}_{n}(\mathcal{I},\mathcal{R})\). Given a database \(\mathcal{D}\), a rule set \(\mathcal{R}\), and a BCQ \(\exists\mathbf{x}q(\mathbf{x})\), we say that a chase derivation \((\mathcal{I}_{i},\tau_{i})_{i\in[n]}\) _witnesses_ \((\mathcal{D},\mathcal{R})\models\exists\mathbf{x}q(\mathbf{x})\) _iff_ \(\mathcal{I}_{1}=\mathcal{D}\), only rules from \(\mathcal{R}\) are applied, and there exists an \(\mathcal{I}_{n+1}\)-assignment \(\mu\) such that \(\mu(q(\mathbf{x}))\subseteq\mathcal{I}_{n+1}\).
## 3 Sequent Systems and Proof-Search
We define a _sequent_ to be an object of the form \(\Gamma\vdash\Delta\) such that \(\Gamma\) and \(\Delta\) are _finite_ sets of formulae from \(\mathcal{L}\). Typically, multisets are used in sequents rather than sets, however, we are permitted to use sets in the setting of classical logic; cf. [13]. For a sequent \(\Gamma\vdash\Delta\), we call \(\Gamma\) the _antecedent_ and \(\Delta\) the _consequent_. We define the _formula interpretation_ of a sequent to be \(f(\Gamma\vdash\Delta)=\bigwedge\Gamma\rightarrow\bigvee\Delta\).
The sequent calculus G3[13] for first-order logic is defined to be the set of inference rules presented in Figure 1. It consists of the _initial rule_\((id)\) along with _logical rules_ that introduce complex logical formulae in either the antecedent or consequent of a sequent. The \((\exists_{L})\) rule is subject to a side condition, stating that the rule is applicable only if \(y\) is _fresh_, i.e. \(y\) does not occur in the surrounding context \(\Gamma,\Delta\). The \((\exists_{R})\) rule allows for the bottom-up instantiation of an existentially quantified formula with a term \(t\). An _application_ of a rule is obtained by instantiating the rule with formulae from \(\mathcal{L}\). We call an application of rule _top-down_ (_bottom-up_) whenever the conclusion (premises) is (are) obtained from the premises (conclusion).
It is well-known that every _geometric implication_, which is a formula of the form \(\forall\mathbf{x}(\varphi\rightarrow\exists\mathbf{y}_{1}\psi_{1}\vee\dots\lor\exists \mathbf{y}_{n}\psi_{n})\) with \(\varphi\) and \(\psi_{i}\) conjunctions of atoms, can be converted into an inference rule; see [10, p. 24] for a discussion. We leverage this insight to transform existential rules (which are special instances of geometric implications) into inference rules that can be added to the sequent calculus G3. For an existential rule \(\rho=\forall\mathbf{x}\mathbf{y}\beta(\mathbf{x},\mathbf{y})\rightarrow\exists\mathbf{z}\alpha( \mathbf{y},\mathbf{z})\), we define its corresponding _sequent rule_\(s(\rho)\) to be:
\[\frac{\Gamma,\beta(\mathbf{x},\mathbf{y}),\alpha(\mathbf{y},\mathbf{z})\vdash\Delta}{\Gamma, \beta(\mathbf{x},\mathbf{y})\vdash\Delta}\;s(\rho)\;\mathbf{z}\;\text{fresh}\]
Note that we take the body \(\beta(\mathbf{x},\mathbf{y})\) and head \(\alpha(\mathbf{y},\mathbf{z})\) to be sets of atoms, rather than conjunctions of atoms, and we note that \(\mathbf{x},\mathbf{y}\) may be instantiated with terms in rule applications. Also, \(s(\rho)\) is subject to the side condition that the rule is applicable only if all variables \(\mathbf{z}\) are fresh. We define the sequent calculus \(\textsf{G3}(\mathcal{R})=\textsf{G3}\cup\{s(\rho)\mid\rho\in\mathcal{R}\}\). We define a _derivation_ to be any sequence of applications of rules in \(\textsf{G3}(\mathcal{R})\) to arbitrary sequents, define an \(\mathcal{R}\)_-derivation_ to be a derivation that only applies \(s(\rho)\) rules, and define a _proof_ to be a derivation starting from applications of the \((id)\) rule. An example of a proof is shown on the left side of Figure 3.
**Theorem 1** (Soundness and Completeness).: \(f(\Gamma\vdash\Delta)\) _is \(\mathcal{R}\)-valid iff there exists a proof of \(\Gamma\vdash\Delta\) in \(\textsf{G3}(\mathcal{R})\)._
We now define a proof-search algorithm that decides (under certain conditions) if a BCQ is entailed by a knowledge base. The algorithm \(\mathtt{Prove}\) (shown in Figure 2) takes a sequent of the form \(\mathcal{D}\vdash\exists\mathbf{x}q(\mathbf{x})\) as input and bottom-up applies inference rules from \(\textsf{G3}(\mathcal{R})\) with the goal of constructing a proof thereof. Either \(\mathtt{Prove}\) returns a proof witnessing that \((\mathcal{D},\mathcal{R})\models\exists\mathbf{x}q(\mathbf{x})\), or a counter-model to this claim can be extracted from failed proof search. The functionality of this algorithm depends on certain _saturation conditions_, defined in Definition 2 below, which determine when a rule from \(\textsf{G3}(\mathcal{R})\) is bottom-up applicable. Due to the shape of the input \(\mathcal{D}\vdash\exists\mathbf{x}q(\mathbf{x})\), only \((id)\), \((\wedge_{R})\), \((\exists_{R})\), and \(s(\rho)\) rules are applicable during proof search. Moreover, we let \(\prec\) be an arbitrary cyclic order over \(\mathcal{R}=\{\rho_{1},\dots,\rho_{n}\}\), that is, \(\rho_{1}\prec\rho_{2}\prec\cdots\prec\rho_{n-1}\prec\rho_{n}\prec\rho_{1}\). We use \(\prec\) to ensure the _fair application_ of \(s(\rho)\) rules during proof-search, meaning that no bottom-up rule application is delayed indefinitely.
**Definition 2** (Saturation).: Let \(\Gamma\vdash\Delta\) be a sequent. We say that \(\Gamma\vdash\Delta\) is _saturated iff_ it satisfies the following:
_id._ If \(p(\mathbf{t})\in\Gamma\), then \(p(\mathbf{t})\not\in\Delta\);
\(\wedge_{R}\). If \(\varphi\wedge\psi\in\Delta\), then either \(\varphi\in\Delta\) or \(\psi\in\Delta\);
\(\exists_{R}\). If \(\exists x\varphi\in\Delta\), then for every \(t\in\mathbf{T}(\Gamma)\), \(\varphi(t/x)\in\Delta\);
_er._ For each \(\rho\in\mathcal{R}\), if a \(\Gamma\)-assignment \(\mu\) exists such that \(\mu(\operatorname{body}(\rho))\subseteq\Gamma\), then there exists \(\mathbf{t}\in\mathbf{T}(\Gamma)\) such that \(\mu[\mathbf{t}/\mathbf{z}](\operatorname{head}(\rho))\subseteq\Gamma\) holds.
**Theorem 3**.: _Let \(\mathcal{R}\) be a rule set, \(\mathcal{D}\) be a database, and \(\exists\mathbf{x}q(\mathbf{x})\) be a BCQ. Then,_
1. _If_ \(\mathtt{Prove}(\mathcal{D}\vdash\exists\mathbf{x}q(\mathbf{x}))=\mathtt{True}\)_, then a proof in_ \(\textsf{G3}(\mathcal{R})\) _can be constructed witnessing that_ \((\mathcal{D},\mathcal{R})\models\exists\mathbf{x}q(\mathbf{x})\)_;_
2. _If_ \(\mathtt{Prove}(\mathcal{D}\vdash\exists\mathbf{x}q(\mathbf{x}))\neq\mathtt{True}\)_, then a universal model can be constructed witnessing that_ \((\mathcal{D},\mathcal{R})\not\models\exists\mathbf{x}q(\mathbf{x})\)_._
Figure 2: The proof-search algorithm \(\mathtt{Prove}\).
We refer to the universal model of \((\mathcal{D},\mathcal{R})\) stated in the second claim of Theorem 3 as the _witnessing counter-model_.
## 4 Simulations and Equivalences
We present a sequence of results which culminate in the establishment of two main theorems: (1) Theorem 9, which confirms that chase derivations are mutually transformable with certain proofs in \(\mathsf{G3}(\mathcal{R})\), and (2) Theorem 10, which confirms the equivalence of \(\mathtt{Prove}\) and the chase. We end the section by providing an example illustrating the latter correspondence between proofs and the chase.
**Observation 4**.: _Let \(\mathcal{R}\) be a rule set. If \(\rho\in\mathcal{R}\), then any application of \((\wedge_{R})\) and \((\exists_{R})\) permute above \(s(\rho)\)._
Proof.: It is straightforward to confirm the permutation of such rules as the \(s(\rho)\) rules operate on the antecedent of a sequent, and \((\wedge_{R})\) and \((\exists_{R})\) operate on the consequent.
**Observation 5**.: _If \(\mathcal{I}\) is an instance, then only \(s(\rho)\) rules of \(\mathsf{G3}(\mathcal{R})\) can be bottom-up applied to \(\mathcal{I}\vdash\emptyset\). Moreover, such an application yields a sequent \(\mathcal{I}^{\prime}\vdash\emptyset\) with \(\mathcal{I}^{\prime}\) an instance._
**Observation 6**.: _The inference shown below left is a correct application of \(s(\rho)\) iff the inference shown below right is:_
\[\frac{\Gamma^{\prime}\vdash\emptyset}{\Gamma\vdash\emptyset}\ s(\rho) \qquad\frac{\Gamma^{\prime}\vdash\Delta}{\Gamma\vdash\Delta}\ s(\rho)\]
**Observation 7**.: _Let \(\mathcal{I}\) and \(\mathcal{I}^{\prime}\) be instances with \(\tau=(\rho,\mu)\) a trigger on \(\mathcal{I}\). Then, \((\mathcal{I},\tau),(\mathcal{I}^{\prime},\emptyset)\) is a chase derivation iff the following is a correct application of \(s(\rho)\):_
\[\frac{\mathcal{I}^{\prime}\vdash\emptyset}{\mathcal{I}\vdash\emptyset}\ s(\rho)\]
**Lemma 8**.: _For every rule set \(\mathcal{R}\), \(n\in\mathbb{N}\), and instances \(\mathcal{I}_{1},\dots,\mathcal{I}_{n}\) there exists a chase derivation \((\mathcal{I}_{i},\tau_{i})_{i\in[n-1]}\) iff there exists an \(\mathcal{R}\)-derivation of \(\mathcal{I}_{1}\vdash\emptyset\) from \(\mathcal{I}_{n}\vdash\emptyset\)._
In the proof of the following theorem, one shows that every chase derivation can be transformed into a proof in \(\mathsf{G3}(\mathcal{R})\) and vice-versa, showing how existential rule reasoning and proofs in \(\mathsf{G3}(\mathcal{R})\) simulate one another.
**Theorem 9**.: _Let \(\mathcal{R}\) be a rule set. A chase derivation \((\mathcal{I}_{i},\tau_{i})_{i\in[n]}\) witnessing \((\mathcal{D},\mathcal{R})\models\exists\mathbf{x}q(\mathbf{x})\) exists iff a proof in \(\mathsf{G3}(\mathcal{R})\) of \(\mathcal{D}\vdash\exists\mathbf{x}q(\mathbf{x})\) exists._
Leveraging Theorems 3 and 9, it is straightforward to prove the first claim of the theorem below. The second claim is immediate as \(\mathcal{I}\) and \(\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\) are universal models. We note that the following theorem expresses a correspondence between proof-search and the chase.
**Theorem 10**.: _Let \(\mathcal{R}\) be a rule set, \(\mathcal{D}\) be a database, and \(\exists\mathbf{x}q(\mathbf{x})\) be a BCQ. Then,_
1. \(\mathtt{Prove}(\mathcal{D}\vdash\exists\mathbf{x}q(\mathbf{x}))=\mathtt{True}\) _iff there is an_ \(n\in\mathbb{N}\) _such that_ \(\mathbf{Ch}_{n}(\mathcal{D},\mathcal{R})\models\exists\mathbf{x}q(\mathbf{x})\) _iff_ \(\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\models\exists\mathbf{x}q(\mathbf{x})\)_;_
2. _If_ \(\mathtt{Prove}(\mathcal{D}\vdash\exists\mathbf{x}q(\mathbf{x}))\neq\mathtt{True}\) _, then_ \(\mathcal{I}\equiv\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\) _with_ \(\mathcal{I}\) _the witnessing counter-model._
**Example 11**.: We provide an example demonstrating the relationship between a proof and the chase. We read \(\mathtt{F}(x)\) as '\(x\) is female', \(\mathtt{M}(x,y)\) as '\(x\) is the mother of \(y\)' and \(\mathtt{A}(x,y)\) as '\(x\) is the ancestor of \(y\)'. We let \(\mathcal{K}=(\mathcal{D},\mathcal{R})\) be a knowledge base such that \(\mathcal{D}=\{\mathtt{M}(b,a),\mathtt{M}(c,b)\}\), \(\mathcal{R}=\{\rho_{1},\rho_{2}\}\), and
\(\rho_{1}=\forall xy(\mathtt{M}(x,y)\rightarrow\mathtt{A}(x,y)\wedge\mathtt{F} (x))\);
\(\rho_{2}=\forall xy(\mathtt{A}(x,y)\wedge\mathtt{A}(y,z)\rightarrow\mathtt{A}( x,z))\).
In Figure 3, \(\mathcal{K}\models\exists x(\mathtt{A}(x,a)\wedge\mathtt{F}(x))\) is witnessed and verified by the proof shown left. The graph shown right demonstrates that the BCQ \(\exists x(\mathtt{A}(x,a)\wedge\mathtt{F}(x))\) (to the right) can be mapped into the chase \(\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\) (to the left) via a \(\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\)-assignment \(\mu\) (depicted as dotted arrows). (NB. We have omitted the points \(\{\top(c)\mid c\in\mathbf{C}\}\) in the picture of \(\mathbf{Ch}_{\infty}(\mathcal{D},\mathcal{R})\) for simplicity.)
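To make this correspondence tangible, here is a small Python sketch (illustrative only, not the paper's implementation) that runs a naive chase on the knowledge base of Example 11 and then checks the BCQ by brute force. Atoms are assumed to be (predicate, terms) pairs and rule variables are assumed to be exactly x, y, z; since neither \(\rho_{1}\) nor \(\rho_{2}\) has existential variables, no labelled nulls need to be invented here.

```
D = {("M", ("b", "a")), ("M", ("c", "b"))}
RULES = [
    ([("M", ("x", "y"))], [("A", ("x", "y")), ("F", ("x",))]),      # rho1
    ([("A", ("x", "y")), ("A", ("y", "z"))], [("A", ("x", "z"))]),  # rho2
]

def is_var(t):
    return t in {"x", "y", "z"}   # assumption: rule variables are exactly x, y, z

def assignments(body, inst, mu=None):
    # yield every assignment mu with mu(body) contained in inst
    mu = dict(mu or {})
    if not body:
        yield mu
        return
    (pred, args), rest = body[0], body[1:]
    for (p, ts) in inst:
        if p != pred or len(ts) != len(args):
            continue
        nu, ok = dict(mu), True
        for a, t in zip(args, ts):
            if is_var(a):
                ok = ok and nu.setdefault(a, t) == t
            else:
                ok = ok and a == t
        if ok:
            yield from assignments(rest, inst, nu)

def chase(instance, rules):
    inst = set(instance)
    while True:
        new = set()
        for body, head in rules:
            for mu in assignments(body, inst):
                for pred, args in head:
                    new.add((pred, tuple(mu[a] if is_var(a) else a for a in args)))
        if new <= inst:
            return inst
        inst |= new

CH = chase(D, RULES)
terms = {t for (_p, ts) in CH for t in ts}
# BCQ  exists x (A(x,a) and F(x)): brute-force search for a witnessing assignment
entailed = any({("A", (t, "a")), ("F", (t,))} <= CH for t in terms)
print(sorted(CH))
print("BCQ entailed:", entailed)   # True, e.g. with x = b
```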
## 5 Concluding Remarks
We have formally established an equivalence between existential rule reasoning and sequent calculus proofs, effectively showing that proof-search simulates the chase. This work is meaningful as it uncovers and connects two central reasoning tasks and tools in the domain of existential rules and proof theory. Moreover, we have found that the counter-models extracted from failed proof-search are universal, implying their homomorphic equivalence to the chase--a previously unrecognized observation.
For future work, we aim to examine the relationship between the _disjunctive chase_[1] and proof-search in sequent calculi with disjunctive inference rules. It may additionally be worthwhile to investigate if our sequent systems can be adapted to facilitate reasoning with non-classical variants or extensions of existential rules. For example, we could merge our sequent calculi
with those of [13] for _standpoint logic_--a modal logic used in knowledge integration to reason with diverse and potentially conflicting knowledge sources [14]. Finally, as this paper presents a sequent calculus for querying with existential rules, we plan to further explore its utility; e.g. by identifying admissible rules or applying loop checking techniques to uncover new classes of existential rules with decidable query entailment.
## Acknowledgments
Work supported by the European Research Council (ERC) Consolidator Grant 771779 (DeciGUT).
|
2310.01916
|
Verified completeness in Henkin-style for intuitionistic propositional
logic
|
This paper presents a formalization of the classical proof of completeness in
Henkin-style developed by Troelstra and van Dalen for intuitionistic logic with
respect to Kripke models. The completeness proof incorporates their insights in
a fresh and elegant manner that is better suited for mechanization. We discuss
details of our implementation in the Lean theorem prover with emphasis on the
prime extension lemma and construction of the canonical model. Our
implementation is restricted to a system of intuitionistic propositional logic
with implication, conjunction, disjunction, and falsity given in terms of a
Hilbert-style axiomatization. As far as we know, our implementation is the
first verified Henkin-style proof of completeness for intuitionistic logic
following Troelstra and van Dalen's method in the literature. The full source
code can be found online at https://github.com/bbentzen/ipl.
|
Huayu Guo, Dongheng Chen, Bruno Bentzen
|
2023-10-03T09:45:43Z
|
http://arxiv.org/abs/2310.01916v1
|
# Verified completeness in Henkin-style
###### Abstract
This paper presents a formalization of the classical proof of completeness in Henkin-style developed by Troelstra and van Dalen for intuitionistic logic with respect to Kripke models. The completeness proof incorporates their insights in a fresh and elegant manner that is better suited for mechanization. We discuss details of our implementation in the Lean theorem prover with emphasis on the prime extension lemma and construction of the canonical model. Our implementation is restricted to a system of intuitionistic propositional logic with implication, conjunction, disjunction, and falsity given in terms of a Hilbert-style axiomatization. As far as we know, our implementation is the first verified Henkin-style proof of completeness for intuitionistic logic following Troelstra and van Dalen's method in the literature. The full source code can be found online at [https://github.com/bbentzen/ipl](https://github.com/bbentzen/ipl).
Joint proceedings of the Third International Workshop on Logics for New-Generation Artificial Intelligence and the International Workshop on Logic, AI and Law, B. Bentzen, B. Liao, D. Liga, R. Markovich, B. Wei, M. Xiong, T. Xu (eds.), pp.36-48, 2023
[https://www.collegepublications.co.uk/LNGAI/?00003](https://www.collegepublications.co.uk/LNGAI/?00003)
School of Philosophy, Zhejiang University, Hangzhou, China
{guohuayu,chen_dongheng,bbentzen}@zju.edu.cn
## 1 Introduction
Troelstra and van Dalen [17] propose a completeness proof in Henkin-style for full intuitionistic predicate logic with respect to Kripke models. Despite being a fairly standard result in the literature, this completeness proof has yet to be formally verified in a proof assistant. In this paper, we describe a formalization for intuitionistic propositional logic using the Lean theorem prover [13].
Our main goal is to document some challenges encountered along the way and the design choices made to overcome them to obtain a formalized proof that is elegant, intuitive, and better suited for mechanization using the specific techniques available in the Lean programming language, in particular, the encodable.decode and insert_code methods developed by Bentzen [1].
To the best of our knowledge, our implementation is the first verified Henkin-style proof of strong completeness for intuitionistic logic following Troelstra and van Dalen's method in the literature. As far as its propositional fragment is concerned, the main ingredient of Troelstra and van Dalen's Henkin-proof is a model construction based on a consistent extension of sets of formulas, which is achieved by going through all disjunctions of the language [17, lem 6.3]. To carry out this extension, they assume an enumeration of disjunctions with infinite repetitions,
also remarking that there is an alternative approach in which, at each stage, we treat the first disjunction not yet treated. This variant appears in van Dalen [5, lem 5.3.8]. Our implementation is based on a third variant of the consistent extension method, which we developed to better suit our needs of formalization. Each propositional formula is only listed once in the enumeration, but we carry out the extension for each of them infinitely many times. The formalization consists of roughly 800 lines of code and encompasses the syntax and semantics of intuitionistic propositional logic, along with the soundness and strong completeness theorems. We adopt a Hilbert-style proof system due to its simplicity. The full source code can be found online at [https://github.com/bbentzen/ipl](https://github.com/bbentzen/ipl).
### Related work
The formal verification of completeness proofs for intuitionistic logic can be traced back to Coquand's [3] use of ALF to mechanize a constructive proof of soundness and completeness with respect to Kripke models for the simply typed lambda-calculus with explicit substitutions. Heberlin and Lee [9] give a constructive completeness proof of Kripke semantics with constant domain for intuitionistic logic with implication and universal quantification in Coq. Recently, Hagemeier and Kirst [8] formalize a constructive proof of completeness for intuitionistic epistemic logic based on a natural deduction system. They also provide a classical Henkin proof using methods similar to those in Bentzen [1], but they do not present a formalization of the approach of Troelstra and van Dalen [17] as is done in this paper. Bentzen [1] formalizes the Henkin-style completeness method for modal logic S5 using Lean and From formalizes in Isabelle/HOL a Henkin-style completeness proof for both classical propositional logic [6] and classical first-order logic [7]. Maggesi and Brogi [12] give a formal completeness proof for provability logic in HOL Light. The formalization presented here is inspired by the work of Bentzen [1], but makes a few improvements regarding design choices, in particular, the use of Prop in the definition of the semantics and the indexing of models to arbitrary types.
### Lean
Lean [13] is an interactive theorem prover based on the version of dependent type theory known as the calculus of constructions with inductive types [15, 4]. Users can construct proof terms directly as in Agda [14], using tactics as in Coq [16] or both proof terms and tactics simultaneously. Lean's built-in logic is constructive, but it supports classical reasoning as well. In fact, our Henkin-style proof is classical since it relies on a nonconstructive use of contraposition. Therefore, we do not worry about any complexity and computational aspects related to our proof. Our implementation makes use of some results from Lean's standard library and the user-maintained mathematical library mathlib[2].
Throughout the remainder of this paper, Lean code will be used to showcase some design decisions in our formalization. The syntax and semantics of intuitionistic propositional logic that is the starting point of our formalization is described in Section 2. We also describe our formalization of a countermodel for the law of excluded middle and sketch a proof of soundness. Then, an informal overview of the Henkin-style proof method as well as a description of our implementation is provided in Section 3. Finally, some concluding remarks are given in Section 4.
## 2 Intuitionistic Logic
### The language
The intuitionistic propositional language considered here contains implication, conjunction, disjunction, and falsity as the only primitive logical connectives. The language is defined using inductive types with one constructor for propositional letters, falsum, implication, conjunction, and disjunction, respectively:
inductive form : Type
| atom : ℕ → form
| bot : form
| impl : form → form → form
| and : form → form → form
| or : form → form → form
This code can be found in the language.lean file.
Since our language contains countably many propositional letters \(p_{0},p_{1},\ldots\), we use the type \(\mathbb{N}\) of natural numbers to define the constructor atom of propositional letters. The only way to construct a term of type form is by using these constructors: atom for propositional letters, bot for falsum, impl for implication, and for conjunction, and or for disjunction.
The elimination rule of this type allows us to define functions by recursion from form into any other type, including the type of propositions Prop; in that case, the elimination rule is an instance of the principle of induction on the structure of formulas.
Constructors are displayed in Polish notation by default, but we define some custom infix notation with the usual Unicode characters for better readability:
prefix `#` := form.atom
notation `⊥` := form.bot
infix `⊃` := form.impl
notation p `&` q := form.and p q
notation p `∨` q := form.or p q
notation `~` p := form.impl p (form.bot)
Contexts are just sets of formulas. In Lean sets are defined as functions of type A \(\rightarrow\) Prop. As usual in logic textbooks, we display the formulas in a context in list notation separated by a comma instead of using unions of singletons. We introduce the following notation to make this possible:
notation \(\Gamma\) '.'p := set.insert p \(\Gamma\)
The formalization of the language can be found in the language.lean file.
### The proof system
We define a Hilbert-style system for intuitionistic propositional logic that is best described as a refinement of Heyting's original axiomatization [10, SS2]. The proof system is implemented with a type of proofs, which is inductively defined as follows:
inductive prf : set form → form → Prop
| ax {Γ} {p} (h : p ∈ Γ) : prf Γ p
| k {Γ} {p q} : prf Γ (p ⊃ (q ⊃ p))
| s {Γ} {p q r} : prf Γ ((p ⊃ (q ⊃ r)) ⊃ ((p ⊃ q) ⊃ (p ⊃ r)))
| ex {Γ} {p} : prf Γ (⊥ ⊃ p)
| mp {Γ} {p q} (hpq : prf Γ (p ⊃ q)) (hp : prf Γ p) : prf Γ q
| pr1 {Γ} {p q} : prf Γ ((p & q) ⊃ p)
| pr2 {Γ} {p q} : prf Γ ((p & q) ⊃ q)
| pair {Γ} {p q} : prf Γ (p ⊃ (q ⊃ (p & q)))
| inl {Γ} {p q} : prf Γ (p ⊃ (p ∨ q))
| inr {Γ} {p q} : prf Γ (q ⊃ (p ∨ q))
| case {Γ} {p q r} : prf Γ ((p ⊃ r) ⊃ ((q ⊃ r) ⊃ ((p ∨ q) ⊃ r)))
Again, the elimination rule for this type generalizes definition by recursion and induction on the structure of proofs. To follow the usual logical notation, we abbreviate prf Γ p with \(\Gamma\vdash_{i}p\) as follows:
notation Γ ` ⊢ᵢ ` p := prf Γ p
notation Γ ` ⊬ᵢ ` p := prf Γ p → false
To illustrate, we compare a mechanized formal Hilbert-style proof of the identity of implication \(p\supset p\) in our implementation:
lemma id {p : form} {Γ : set form} :
  Γ ⊢ᵢ p ⊃ p :=
mp (mp (@s Γ p (p ⊃ p) p) k) k
with a non-mechanized formal proof written in Lemmon style:
\[\begin{array}{lll}1&(p\supset((p\supset p)\supset p))\supset((p\supset(p\supset p))\supset(p\supset p))&\mbox{S}\\ 2&p\supset((p\supset p)\supset p)&\mbox{K}\\ 3&(p\supset(p\supset p))\supset(p\supset p)&\mbox{MP 1, 2}\\ 4&p\supset(p\supset p)&\mbox{K}\\ 5&p\supset p&\mbox{MP 3, 4}\end{array}\]
Notice that the proof structure in our term proof is actually clearer since it indicates how the axiom schemes should be instantiated.
The formalization of the proof system can be found in the theory.lean file.
### Semantics
#### Kripke models
We define the semantics for intuitionistic propositional logic in terms of Kripke semantics as usual [17; 5]. A model \(\mathcal{M}\) is a triple \(\langle\mathcal{W},\leq,\mathsf{v}\rangle\) where \(\mathcal{W}\) is a set of possible worlds of type \(A\), \(\leq\) is a reflexive and transitive binary relation on \(A\), and \(\mathsf{v}\) specifies the truth value of a propositional letter at a world, monotonically with respect to \(\leq\).
In Lean, Kripke models can be defined as inductive types having just one constructor using the structure command. We define it not as a triple but as a 6-tuple, composed of a domain W, an accessibility relation R, a valuation function val, and proofs of reflexivity, transitivity, and monotonicity for the accessibility relation R, denoted as refl, trans, and mono:
structure model (A : Type) :=
(W : set A)
(R : A → A → Prop)
(val : ℕ → A → Prop)
(refl : ∀ w ∈ W, R w w)
(trans : ∀ w ∈ W, ∀ v ∈ W, ∀ u ∈ W, R w v → R v u → R w u)
(mono : ∀ p, ∀ w ∈ W, ∀ v ∈ W, val p w → R w v → val p v)
In our case, a possible world is a term of type \(A\). This allows for more generality in the construction of a model unlike in [1]. What is more, the type of propositions Prop is used to encode our truth values true or false.
#### Semantic consequence
To formalize the notion of truth at a world, we define a forcing relation \(w\Vdash_{\mathcal{M}}p\) that takes as arguments a model \(\mathcal{M}\), a formula \(p\), and a world \(w\) of type \(A\), and returns a term of type Prop. As usual, falsity, conjunction, and disjunction are defined truth-functionally, and an implication \(p\supset q\) is true at a world \(w\) iff for all \(v\in\mathcal{W}\) with \(\mathcal{R}(w,v)\), if \(p\) is true at \(v\) then \(q\) is true at \(v\). We also introduce the familiar notation for this forcing relation:
```
def forces_form {A : Type} (M : model A) : form → A → Prop
| (#p) := λ v, M.val p v
| bot := λ v, false
| (p ⊃ q) := λ v, ∀ w ∈ M.W, v ∈ M.W → M.R v w → forces_form p w → forces_form q w
| (p & q) := λ v, forces_form p v ∧ forces_form q v
| (p ∨ q) := λ v, forces_form p v ∨ forces_form q v

notation w ` ⊩ ` `{` M `}` p := forces_form M p w
```
To formalize the intuitionistic notion of semantic consequence \(\Gamma\models_{i}p\) we first extend this forcing relation to contexts pointwise and then we stipulate that \(\Gamma\models_{i}p\) iff for all types \(A\), models \(\mathcal{M}\) and possible worlds \(w\in\mathcal{W}\), \(\Gamma\) being true at \(w\) in \(\mathcal{M}\) implies \(p\) being true at \(w\) in \(\mathcal{M}\):
```
def forces_ctx {A : Type} (M : model A) (Γ : set form) : A → Prop :=
λ w, ∀ p, p ∈ Γ → forces_form M p w

notation w ` ⊩ ` `{` M `}` Γ := forces_ctx M Γ w

def sem_csq (Γ : set form) (p : form) :=
∀ {A : Type} (M : model A) (w ∈ M.W), (w ⊩ {M} Γ) → (w ⊩ {M} p)

notation Γ ` ⊨ᵢ ` p := sem_csq Γ p
```
It is worth noting that we are overloading the forcing relation notation for formulas, w ⊩ {M} p, and for contexts, w ⊩ {M} Γ. There is no ambiguity because Lean will delay the choice until elaboration and determine how to disambiguate the notations depending on the relevant types.
The formalization of the Kripke semantics described above can be found in the semantics.lean file.
#### The failure of the law of excluded middle
Before proceeding to prove completeness, it will be helpful to see how we can build models in our implementation. To give a concrete example, let us show how to build the following countermodel for the law of excluded middle [11, p.99] using the type of booleans true tt and false ff:
Since our possible worlds are always booleans, the domain, accessibility relation, and valuation function are formalized in Lean in a slightly different way. The reflexivity, transitivity, and monotonicity proofs are straightforward, so we shall omit them:
def W : set bool := {ff, tt}
def R : bool → bool → Prop := λ w v, w = v ∨ w = ff
@[simp] def val : nat → bool → Prop := λ _ w, w = tt
Using this countermodel, we assume that the law of excluded middle holds, that is, ∅ ⊨ᵢ p ∨ ~p for any formula p, and then derive a contradiction. This allows us to prove that the law of excluded middle fails in general:
lemma no_lem : ¬ ∀ p, (∅ ⊨ᵢ p ∨ ~p)
The mechanization of the countermodel can be found in the nolem.lean file.
#### Soundness
The soundness theorem asserts that if a formula \(p\) can be derived from a set of assumptions \(\Gamma\) using the inference rules of the logical system, then \(p\) is logically valid under any interpretation that satisfies \(\Gamma\).
theorem soundness {Γ : set form} {p : form} : (Γ ⊢ᵢ p) → (Γ ⊨ᵢ p)
The code for the proof of soundness can be found in soundness.lean.
The proof proceeds by using induction to perform case analysis for each inference rule. For each rule, the proof provides a way to derive the conclusion based on the rule and a way to show that the conclusion is logically valid based on the interpretation and the premises.
## 3 The completeness theorem
Now that we have presented the implementation of the syntax and semantics of intuitionistic propositional logic in the previous section, we are prepared to undertake a formal proof of completeness. The strong completeness theorem, which states that every semantic consequence is a syntactic consequence, can be stated in Lean using our custom notation as follows:
theorem completeness {Γ : set form} {p : form} : (Γ ⊨ᵢ p) → (Γ ⊢ᵢ p)
Our implementation follows the original Henkin-style completeness proof given by Troelstra and van Dalen [17] with some small modifications. The main proof argument runs as follows.
1. Assume that \(\Gamma\models_{i}p\) and \(\Gamma\nvdash_{i}p\) hold;
2. Build a model \(\mathcal{M}\) such that \(w\Vdash_{\mathcal{M}}p\) iff \(w\vdash_{i}p\) for all worlds \(w\in\mathcal{W}\), where we have sets of formulas as possible worlds;
3. Show that there is a world \(w\in\mathcal{W}\) such that \(w\Vdash_{\mathcal{M}}\Gamma\) but \(w\nVdash_{\mathcal{M}}p\);
4. Establish a contradiction from our assumption that \(\Gamma\models_{i}p\).
Our proof appeals to classical reasoning at the metalevel of Lean's logic on two occasions [17, p.87], namely, in our proof of \(\Gamma\vdash_{i}p\) where we assume double negation elimination and in our proof of \(w\Vdash_{\mathcal{M}}p\) iff \(w\vdash_{i}p\).
The reader can refer to the completeness.lean file for the full details of our implementation of the completeness proof.
#### Consistent prime extensions
The first step of Troelstra and van Dalen's proof is the definition of what they call a "saturated theory" [17, def. 6.2]. We shall make use of the equivalent concept of a prime theory instead [5, def. 5.3.7], in which the disjunction property is expressed in terms of the membership relation. We say that a set of formulas \(\Gamma\) is a prime theory if \(\Gamma\) is closed under derivability and if \(p\lor q\in\Gamma\) implies \(p\in\Gamma\) or \(q\in\Gamma\). In the completeness.lean file, we write:
```
def is_closed (Γ : set form) :=
∀ {p : form}, (Γ ⊢ᵢ p) → p ∈ Γ

def has_disj (Γ : set form) :=
∀ {p q : form}, ((p ∨ q) ∈ Γ) → ((p ∈ Γ) ∨ (q ∈ Γ))

def is_prime (Γ : set form) :=
is_consist Γ ∧ has_disj Γ
```
The second step of Troelstra and van Dalen's completeness proof is the proof of a prime extension lemma (17, lem 6.3), which states that if \(\Gamma\not\vdash r\) then there is a prime theory \(\Gamma^{\prime}\supseteq\Gamma\) such that \(\Gamma^{\prime}\not\vdash r\). Assuming that they have a list of disjunctions \(\langle\varphi_{i,1}\vee\varphi_{i,2}\rangle_{i}\) with infinite repetitions, they define
\[\Gamma^{\prime}=\bigcup_{i\in\mathbb{N}}\Gamma_{i},\]
where \(\Gamma_{0}=\Gamma\) and \(\Gamma_{k+1}\) is defined inductively as follows:
* Case 1: \(\Gamma_{k}\vdash\varphi_{k,1}\vee\varphi_{k,2}\). Put
* \(\Gamma_{k+1}=\Gamma_{k}\cup\{\varphi_{k,2}\}\) if \(\Gamma_{k},\varphi_{k,1}\vdash r\), and
* \(\Gamma_{k+1}=\Gamma_{k}\cup\{\varphi_{k,1}\}\) otherwise
* Case 2: \(\Gamma_{k}\not\vdash\varphi_{k,1}\vee\varphi_{k,2}\). Put \(\Gamma_{k+1}=\Gamma_{k}\).
Since we want to extend \(\Gamma\) to a prime theory \(\Gamma^{\prime}\), we want to ensure the disjunctive property that if \(\phi\vee\psi\in\Gamma^{\prime}\) then \(\phi\in\Gamma^{\prime}\) or \(\psi\in\Gamma^{\prime}\). If there were no infinite repetitions in the list, we could never be sure that we have treated all disjunctions in Case 1, for, at step \(k+1\), its disjuncts only get added to the set when \(\Gamma_{k}\) proves the disjunction. It is possible that later the disjunction becomes provable from \(\Gamma_{k+m}\), but we will never go back to it again.
Troelstra and van Dalen mention a simpler variant of the construction that uses an enumeration of disjunctions without requiring infinite repetitions. At stage \(k+1\) we simply treat the first disjunction not yet treated. This proof is spelled out by van Dalen in [5, lem 5.3.8]. However, the proof method is less suitable for mechanization given that it is difficult to tell a proof assistant how exactly to find the first disjunction not yet treated. We implement a simplified version of this method where at each step \(k+1\) we always treat all disjunctions in the language once more. The following Lean code encapsulates the idea of the construction sketched above:
def insert_form (Γ : set form) (p q r : form) : set form :=
if (Γ.p ⊢ᵢ r) then Γ.q else Γ.p

def insert_code (Γ : set form) (r : form) (n : nat) : set form :=
match encodable.decode (form) n with
| none := Γ
| some (p ∨ q) := if Γ ⊢ᵢ p ∨ q then insert_form Γ p q r else Γ
| some _ := Γ
end

def insertn (Γ : set form) (r : form) : nat → set form
| 0 := Γ
| (n+1) := insert_code (insertn n) r n

def primen (Γ : set form) (r : form) : nat → set form
| 0 := Γ
| (n+1) := ⋃ i, insertn (primen n) r i

def prime (Γ : set form) (r : form) : set form :=
⋃ n, primen Γ r n
Unlike in Troelstra and van Dalen [17] and van Dalen [5], the enumeration in our formalization lists not just all disjunctions but all propositional formulas in the language. When a formula is not a disjunction we simply ignore it, just as in Case 2 above. We follow Bentzen [1] in using encodable types to enumerate the language. In Lean, a type \(\alpha\) is encodable if there is an encoding function encode: \(\alpha\rightarrow\texttt{nat}\) and a (partial) inverse decode: nat \(\rightarrow\) option \(\alpha\) that decodes the encoded term of \(\alpha\).
Now that we have extended \(\Gamma\) to \(\Gamma^{\prime}\), which we denote as prime \(\Gamma\) r, we have to prove it is indeed a prime extension of \(\Gamma\). First, we show that \(\Gamma\subseteq\Gamma^{\prime}\). But this is easy, since for every set primen \(\Gamma\) r n in the family, \(\Gamma\subseteq\) primen \(\Gamma\) r n. Therefore, \(\Gamma\) must also be included in the union of all these sets, which is \(\Gamma^{\prime}\).
lemma primen_subset_prime {Γ : set form} {r : form} :
Γ ⊆ prime Γ r
The next step is to prove that the \(\Gamma^{\prime}\) also has the disjunction property and it is closed under derivability. Let us focus on the former first.
We need to show that \(p\lor q\in\Gamma^{\prime}\) implies \(p\in\Gamma^{\prime}\) or \(q\in\Gamma^{\prime}\). If \(p\lor q\in\Gamma^{\prime}\) then there is some \(n\in\mathbb{N}\) such that \(p\lor q\in\Gamma^{\prime}_{n}\). But then since \(\Gamma^{\prime}_{n}\vdash p\lor q\), then we know that \(p\in\Gamma^{\prime}_{n+1}\) or \(q\in\Gamma^{\prime}_{n+1}\) because the disjunction was treated at some point. Thus, \(p\in\Gamma^{\prime}\) or \(q\in\Gamma^{\prime}\).
def prime_insert_disj {Γ : set form} {p q r : form} (h : (p ∨ q) ∈ prime Γ r) :
∃ n, p ∈ (insertn (primen Γ r n) r (encodable.encode (p ∨ q) + 1)) ∨ q ∈ (insertn (primen Γ r n) r (encodable.encode (p ∨ q) + 1))

lemma insertn_to_prime {Γ : set form} {r : form} {n m : nat} :
insertn (primen Γ r n) r m ⊆ prime Γ r

def prime_has_disj {Γ : set form} {p q r : form} :
((p ∨ q) ∈ prime Γ r) → p ∈ prime Γ r ∨ q ∈ prime Γ r
Saying that \(\Gamma^{\prime}\) is closed under derivability means that if we can deduce a formula from \(\Gamma^{\prime}\), it is an element of \(\Gamma^{\prime}\). We use a lemma that states that if we can prove \(r\lor p\) from \(\Gamma^{\prime}\), then there exists an \(n\) such that \(p\in\Gamma_{n+1}\). We use the above lemma insertn_to_prime to deduce that \(p\in\Gamma^{\prime}\):
lemma prime_prf_disj_self {Γ : set form} {p r : form} :
(prime Γ r ⊢ᵢ r ∨ p) → ∃ n, p ∈ (insertn (primen Γ r n) r (encodable.encode (r ∨ p) + 1))

def prime_is_closed {Γ : set form} {p q r : form} :
(prime Γ r ⊢ᵢ p) → p ∈ prime Γ r
At this moment, we need to prove that \(\Gamma^{\prime}\) still remains consistent. First, we prove by structural induction on the derivation that if \(\Gamma^{\prime}\vdash r\) then there is some \(n\) such that \(\Gamma_{n}\vdash r\). Then we prove by induction on \(n\) that if \(\Gamma_{n}\vdash r\) then \(\Gamma\vdash r\). The base case is trivial. In the inductive case, we complete the proof by unfolding the definition of \(\Gamma_{n}\) and manipulating the inductive hypothesis. Putting both lemmas together, we prove that \(\Gamma^{\prime}\vdash r\) implies \(\Gamma\vdash r\):
def prime_not_prfn {Γ : set form} {r : form} {n} :
(primen Γ r n ⊢ᵢ r) → (Γ ⊢ᵢ r)

def prime_not_prf {Γ : set form} {r : form} :
(prime Γ r ⊢ᵢ r) → (Γ ⊢ᵢ r)
#### The canonical model construction
Given a set of formulas \(\Gamma\) and \(\phi\) such that \(\Gamma\nvdash\phi\), the next step is to build a canonical Kripke model \(\mathcal{M}\) with \(w\Vdash_{\mathcal{M}}\Gamma\) and \(w\nVdash_{\mathcal{M}}\phi\) for some possible world. We build this model by letting \(\mathcal{W}\) be the set of all consistent prime theories; \(w\leq v\) iff \(w\subseteq v\) for \(w,v\in\mathcal{W}\); and \(\mathsf{v}(w,p)=1\) iff \(w\in\mathcal{W}\) and \(p\in w\), for a propositional letter \(p\). The following Lean code reflects the model construction:
def domain : set (set form) := {w | is_consist w ∧ ctx.is_prime w}
def access : set form → set form → Prop := λ w v, w ⊆ v
def val : ℕ → set form → Prop := λ q w, w ∈ domain ∧ (#q) ∈ w
The accessibility relation \(\leq\) is clearly reflexive and transitive since so is \(\subseteq\). Monotonicity is easy to see since \(p\in w\) and \(w\subseteq v\) imply that \(p\in v\). We prove these lemmas by straightforward unfolding of the definition of access.
Our model is integrated into Lean's code as follows:
def M : model (set form) :=
begin
  fapply model.mk,
  apply domain,
  apply access,
  apply val,
  apply access.refl,
  apply access.trans,
  apply access.mono
end
#### Truth and derivability
It turns out that a formula is true at a world in the canonical model if and only if it can be proved from that world:
lemma model_tt_iff_prf {p : form} : ∀ (w ∈ domain), (w ⊩ {M} p) ↔ (w ⊢ᵢ p)
We mechanize the proof employing the induction tactic, which allows us to use the elimination rule of a type. This approach yields five goals, namely, to prove the case where a formula is a propositional letter, falsity, implication, conjunction, or disjunction. The proof of implication and disjunction deserve some mention.
The disjunction case is simpler, so we shall discuss it first. Lean gives us a biconditional in the following goal:
⊢ ∀ (w : set form),
  w ∈ domain → ((w ⊩ {M} (p ∨ q)) ↔ (w ⊢ᵢ p ∨ q))
The proof in the forward direction starts with the introduction of assumptions and then splits the proof into two cases. In the first case, we assume that \(w\models_{\mathcal{M}}p\lor q\) and our goal is \(w\vdash_{i}p\lor q\). Through the tactic cases, which expresses case reasoning, we can finish our goal using some basic facts about disjunctions and the inductive hypotheses in both cases.
In the backward direction, we assume that \(w\vdash_{i}p\lor q\). Since \(w\) is a prime theory and thus enjoys the disjunctive property, we can reason by cases depending on whether \(w\vdash_{i}p\) or \(w\vdash_{i}q\). The result follows the inductive hypothesis.
Now we proceed to the implication case. Using the intro tactic, we begin by assuming the inductive hypothesis for \(p\). If \(w\) is a world and it is a prime theory, then by unfolding the definition of truth of a formula at a world of the model, we arrive at a biconditional goal that can be expressed as follows.
⊢ ∀ (w : set form),
  w ∈ domain → ((w ⊩ {M} (p ⊃ q)) ↔ (w ⊢ᵢ p ⊃ q))
We split the biconditional proof into two smaller conditionals using the split tactic. In the forward direction, we first assume that \(w\Vdash_{\mathcal{M}}p\supset q\). We reason by cases depending on whether \(w\vdash_{i}p\supset q\) or not, therefore invoking the law of excluded middle. If that is the case, we are done. If not, then we know that \(w,p\nvdash q\). We want to derive a contradiction. We extend the context \(w,p\) to a prime theory \((w,p)^{\prime}\) that still does not prove \(q\). By our inductive hypothesis, since \((w,p)^{\prime}\) is in the domain, we know that \((w,p)^{\prime}\Vdash_{\mathcal{M}}q\leftrightarrow(w,p)^{\prime}\vdash_{i}q\).
To derive a contradiction, we just have to show that \((w,p)^{\prime}\Vdash_{\mathcal{M}}q\). Recall that our assumption \(w\Vdash_{\mathcal{M}}p\supset q\) states that for all \(v\in\mathcal{W}\) such that \(w\leq v\), if \(v\Vdash_{\mathcal{M}}p\) then \(v\Vdash_{\mathcal{M}}q\). But, clearly, \(w\leq(w,p)^{\prime}\). To complete the proof, we just have to show that \((w,p)^{\prime}\Vdash_{\mathcal{M}}p\). By our inductive hypothesis, it suffices to show that \((w,p)^{\prime}\vdash_{i}p\). But this is clearly true, since the original set \(w,p\) is contained in the prime extension \((w,p)^{\prime}\) and \(w,p\vdash_{i}p\).
For the backward direction, what we have to prove is \(w\Vdash_{\mathcal{M}}p\supset q\). This means for all \(v\in\mathcal{W}\) such that \(w\leq v\), if \(v\Vdash_{\mathcal{M}}p\) then \(v\Vdash_{\mathcal{M}}q\). We assume that \(v\in\mathcal{W}\) such that \(w\leq v\), \(v\Vdash_{\mathcal{M}}p\) then we have to show \(v\Vdash_{\mathcal{M}}q\). Using our inductive hypothesis, we just have to show that \(v\vdash_{i}q\).
Since we know \(w\vdash_{i}p\supset q\) and \(w\subseteq v\), by weakening, we will have \(v\vdash_{i}p\supset q\). We complete the proof by noting that \(v\vdash_{i}p\) by our inductive hypothesis and the assumption that \(v\Vdash_{\mathcal{M}}p\). The result follows from modus ponens.
We have finished the proof of implication.
#### The completeness proof
To finish our completeness proof we just have to put together all the above pieces into 27 lines of code. We assume that \(\Gamma\nvdash_{i}p\) and \(\Gamma\models_{i}p\); we just need to arrive at a contradiction. We extend \(\Gamma\) to a prime theory \(\Gamma^{\prime}\) such that \(\Gamma^{\prime}\nvdash_{i}p\). Since we know \(\Gamma^{\prime}\Vdash_{\mathcal{M}}q\iff\Gamma^{\prime}\vdash_{i}q\) for every formula \(q\), we can conclude that \(\Gamma^{\prime}\nVdash_{\mathcal{M}}p\). Thus, we contradict our assumption that \(\Gamma\models_{i}p\), given that \(\Gamma^{\prime}\Vdash_{\mathcal{M}}\Gamma\) but \(\Gamma^{\prime}\nVdash_{\mathcal{M}}p\).
## 4 Conclusion
We have used Lean to formally verify the Henkin-style completeness proof for intuitionistic logic proposed by Troelstra and van Dalen [17], restricted to a propositional fragment with implication, falsity, conjunction, and disjunction. The propositional proof system we implement is based on a Hilbert-style axiomatization. In future work, we hope to expand our implementation to full intuitionistic first-order logic with existential and universal quantifiers and thus complete the formalization of Troelstra and van Dalen's proof. Our implementation also includes a mechanized proof of soundness and a countermodel for the general validity of the law of excluded middle in intuitionistic propositional logic.
Acknowledgments. This research was supported in part by the Zhejiang Federation of Humanities and Social Sciences grant 23YJRC04ZD.
|
2306.01028
|
ITR: Grammar-based Graph Compression Supporting Fast Triple Queries
|
Neighborhood queries and triple queries are the most common queries on
graphs; thus, it is desirable to answer them efficiently on compressed data
structures. We present a compression scheme called Incidence-Type-RePair (ITR)
for graphs with labeled nodes and labeled edges based on RePair and apply the
scheme to network, version, and RDF graphs. We show that ITR performs
neighborhood queries and triple queries within only a few milliseconds and
thereby outperforms existing RePair-based solutions on graphs while providing a
compression size comparable to existing graph compressors.
|
Enno Adler, Stefan Böttcher, Rita Hartel
|
2023-06-01T13:49:18Z
|
http://arxiv.org/abs/2306.01028v3
|
# ITR: A grammar-based graph compressor supporting fast neighborhood queries
###### Abstract
Neighborhood queries are the most common queries on graphs; thus, it is desirable to answer them efficiently on compressed data structures. We present a compression scheme called Incidence-Type-RePair (ITR) for graphs with labeled nodes and labeled edges based on RePair [11] and apply the scheme to RDF graphs. We show that ITR speeds up neighborhood queries to only a few milliseconds and thereby outperforms existing solutions while providing a compression size comparable to existing RDF graph compressors.
Keywords:grammar-based compression graph compression RDF compression querying RDF
## 1 Introduction
Obtaining a node or all nodes in a graph that are adjacent to a given node is fundamental to most graph algorithms. Therefore, neighborhood queries are the most common queries in graph processing. When huge graphs, for example, RDF graphs, are compressed, fast neighborhood queries on these compressed graphs are a fundamental ingredient of most algorithms processing them. Whenever neighborhood queries are heavily used on compressed graphs, their performance is crucial for improving the efficiency of graph processing and the analysis of large-scale RDF graphs. Herein, we investigate the execution time for neighborhood queries on compressed RDF graphs generated by different graph compressors and we introduce Incidence-Type-RePair (ITR), a grammar-based compressor that generates a compressed graph on which neighborhood queries can be answered faster than on other compressed data formats for RDF graphs.
Grammar-based compression schemes have been shown to improve the efficiency of queries on compressed data, for example, for consecutive symbol visits on strings [8] and for parent/child navigations on trees [13]. A prominent example of a grammar-based compression scheme is RePair [11]. RePair is a linear-time algorithm for strings and is generalized to trees [12] and to graphs called gRePair [15]. The gRePair scheme has been adapted to RDF graphs by Roder et al. [18]. However, Roder et al. [18] did not evaluate the query performance of different RDF compression algorithms. Although gRePair by Maneth et al. [15]
supports neighborhood queries, they do not provide an evaluation of query execution time.
We introduce a new approach called Incidence-Type-RePair (_ITR_), which applies the RePair compression scheme to the edges of graphs. We compare the execution time of neighborhood queries on RDF graphs using gRePair, RDF-RePair, ITR and different implementations of hdt [6] and \(k^{2}\)[2]. Our evaluation shows that ITR outperforms gRePair and the other implementations of RDF compressors in executing neighborhood queries.
The main contributions of this paper are:
* the algorithm, ITR, to compress graphs based on RePair
* an experimental evaluation of an implementation of ITR in comparison to other compression algorithms regarding
* compression size
* runtime of compression
* runtime of neighborhood queries or similar queries
## 2 Related Work
There are different methods for RDF graph compression that can be grouped into syntactic and semantic compression algorithms [18]. Syntactic compression algorithms use a succinct representation, in contrast to semantic compression algorithms, which first reduce the number of triples.
The hdt compressor by Fernandez et al. [6] is a syntactic compressor and splits a file into a header that contains information about the compressed RDF graph, the graph structure, and a dictionary that maps texts to unique IDs that correspond to the graph structure. The dictionary is encoded using a specific order and plain text. The triples are encoded by a predicate stream that indicates the next subject, and an object stream that indicates the next predicate. Hernandez-Illera [10] improved hdt to hdt++ by using predicate families to represent all predicates of a subject by one ID.
The \(k^{2}\) approach by Brisaboa [2][3] was used for compressing RDF graphs by Alvarez-Garcia et al. [1]. The \(k^{2}\) approach represents the graph structure by a succinct representation of the adjacency matrix in form of a tree. This is done by recursively dividing the matrix in quadrants and representing a quadrant by 0 if all values of the quadrant are 0, or 1 otherwise. A quadrant containing only the value 0 is not split again, instead the whole quadrant is stored as a single 0-bit leaf node in the \(k^{2}\)-tree.
The semantic compression algorithms that we present all use context-free grammars to reduce the number of triples. An example of a context-free grammar generating the string \(a^{6}\) is \(\{S\to AAA,A\to aa\}\). Here the grammar needs fewer symbols on its right-hand sides than \(a^{6}\). The idea of RePair is to repeatedly replace the most frequent digram by a new nonterminal [11]. The term _digram_ describes two adjacent elements. For strings, \(aa\) is a digram of two adjacent letters and \(\{S\to AAA,A\to aa\}\) results from replacing \(aa\) in \(a^{6}\).
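For reference, the core of string RePair can be sketched in a few lines of Python (an illustrative simplification, not the compressor evaluated in this paper): repeatedly pick the most frequent pair of adjacent symbols and replace its non-overlapping occurrences left to right by a fresh nonterminal.

```
from collections import Counter

def string_repair(s):
    # Repeatedly replace the currently most frequent pair of adjacent symbols
    # by a fresh nonterminal; pair frequencies are counted over adjacent
    # positions (overlaps included), which is a common simplification.
    seq, rules, next_id = list(s), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                      # a pair occurring once brings no gain
            break
        nt = "N%d" % next_id
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):               # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

print(string_repair("aaaaaa"))
# (['N1', 'N0'], {'N0': ('a', 'a'), 'N1': ('N0', 'N0')})
```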
Maneth et al. [15] and Roder et al. [18] both apply the RePair scheme to graphs and define digrams to be two edges sharing a common node. As explained in Maneth et al. [15], finding a largest possible set of non-overlapping occurrences for a single digram requires \(\mathcal{O}(|V|^{2}|E|)\) time and is thus infeasible. Instead, they and we present different approximations of how to define and find frequent digrams.
## 3 Preliminaries
A _ranked alphabet_\(\Sigma\) is a set of symbols for which we define a rank function \(rank:\Sigma\rightarrow\mathbb{N}\backslash\{0\}\) mapping each symbol to its arity.
Let \(L\) be a ranked alphabet called labels. \(G=(V_{G},E_{G})\) is called a _directed hypergraph_ with nodes \(V_{G}\) and edges \(E_{G}\) if the sets \(V_{G}\subset\mathbb{N}\) and \(E_{G}\subset L\times V_{G}^{*}\) are finite. We write \(a(v_{0},\ldots,v_{n-1})=e\) for an edge \(e\in E_{G}\) with \(label(e)=a\) and \(rank(e)=n\). The rank describes the number of nodes (including duplicates) that are connected to the edge. We write \(e[i]=v_{i}\) for the node \(v_{i}\) that is connected to \(e=a(v_{0},\ldots,v_{n-1})\in E_{G}\) and we call \(i\) the _connection-type_ of \(v_{i}\) to \(e\). We always assume \(0\leq i<n=rank(e)\). For example, an edge \(e=a(7,8)\in E_{G}\) has \(label(e)=a\), \(rank(e)=2\), and connection-type \(0\) for node \(e[0]=7\). We assume that for all symbols \(a\in L\) and for all edges \(e\in E\) with \(label(e)=a\), \(rank(e)=rank(a)\), that is, all edges with the same label have the same rank. Let \(\mathbb{G}\) be the set of all directed hypergraphs.
Similar to Maneth et al. [15], we define a _Hyperedge replacement grammar_ (HR grammar) as a tuple \(H=(T,N,P,S)\) with \(T\) and \(N\) being ranked alphabets with \(N\cap T=\emptyset\), \(L=T\cup N\), \(P\subset N\times\mathbb{G}\) and \(S\in N\). We write \(A\to G_{A}\) for \((A,G_{A})\in P\). We call an edge \(e\) terminal, if \(label(e)\in T\) and we call \(e\) nonterminal if \(label(e)\in N\). We call the right-hand side \(G_{S}\) of the grammar rule \(S\to G_{S}\) the start graph of the grammar.
An example for an HR grammar is shown in Figure 1. We will construct only _straight-line_ HR grammars (SL-HR grammars). For straight-line grammars, the following conditions hold: (1) for every nonterminal in \(N\) there exists exactly one rule in \(P\) and (2) the grammar is non-recursive. These grammars generate exactly one graph, which will be the uncompressed graph. We denote the uncompressed graph of the SL-HR grammar by \(\psi(G)\in\mathbb{G}\).
ITR differs from the approaches of Maneth et al. [15] and Roder et al. [18] by our succinct encoding of the edges that replaces loops implicitly. A _loop_ means that an edge has more than one connection to the same node. In Figure 1 (a) and (b) there are loops, both at node \(10\). Because loops are replaced in the succinct encoding, our digram definition does not distinguish whether or not edges have loops.
To define digrams, we first define the _incidence-type_ as the pair \((a,l)\) of an edge label \(a\in L\) and a connection-type \(l\in\mathbb{N}\) with \(l<rank(a)\). We define the set of all incidence-types \(IT=\{(label(e),i)|\exists e\in E,0\leq i<rank(e)\}\). For example, for each edge of rank \(2\), there are two connection types, where \(0\) is
equivalent to 'outgoing' from the source node and \(1\) is equivalent to 'incoming' to the destination node.
A _digram_\(d\) is a pair of two possibly equal incidence-types \(((a,l),(b,m))\). An _occurrence_ of a digram \(d=((a,l),(b,m))\in IT\times IT\) is a pair of two edges \(o=(e_{1},e_{2})\in E\times E\) with \(e_{1}\neq e_{2}\), \(label(e_{1})=a\), \(label(e_{2})=b\) and \(e_{1}[l]=e_{2}[m]\), so the two edges are different, fit to the labels, and share a common node. Let the set of all digrams be \(\mathbb{D}\).
An implication of our definition of digrams is that we only have \(3\) shapes of digrams for two edges of rank \(2\) in contrast to \(33\) or more shapes of Roder et al. [18]. In comparison, we save less space by replacing a single occurrence, but we have more occurrences to replace of a single digram.
To generalize the concept of neighborhood queries from edges of rank \(2\) to graphs that contain hyperedges as terminals, we add a connection-type to our definition. The _set of incident edges_ having connection-type \(i\) at a node \(v\) is the set \(n_{G}(v,i)=\{e\in E_{G}|e[i]=v\}\) of edges \(e\) of the graph \(G\). For example, in the case of a graph \(G\) that has only edges of rank \(2\), we compute the outgoing neighborhood of node \(v_{1}\) by computing \(n_{G}(v_{1},0)\) and return every distinct \(v_{2}\) for every edge \(a(v_{1},v_{2})\in n_{G}(v_{1},0)\) ignoring the label \(a\).
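As an illustration of these definitions (with an assumed edge-list layout, not ITR's compressed representation), the incident-edge set and a rank-2 outgoing-neighborhood query can be written in Python as follows.

```
# An edge a(v0, ..., v_{n-1}) is stored as the pair (label, (v0, ..., v_{n-1})).
EDGES = [("a", (9, 10)), ("b", (10, 11)), ("b", (10, 12)), ("a", (7, 8))]

def incident(edges, v, i):
    # n_G(v, i): all edges whose connection of type i is the node v
    return [e for e in edges if i < len(e[1]) and e[1][i] == v]

def out_neighbors(edges, v):
    # outgoing neighborhood of v for rank-2 edges: the distinct nodes found at
    # connection-type 1 of edges that have v at connection-type 0
    return sorted({nodes[1] for (_lbl, nodes) in incident(edges, v, 0) if len(nodes) == 2})

print(incident(EDGES, 10, 0))    # [('b', (10, 11)), ('b', (10, 12))]
print(out_neighbors(EDGES, 10))  # [11, 12]
```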
We apply our concepts to RDF graphs by regarding each triple \((s,p,o)\) as an edge \(p(s,o)\). As the order of the triples in an RDF graph is not of interest, all isomorphic copies of an RDF graph are regarded as equal.
## 4 Compression
We separate the graph structure from the text, as this separation is common practice [3][6][10][15][18]. The connection between both parts is established by IDs. We present the dictionary in Appendix A because we use well-known techniques based on the Burrows-Wheeler transformation [4].
#### 4.0.1 GraphRePair.
The RePair algorithm consists of the steps _replace digrams_ and _prune_. We omit presenting the prune step because it is a straightforward adaptation of the prune step from string RePair [11]. The base scheme of the replace-digrams step is shown in Algorithm 1. How to determine the most frequent digram is already discussed in Larsson et al. [11]. Hence, we only discuss the steps count (Line 1), update count (Line 5), and replace (Line 4).

Figure 1: Example replacement of digram \(((a,\ 1),(b,\ 0))\) and of the loop in \((A,0,1)\). \((a,\ 1)\) means an incoming edge with label \(a\) and \((b,\ 0)\) means an outgoing edge with label \(b\).
```
Input: grammar
1  count occurrences of digrams
2  mfd ← most frequent digram
3  while mfd reduces grammar size do
4      replace occurrences of mfd
5      update digram count
6      mfd ← select next mfd
```
**Algorithm 1**\(replaceDigrams\)
Line 1: count. To get the maximum number of non-overlapping occurrences of a digram, Maneth et al. [15] mention an algorithm with runtime \(\mathcal{O}(|V|^{2}|E|)\). Instead, we approximate the count of digrams in two steps. Digrams consist of two incidence-types, so we first count the frequency of incidence-types at each node by a single scan over all edges. The result is a mapping \(c:V\times IT\rightarrow\mathbb{N}\). For example, in Figure 1 (a) the node 10 has two outgoing edges with label \(b\), so \(c(10,(b,0))=2\). To define \(count:\mathbb{D}\rightarrow\mathbb{N}\), we use \(c\) in the following way: for every node \(v\) and for every two incidence-types \(i_{1}\) and \(i_{2}\) occurring at the node \(v\), we can determine the number of occurrences \(count_{v}(i_{1},i_{2})\) of the digram \((i_{1},i_{2})\) at the node \(v\) by
\[count_{v}(i_{1},i_{2}):=\begin{cases}\min(c(v,i_{1}),c(v,i_{2}))&\text{ if }i_{1}\neq i_{2}\\ \left\lfloor\frac{c(v,i_{1})}{2}\right\rfloor&\text{ if }i_{1}=i_{2}.\end{cases}\]
In Figure 1 (a), we have for example \(c(10,(b,0))=2\) and \(c(10,(a,0))=1\). From these two values of \(c\), we obtain that there is one occurrence of each of the digrams (\((b,0)\),\((b,0)\)) and (\((b,0)\),\((a,0)\)) and there is no occurrence of the digram \(((a,0),(a,0))\) at node 10. We define \(count:\mathbb{D}\rightarrow\mathbb{N}\) by
\[count((i_{1},i_{2}))=\sum_{v\in V}count_{v}(i_{1},i_{2})\]
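A small Python sketch of this two-step counting approximation (function and container names are ours; the edges are instances of the Edge class from the sketch above):

```
from collections import Counter
from itertools import combinations_with_replacement

def incidence_type_counts(edges):
    """c : (node, incidence-type) -> number of connections, via one scan over all edges."""
    c = Counter()
    for e in edges:
        for i, v in enumerate(e.nodes):
            c[(v, (e.label, i))] += 1
    return c

def digram_counts(edges):
    """Approximate count : digram -> number of occurrences, summed over all nodes."""
    c = incidence_type_counts(edges)
    per_node = {}
    for (v, it), k in c.items():
        per_node.setdefault(v, {})[it] = k
    count = Counter()
    for its in per_node.values():
        for i1, i2 in combinations_with_replacement(sorted(its), 2):
            count[(i1, i2)] += its[i1] // 2 if i1 == i2 else min(its[i1], its[i2])
    return count

# Edges chosen to reproduce the example above: c(10,(b,0)) = 2 and c(10,(a,0)) = 1.
edges = [Edge("b", (10, 11)), Edge("b", (10, 12)), Edge("a", (10, 13))]
cnt = digram_counts(edges)
assert cnt[(("a", 0), ("b", 0))] == 1 and cnt[(("b", 0), ("b", 0))] == 1
```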
Line 5: update count. We keep both \(c\) and \(count\) from the count step of Line 1. When we replace an edge \(e\), we consider all incidence-types \(i_{1}=(a,k)\) and nodes \(v\) of \(e\) such that \(label(e)=a\) and \(e[k]=v\), and reduce \(c(v,i_{1})\) by 1 accordingly. Then, we reduce the number \(count((i_{1},i_{2}))\) of occurrences of all digrams \(d=(i_{1},i_{2})\), for all incidence-types \(i_{2}\) with \(c(v,i_{2})>0\), by 1 if and only if
\[i_{1}\neq i_{2}\text{ and }c(v,i_{1})\leq c(v,i_{2})\text{ or }i_{1}=i_{2} \text{ and }c(v,i_{1})\text{ is even.}\]
This adjustment yields the same result as counting the number of digrams by calling Line 1 of Algorithm 1 again. The steps to increase the counts for the new nonterminal edge are analogous.
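A sketch of the decrement rule in Python (names ours). We evaluate the condition on the values of \(c\) before the decrement, which is the reading under which the adjustment agrees with a full recount:

```
def on_remove_connection(c, count, v, i1, incidence_types_at_v):
    """Update c and count when one connection of incidence-type i1 at node v disappears."""
    for i2 in incidence_types_at_v:
        if c[(v, i2)] <= 0:
            continue
        key = tuple(sorted((i1, i2)))          # digrams are keyed by the sorted pair
        if i1 != i2 and c[(v, i1)] <= c[(v, i2)]:
            count[key] -= 1
        elif i1 == i2 and c[(v, i1)] % 2 == 0:
            count[key] -= 1
    c[(v, i1)] -= 1                            # decrement c afterwards
```

The symmetric steps for the newly inserted nonterminal edge would increase the same counters.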
Line 4: replace. We find occurrences of a digram \(d=((a_{1},m_{1}),(a_{2},m_{2}))\) by a left-to-right scan of the edge list and by saving pointers to edges that have one of the labels of \(d\). Let \(e\) have \(label(e)=a_{1}\). Then, we store a pointer to \(e\) according to the node \(e[m_{1}]\). If we already had a pointer to \(f\) with \(label(f)=a_{2}\), \(f[m_{2}]=e[m_{1}]\), and \(e\neq f\), we remove the pointers of \(e\) and \(f\) and replace \((e,f)\) because it is an occurrence of \(d\). We require \(e\neq f\) to avoid replacing a loop as an occurrence. In Figure 1 (a), we replace the digram ((a, 1), (b, 0)), yielding the graph of Figure 1 (b). In Figure 1, we assume that the edge with label \(b\) from node \(10\) to node \(11\) occurs in the edge list before the edge from node \(10\) to node \(12\).
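A sketch of this left-to-right scan (names ours). Edges are addressed by their position in the edge list and each edge takes part in at most one occurrence; the extra bookkeeping needed when both incidence-types of the digram coincide is not spelled out above and is handled here only coarsely via the used-set:

```
def find_occurrences(edges, digram):
    """Pair up non-overlapping occurrences of digram ((a1, m1), (a2, m2)) in one scan."""
    (a1, m1), (a2, m2) = digram
    pending1, pending2 = {}, {}                 # shared node -> index of a waiting edge
    occurrences, used = [], set()
    for idx, e in enumerate(edges):
        if e.label == a1 and m1 < e.rank and idx not in used:
            v = e[m1]
            j = pending2.get(v)
            if j is not None and j != idx and j not in used:
                occurrences.append((idx, pending2.pop(v)))   # (a1-side, a2-side)
                used.update((idx, j))
            else:
                pending1[v] = idx
        if e.label == a2 and m2 < e.rank and idx not in used:
            v = e[m2]
            j = pending1.get(v)
            if j is not None and j != idx and j not in used:
                occurrences.append((pending1.pop(v), idx))
                used.update((idx, j))
            else:
                pending2[v] = idx
    return occurrences
```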
#### 4.0.2 Succinct Encoding.
Our encoding of the start graph \(G\) is based on \(k^{2}\)-trees [2] of the incidence-matrix \(M\) and is shown in Figure 2. First, we sort the edges by the ID of their label and encode the monotonically increasing list of the IDs by the Elias-Fano-encoding [19]. The incidence-matrix \(M\) has a \(1\) in row \(i\) and column \(j\) if and only if edge \(j\) is connected to node \(i\). For example, let \(e_{2}=(A,10,10,11)\) be an edge that is at position \(2\) in the sorted edges. Then, \(M\) contains a \(1\) at the positions \((10,1)\) and \((11,1)\) as in Figure 2, but it does not contain the information how often and at which positions in edge \(e_{2}\) the node \(10\) occurs. We introduce the _index-function_ to close this information gap.
Let \(\zeta_{e}\) be the duplicate-free and sorted list of nodes of \(e\). In the example, this would be \(\zeta_{e_{2}}=[10,11]\). Let \(n\) be the length of \(\zeta_{e}\). The _index-function_ \(\pi_{e}:\{0,\ldots,rank(e)-1\}\rightarrow\{0,\ldots,n-1\}\) maps each connection-type \(i\) of the edge \(e\) to the position of its node in \(\zeta_{e}\). Formally, \(e[i]=\zeta_{e}[\pi_{e}(i)]\). We write the index-function as a tuple \((\pi_{e}(0),\ldots,\pi_{e}(rank(e)-1))\). In the example, \(\pi_{e_{2}}=(0,0,1)\). Each edge can be uniquely reconstructed by its column in \(M\) together with its index-function and its label.
Figure 2: (a) shows a schematic image of the succinct encoding. One edge in the start rule is represented by three parts: a column of the incidence matrix, a label and an ID of an index-function. (b) shows the IDs and the corresponding index-functions and (c) shows how the index-function \(2\) stores the order and the repetitions of nodes \(\zeta_{e_{2}}=[10,11]\).
Instead of saving the same index-function multiple times, we assign IDs to all index-functions. We use the \(\delta\)-code [5] in order to encode \(\pi_{e}\) as \(\delta(rank(e)-1)\delta(\pi_{e}(0))\ldots\delta(\pi_{e}(rank(e)-1))\).
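A sketch of how \(\zeta_{e}\) and the index-function are computed from an edge (the function name is ours):

```
def zeta_and_index_function(e):
    """Return the duplicate-free sorted node list zeta_e and the index-function pi_e
    as a tuple, so that e[i] == zeta[pi[i]] for every connection-type i."""
    zeta = sorted(set(e.nodes))
    position = {v: k for k, v in enumerate(zeta)}
    pi = tuple(position[v] for v in e.nodes)
    return zeta, pi

# Example from the text: e2 = A(10, 10, 11) gives zeta_e2 = [10, 11] and pi_e2 = (0, 0, 1).
zeta, pi = zeta_and_index_function(Edge("A", (10, 10, 11)))
assert zeta == [10, 11] and pi == (0, 0, 1)
```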
Like Maneth et al. [15], we encode all graphs of rules except the start graph by only encoding the right-hand sides of the rules because the order of rules determines their nonterminal. We encode the number of edges and then, for each edge, the label and its nodes. For example, \(\{a(0,1)\ A(1,2,0)\}\) is encoded by \(\delta(2)\ \delta(a)\delta(0)\delta(1)\ \delta(A)\delta(1)\delta(2)\delta(0)\).
#### 4.0.3 Handling loops in the grammar.
In Figure 1 (b), we have an edge \(e=(A,10,10,11)\) that is connected twice to node \(10\) and thereby is a loop. We could introduce an extra rule \(B\rightarrow(A,0,0,1)\) and replace the edge \(e\) by \((B,10,11)\) yielding the graph of Figure 1 (c). This is a tradeoff: We introduce more rules, but we reduce the number of nodes that are used as a parameter in rules by \(1\) for each occurrence of such a loop. An evaluation of the implementation shows that these extra rules do not improve compression, because the index-function that is used in the succinct encoding also removes the duplicate parameters. We do not replace loops by introducing extra rules, because skipping this step reduces the time needed to compress a graph.
#### 4.0.4 Computing the set of incident edges.
Let \(v\in V\) be a node. We operate differently depending on whether we compute the set of incident edges \(n_{\psi(G)}(v,j)\) in the start graph or in another rule because they are encoded differently. In the start graph, we compute a subset \(S\) of edges by decompressing the row \(v\) of the incidence-matrix \(M\). This can be done without decompressing the whole incidence-matrix [2]. Only the edges whose column contains a \(1\) in row \(v\) are connected to node \(v\), so all other edges can be omitted. We decompress only these edges and continue with this newly computed edge list as described in the following part.
The set of incident edges \(n_{\psi(G)}(v,j)\) of a node \(v\) in the graph of a rule can be obtained by a left-to-right scan of the edge list of the rule. Let \(e\) be an edge. If \(e\) is a terminal with \(e[j]=v\), then add \(e\) to the result. If \(e\) is a nonterminal for which there exists \(i<rank(e)\) with \(e[i]=v\), decompress the edge \(e\) locally and recursively calculate the set of incident edges on the rule \(label(e)\). The compression of the grammar improves this operation, as every \(e\) with \(e[i]\neq v\) for all \(0\leq i<rank(e)\) can be omitted directly.
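A Python sketch of the recursive lookup on rule bodies (all names ours). It assumes, for illustration, that in a rule body the external parameter nodes are numbered \(0,\ldots,rank-1\) in connection-type order, while all other body nodes are internal:

```
import itertools

_FRESH = itertools.count(10**6)      # fresh IDs for nodes internal to a rule body

def expand_locally(e, grammar):
    """Replace the nonterminal edge e by the edges of its rule body (local decompression)."""
    mapping = {i: e[i] for i in range(e.rank)}
    def rename(v):
        if v not in mapping:
            mapping[v] = next(_FRESH)
        return mapping[v]
    return [Edge(b.label, tuple(rename(v) for v in b.nodes)) for b in grammar[e.label]]

def incident_edges(edge_list, grammar, v, j):
    """n(v, j): terminal edges e with e[j] == v; nonterminal edges not touching v
    are skipped without being expanded, which is the saving described above."""
    result = []
    for e in edge_list:
        if e.label not in grammar:                        # terminal edge
            if j < e.rank and e[j] == v:
                result.append(e)
        elif any(e[i] == v for i in range(e.rank)):       # nonterminal connected to v
            result += incident_edges(expand_locally(e, grammar), grammar, v, j)
    return result

# Toy grammar (illustrative, not the grammar of Figure 1): S -> {a(0,1), A(1,2)}, A -> {b(0,1)}.
grammar = {"A": [Edge("b", (0, 1))]}
start = [Edge("a", (0, 1)), Edge("A", (1, 2))]
assert [e.label for e in incident_edges(start, grammar, 1, 0)] == ["b"]
```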
## 5 Experimental results
We investigate the time needed to answer a neighborhood query. We compare results of the RDF compressors listed in Table 1. We tested all implementations on a virtual machine running Debian 5.10.149-2 with 32GB RAM and 2 Cores @ 2.30GHz.
ITR is implemented in C. As the \(k^{2}\)-approach of Brisaboa et al. [2] has no openly available implementation, we use the implementations in Java and C++ of Röder et al. [18]. We include all time needed for IO operations, as not all compressors report the time for performing the query without IO operations. For \(k^{2}\)-java, we include the network time to access localhost, as its queries are performed via the fuseki server1, but we do not include the time to load the data at the start of the server. hdt-java and hdt++ build additional indices once and save them using additional space on storage. We also exclude this time and space.
Footnote 1: [https://jena.apache.org/documentation/fuseki2/](https://jena.apache.org/documentation/fuseki2/)
Due to the different supported query types, we compare the runtime by using queries that are equivalent to outgoing neighborhood queries. We use SELECT ?pr ?ob WHERE { nodename ?pr ?ob } as the SPARQL query and nodename ? ? as the triple query. We query the outgoing neighborhood for the first 1000 nodes of a file and take the average time needed to answer the queries. In the case of the neighborhood queries, the output is the node IDs; the result of the SPARQL and triple queries is the node texts. See Appendix A for the conversion of node IDs and node labels of ITR.
In Figure 3, we see that gRePair, hdt-java, and \(k^{2}\)-java as Java or Scala implementations are outperformed by the C and C++ implementations. ITR is about 2 to 6 times faster than hdt++ and more than 100 times faster than gRePair.
\begin{table}
\begin{tabular}{l l l l} Name & Language & Provided Query Type & Paper \\ \hline ITR & C & Neighborhood & Our approach \\ RDFRePair & Java & - & Röder et al. [18] \\ gRePair & Scala & Neighborhood & Maneth et al. [15] \\ hdt-java & Java & SPARQL & Fernández et al. [6] \\ hdt++ & C++ & Triple & Hernández-Illera et al. [10] \\ \(k^{2}\)-java & Java & SPARQL & Brisaboa et al. [2] \\ \(k^{2}\)++ & C++ & - & Brisaboa et al. [2] \\ \end{tabular}
\end{table}
Table 1: RDF-compressors
\begin{table}
\begin{tabular}{l r r r r r} file & filesize in byte & nodes & edges & edge/node & labels \\ \hline homepages-en-ttl & 6080545 & 98665 & 50000 & 1,01 & 1 \\ transitive-redirects-en-ttl & 7361890 & 82385 & 50000 & 1,21 & 1 \\ external-links-en-ttl & 8037122 & 54869 & 50000 & 1,82 & 1 \\ instance-types-en-ttl & 7153350 & 48912 & 50000 & 2,04 & 1 \\ geo-coordinates-en-ttl & 7362056 & 46107 & 50000 & 2,16 & 4 \\ personala-en-ttl & 5883995 & 31594 & 50000 & 3,17 & 10 \\ jamendo-nt & 147103035 & 396531 & 1047951 & 5,29 & 25 \\ wikidata-20200308 & 6502308032 & 10051660 & 42922799 & 8,54 & 635 \\ archiveshub-nt & 243700760 & 280556 & 1361816 & 9,71 & 139 \\ scholarydata-dump-nt & 225455038 & 140042 & 1159985 & 16,57 & 84 \\ \end{tabular}
\end{table}
Table 2: used datasets and specific information about them. These datasets were also used in Röder et al. [18]. The datasets are sorted by the edge per node ratio.
Our runtime improvement in neighborhood queries can be explained as follows. In comparison to gRePair and RDFRePair, we choose digrams that do not replace nodes, such that we can directly access all nodes in the start graph. We benefit from using the incidence-matrix for all edges instead of splitting the incidence-matrix based on the edge label as done by gRePair, so we only need to look at a single \(k^{2}\)-compressed matrix before decompressing the edges. The index-function enables us to represent all connections of an edge to a node with a single 1 bit, and all these 1 bits are located in a single row, so they can be accessed directly, and edges that are not connected to the node are omitted early. This step is improved by the grammar-based compression that reduces the number of edges in the start graph.
In comparison to \(k^{2}\)-java, we benefit from using C instead of Java. We may also benefit from the reduced number of edges in the start graph, which leads to a smaller matrix that needs to be compressed by the \(k^{2}\) algorithm compared to the matrix used by \(k^{2}\)-java. The hdt format is too different from ours to pinpoint why we outperform it.
## 6 Conclusion
We have presented a new graph compression scheme ITR adapting the RePair compression scheme. We ran an evaluation comparing ITR with the existing variants of RePair on graphs, namely gRePair and RDFRePair, and with the compression schemes \(k^{2}\) and hdt, each available in different implementations. Our comparison of the time required to perform neighborhood queries shows that ITR answers them significantly faster than the other compressors that support these queries.
Figure 3: Average runtime in milliseconds of 1000 queries each. A missing bar indicates that the compression algorithm stopped for unknown reasons, so no output file exists to query.
## Acknowledgements
We would like to thank Fabian Rothlinger for his support in implementing ITR.
## Appendix A: Additional techniques
This appendix summarizes further details of our compression technique and additional results on compression size and time, showing that ITR achieves compression competitive with the other compressors.
### The dictionary
We use the _Burrows-Wheeler transformation_ [4] (BWT) for the dictionary. We store all labels in the same BWT using multistring BWTs. As the BWT is only a permutation of the input symbols and thus yields no compression by itself, we first apply run-length encoding (RLE) to the BWT. We store the run-length-compressed BWT string as a Huffman-encoded wavelet tree, as described in Navarro [16].
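For illustration, a toy version of the first two stages of this chain on a single string (the real dictionary uses multistring BWTs and stores the result as a Huffman-encoded wavelet tree, which is not reproduced here):

```
def bwt(s, terminator="\x00"):
    """Burrows-Wheeler transform of a single string via sorted rotations (toy version)."""
    s += terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def run_length_encode(s):
    """Run-length encode a string as (character, run length) pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

# The BWT groups equal characters together, so the subsequent RLE becomes shorter.
print(run_length_encode(bwt("abracadabra")))
```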
We call the functions of the dictionary \(locate(T)=i\), which takes a text \(T\) and returns the associated ID \(i\), and \(extract(i)=T\), which takes an ID \(i\) and returns the associated label \(T\). ITR needs about 2.5 milliseconds on average for a single \(locate\) query and likewise about 2.5 milliseconds on average for a single \(extract\) query. The number of \(locate\) and \(extract\) queries depends on the task; for example, checking the reachability of two RDF nodes requires only two \(locate\) queries.

In addition to \(locate\), we can search substrings in the dictionary with the FM-Index of Ferragina et al. [7]. To speed up the search, we sample the suffix array as described in Makinen [14], but we save the sampled suffix array as an indexable dictionary; that is, for each position in the sample we save the ID of the label that the suffix belongs to instead of the start position of the suffix in the label. For the evaluation of this paper, we turned the sampled suffix array off.
### Rank and select optimized bit-sequences
We use the rank and select optimized bit-sequences of Gonzalez et al. [9] and Raman et al. [17] for the compression of the wavelet tree nodes and the RLE. To compute rank, we split a bit-sequence \(B\) into blocks of length \(w=32\). The factor \(f\in\mathbb{N}\) is the number of blocks collected in a super block, and we choose \(f=64\) to get a low compression ratio. For every super block, we store \(R_{s}[i]=rank_{1}(B,i\cdot s)\), where \(s=f\cdot w\) is the super block size. Then, we can compute \(rank_{1}(B,i)\) and \(rank_{0}(B,i)\) in \(\mathcal{O}(1)\)[9]. We use the binary search approach of Gonzalez et al. [9] for select queries. As the overhead would exceed the speed-up advantage for short bit-sequences, we only use the technique of Gonzalez et al. [9] for bit-sequences \(B\) with \(length(B)>200\). The bits are encoded as RRR bit-sequences [17].
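A simplified Python sketch of such a rank structure (block and super-block counts only; it does not reproduce the exact layout or the compressed RRR representation of [9][17]):

```
class RankBitVector:
    """rank_1 support via absolute counts at super-block boundaries plus relative
    counts at block boundaries; w = 32 bits per block, f = 64 blocks per super block."""
    W, F = 32, 64

    def __init__(self, bits):
        self.bits = [int(b) for b in bits]
        self.block, self.super = [], []
        ones_abs = ones_rel = 0
        for i in range(0, len(self.bits), self.W):
            if (i // self.W) % self.F == 0:       # start of a new super block
                self.super.append(ones_abs)
                ones_rel = 0
            self.block.append(ones_rel)
            chunk = sum(self.bits[i:i + self.W])
            ones_abs += chunk
            ones_rel += chunk

    def rank1(self, i):
        """Number of 1s in bits[0:i]."""
        if i <= 0 or not self.bits:
            return 0
        b = min(i // self.W, len(self.block) - 1)
        s = b // self.F
        return self.super[s] + self.block[b] + sum(self.bits[b * self.W:i])

    def rank0(self, i):
        return i - self.rank1(i)

bv = RankBitVector("1101001110" * 20)
assert bv.rank1(10) == 6 and bv.rank0(10) == 4
```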
## Appendix B: Additional experimental results
We consider the compression ratio of the file size \(fsratio(G)=\frac{fs(G)}{fs(\psi(G))}\) with \(fs(G)\) being the file size of the grammar \(G\) and \(fs(\psi(G))\) being the file size of the decompressed RDF graph accordingly.
### Compression size
We say there is a noticeable difference in fsratio if, for a given file, two compressors differ by more than \(1\%=0.01\) in fsratio. First, hdt-java and hdt++ perform equally in fsratio. For most graphs, this is also true for \(k^{2}\)-java and \(k^{2}\)++. On some graphs, \(k^{2}\)-java performs noticeably better, because its dictionary is compressed better. This is interesting, as \(k^{2}\)-java and \(k^{2}\)++ use hdt-java and hdt++ to compress their dictionaries. We found no reason for this. Besides this, the syntactic compressors all perform very similarly, with a slight advantage for the \(k^{2}\) approaches. For the semantic compressors, it depends on the dataset which of them has the strongest compression and whether or not they outperform the syntactic compressors. As shown in Figure 4, ITR achieves compression comparable to the other compressors.
To guide further improvements, we sketch which parts of our compression need the most space. In most cases, the dictionary takes between 90% and 95% of the overall file size. This corresponds to the result of Röder et al. [18], who state that "more efficient dictionary compression approaches yield the potential for better RDF compression ratios". Regarding the graph structure, the \(k^{2}\) representation of the incidence-matrix consumes most of the space.
Figure 4: fsratio of selected RDF graphs. Missing bars indicate that the compression algorithm stopped for unknown reasons.
### Compression time
In Figure 5, we see that the C or C++ implementations in most cases outperform the Java or Scala implementations as we expected. hdt++ and \(k^{2}\)++ are about 10 times as fast as hdt-java and \(k^{2}\)-java. If we compare the syntactic compressors hdt-java and \(k^{2}\)-java with the semantic compressors gRePair and RDFRePair, all written in equally fast languages, we get the result that the syntactic compressors are about 10 times as fast as the semantic compressors. In most cases, our approach is only 5 to 8 times slower than hdt++ and \(k^{2}\)++.
ITR is the fastest semantic compressor in absolute time and, for graphs with an edge-per-node ratio of more than 2, it remains the fastest even when granting a bonus factor to compressors implemented in a slower programming language.
|
2305.18029
|
Faithfulness Tests for Natural Language Explanations
|
Explanations of neural models aim to reveal a model's decision-making process
for its predictions. However, recent work shows that current methods giving
explanations such as saliency maps or counterfactuals can be misleading, as
they are prone to present reasons that are unfaithful to the model's inner
workings. This work explores the challenging question of evaluating the
faithfulness of natural language explanations (NLEs). To this end, we present
two tests. First, we propose a counterfactual input editor for inserting
reasons that lead to counterfactual predictions but are not reflected by the
NLEs. Second, we reconstruct inputs from the reasons stated in the generated
NLEs and check how often they lead to the same predictions. Our tests can
evaluate emerging NLE models, proving a fundamental tool in the development of
faithful NLEs.
|
Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, Isabelle Augenstein
|
2023-05-29T11:40:37Z
|
http://arxiv.org/abs/2305.18029v2
|
# Faithfulness Tests for Natural Language Explanations
###### Abstract
Explanations of neural models aim to reveal a model's decision-making process for its predictions. However, recent work shows that current methods giving explanations such as saliency maps or counterfactuals can be misleading, as they are prone to present reasons that are unfaithful to the model's inner workings. This work explores the challenging question of evaluating the faithfulness of natural language explanations (NLEs). To this end, we present two tests. First, we propose a counterfactual input editor for inserting reasons that lead to counterfactual predictions but are not reflected by the NLEs. Second, we reconstruct inputs from the reasons stated in the generated NLEs and check how often they lead to the same predictions. Our tests can evaluate emerging NLE models, proving a fundamental tool in the development of faithful NLEs.
## 1 Introduction
Explanations of neural models aim to uncover the reasons behind model predictions in order to provide evidence on whether the model is trustworthy. To this end, explanations have to be _faithful_, i.e., reflect the decision-making process of the model, otherwise, they could be harmful (Hancox-Li, 2020). However, recent studies show that explanations can often be unfaithful, covering flaws and biases of the model. Adebayo et al. (2018) show that certain widely deployed explainability approaches that provide saliency maps (with importance scores for each part of the input, e.g., words or super-pixels) can even be _independent_ of the training data or of the model parameters. Others also question the effectiveness and reliability of counterfactuals (Slack et al., 2021), concept activations, and training point ranking explanations (Adebayo et al., 2022).
In this work, we investigate the degree of faithfulness of natural language explanations (NLEs), which explain model predictions with free text. NLEs are not constrained to contain only input segments, thus they provide more expressive (Camburu et al., 2021) and usually more human-readable explanations than, e.g., saliency maps (Wiegreffe and Marasovic, 2021). Evaluating the faithfulness of explanations is very challenging in general, as the ground-truth reasons used by a model for a prediction are usually unknown. Evaluating the faithfulness of NLEs is further complicated, as they often include words not present in the input. Thus, existing tests evaluating other types of explanations, e.g., saliency maps, cannot be directly applied to NLEs. As a stepping stone towards evaluating how faithful NLEs are, we design two tests. Our first test investigates whether NLE models are faithful to reasons for counterfactual predictions. We introduce a _counterfactual input editor_ that makes counterfactual interventions resulting in new instances on which the model prediction changes but the NLE does not reflect the intervention leading to the change. Our second test reconstructs an input from the reasons stated in a generated NLE, and checks whether the new input leads to a different prediction. We apply our tests to four NLE models over three datasets. We aim for our tests to be an important tool to assess the faithfulness of existing and upcoming NLE models.1
Footnote 1: The code is available at [https://github.com/copenlu/nle_faithfulness](https://github.com/copenlu/nle_faithfulness)
## 2 The Faithfulness Tests
Given a dataset \(X{=}(x_{i},e_{i},y_{i})\), with an input \(x_{i}\), a gold NLE \(e_{i}\), and a gold label \(y_{i}\in L\), where \(L\) is the set of all labels for \(X\), a model \(f\) is trained to produce an NLE and a task prediction for each input: \(f(x_{i})\) = (\(\hat{e_{i}}\), \(\hat{y_{i}}\)). We also refer to \(\hat{e_{i}}\) as \(f(x_{i})_{ex}\) and to \(\hat{y_{i}}\) as \(f(x_{i})_{p}\).
### The Counterfactual Test: Are NLE models faithful to reasons for counterfactual predictions?
Studies in cognitive science show that humans usually seek counterfactuals by looking for
factors that explain why event \(\mathcal{A}\) occurred instead of \(\mathcal{B}\) (Miller, 2019). Counterfactual explanations were proposed for ML models by making interventions either on the input (Wu et al., 2021; Ross et al., 2021) or on the representation space (Jacovi et al., 2021). An intervention \(h(x_{i},y_{i}^{C})=x_{i}^{\prime}\) is produced over an input instance \(x_{i}\) w.r.t. a target counterfactual label \(y_{i}^{C},y_{i}^{C}\neq\widehat{y_{i}}\), such that the model predicts the target label: \(f(x_{i}^{\prime})=\widehat{y_{i}^{\prime}}=y_{i}^{C}\).
For our test, we search for interventions that insert tokens into the input such that the model gives a different prediction, and we check whether the NLE reflects these tokens. Thus, we define an intervention \(h(x_{i},y_{i}^{C})=x_{i}^{\prime}\) that, for a given counterfactual label \(y_{i}^{C}\), generates a set of words \(W{=}\{w_{j}\}\) that, inserted into \(x_{i}\), produces a new instance \(x_{i}^{\prime}=\{x_{i,1},\ldots x_{i,k},W,\,x_{i,k+1},\ldots x_{i,|x_{i}|}\}\) such that \(f(x_{i}^{\prime})_{p}=y_{i}^{C}\). While one can insert each word in \(W\) at a different position in \(x_{i}\), here we define \(W\) to be a _contiguous_ set of words, which is computationally less expensive. As \(W\) is the counterfactual for the change in prediction, then at least one word from \(W\) should be present in the NLE for the counterfactual prediction:
\[\begin{split} h(x_{i},y_{i}^{C})=x_{i}^{\prime}\\ x_{i}^{\prime}=\{x_{i,1},\ldots x_{i,k},W,x_{i,k+1},\ldots x_{i,|x _{i}|}\}\\ f(h(x_{i},y_{i}^{C}))=f(x_{i}^{\prime})=y_{i}^{C}\neq\widehat{y_{i}}=f (x_{i})\\ \text{If }W\cap^{s}\widehat{e}_{i}^{\prime}=\emptyset,\text{ then }\widehat{e}_{i}^{\prime}\text{ is unfaithful},\end{split} \tag{1}\]
where the \(s\) superscript indicates that the operator is used at the semantic level. Sample counterfactual interventions satisfying Eq. 1 are in Table 1. More examples are in Tables 4 and 5 in the Appendix.
To generate the input edits \(W\), we propose an editor \(h\) as a neural model and follow Ross et al. (2021). The authors generate input edits that change the model prediction to target predictions and refer to these edits as explanations. We note that besides the input edits, confounding factors could cause the change in prediction, e.g., the edits could make the model change its focus towards other parts of the input and not base its decision on the edit itself. In this work, we presume that it is still important for the NLEs to point to the edits, as the model changed its prediction when the edit was inserted. This aligns with the literature on counterfactual explanations, where such edits are seen as explanations (Guidotti, 2022). We also hypothesize that confounding factors are rare, especially when insertions rather than deletions are performed. We leave such investigation for future work.
During the training of \(h\), we mask \(n_{1}\%\) tokens in \(x_{i}\), provide as an input to \(h\) the label predicted by the model, i.e., \(y_{i}^{C}=\widehat{y_{i}}\), and use the masked tokens to supervise the generation of the masked text (corresponding to \(W\)). During inference, we provide as target labels \(y_{i}^{C}\in Y,y_{i}^{C}\neq\widehat{y_{i}}\), and we search over \(n_{2}\) different positions to insert \(n_{3}\) candidate tokens at each position at a time. The training objective is the cross-entropy loss for generating the inserts.
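A sketch of how one training pair for the editor could be built (the exact input format of the T5 editor is not specified above, so the prefix and sentinel layout below are illustrative; `<extra_id_0>` is the standard T5 mask sentinel):

```
import random

def make_editor_example(tokens, predicted_label, rng=None):
    """Mask 1-3 consecutive tokens and supervise their generation, conditioned on the label."""
    rng = rng or random.Random(0)
    span_len = rng.randint(1, 3)
    start = rng.randrange(0, max(1, len(tokens) - span_len))
    masked = tokens[:start] + ["<extra_id_0>"] + tokens[start + span_len:]
    source = f"insert: label: {predicted_label} text: " + " ".join(masked)
    target = " ".join(tokens[start:start + span_len])        # the words W to generate
    return source, target

src, tgt = make_editor_example("a man plays an instrument".split(), "neutral")
```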
We use as a metric of unfaithfulness the percentage of the instances in the test set for which \(h\) finds counterfactual interventions that satisfy Eq. 1. To compute this automatically, we use \(\cap^{s}\) at the syntactical level. As paraphrases of \(W\) might appear in the NLEs, we manually verify a subset of NLEs. We leave the introduction of an automated evaluation for the semantic level for future work.
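A sketch of the syntactic-level check and the resulting metric (tokenization and names are ours):

```
import re

def is_unfaithful(inserted_words, nle_text):
    """Syntactic version of Eq. (1): the counterfactual NLE is flagged as unfaithful
    if none of the inserted words W appears in it."""
    nle_tokens = set(re.findall(r"\w+", nle_text.lower()))
    return not any(w.lower() in nle_tokens for w in inserted_words)

def unfaithfulness_rate(flipped, n_test_instances):
    """flipped: (inserted_words, counterfactual_nle) pairs, one per instance where the
    intervention changed the prediction; returns '% Total Unfaith' over the test set."""
    flagged = sum(is_unfaithful(w, nle) for w, nle in flipped)
    return 100.0 * flagged / n_test_instances

# Toy example in the spirit of Table 1: 'blue' changed the prediction but is absent from the NLE.
assert is_unfaithful(["blue"], "A man is not a tall person.")
```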
Our metric is not a complete measure of the overall faithfulness of the NLEs, as (1) we only check whether the NLEs are faithful to the reasons for counterfactual predictions, and (2) it depends on the performance of \(h\). But if \(h\) does not succeed in finding a significant number of counterfactual reasons not reflected in the NLEs, it could be seen as evidence of the faithfulness of the model's NLEs.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Test** & **Original Instance** & **Instance After Test Intervention** \\ \hline Counter- & _Premise:_ Man in a black suit, white shirt and black bowtie playing an factual (§2) & _Premise:_ Man in a black suit, white shirt and black bowtie playing an instrument with the rest of his symbynopy surrounding him. \\ test (§2) & _Hypothesis:_ A call person in a suit. \\ & _Prediction:_ neutral & _Prediction:_ contradiction \\ & _NLE:_ A man is not a tall person. \\ & _Unfaithfulness cause:_ inserted word ‘**blue’** \(\notin\) NLE but changed the prediction. \\ Input & _Premise:_ Many people standing outside of a place talking to each other in \\ recom- & front of a building that has a sign that says ‘Hi-POINITE. \\ & _Hypothesis:_ The people are having a chat before going into the work \\ test (§2) & building. \\ & _Prediction:_ neutral \\ & _NLE:_ Just because people are talking does not mean they are having a chat. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of unfaithful explanations detected with our tests for the task of NLI (see §2). We apply the tests on an original instance (second column), which results in a new instance (third column). The parts of the input changed by the test are marked, and the intervention made by the test is in blue. A marked NLE or prediction does not match the expectation, thus pointing to the underlined NLE as being unfaithful.
### The Input Reconstruction Test: Are the reasons in an NLE sufficient to lead to the same prediction as the one for which the NLE was generated?
Existing work points out that for an explanation to be faithful to the underlying model, the _reasons \(r_{i}\) in the explanation_ should be _sufficient_ for the model to make the same prediction as on the original input (Yu et al., 2019):
\[\begin{split} r_{i}=R(x_{i},\widehat{e_{i}})\\ \text{If }f(r_{i})_{p}\neq f(x_{i})_{p}\text{, then }\widehat{e_{i}} \text{ is unfaithful,}\end{split} \tag{2}\]
where \(R\) is the function that builds a new input \(r_{i}\) given \(x_{i}\) and \(\widehat{e_{i}}\). Sufficiency has been employed to evaluate saliency explanations, where the direct mapping between tokens and saliency scores allows \(r_{i}\) to be easily constructed (by preserving only the top-N most salient tokens) DeYoung et al. (2020); Atanasova et al. (2020). For NLEs, which lack such direct mapping, designing an automated extraction \(R\) of the reasons in \(\widehat{e_{i}}\) is challenging.
Here, we propose automated agents \(R\)s that are task-dependent. We build \(R\)s for e-SNLI Camburu et al. (2018) and ComVE Wang et al. (2020), due to the structure of the NLEs and the nature of these datasets. However, we could not construct an \(R\) for CoS-E Rajani et al. (2019). For e-SNLI, a large number of NLEs follow certain templates. Camburu et al. (2020) provide a list of templates covering 97.4% of the NLEs in the training set. For example, "<X> is the same as <Y>" is an NLE template for entailment. Thus, many of the generated NLEs also follow these templates. In our test, we simply use <X> and <Y> from the templates as the reconstructed pair of premise and hypothesis, respectively. We keep only those <X> and <Y> that are sentences containing at least one subject and at least one verb. If the NLE for the original input was faithful, then we expect the prediction for the reconstructed input to be the same as for the original.
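A sketch of this template-based reconstruction for a single entailment template (the regex and names are ours; the actual test uses the full template list of Camburu et al. (2020) together with the subject/verb checks mentioned above):

```
import re

# One e-SNLI entailment template from above: "<X> is the same as <Y>".
TEMPLATE = re.compile(r"^(?P<X>.+?) is the same as (?P<Y>.+?)\.?$", re.IGNORECASE)

def reconstruct_from_nle(nle):
    """Return a reconstructed (premise, hypothesis) pair, or None if the NLE does not match."""
    m = TEMPLATE.match(nle.strip())
    if m is None:
        return None
    return m.group("X").strip(), m.group("Y").strip()

# If the NLE was faithful, the model should again predict 'entailment' on this pair.
assert reconstruct_from_nle("A dog is the same as a canine.") == ("A dog", "a canine")
```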
Given two sentences, the ComVE task is to pick the one that contradicts common sense. If the generated NLE is faithful, replacing the correct sentence with the NLE should lead to the same prediction.
## 3 Experiments
Following Hase et al. (2020), we experiment with four setups for NLE models.
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Model** & **\%Counter** & \begin{tabular}{c} **\%Counter** \\ **Unfaith** \\ \end{tabular} &
\begin{tabular}{c} **\%Total** \\ **Unfaith** \\ \end{tabular} \\ \hline \multicolumn{4}{c}{**e-SNLI**} \\ MT-Re-Rand & 38.85 & **60.39** & 23.46 \\ MT-Re-Edit & **66.70** & 46.12 & **26.15** \\ _MT-Re-Rand+Edit_ & _64.98_ & _53.29_ & _34.63_ \\ ST-Re-Rand & 37.14 & **54.26** & 20.15 \\ ST-Re-Edit & **49.64** & 52.74 & **26.18** \\ _ST-Re-Rand+Edit_ & _61.75_ & _58.27_ & _35.63_ \\ MT-R-Rand & 37.17 & **54.93** & 20.42 \\ MT-R-Edit & **55.04** & 41.34 & **22.75** \\ _MT-R-Rand+Edit_ & _63.84_ & _48.63_ & _31.05_ \\ ST-R-Rand & 35.21 & **57.82** & 20.36 \\ ST-R-Edit & **60.00** & 45.66 & **27.39** \\ _ST-R-Rand+Edit_ & _57.31_ & _55.03_ & _37.04_ \\ \multicolumn{4}{c}{**CoS-E**} \\ MT-Re-Rand & 44.89 & **83.18** & 37.34 \\ MT-Re-Edit & **50.00** & 77.23 & **38.62** \\ _MT-Re-Rand+Edit_ & _59.89_ & _85.26_ & _51.06_ \\ ST-Re-Rand & 52.34 & 79.47 & 41.60 \\ ST-Re-Edit & **53.83** & **86.17** & **46.38** \\ _ST-Re-Rand+Edit_ & _67.45_ & _87.54_ & _59.04_ \\ MT-R-Rand & 39.26 & **84.01_ & _32.98_ \\ MT-R-Edit & **50.00** & 78.72 & **39.36** \\ _MT-R-Rand+Edit_ & _56.81_ & _85.58_ & _48.62_ \\ ST-R-Rand & 46.70 & **75.85** & _35.43_ \\ ST-R-Edit & **52.02** & 75.05 & **39.04** \\ _ST-R-Rand+Edit_ & _63.62_ & _81.77_ & _52.02_ \\ \multicolumn{4}{c}{**ComVE**} \\ MT-Re-Rand & 35.60 & **83.43** & 29.70 \\ MT-Re-Edit & **50.90** & 70.53 & **35.90** \\ _MT-Re-Rand+Edit_ & _61.10_ & _78.89_ & _48.20_ \\ ST-Re-Rand & 41.90 & 74.22 & 31.10 \\ ST-Re-Edit & **48.40** & **76.45** & **37.00** \\ _ST-Re-Rand+Edit_ & _62.90_ & _77.42_ & _48.70_ \\ MT-R-Rand & 33.70 & **75.67** & 25.50 \\ MT-R-R-Edit & **47.20** & 66.53 & **31.40** \\ _MT-R-Rand+Edit_ & _58.10_ & _73.84_ & _42.90_ \\ ST-R-Rand & 36.30 & **80.17** & 29.10 \\ ST-R-Edit & **49.50** & 79.80 & **39.50** \\ _ST-Re-Rand+Edit_ & _61.80_ & _83.98_ & _51.90_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results for the **counterfactual test**. For each setup (Eq. 3), we include the results of the random baseline (Rand), the counterfactual editor (Edit), and their union (Rand+Edit). The “% Counter” column indicates the editor’s success in finding inserts that change the model’s prediction. “% Counter Unfaith” presents the percentage of instances where the inserted text was not found in the associated NLE among the instances where the prediction was changed. “% Total Unfaith” presents the percentage of instances where the prediction was changed and the inserted text was not found in the associated NLE among all the instances in the test set. The highest rates of success in each pair of (Rand, Edit) tests are in bold. The highest total percentage of detected unfaithful NLEs for each dataset is underlined.
\begin{table}
\begin{tabular}{l l r r} \hline \hline & **Model** & **\%Reconst** & **\%Total Unfaith** \\ \hline
**e-SNLI** & MT-Re & 39.49 & 7.7 \\ & ST-Re & 39.99 & **9.7** \\ & MT-Ra & 44.87 & 7.8 \\ & ST-Ra & 43.32 & 9.3 \\
**ComVE** & MT-Re & 100 & 36.9 \\ & ST-Re & 100 & 22.7 \\ & MT-Ra & 100 & **40.3** \\ & ST-Ra & 100 & 28.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for the **input reconstruction test**. “% Reconst” shows the percentage of instances for which we managed to form a reconstructed input. “% Total Unfaith” shows the total percentage of unfaithful NLEs found among all instances in the test set of each dataset. The highest detected percentage of unfaithful NLEs for each dataset is in bold.
The four setups can be grouped by whether the prediction and NLE generation are trained with a multi-task objective using a joint model (MT) or with single-task objectives using separate models (ST). They can also be grouped by whether they generate NLEs conditioned on the predicted label (rationalizing models (Ra)) or not conditioned on it (reasoning models (Re)). The general notation \(f(x_{i})=(\widehat{e_{i}},\widehat{y_{i}})\) used in §2 includes all four setups:
\[\begin{split}\textbf{MT-Re:}\ f_{p,ex}(x_{i})=(\widehat{e_{i}},\widehat{y_{i}})\\ \textbf{MT-Ra:}\ f_{p,ex}(x_{i})=(\widehat{e}_{i|\widehat{y_{i}}},\widehat{y_{i}})\\ \textbf{ST-Re:}\ f_{ex}(x_{i})=\widehat{e_{i}};\ f_{p}(x_{i},\widehat{e_{i}})=\widehat{y_{i}}\\ \textbf{ST-Ra:}\ f_{ex}(x_{i},y_{j})=\widehat{e}_{i,j};\ f_{p}(x_{i},\widehat{e}_{i,j})=\widehat{y_{j}}\\ j=\operatorname*{argmax}_{j\in[1,\dots,|L|]}(f_{p}(x_{i},\widehat{e}_{i,j})),\end{split} \tag{3}\]
where \(f_{p,ex}\) is a joint model for task prediction and NLE generation, \(f_{p}\) is a model only for task prediction, and \(f_{ex}\) is a model only for NLE generation. The ST-Ra setup produces one NLE \(e_{i,j}\) for each \(y_{j}\in L\). Given \(\widehat{e_{i,j}}\) and \(x_{i}\), \(f_{p}\) predicts the probability of the corresponding label \(y_{j}\) and selects as \(\widehat{y_{i}}\) the label with the highest probability.
For both \(f\) and the editor \(h\), we employ the pre-trained T5-base model (Raffel et al., 2020). The editor uses task-specific prefixes for insertion and NLE generation. We train both \(f\) and \(h\) for 20 epochs, evaluate them on the validation set at each epoch, and select the checkpoints with the highest success rate (see SS2). We use a learning rate of 1e-4 with the Adam optimizer (Kingma and Ba, 2014). For the editor, during training, we mask \(n_{1}\) consecutive tokens with one mask token, where \(n_{1}\) is chosen at random in \([1,3]\). During inference, we generate candidate insertions for \(n_{2}=4\) random positions, with \(n_{3}=4\) candidates for each position at a time. The hyper-parameters are chosen with a grid search over the validation set.4 For the manual evaluation, an author annotated the first 100 test instances for each model (800 in total). The manual evaluation has been designed in accordance with related work (Camburu et al., 2018), which also evaluated 100 instances per model. We found that no instances were using paraphrases. Hence, in our work, the automatic metric can be trusted.
Footnote 4: When \(n_{2}\) and \(n_{3}\) are increased, a higher number of insertions are generated, which in turn could result in a higher percentage of unfaithful NLEs. However, increasing these parameters also leads to higher computational demands. Future research could explore strategies for efficiently searching the space of insertion candidates.
**Baseline.** For the counterfactual test, we incorporate a random baseline as a comparison. Specifically, we insert a random adjective before a noun or a random adverb before a verb. We randomly select \(n_{2}=4\) positions where we insert the said words, and, for each position at a time, we consider \(n_{3}=4\) random candidate words. The candidates are single words randomly chosen from the complete list of adjectives or adverbs available in WordNet (Fellbaum, 2010). We identify the nouns and verbs in the text with spaCy (Honnibal et al., 2020).
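A sketch of this random baseline (names and candidate-selection details are ours and may differ from the paper's implementation; the spaCy model and the WordNet corpus have to be downloaded beforehand):

```
import random
import spacy                               # requires: python -m spacy download en_core_web_sm
from nltk.corpus import wordnet as wn      # requires: nltk.download('wordnet')

nlp = spacy.load("en_core_web_sm")
ADJECTIVES = sorted({l for s in wn.all_synsets(wn.ADJ) for l in s.lemma_names() if "_" not in l})
ADVERBS = sorted({l for s in wn.all_synsets(wn.ADV) for l in s.lemma_names() if "_" not in l})

def random_insertions(text, n_positions=4, n_candidates=4, seed=0):
    """Insert a random adjective before a noun or a random adverb before a verb,
    at up to n_positions randomly chosen places with n_candidates words per place."""
    rng = random.Random(seed)
    doc = nlp(text)
    slots = [(t.i, ADJECTIVES) for t in doc if t.pos_ == "NOUN"]
    slots += [(t.i, ADVERBS) for t in doc if t.pos_ == "VERB"]
    edited = []
    for i, vocab in rng.sample(slots, min(n_positions, len(slots))):
        for w in rng.sample(vocab, n_candidates):
            tokens = [t.text for t in doc]
            edited.append(" ".join(tokens[:i] + [w] + tokens[i:]))
    return edited
```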
**Datasets.** We use three popular datasets with NLEs: e-SNLI (Camburu et al., 2018), CoS-E (Rajani et al., 2019), and ComVE (Wang et al., 2020). e-SNLI contains NLEs for SNLI (Bowman et al., 2015), where, given a premise and a hypothesis, one has to predict whether they are in a relationship of _entailment_ (the premise entails the hypothesis), _contradiction_ (the hypothesis contradicts the premise), or _neutral_ (neither entailment nor contradiction hold). CoS-E contains NLEs for commonsense question answering, where given a question, one has to pick the correct answer out of three given options. ComVE contains NLEs for commonsense reasoning, where given two sentences, one has to pick the one that violates common sense.
### Results
**Counterfactual Test.** Table 2 shows the results of our counterfactual test. First, we observe that when the random baseline finds words that change the prediction of the model, the words are more often not found in the corresponding NLE compared to the counterfactual editor (% Counter Unfaith). We conjecture that this is because the randomly selected words are rare for the dataset compared to the words that the editor learns to insert. Second, the counterfactual editor is better at finding words that lead to a change in the model's prediction, which in turn results in a higher percentage of unfaithful instances in general (% Total Unfaith). We also observe that the insertions \(W\) lead to counterfactual predictions for up to 56.70% of the instances (for MT-Re-Edit on e-SNLI). For up to 46.38% of the instances (for ST-Re-Edit on CoS-E), the editor is able to find an insertion for which the counterfactual NLE is unfaithful. Table 1, row 1, presents one such example. More examples for the random baseline can be found in Table 4, and for the counterfactual editor in Table 5. Finally, for the union of the counterfactual interventions discovered by the random baseline and the editor, we observe total percentages of unfaithfulness to the counterfactual of up to 59.04%.
We see that for all datasets and models, the total percentages of unfaithfulness to counterfactual are high, between 37.04% (for MT-Ra-Rand+Edit on e-SNLI) and 59.04% (ST-Re-Rand+Edit for CoS-E). We re-emphasize that this should not be interpreted as an overall estimate of unfaithfulness, as our test is not complete (see SS2).
**The Input Reconstruction Test.** Table 3 shows the results of the input reconstruction test. We were able to reconstruct inputs for up to 4487 out of the 10K test instances in e-SNLI, and for all test instances in ComVE. There are, again, a substantial number of unfaithful NLEs: up to 14% for e-SNLI, and up to 40% for ComVE. An example is in Table 1, row 2. More examples can be found in Table 6. We also notice that this test identified considerably more unfaithful NLEs for ComVE than for e-SNLI, while for our first test, the gap was not as pronounced. This shows the utility of developing diverse faithfulness tests.
Finally, all four types of models had similar faithfulness results5 on all datasets and tests, with no consistent ranking among them. This opposes the intuition that some configurations may be more faithful than others, e.g., Camburu et al. (2018) hypothesized that ST-Re may be more faithful than MT-Re, which holds in most but not all cases, e.g., on CoS-E the editor finds more unfaithfulness for ST-Re (44.04%) than for MT-Re (42.76%). We also observe that Re models tend to be less faithful than Ra models in most cases.
Footnote 5: Task accuracy and NLE quality are given in Table 7.
## 4 Related Work
**Tests for Saliency Maps.** The faithfulness and, more generally, the utility of explanations were predominantly explored for saliency maps. Comprehensiveness and sufficiency DeYoung et al. (2020) were proposed for evaluating the faithfulness of existing saliency maps. They measure the decrease in a model's performance when only the most or the least important tokens are removed from the input. Madsen et al. (2022) propose another faithfulness metric for saliency maps, ROAR, obtained by masking allegedly important tokens and then retraining the model. In addition, Yin et al. (2022) and Hsieh et al. (2021) evaluate saliency maps through adversarial input manipulations presuming that model predictions should be more sensitive to manipulations of the more important input regions as per the saliency map. Chan et al. (2022) provide a comparative study of faithfulness measures for saliency maps. Further faithfulness testing for saliency maps was introduced by Camburu et al. (2019). Existing studies also pointed out that saliency maps can be manipulated to hide a classifier's biases towards dataset properties such as gender and race (Dombrowski et al., 2019; Slack et al., 2020; Anders et al., 2020). While diagnostic methods for saliency maps rely on the one-to-one correspondence between the saliency scores and the regions of the input, this correspondence is not present for NLEs, where text not in the input can be included. Thus, diagnostic methods for saliency maps are not directly applicable to NLEs. To this end, we propose diagnostic tests that can be used to evaluate NLE model faithfulness.
**Tests for NLEs.** Existing work often only looks at the plausibility of the NLEs (Rajani et al., 2019; Kayser et al., 2021; Marasovic et al., 2022; Narang et al., 2020; Kayser et al., 2022; Yordanov et al., 2022). In addition, Sun et al. (2022) investigated whether the additional context available in human- and model-generated NLEs can benefit model prediction as they benefit human users. Differently, Hase et al. (2020) proposed to measure the utility of NLEs in terms of how well an observer can simulate a model's output given the generated NLE. The observer could be an agent (Chan et al., 2022) or a human (Jolly et al., 2022; Atanasova et al., 2020). The only work we are aware of that introduces sanity tests for the faithfulness of NLEs is that of Wiegreffe et al. (2021), who suggest that an association between labels and NLEs is necessary for faithful NLEs and propose two pass/fail tests: (1) whether the predicted label and generated NLE are similarly robust to noise, (2) whether task prediction and NLE generation share the most important input tokens for each. Majumder et al. (2022) use these tests as a sanity check for the faithfulness of their model. Our tests are complementary and offer quantitative metrics.
## 5 Summary and Outlook
In this work, we introduced two tests to evaluate the faithfulness of NLE models. We find that all four high-level setups of NLE models are prone to generate unfaithful NLEs, reinforcing the need for proof of faithfulness. Our tests can be used to ensure the faithfulness of emerging NLE models and inspire the community to design complementary faithfulness tests.
## Limitations
While our tests are an important stepping stone for evaluating the faithfulness of NLEs, they are not comprehensive. Hence, a model that would perform perfectly on our tests may still generate unfaithful NLEs.
Our first test inspects whether NLE models are faithful to reasons for counterfactual predictions. It is important to highlight that NLEs may not comprehensively capture all the underlying reasons for a model's prediction. Thus, an NLE that fails to accurately represent the reasons for counterfactual predictions may still offer faithful explanations by reflecting other relevant factors contributing to the predictions. Additionally, both the random baseline and the counterfactual editor can generate insertions that result in text lacking semantic coherence. To address this limitation, future research can explore methods to generate insertion candidates that are both semantically coherent and reveal unfaithful NLEs.
Our second test uses heuristics that are task-dependent and may not be applicable to any task. The reconstruction functions \(R\)s proposed in this work are based on hand-crafted rules for the e-SNLI and ComVE datasets. However, due to the nature of the CoS-E NLEs, rule-based input reconstructions were not possible for this dataset. To address this limitation, future research could investigate automated reconstruction functions that utilize machine learning models. These models would be trained to generate reconstructed inputs based on the generated NLEs, where a small number of annotations would be provided as training instances. For example, for CoS-E, one such training annotation could be: _Original Question:_ After getting drunk people couldn't understand him, it was because of his what? _Choices:_ lower standards, slurred speech, or falling down. _Answer:_ slurred speech. _NLE:_ People who are drunk have difficulty speaking. \(\rightarrow\)_Reconstructed Question:_ What do drunk people have difficulty with? _Reconstructed Choices:_ lower standards, speaking, or falling down. This approach would enable the development of machine learning models capable of generating reconstructed inputs for various datasets.
## Acknowledgements
The research documented in this paper has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 801199. Isabelle Augenstein's research is further partially funded by a DFF Sapere Aude research leader grant under grant agreement No 0171-00034B, as well as by the Pioneer Centre for AI, DNRF grant number P1. Thomas Lukasiewicz was supported by the Alan Turing Institute under the UK EPSRC grant EP/N510129/1, the AXA Research Fund, and the EU TAILOR grant 952215. Oana-Maria Camburu was supported by a UK Leverhulme Early Career Fellowship. Christina Lioma's research is partially funded by the Villum and Velux Foundations Algorithms, Data and Democracy (ADD) grant.
|
2310.15963
|
Paralinearization and extended lifespan for solutions of the $ α
$-SQG sharp front equation
|
In this paper we paralinearize the contour dynamics equation for sharp-fronts
of $\alpha$-SQG, for any $ \alpha \in (0,1) \cup (1,2) $, close to a circular
vortex. This turns out to be a quasi-linear Hamiltonian PDE. After deriving the
asymptotic expansion of the linear frequencies of oscillations at the vortex
disk and verifying the absence of three wave interactions, we prove that, in
the most singular cases $ \alpha \in (1,2) $, any initial vortex patch which is
$ \varepsilon $-close to the disk exists for a time interval of size at least $
\sim \varepsilon^{-2} $. This quadratic lifespan result relies on a
paradifferential Birkhoff normal form reduction and exploits cancellations
arising from the Hamiltonian nature of the equation. This is the first normal
form long time existence result of sharp fronts.
|
Massimiliano Berti, Scipio Cuccagna, Francisco Gancedo, Stefano Scrobogna
|
2023-10-24T16:05:38Z
|
http://arxiv.org/abs/2310.15963v1
|
# Paralinearization and extended lifespan for solutions of the \(\alpha\)-SQG sharp front equation
###### Abstract
In this paper we paralinearize the contour dynamics equation for sharp-fronts of \(\alpha\)-SQG, for any \(\alpha\in(0,1)\cup(1,2)\), close to a circular vortex. This turns out to be a quasi-linear Hamiltonian PDE. After deriving the asymptotic expansion of the linear frequencies of oscillations at the vortex disk and verifying the absence of three wave interactions, we prove that, in the most singular cases \(\alpha\in(1,2)\), any initial vortex patch which is \(\varepsilon\)-close to the disk exists for a time interval of size at least \(\sim\varepsilon^{-2}\). This quadratic lifespan result relies on a paradifferential Birkhoff normal form reduction and exploits cancellations arising from the Hamiltonian nature of the equation. This is the first normal form long time existence result of sharp fronts.
_Keywords: \(\alpha\)-SQG equations, vortex patches, paradifferential calculus, Birkhoff normal form._
###### Contents
* 1 Introduction and main results
* 2 Functional setting
* 2.1 Paradifferential calculus
* 2.2 \(z\)-dependent paradifferential calculus
* 3 The linearized problem at \(f=0\)
* 4 Paralinearization of the Hamiltonian scalar field
* 4.1 Isolating the integral terms
* 4.2 Analysis of the nonlinear convolution kernels
* 4.3 Paralinearization of the quasilinear integral term \(\mathcal{I}(f)\)
* 4.4 Paralinearization of the quasilinear integral term \(\mathcal{J}(f)\)
* 4.5 Proof of Theorem 4.1
* 5 Birkhoff normal form reduction up to cubic terms
* A Proof of Equation (4.110)
* B Conjugation of paradifferential operators under flows
## 1 Introduction and main results
In this paper we consider the generalized surface quasi-geostrophic \(\alpha\)-SQG equations
\[\partial_{t}\theta(t,\zeta)+u(t,\zeta)\cdot\nabla\theta(t,\zeta)=0,\quad(t, \zeta)\in\mathbb{R}\times\mathbb{R}^{2}\,, \tag{1.1}\]
with velocity field
\[u:=\nabla^{\perp}\left|D\right|^{-2+\alpha}\theta\,,\qquad|D|:=(-\Delta)^{\frac{1} {2}}\,,\qquad\alpha\in(0,2)\,. \tag{1.2}\]
This class of active scalar equations has been introduced in [28, 68] and, for \(\alpha\to 0\), formally reduces to the 2D-Euler equation in vorticity formulation (in this case \(\theta\) is the vorticity of the fluid). The case \(\alpha=1\) is the surface quasi-geostrophic (SQG) equation in [23] which models the evolution of the temperature \(\theta\) for atmospheric and oceanic flows.
For the 2D Euler equation global-in-time well-posedness results are well known for either regular initial data, see e.g. [65, 22], as well as for \(L^{1}\cap L^{\infty}\) initial vorticities, thanks to the celebrated Yudovich Theorem [77]. This result is based on the fact that the vorticity is transported by the particles of fluid along the velocity field, which turns out to be log-Lipschitz, and thus it defines a global flow on the plane. On the other hand, for \(\alpha>0\), an analogous result does not hold because the velocity field \(u\) in (1.2) is more singular and does not define a flow. Nevertheless local in time smooth solutions exist thanks to a nonlinear commutator structure of the vector field for \(\alpha=1\), in [24], and for \(\alpha\in(1,2)\), in [21]. For \(\alpha=1\), the works [26, 27] rule out the possible formation of certain kind of singularities but the question of whether a finite-time singularity may develop from a smooth initial datum remains open. In this context we mention the construction in [45] of solutions that must either exhibit infinite in time growth of derivatives or blow up in finite time.
Existence of global weak \(L^{p}\) solutions has been obtained by energy methods for \(\alpha=1\), if \(p>4/3\), in [66, 70], and for \(\alpha\in(1,2)\) if \(p=2\), in [21]. For \(\alpha\in(0,1]\) global weak solutions exist also in \(L^{1}\cap L^{2}\) as proved in [63]. We also mention that non-unique weak solutions of SQG have been constructed by convex integration techniques in [14, 58].
A particular type of weak solutions are the _vortex patches_ -also called _sharp fronts_- which are given by the characteristic function of an evolving domain
\[\theta\left(t,\zeta\right):=\begin{cases}1&\text{if }\zeta\in D(t)\,,\\ 0&\text{if }\zeta\notin D(t)\,.\end{cases}\qquad D(t)\subset\mathbb{R}^{2}\,. \tag{1.3}\]
The vortex patch problem (1.3) can be described by the evolution of the interface \(\partial D(t)\) only. The simplest example of a finite energy vortex patch is the circular "Rankine" vortex which is the circular steady solution with \(D(t)=D(0)=\{|\zeta|\leq 1\}\) at any time \(t\). On the other hand, since for \(\alpha\in(0,2)\) there is no analogue of the Yudovich theorem, establishing the local existence theory for nearby sharp fronts is also a difficult task. In the last few years special global in time sharp-front solutions of \(\alpha\)-SQG close to the Rankine vortex have been constructed: the uniformly rotating \(V\)-states in [16, 17, 40, 43], as well as time quasi-periodic solutions in [42] for \(\alpha\in(0,\frac{1}{2})\), and in [41] for \(\alpha\in(1,2)\). We quote further literature after the statement of Theorem 1.1.
In this work we prove the first long-time existence result of sharp fronts of \(\alpha\)-SQG, in the more singular cases \(\alpha\in(1,2)\), for any initial interface \(\partial D(0)\) sufficiently smooth and close to a circular Rankine vortex, see Theorem 1.1. This is achieved thanks to the paralinearization result of the \(\alpha\)-SQG sharp front equation in Theorem 4.1 for any \(\alpha\in(0,1)\cup(1,2)\), that we consider of independent interest in itself.
Let us present precisely our main results. The evolution of the boundary of the vortex patch is governed by the _Contour Dynamics Equation_ for a parametrization \(X:\mathbb{T}\to\mathbb{R}^{2}\), \(x\mapsto X(t,x)\), with \(\mathbb{T}:=\mathbb{R}/2\pi\mathbb{Z}\), of the boundary \(\partial D(t)\) of the vortex patch. The Contour Dynamics Equation for the \(\alpha\)-SQG patch -also called sharp-fronts equation- is
\[\partial_{t}X\left(t,x\right)=\frac{c_{\alpha}}{2\pi}\int\frac{X^{\prime}(t,x )-X^{\prime}\left(t,y\right)}{\left|X\left(t,x\right)-X\left(t,y\right)\right| ^{\alpha}}\,\mathrm{d}y\,,\quad\alpha\in(0,2)\, \tag{1.4}\]
where \({}^{\prime}\) denotes the derivative with respect to \(x\),
\[c_{\alpha}:=\frac{\Gamma\left(\frac{\alpha}{2}\right)}{2^{1-\alpha}\Gamma \left(1-\frac{\alpha}{2}\right)} \tag{1.5}\]
and \(\Gamma(\cdot)\) is the Euler-Gamma function. The local solvability of Equation (1.4) in Sobolev class has been proved in [36] for \(\alpha\in(0,1]\), if the initial datum belongs to \(H^{s}\), \(s\geq 3\) and in [37, 38] for less regular initial data (see [71] for \(C^{\infty}\) data). The uniqueness has been established in [25]. For \(\alpha\in(1,2)\) the local existence and uniqueness theory has been proved in [21, 38] for initial data in \(H^{s}\), \(s\geq 4\), see also [1, 62]. In the very recent work [59] it is proved that the \(\alpha\)-patch problem is ill posed in \(W^{k,p}\) if \(p\neq 2\).
Very little is known concerning long time existence. Indeed, highly unstable dynamical behaviour may emerge. In this context we mention the remarkable work [61] where two smooth patches of opposite sign develop a finite-time particle collision. We also quote the numerical study [73] which provides some evidence of the development of filaments, pointing to a possible formation of singularities via a self-similar filament cascade.
In this paper we consider sharp fronts of \(\alpha\)-SQG that are a radial perturbation of the unitary circle, i.e.
\[X\left(x\right)=\left(1+h\left(x\right)\right)\widetilde{\gamma}\left(x \right)\,,\qquad\quad\widetilde{\gamma}\left(x\right):=\left(\cos(x),\sin(x) \right). \tag{1.6}\]
Since only the normal component of the velocity field deforms the patch, one derives from (1.4) a scalar evolution equation for \(h(x)\). Multiplying (1.4) by the normal vector \(n(x)=h^{\prime}(x)\widetilde{\gamma}^{\prime}(x)-\left(1+h(x)\right)\widetilde {\gamma}(x)\) to the boundary of the patch at \(X(x)\), we deduce that \(h(t,x)\) solves the equation
\[\begin{split}-\left(1+h\left(x\right)\right)\partial_{t}h\left(x\right)&=\frac{c_{\alpha}}{2\pi}\int\frac{\cos\left(x-y\right)\left[\left(1+h\left(x\right)\right)h^{\prime}\left(y\right)-\left(1+h\left(y\right)\right)h^{\prime}\left(x\right)\right]}{\left[\left(1+h\left(x\right)\right)^{2}+\left(1+h\left(y\right)\right)^{2}-2\left(1+h\left(x\right)\right)\left(1+h\left(y\right)\right)\cos\left(x-y\right)\right]^{\frac{\alpha}{2}}}\mathrm{d}y\\ &\quad+\frac{c_{\alpha}}{2\pi}\int\frac{\sin\left(x-y\right)\left[\left(1+h\left(x\right)\right)\left(1+h\left(y\right)\right)+h^{\prime}\left(x\right)h^{\prime}\left(y\right)\right]}{\left[\left(1+h\left(x\right)\right)^{2}+\left(1+h\left(y\right)\right)^{2}-2\left(1+h\left(x\right)\right)\left(1+h\left(y\right)\right)\cos\left(x-y\right)\right]^{\frac{\alpha}{2}}}\mathrm{d}y\,.\end{split} \tag{1.7}\]
In view of [21, 38], if \(h_{0}\in H^{s}\), for any \(s\geq 4\), there exists a unique solution \(h\in\mathcal{C}\left(\left[0,T\right];H^{s}\right)\) of (1.7) defined up to a time \(T>\frac{1}{C_{s}\left\|h_{0}\right\|_{H^{s}}}\). The following result extends this local existence theory to longer times.
**Theorem 1.1** (Quadratic life-span).: _Let \(\alpha\in\left(1,2\right)\). There exists \(s_{0}>0\) such that for any \(s\geq s_{0}\), there are \(\varepsilon_{0}>0\), \(c_{s,\alpha}>0\), \(C_{s,\alpha}>0\) such that, for any \(h_{0}\) in \(H^{s}\left(\mathbb{T};\mathbb{R}\right)\) satisfying \(\left\|h_{0}\right\|_{H^{s}}\leq\varepsilon<\varepsilon_{0}\), the equation (1.7) with initial condition \(h(0)=h_{0}\) has a unique classical solution_
\[h\in\mathcal{C}\left(\left[-T_{s,\alpha},T_{s,\alpha}\right];H^{s}\left( \mathbb{T};\mathbb{R}\right)\right)\qquad\text{with}\qquad T_{s,\alpha}>c_{s, \alpha}\varepsilon^{-2}\,, \tag{1.8}\]
_satisfying \(\left\|h\left(t\right)\right\|_{H^{s}}\leq C_{s,\alpha}\varepsilon\), for any \(t\in\left[-T_{s,\alpha},T_{s,\alpha}\right]\)._
Theorem 1.1 is proved by normal form arguments for quasi-linear Hamiltonian PDEs. The first important step is the _paralinearization_ of (1.7) once it has been written in Hamiltonian form, see Theorem 4.1. The paralinearization formula (4.1) of the \(\alpha\)-SQG equations holds for any \(\alpha\in\left(0,1\right)\cup\left(1,2\right)\). It is a major result of this paper, which we expect to be useful also in other contexts.
In order to prove Theorem 1.1 we reduce the paralinearized equation (4.1), for any \(\alpha\in\left(1,2\right)\), to Birkhoff normal form up to cubic smoothing terms. This requires proving the absence of _three wave interactions_, which is verified in Lemma 3.5 by showing the convexity of the linear normal frequencies of the \(\alpha\)-SQG equation at the circular vortex patch.
Theorem 1.1 is the first Birkhoff normal form result for sharp front equations.
In recent years several advances have been obtained concerning the long time existence of solutions of quasi-linear equations in fluid dynamics on \(\mathbb{T}\), namely with periodic boundary conditions, such as the water waves equations. A quadratic life span of small amplitude solutions has been obtained in [2, 6, 55, 56, 57, 76], and extended to longer times in [5, 6, 7, 10, 12, 34, 75, 78], by either introducing quasi-linear modified energies or using Birkhoff normal form techniques. We also quote the long time existence result [20] for solutions of SQG close to the infinite energy radial solution \(\left|\xi\right|\).
Before explaining in detail the main ideas of the proof we present further results in the literature about \(\alpha\)-SQG.
_Further literature._ Special infinite energy global-in-time sharp front solutions have been constructed in [29, 53, 54] if the initial patch is a small perturbation of the half-space, by exploiting dispersive techniques. In [18, 19] special global smooth solutions in the cases \(\alpha=0,1\) are obtained using bifurcation theory. Concerning the possible formation of singularities, we mention that [61, 62] constructed special initial sharp fronts in the half-space which develop a splash singularity in finite time if \(\alpha\in\left(0,\frac{1}{12}\right)\), later extended in [38] for \(\alpha\in\left(0,\frac{1}{3}\right)\). We also mention that [39, 60] have proved that, if \(\alpha\in\left(0,1\right]\), the sharp fronts equation in the whole space does not generate finite-time singularities of splash type.
_V-states._ The existence of uniformly rotating \(V\)-states close to the disk was first numerically investigated in [33] and analytically proved in [15] for the Euler equations, recently extended to global branches in [44].
For \(\alpha\)-SQG equations, as already mentioned, local bifurcation results of sharp fronts from the disk have been obtained in [16, 17, 18, 40, 43]. Smooth \(V\)-states bifurcating from different steady configurations have been constructed for \(\alpha\in[0,2)\) in [30, 31, 32, 46, 47, 48, 50, 69]. We refer to the introductions in [41, 42] for more references.
_Quasi-periodic solutions._ As already mentioned, global in time quasi-periodic solutions of the \(\alpha\)-SQG vortex patch equation close to the circle (1.6) have been recently constructed in [42] for \(\alpha\in\left(0,\frac{1}{2}\right)\) and in [41] for \(\alpha\in\left(1,2\right)\). The result [42] holds for "most" values of \(\alpha\in\left(0,\frac{1}{2}\right)\) (\(\alpha\) is used as a parameter to impose non-resonance conditions), whereas [41] holds for any \(\alpha\in\left(1,2\right)\), using the initial conditions as parameters, via a Birkhoff normal form analysis. The \(2D\)-Euler equation is more degenerate and in this case quasi-periodic solutions have been constructed in [9] close to the family of Kirchhoff ellipses, not only close to the disk (we refer to [9] for a wider introduction to the field and literature about quasi-periodic solutions). These results build on KAM techniques [3, 4, 8, 13, 35] developed for the water waves equations.
### Ideas of the proof
**The average-preserving unknown and the Hamiltonian formulation.** The equation (1.7) for the unknown \(h(x)\) is _not_ convenient because its evolution does not preserve the average and it is _not_ Hamiltonian. This problem is overcome by reformulating (1.7) in terms of the variable
\[f\left(x\right):=h\left(x\right)+\tfrac{1}{2}h^{2}\left(x\right)\,. \tag{1.9}\]
Indeed, symmetrizing in the \(x,y\) variables, we get \(\int_{\mathbb{T}}\) r.h.s. (1.7) \(\mathrm{d}x=0\) and then \(\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{T}}\left(h\left(t,x\right)+\tfrac{1}{2}h^{2}\left(t,x\right)\right)\mathrm{d}x=0\). Thus the average of \(f\left(x\right)\) in (1.9) is preserved along the patch evolution. Note that, inverting (1.9) for small \(\|f\|_{L^{\infty}}\) and \(\|h\|_{L^{\infty}}\), we have \(h\left(x\right)=\sqrt{1+2f\left(x\right)}-1\) and the Sobolev norms of \(f\left(x\right)\) and \(h\left(x\right)\) are equivalent
\[\|f\|_{s}\sim\|h\|_{s}\,\qquad\forall s>\tfrac{1}{2}\,. \tag{1.10}\]
**Remark 1.2**.: There is a deep connection between the conservation of the average of \(f(x)\) and the incompressibility of the flow generated by the \(\alpha\)-SQG patch. Actually the Lebesgue measure \(\mathrm{Vol}\left(t\right)\) of the finite region of \(\mathbb{R}^{2}\) enclosed by the patch is, passing to polar coordinates,
\[\mathrm{Vol}\left(t\right)=\int_{-\pi}^{\pi}\int_{0}^{1+h\left(t,x\right)}\rho \ \mathrm{d}\rho\ \mathrm{d}x=\pi+\int_{-\pi}^{\pi}\left(h\left(t,x\right)+\frac{h^{2}\left(t, x\right)}{2}\right)\mathrm{d}x=\pi+\int_{-\pi}^{\pi}f\left(t,x\right)\mathrm{d}x\]
and therefore the conservation of the average \(\int_{\mathbb{T}}f\left(x\right)\ \mathrm{d}x\) amounts to the conservation of \(\mathrm{Vol}\left(t\right)\).
The variable (1.9) has been used in [42] where it is also proved that the evolution equation for \(f\) has a Hamiltonian structure, see also [41]. The following result is [42, Proposition 2.1]:
**Proposition 1.3** (Hamiltonian formulation of (1.7)).: _Let \(\alpha\in\left(0,2\right)\). If \(h\) is a solution of Eq. (1.7) then the variable \(f\) defined in (1.9) solves the Hamiltonian equation_
\[\partial_{t}f=\partial_{x}\nabla E_{\alpha}\left(f\right) \tag{1.11}\]
_where \(E_{\alpha}\left(f\right)\) is the pseudo-energy of the patch whose \(L^{2}\)-gradient \(\nabla E_{\alpha}\left(f\right)\) is_
\[\nabla E_{\alpha}\left(f\right)=\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2} \right)}\int\frac{1+2f\left(y\right)+\sqrt{1+2f\left(x\right)}\ \partial_{y}\left[\sqrt{1+2f\left(y\right)}\sin\left(x-y\right)\right]}{\left[ 1+2f\left(x\right)+1+2f\left(y\right)-2\sqrt{1+2f\left(x\right)}\sqrt{1+2f \left(y\right)}\cos\left(x-y\right)\right]^{\frac{\alpha}{2}}}\ \mathrm{d}y\,. \tag{1.12}\]
Note that the evolution equation (1.11) is translation-invariant because \(E_{\alpha}\circ\mathrm{t}_{\zeta}=E_{\alpha}\) for any \(\zeta\in\mathbb{R}\), where \(\mathrm{t}_{\zeta}f(x):=f(x+\zeta)\). Moreover, in view of the presence of the Poisson tensor \(\partial_{x}\) in (1.11) it is evident that the space average of \(f\) is a prime integral of (1.11). In the sequel we assume the space average of \(f\) to be zero.
**Paralinearization of (1.11) for \(\alpha\in\left(0,1\right)\cup\left(1,2\right)\).** Section 4 is dedicated to writing the Hamiltonian equation (1.11) in paradifferential form and to providing the detailed structure of the principal and subprincipal symbols in the expansion of the paradifferential operator, obtaining
\[\partial_{t}f+\partial_{x}\circ\mathrm{Op}^{BW}\left[\left(1+\nu\left(f;x \right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)+P\left( f;x,\xi\right)\right]f=\text{smoothing terms} \tag{1.13}\]
where (see Theorem 4.1 for a detailed statement)
* \(\left(1+\nu\left(f;x\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)\) is a real symbol of order \(\max\{\alpha-1,0\}\) and \(\nu\left(f;x\right)\), \(V\left(f;x\right)\) are real functions vanishing for \(f=0\);
* the symbol \(P\left(f;x,\xi\right)\) has order \(-1\) and vanishes for \(f=0\).
We note that in (1.13) the operator \(\operatorname{Op}^{BW}\left[\,\right]\) is the paradifferential quantization according to Weyl (see Definition 2.10) and thus \(\operatorname{Op}^{BW}\left[\left(1+\nu\left(f;x\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)\right]\) is self-adjoint. As a consequence the linear Hamiltonian operator \(\partial_{x}\circ\operatorname{Op}^{BW}\left[\left(1+\nu\left(f;x\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)\right]\) is skew-self-adjoint at positive orders. This is the reason why the unbounded quasi-linear vector field \(\partial_{x}\circ\operatorname{Op}^{BW}\left[\left(1+\nu\left(f;x\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)\right]f\) admits energy estimates in Sobolev spaces \(H^{s}\) via commutator estimates (actually existence and uniqueness of the solutions of (1.13) would follow as in [11]). We remark the absence in (1.13) of operators like \(\partial_{x}\circ\operatorname{Op}^{BW}\left[\text{symbol of order }(\alpha-2)\right]\). The cancellation of such terms is verified in Appendix A by a direct computation and it is ultimately a consequence of the Hamiltonian structure of the equation (1.11).
We also note that the equation (1.13) can be written, in homogeneity degrees, as
\[\partial_{t}f+\omega_{\alpha}(D)f=\mathcal{O}\left(f^{2}\right)\qquad\text{ where}\qquad\omega_{\alpha}(D):=\partial_{x}\circ L_{\alpha}\left(\left|D\right|\right) \tag{1.14}\]
is the unperturbed dispersion relation.
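For orientation, the linear part of (1.14) can be solved explicitly on the Fourier side: since \(L_{\alpha}(|\xi|)\) is a real Fourier multiplier and \(\partial_{x}e^{\mathrm{i}jx}=\mathrm{i}j\,e^{\mathrm{i}jx}\), each mode is rotated by a phase,

\[\partial_{t}f+\omega_{\alpha}(D)f=0\qquad\Longrightarrow\qquad f(t,x)=\sum_{j\in\mathbb{Z}\setminus\{0\}}f_{j}(0)\,e^{\mathrm{i}\left(jx-\omega_{\alpha}(j)\,t\right)}\,,\qquad\omega_{\alpha}\left(j\right):=j\,L_{\alpha}\left(\left|j\right|\right)\,,\]

which is how we read the linear frequencies \(\omega_{\alpha}(j)\) entering the non-resonance analysis below.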
Let us explain how we deduce the paralinearization formula (1.13) in Section 4. The nonlinear term \(\nabla E_{\alpha}\left(f\right)\) in (1.11) can be written as a convolution operator
\[\nabla E_{\alpha}\left(f\right)(x)=\int_{-\pi}^{\pi}K\left(f;x,z\right)\frac{ f\left(x\right)-f\left(x-z\right)}{\left|z\right|^{\alpha}}\,\mathrm{d}z\]
with a nonlinear real valued convolution kernel \(K\left(f;x,z\right)\). By Taylor expanding the kernel \(z\mapsto K\left(f;x,z\right)\) at \(z=0\) (provided \(f\) is sufficiently regular) and expanding in paraproducts the arguments of the above integral, we obtain an expansion of the form
\[\nabla E_{\alpha}\left(f\right)(x) =\sum_{j=0}^{J}\operatorname{Op}^{BW}\left[K_{j}\left(f,\ldots,f^ {\left(j+1\right)};x\right)\right]\int_{-\pi}^{\pi}\left(f\left(x\right)-f \left(x-z\right)\right)\,\frac{z^{j}}{\left|z\right|^{\alpha}}\,\mathrm{d}z \tag{1.15a}\] \[+\int_{-\pi}^{\pi}\operatorname{Op}^{BW}\left[R\left(f,\ldots,f^ {\left(j+1\right)};x,z\right)\right]\left(f\left(x\right)-f\left(x-z\right) \right)\mathrm{d}z+\,\text{smoothing terms}\, \tag{1.15b}\]
where \(R\left(f,\ldots,f^{\left(j+1\right)};x,z\right)=o\left(\left|z\right|^{1-\alpha}\right)\) as \(z\to 0\) is the Taylor remainder at order \(J\) (here \(f^{\left(j\right)}(x)\) denotes the \(j\)-th derivative of \(f(x)\)). The terms in the finite sum (1.15a) are particularly simple paradifferential operators. Indeed, provided \(\alpha<2\),
\[\int_{-\pi}^{\pi}\left(f\left(x\right)-f\left(x-z\right)\right)\,\frac{z^{j}}{ \left|z\right|^{\alpha}}\,\mathrm{d}z=\mathbb{V}_{\alpha-\mathrm{j}}f+m_{\alpha -\left(j+1\right)}\left(D\right)f\]
where \(\mathbb{V}_{\alpha-\mathrm{j}}\) is a real constant and \(m_{\alpha-\left(j+1\right)}\left(\xi\right)\) is a Fourier multiplier of order \(\alpha-\left(\mathrm{j}+1\right)\), as follows by standard asymptotics of singular integral operators, see [74]. Thus, by symbolic calculus,
\[\operatorname{Op}^{BW}\left[K_{j}\right]\int_{-\pi}^{\pi}\left(f\left(x\right)-f\left(x-z\right)\right)\,\frac{z^{j}}{\left|z\right|^{\alpha}}\,\mathrm{d}z=\operatorname{Op}^{BW}\left[V_{\alpha-\mathrm{j}}\left(f,\ldots,f^{\left(j+1\right)};x\right)+K_{j}\left(f,\ldots,f^{\left(j+1\right)};x\right)m_{\alpha-\left(j+1\right)}\left(\xi\right)\right]f+\text{l.o.t.},\]
where \(V_{\alpha-\mathrm{j}}\) are real functions. The unbounded terms \(\partial_{x}\circ\operatorname{Op}^{BW}\left[K_{j}\left(f,\ldots,f^{\left(j+1\right)};x\right)m_{\alpha-\left(j+1\right)}\left(\xi\right)\right]f\), \(\mathrm{j}=0,1\), would induce a loss of derivatives in the \(H^{s}\) energy estimates if the imaginary part \(\operatorname{Im}m_{\alpha-\left(j+1\right)}\left(\xi\right)\neq 0\). Therefore a detailed analysis of these symbols is essential. The highest order Fourier multiplier \(m_{\alpha-1}(\xi)\) turns out to be real. Concerning the subprincipal symbol \(K_{1}\left(f,f^{\prime};x\right)m_{\alpha-2}(\xi)\), it turns out that \(m_{\alpha-2}(\xi)\) has a non-zero imaginary part, but a subtle nonlinear cancellation reveals that the corresponding coefficient \(K_{1}\left(f,f^{\prime};x\right)\) is identically zero, as verified in Appendix A. Such a structure, which ultimately stems from the Hamiltonian nature of (1.11), could be proven up to an arbitrary negative order.
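To illustrate the mechanism in the simplest case \(\mathrm{j}=0\) (a sketch, for \(\alpha\in(1,2)\)), the singular integral acts on a single Fourier mode as

\[\int_{-\pi}^{\pi}\left(e^{\mathrm{i}jx}-e^{\mathrm{i}j\left(x-z\right)}\right)\frac{\mathrm{d}z}{\left|z\right|^{\alpha}}=\lambda_{\alpha}\left(j\right)e^{\mathrm{i}jx}\,,\qquad\lambda_{\alpha}\left(j\right)=\int_{-\pi}^{\pi}\left(1-\cos\left(jz\right)\right)\frac{\mathrm{d}z}{\left|z\right|^{\alpha}}\,,\]

where the imaginary part vanishes because \(z\mapsto\sin\left(jz\right)\left|z\right|^{-\alpha}\) is odd; rescaling \(w=jz\) shows that \(\lambda_{\alpha}(j)\) grows like \(\left|j\right|^{\alpha-1}\), consistently with the decomposition \(\mathbb{V}_{\alpha}+m_{\alpha-1}\left(\xi\right)\) above and with the reality of \(m_{\alpha-1}\).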
Concerning the first term in (1.15b), we use that \(R\left(f,\ldots,f^{\left(j+1\right)};x,z\right)\) is \(o\left(\left|z\right|^{1-\alpha}\right)\) as \(z\to 0\) so that, modulo regularizing operators, it can be expressed as a paradifferential operator of order \(\alpha-\left(J+1\right)\), which is a bounded vector field once we take \(J\geq 2\), see Proposition 2.36.
**Reduction of (1.13) to Birkhoff normal form up to cubic terms.** In Section 5 we first conjugate the paradifferential equation (1.13) into an equation with constant coefficient symbols, modulo smoothing operators,
\[\partial_{t}g+\partial_{x}\operatorname{Op}^{BW}\big{[}\big{(}1+\omega_{0}\big{(}f\big{)}\big{)}\,L_{\alpha}\,(|\xi|)+\mathsf{H}_{\alpha}\,\big{(}f;\xi\big{)}\big{]}g=\operatorname{smoothing\,terms} \tag{1.16}\]
where \(\omega_{0}\big{(}f\big{)}\) is the average of a real nonlinear function of \(\nu\big{(}f;x\big{)}\) and \(\mathsf{H}_{\alpha}\,\big{(}f;\xi\big{)}\) is an \(x\)-independent symbol with imaginary part \(\operatorname{Im}\mathsf{H}_{\alpha}\,\big{(}f;\xi\big{)}\) of order \(-1\), see Proposition 5.2. Thus (1.16) is still Hamiltonian up to order zero and therefore satisfies \(H^{s}\)-energy estimates. The unknowns \(g\,(t)\) and \(f\,(t)\) have equivalent Sobolev norms \(\big{\|}g\,(t)\big{\|}_{s}\sim_{s,\alpha}\big{\|}f\,(t)\big{\|}_{s}\). We remark that in (1.16) the constant \(\omega_{0}\big{(}f\big{)}\) and the symbol \(\mathsf{H}_{\alpha}\,\big{(}f;\xi\big{)}\) vanish quadratically at \(f=0\) and thus the only term which can disturb the quadratic life span of the solution \(g\,(t)\) is the smoothing operator \(R_{1}\,\big{(}f\big{)}\) in the decomposition
\[\operatorname{smoothing\,terms}=R_{1}\,\big{(}f\big{)}g+R_{2}\,\big{(}f \big{)}g\,.\]
Then in Lemma 5.7 we implement a Birkhoff normal form step to cancel \(R_{1}\,\big{(}f\big{)}g\). An algebraic ingredient is to verify the absence of three wave interactions, namely that, for any \(n,j,k\in\mathbb{Z}\setminus\{0\}\) satisfying \(k=j+n\),
\[\big{|}\omega_{\alpha}\,(k)-\omega_{\alpha}\,\big{(}j\big{)}-\omega_{\alpha} \,(n)\big{|}\geq c>0\,,\]
where \(\omega_{\alpha}\,\big{(}j\big{)}\) are the normal \(\alpha\)-SQG frequencies in (1.14). Such a property follows by proving the _convexity_ of the map \(\omega_{\alpha}\,\big{(}j\big{)}\) for \(j\in\mathbb{N}\), see Lemma 3.5.
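In the simplest sign configuration \(j,n\geq 1\) (a sketch; the other configurations are reduced to this one using that \(\omega_{\alpha}\) is odd in \(j\), and we assume here for simplicity that \(j\mapsto\omega_{\alpha}(j)\) is strictly convex up to \(j=0\) with \(\omega_{\alpha}(0)=0\)), convexity gives

\[\omega_{\alpha}\left(j\right)+\omega_{\alpha}\left(n\right)\leq\frac{j}{j+n}\,\omega_{\alpha}\left(j+n\right)+\frac{n}{j+n}\,\omega_{\alpha}\left(j+n\right)=\omega_{\alpha}\left(j+n\right)\,,\]

with strict inequality, so that \(\omega_{\alpha}(k)-\omega_{\alpha}(j)-\omega_{\alpha}(n)\) cannot vanish when \(k=j+n\); upgrading this to the uniform lower bound \(c>0\) uses the asymptotics of the frequencies proved in Section 3.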
The final outcome is an _energy estimate_ for any small enough solution of (1.11) of the form
\[\big{\|}f\,(t)\big{\|}_{H^{s}}^{2}\lesssim_{s,\alpha}\big{\|}f\,(0)\big{\|}_{H^{s}}^{2}+\int_{0}^{t}\big{\|}f\,(\tau)\big{\|}_{H^{s}}^{4}\,\mathrm{d}\tau\,,\qquad t>0\,,\]
which implies Theorem 1.1.
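For completeness we recall how such an estimate yields the quadratic time scale (a standard bootstrap, sketched with a non-optimal constant). Call \(C_{0}\) the implicit constant and assume \(\|f(0)\|_{H^{s}}\leq\varepsilon\) together with the a priori bound \(\|f(\tau)\|_{H^{s}}\leq 2\sqrt{C_{0}}\,\varepsilon\) for \(\tau\in[0,t]\). Then

\[\left\|f\left(t\right)\right\|_{H^{s}}^{2}\leq C_{0}\,\varepsilon^{2}+C_{0}\int_{0}^{t}\left\|f\left(\tau\right)\right\|_{H^{s}}^{4}\,\mathrm{d}\tau\leq C_{0}\,\varepsilon^{2}+16\,C_{0}^{3}\,\varepsilon^{4}\,t\leq 2\,C_{0}\,\varepsilon^{2}\qquad\text{for}\qquad t\leq\frac{1}{16\,C_{0}^{2}\,\varepsilon^{2}}\,,\]

so the a priori bound propagates, by continuity, on a time interval of length proportional to \(\varepsilon^{-2}\), which combined with the local existence theory gives (1.8).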
**Structure of the manuscript.** Section 2 contains the paradifferential calculus used along the paper. In Section 2.1 we report the main results in [5, 10]. Then in Section 2.2 we introduce a \(z\)-dependent paradifferential calculus used for the paralinearization of (1.11) in Section 4. Section 3 is dedicated to the linearization of (1.11) at the stationary state \(f\equiv 0\). Lemmas 3.1 and 3.6 extend to any \(\alpha\in(0,2)\) the asymptotic expansions of the normal frequencies \(\omega_{\alpha}\,\big{(}j\big{)}\) proved in [42] for \(\alpha\in(0,1)\). In Section 4 we provide the paralinearization (1.13) of the Hamiltonian equation (1.11) for any \(\alpha\in(0,1)\cup(1,2)\). Finally, in Section 5 we conjugate the paradifferential equation (1.13) into an equation with constant coefficients, modulo smoothing operators, then perform the Birkhoff normal form step and prove Theorem 1.1.
**Notation.** We denote with \(C\) a positive constant which does not depend on any parameter of the problem. We write \(A\lesssim_{c_{1},\ldots,c_{M}}B\) if \(A\leq C(c_{1},\ldots,c_{M})\,B\) and \(A\sim_{c_{1},\ldots,c_{M}}B\) if \(A\lesssim_{c_{1},\ldots,c_{M}}B\) and \(B\lesssim_{c_{1},\ldots,c_{M}}A\). We denote with \(\mathbb{N}=\{1,2,\ldots\}\) the set of natural numbers and \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\). For any \(x\geq 0\) we denote \(\lceil x\rceil:=\min\{n\in\mathbb{N}_{0}\mid x\leq n\}\). We denote \(\mathbb{T}:=\mathbb{R}/(2\pi\mathbb{Z})\) the one-dimensional torus with norm \(|x|_{\mathbb{T}}:=\inf_{j\in\mathbb{Z}}|x+2\pi j|\). We denote \(D=-\mathrm{i}\partial_{x}\) and \([A,\,B]\) the commutator \([A,\,B]=AB-BA=:\mathrm{ad}_{A}B\). Given a linear real self-adjoint operator \(A\), any operator of the form \(\partial_{x}\circ A\) will be referred to as _linear Hamiltonian_. We denote \(\fint_{\mathbb{T}}\cdot\,\mathrm{d}x:=\frac{1}{2\pi}\int_{\mathbb{T}}\cdot\,\mathrm{d}x\) the normalized integral over \(\mathbb{T}\).
## 2 Functional setting
Along the paper we deal with real parameters
\[s\geq s_{0}\gg K\gg\rho\gg N\geq 0 \tag{2.1}\]
where \(N\in\mathbb{N}\). The values of \(s,s_{0},K\) and \(\rho\) may vary from line to line while still being true the relation (2.1). For the proof of Theorem 1.1 we shall take \(N=1\).
We expand a \(2\pi\)-periodic function \(u(x)\) in \(L^{2}(\mathbb{T};\mathbb{C})\) in Fourier series as
\[u(x)=\sum_{j\in\mathbb{Z}}\hat{u}\,\big{(}j\big{)}\,e^{\mathrm{i}jx}\,,\qquad\hat{u}\,\big{(}j\big{)}:=\mathcal{F}_{x\to j}\,\big{(}u\big{)}:=u_{j}:=\frac{1}{2\pi}\int_{\mathbb{T}}u(x)\,e^{-\mathrm{i}jx}\,\mathrm{d}x\,. \tag{2.2}\]
A function \(u(x)\) is real if and only if \(\overline{u_{j}}=u_{-j}\), for any \(j\in\mathbb{Z}\). For any \(s\in\mathbb{R}\) we define the Sobolev space \(H^{s}:=H^{s}(\mathbb{T};\mathbb{C})\) with norm
\[\|u\|_{s}:=\|u\|_{H^{s}}=\left(\sum_{j\in\mathbb{Z}}\left\langle j\right\rangle^{ 2s}\left|\hat{u}\left(j\right)\right|^{2}\right)^{\frac{1}{2}},\qquad\left\langle j \right\rangle:=\max(1,|j|)\,.\]
We define \(\Pi_{0}u:=\hat{u}_{0}\) the average of \(u\) and
\[\Pi_{0}^{\perp}:=\mathrm{Id}-\Pi_{0}\,. \tag{2.3}\]
We define \(H_{0}^{s}\) the subspace of zero average functions of \(H^{s}\), for which we also denote \(\|u\|_{s}=\|u\|_{H_{0}^{s}}=\|u\|_{H^{s}}\). Clearly \(H_{0}^{0}(\mathbb{T};\mathbb{C})=L_{0}^{2}(\mathbb{T};\mathbb{C})\) with scalar product, for any \(u,v\in L_{0}^{2}(\mathbb{T};\mathbb{C})\),
\[\left\langle u,v\right\rangle_{L_{0}^{2}}=\int_{\mathbb{T}}u(x)\,\overline{v( x)}\,\mathrm{d}x\,. \tag{2.4}\]
Given an interval \(I\subset\mathbb{R}\) symmetric with respect to \(t=0\) and \(s\in\mathbb{R}\), we define the space
\[C_{*}^{K}\left(I;H_{0}^{s}(\mathbb{T};\mathbb{X})\right):=\bigcap_{k=0}^{K}C^ {k}\left(I;H_{0}^{s-\alpha k}(\mathbb{T};\mathbb{X})\right)\,,\qquad\mathbb{X }=\mathbb{R},\,\mathbb{C}\,,\]
resp. \(C_{*}^{K}(I;H^{s}(\mathbb{T};\mathbb{X}))\), endowed with the norm
\[\sup_{t\in I}\|u(t,\cdot)\|_{K,s}\qquad\text{where}\qquad\|u(t,\cdot)\|_{K,s}:=\sum_{k=0}^{K}\left\|\partial_{t}^{k}u(t,\cdot)\right\|_{H^{s-\alpha k}}\,. \tag{2.5}\]
We denote \(B_{s}^{K}(I;\epsilon_{0})\), resp. \(B_{s,\mathbb{R}}^{K}(I;\epsilon_{0})\), the ball of radius \(\epsilon_{0}>0\) in \(C_{*}^{K}(I,H_{0}^{s}(\mathbb{T};\mathbb{C}))\), resp. in \(C_{*}^{K}(I,H_{0}^{s}(\mathbb{T};\mathbb{R}))\). We also define \(B_{C_{*}^{K}(I,H^{s}(\mathbb{T};\mathbb{C}))}(0;\epsilon_{0})\) the ball of center zero and radius \(\epsilon_{0}\) in \(C_{*}^{K}(I,H^{s}(\mathbb{T};\mathbb{C}))\).
**Remark 2.1**.: The parameter \(s\) in (2.5) denotes the spatial Sobolev regularity of the solution \(u(t,\cdot)\) and \(K\) its regularity in the time variable. The \(\alpha\)-SQG vector field loses \(\alpha\)-derivatives, and therefore, differentiating the solution \(u(t)\) for \(k\)-times in the time variable, there is a loss of \(\alpha k\)-spatial derivatives. The parameter \(\rho\) in (2.1) denotes the order where we decide to stop our regularization of the system.
We set some further notation. For \(n\in\mathbb{N}\) we denote by \(\Pi_{n}\) the orthogonal projector from \(L^{2}(\mathbb{T};\mathbb{C})\) to the linear subspace spanned by \(\{e^{\mathrm{i}nx},e^{-\mathrm{i}nx}\}\), \((\Pi_{n}u)(x):=\hat{u}(n)e^{\mathrm{i}nx}+\hat{u}(-n)e^{-\mathrm{i}nx}\). If \(\mathcal{U}=(u_{1},\ldots,u_{p})\) is a \(p\)-tuple of functions and \(\overline{n}=(n_{1},\ldots,n_{p})\in\mathbb{N}^{p}\), we set \(\Pi_{\overline{n}}\mathcal{U}:=(\Pi_{n_{1}}u_{1},\ldots,\Pi_{n_{p}}u_{p})\) and \(\mathfrak{t}_{\mathfrak{c}}\mathcal{U}:=(\mathfrak{t}_{\mathfrak{c}}u_{1},\ldots,\mathfrak{t}_{\mathfrak{c}}u_{p})\), where \(\mathfrak{t}_{\mathfrak{c}}\) is the translation operator
\[\mathfrak{t}_{\mathfrak{c}}:u(x)\mapsto u(x+\mathfrak{c})\,. \tag{2.6}\]
For \(\mathcal{J}_{p}=(j_{1},\ldots,j_{p})\in\mathbb{Z}^{p}\) we denote \(|\mathcal{J}_{p}|:=\max(|j_{1}|,\ldots,|j_{p}|)\) and \(u_{\mathcal{J}_{p}}:=u_{j_{1}}\ldots u_{j_{p}}\). Note that the Fourier coefficients of \(\mathfrak{t}_{\mathfrak{c}}u\) are \((\mathfrak{t}_{\mathfrak{c}}u)_{j}=e^{\mathrm{i}j\mathfrak{c}}u_{j}\).
A vector field \(X(u)\) is _translation invariant_ if \(X\circ\mathfrak{t}_{\mathfrak{c}}=\mathfrak{t}_{\mathfrak{c}}\circ X\) for any \(\mathfrak{c}\in\mathbb{R}\).
Given a linear operator \(R(u)[\cdot]\) acting on \(L_{0}^{2}(\mathbb{T};\mathbb{C})\) we associate the linear operator \(\overline{R(u)}\) defined by the relation \(\overline{R(u)}\,v:=\overline{R(u)\,\overline{v}}\) for any \(v\in L_{0}^{2}(\mathbb{T};\mathbb{C})\). An operator \(R(u)\) is _real_ if \(R(u)=\overline{R(u)}\) for any \(u\) real.
### Paradifferential calculus
We introduce paradifferential operators (Definition 2.10) following [5], with minor modifications due to the fact that we deal with a scalar equation and not a system, and the fact that we consider operators acting on \(H_{0}^{s}\) and \(H^{s}\) and not on homogeneous spaces \(\dot{H}^{s}\). In this way we will mainly rely on results in [5, 10].
Classes of symbols. Roughly speaking the class \(\widetilde{\Gamma}_{p}^{m}\) contains symbols of order \(m\) and homogeneity \(p\) in \(u\), whereas the class \(\Gamma_{K,K^{\prime},p}^{m}\) contains non-homogeneous symbols of order \(m\) that vanish at degree at least \(p\) in \(u\) and that are \((K-K^{\prime})\)-times differentiable in \(t\). We can think of the parameter \(K^{\prime}\) as the number of time derivatives of \(u\) that are contained in the symbols. We denote \(H_{0}^{\infty}(\mathbb{T};\mathbb{C}):=\bigcap_{s\in\mathbb{R}}H_{0}^{s}(\mathbb{T};\mathbb{C})\).
**Definition 2.2** (Symbols).: Let \(m\in\mathbb{R}\), \(p,N\in\mathbb{N}_{0}\), \(K,K^{\prime}\in\mathbb{N}_{0}\) with \(K^{\prime}\leq K\), and \(\epsilon_{0}>0\).
1. \(p\)**-homogeneous symbols.** We denote by \(\tilde{\Gamma}_{p}^{m}\) the space of symmetric \(p\)-linear maps from \(\left(H_{0}^{\infty}\left(\mathbb{T};\mathbb{C}\right)\right)^{p}\) to the space of \(\mathcal{C}^{\infty}\) functions from \(\mathbb{T}\times\mathbb{R}\) to \(\mathbb{C}\), \(\left(x,\xi\right)\mapsto a(\mathcal{U};x,\xi)\), satisfying the following: there exist \(\mu\geq 0\) and, for any \(\gamma,\beta\in\mathbb{N}_{0}\), there is a constant \(C>0\) such that \[\left|\partial_{x}^{\gamma}\partial_{\xi}^{\beta}a\left(\Pi_{\widetilde{n}}\mathcal{U};x,\xi\right)\right|\leq C|\widetilde{n}|^{\mu+\gamma}\langle\xi\rangle^{m-\beta}\prod_{j=1}^{p}\left\|\Pi_{n_{j}}u_{j}\right\|_{L^{2}}\] (2.7) for any \(\mathcal{U}=\left(u_{1},\ldots,u_{p}\right)\in\left(H_{0}^{\infty}\left(\mathbb{T};\mathbb{C}\right)\right)^{p}\) and \(\widetilde{n}=\left(n_{1},\ldots,n_{p}\right)\in\mathbb{N}^{p}\). Moreover we assume that, if for some \(\left(n_{0},\ldots,n_{p}\right)\in\mathbb{N}_{0}\times\mathbb{N}^{p}\), \(\Pi_{n_{0}}a\left(\Pi_{\widetilde{n}}\mathcal{U};\cdot\right)\neq 0\), then there exists a choice of signs \(\eta_{j}\in\left\{\pm 1\right\}\) such that \(\sum_{j=1}^{p}\eta_{j}\,n_{j}=n_{0}\). In addition we require the translation invariance property \[a\left(\mathfrak{t}_{\zeta}\mathcal{U};x,\xi\right)=a\left(\mathcal{U};x+\zeta,\xi\right),\quad\forall\zeta\in\mathbb{R},\] (2.8) where \(\mathfrak{t}_{\zeta}\) is the translation operator in (2.6). For \(p=0\) we denote by \(\tilde{\Gamma}_{0}^{m}\) the space of constant-coefficient symbols \(\xi\mapsto a(\xi)\) which satisfy (2.7) with \(\gamma=0\) and the right hand side replaced by \(C\langle\xi\rangle^{m-\beta}\), and we call them Fourier multipliers.
2. **Non-homogeneous symbols.** We denote by \(\Gamma_{K,K^{\prime},p}^{m}[\epsilon_{0}]\) the space of functions \(a(u;t,x,\xi)\), defined for \(u\in B_{s_{0}}^{K^{\prime}}(I;\epsilon_{0})\) for some \(s_{0}\) large enough, with complex values, such that for any \(0\leq k\leq K-K^{\prime}\), any \(s\geq s_{0}\), there are \(C>0\), \(0<\epsilon_{0}(s)<\epsilon_{0}\) and for any \(u\in B_{s_{0}}^{K}(I;\epsilon_{0}(s))\cap C_{*}^{k+K^{\prime}}\left(I,H_{0}^{ \infty}\left(\mathbb{T};\mathbb{C}\right)\right)\) and any \(\gamma,\beta\in\mathbb{N}_{0}\), with \(\gamma\leq s-s_{0}\) one has the estimate \[\left|\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{\xi}^{\beta}a\left(u;t,x,\xi\right)\right|\leq C\langle\xi\rangle^{m-\beta}\|u\|_{k+K^{\prime},s_{0}}^ {p-1}\|u\|_{k+K,s}.\] (2.9) If \(p=0\) the right hand side has to be replaced by \(C\langle\xi\rangle^{m-\beta}\). We say that a non-homogeneous symbol \(a(u;x,\xi)\) is _real_ if it is real valued for any \(u\in B_{s_{0},\mathbb{R}}^{K^{\prime}}(I;\epsilon_{0})\).
3. **Symbols.** We denote by \(\Sigma\Gamma_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\) the space of symbols \[a(u;t,x,\xi)=\sum_{q=p}^{N}a_{q}\left(u,\ldots,u;x,\xi\right)+a_{>N}(u;t,x,\xi)\] where \(a_{q}\), \(q=p,\ldots,N\) are homogeneous symbols in \(\tilde{\Gamma}_{q}^{m}\) and \(a_{>N}\) is a non-homogeneous symbol in \(\Gamma_{K,K^{\prime},N+1}^{m}\).
We say that a symbol \(a(u;t,x,\xi)\) is _real_ if it is real valued for any \(u\in B_{s_{0},\mathbb{R}}^{K^{\prime}}(I;\epsilon_{0})\).
**Notation 2.3**.: If \(a(\mathcal{U};\cdot)\) is a \(p\)-homogenous symbol we also denote \(a(u):=a(u,\ldots,u;\cdot)\) the corresponding polynomial and we identify the \(p\)-homogeneous monomial \(a(u;\cdot)\) with the \(p\)-linear symmetric form \(a(\mathcal{U};\cdot)\).
Actually, also the non-homogeneous components of the symbols that we will encounter in Section 4 depend on time and space only through \(u\); since this information is not needed, it is not included in Definition 2.2 (as in [5]).
**Remark 2.4**.: If \(a(\mathcal{U};\cdot)\) is a homogeneous symbol in \(\tilde{\Gamma}_{p}^{m}\) then \(a(u,\ldots,u;\cdot)\) belongs to \(\Gamma_{K,0,p}^{m}[\epsilon_{0}]\), for any \(\epsilon_{0}>0\).
**Remark 2.5**.: If \(a\) is a symbol in \(\Sigma\Gamma_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\) then \(\partial_{x}a\in\Sigma\Gamma_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\) and \(\partial_{\xi}a\in\Sigma\Gamma_{K,K^{\prime},p}^{m-1}[\epsilon_{0},N]\). If in addition \(b\) is a symbol in \(\Sigma\Gamma_{K,K^{\prime},p^{\prime}}^{m^{\prime}}[\epsilon_{0},N]\) then \(ab\in\Sigma\Gamma_{K,K^{\prime},p+p^{\prime}}^{m+m^{\prime}}[\epsilon_{0},N]\).
**Remark 2.6** (Fourier representation of symbols).: The translation invariance property (2.8) means that the dependence with respect to the variable \(x\) of a symbol \(a(\mathcal{U};x,\xi)\) enters only through the functions \(\mathcal{U}(x)\), implying that a symbol \(a_{q}(u;x,\xi)\) in \(\tilde{\Gamma}_{q}^{m}\), \(m\in\mathbb{R}\), has the form
\[a_{q}(u;x,\xi)=\sum_{\overline{j}_{q}=(j_{1},\ldots,j_{q})\in(\mathbb{Z}\setminus\{0\})^{q}}\big{(}a_{q}\big{)}_{\overline{j}_{q}}(\xi)\,u_{j_{1}}\cdots u_{j_{q}}\,e^{\mathrm{i}\left(j_{1}+\cdots+j_{q}\right)x}\,, \tag{2.10}\]
where \((a_{q})_{\overline{j}}(\xi)\in\mathbb{C}\) are Fourier multipliers of order \(m\) satisfying: there exists \(\mu\geq 0\), and for any \(\beta\in\mathbb{N}_{0}\), there is \(C_{\beta}>0\) such that
\[\left|\partial_{\xi}^{\beta}\left(a_{q}\right)_{\overline{j}_{q}}(\xi)\right| \leq C_{\beta}\left|\overline{j}_{q}\right|^{\mu}\langle\xi\rangle^{m-\beta}, \quad\forall\overline{j}_{q}\in(\mathbb{Z}\setminus\{0\})^{q}\,. \tag{2.11}\]
A symbol \(a_{q}(u;x,\xi)\) as in (2.10) is real if
\[\overline{\left(a_{q}\right)_{\overline{j}_{q}}(\xi)}=\left(a_{q}\right)_{- \overline{j}_{q}}(\xi) \tag{2.12}\]
By (2.10) a symbol \(a_{1}\) in \(\widetilde{\Gamma}_{1}^{m}\) can be written as \(a_{1}(u;x,\xi)=\sum_{j\in\mathbb{Z}\setminus\{0\}}(a_{1})_{j}(\xi)u_{j}e^{\mathrm{i}jx}\), and therefore, if \(a_{1}\) is independent of \(x\), then actually \(a_{1}\equiv 0\).
We also define classes of functions in analogy with our classes of symbols.
**Definition 2.7** (Functions).: Let \(p,N\in\mathbb{N}_{0}\), \(K,K^{\prime}\in\mathbb{N}_{0}\) with \(K^{\prime}\leq K\), \(\epsilon_{0}>0\). We denote by \(\widetilde{\mathcal{F}}_{p}\), resp. \(\mathcal{F}_{K,K^{\prime},p}[\epsilon_{0}]\), \(\Sigma\mathcal{F}_{K,K^{\prime},p}[\epsilon_{0},N]\), the subspace of \(\widetilde{\Gamma}_{p}^{0}\), resp. \(\Gamma_{K,K^{\prime},p}^{0}[\epsilon_{0}]\), resp. \(\Sigma\Gamma_{K,K^{\prime},p}^{0}[\epsilon_{0},N]\), made of those symbols which are independent of \(\xi\). We write \(\widetilde{\mathcal{F}}_{p}^{\mathbb{R}}\), resp. \(\mathcal{F}_{K,K^{\prime},p}^{\mathbb{R}}[\epsilon_{0}]\), \(\Sigma\mathcal{F}_{K,K^{\prime},p}^{\mathbb{R}}[\epsilon_{0},N]\), to denote functions in \(\widetilde{\mathcal{F}}_{p}\), resp. \(\mathcal{F}_{K,K^{\prime},p}[\epsilon_{0}]\), \(\Sigma\mathcal{F}_{K,K^{\prime},p}[\epsilon_{0},N]\), which are real valued for any \(u\in B_{s_{0},\mathbb{R}}^{K^{\prime}}(I;\epsilon_{0})\).
The above class of symbols is closed under composition by a change of variables, see [5, Lemma 3.23].
**Lemma 2.8**.: _Let \(K^{\prime}\leq K\in\mathbb{N}\), \(m\in\mathbb{R}\), \(p\in\mathbb{N}_{0}\), \(N\in\mathbb{N}\) with \(p\leq N\), \(\epsilon_{0}>0\) small enough. Consider a symbol \(a\) in \(\Sigma\Gamma_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\) and functions \(b,c\) in \(\Sigma\mathcal{F}_{K,K^{\prime},1}^{0}[\epsilon_{0},N]\). Then \(a(v;t,x+b(v;t,x),\xi(1+c(v;t,x)))\) is in \(\Sigma\Gamma_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\). In particular, if \(a\) is a function in \(\Sigma\mathcal{F}_{K,K^{\prime},p}[\epsilon_{0},N]\), then \(a(v;t,x+b(v;t,x))\) is in \(\Sigma\mathcal{F}_{K,K^{\prime},p}[\epsilon_{0},N]\)._
The following result is [5, Lemma 3.21].
**Lemma 2.9** (Inverse diffeomorphism).: _Let \(0\leq K^{\prime}\leq K\) be in \(\mathbb{N}\) and \(\beta(f;t,x)\) be a real function \(\beta(f;t,\cdot)\) in \(\Sigma\mathcal{F}_{K,K^{\prime},1}^{\mathbb{R}}[\epsilon_{0},N]\). If \(s_{0}\) is large enough, and \(f\in B_{\mathfrak{s}_{0}}^{K}(I;\epsilon_{0})\) then the map \(\Phi_{f}:x\to x+\beta(f;t,x)\) is, for \(\epsilon_{0}\) small enough, a diffeomorphism of \(\mathbb{T}^{1}\), and its inverse diffeomorphism may be written as \(\Phi_{f}^{-1}:y\to y+\tilde{\beta}(f;t,y)\) for some \(\tilde{\beta}\) in \(\Sigma\mathcal{F}_{K,K^{\prime},1}^{\mathbb{R}}[\epsilon_{0},N]\)._
Paradifferential quantization. Given \(p\in\mathbb{N}\) we consider _admissible cut-off_ functions \(\psi_{p}\in C^{\infty}(\mathbb{R}^{p}\times\mathbb{R};\mathbb{R})\) and \(\psi\in C^{\infty}(\mathbb{R}\times\mathbb{R};\mathbb{R})\), even with respect to each of their arguments, satisfying, for \(0<\delta\ll 1\),
\[\operatorname{supp}\psi_{p}\subset\left\{(\xi^{\prime},\xi)\in\mathbb{R}^{p}\times\mathbb{R};\,|\xi^{\prime}|\leq\delta\langle\xi\rangle\right\}, \psi_{p}(\xi^{\prime},\xi)\equiv 1\,\text{ for }|\xi^{\prime}|\leq\delta\langle\xi\rangle/2\,, \tag{2.13}\] \[\operatorname{supp}\psi\subset\left\{(\xi^{\prime},\xi)\in\mathbb{R}\times\mathbb{R};\,|\xi^{\prime}|\leq\delta\langle\xi\rangle\right\}, \psi(\xi^{\prime},\xi)\equiv 1\,\text{ for }|\xi^{\prime}|\leq\delta\langle\xi\rangle/2\,. \tag{2.14}\]
For \(p=0\) we set \(\psi_{0}\equiv 1\). We assume moreover that
\[\left|\partial_{\xi}^{\gamma}\partial_{\xi^{\prime}}^{\beta}\psi_{p}(\xi^{\prime},\xi)\right|\leq C_{\gamma,\beta}\langle\xi\rangle^{-\gamma-|\beta|},\ \forall\gamma\in\mathbb{N}_{0},\ \beta\in\mathbb{N}_{0}^{p},\ \ \left|\partial_{\xi}^{\gamma}\partial_{\xi^{\prime}}^{\beta}\psi(\xi^{\prime},\xi)\right|\leq C_{\gamma,\beta}\langle\xi\rangle^{-\gamma-\beta},\ \forall\gamma,\ \beta\in\mathbb{N}_{0}\,. \tag{2.15}\]
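For concreteness, one admissible choice (a possible construction, not the only one) is obtained by fixing an even function \(\varphi\in C^{\infty}_{c}(\mathbb{R};[0,1])\) with \(\varphi\equiv 1\) on \(\left[-\tfrac{1}{2},\tfrac{1}{2}\right]\) and \(\operatorname{supp}\varphi\subset[-1,1]\), and setting

\[\psi\left(\xi^{\prime},\xi\right):=\varphi\Big{(}\frac{\xi^{\prime}}{\delta\langle\xi\rangle}\Big{)}\,,\qquad\psi_{p}\left(\xi^{\prime},\xi\right):=\varphi\Big{(}\frac{\left|\xi^{\prime}\right|}{\delta\langle\xi\rangle}\Big{)}\,,\quad\xi^{\prime}\in\mathbb{R}^{p}\,;\]

the support and localization properties (2.13)-(2.14) are then immediate (note that \(\psi_{p}\) is smooth since \(\varphi\) is constant near the origin), and the bounds (2.15) follow from the chain rule, each derivative in \(\xi\) or \(\xi^{\prime}\) producing a factor of size \(\langle\xi\rangle^{-1}\) on the support.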
If \(a(x,\xi)\) is a smooth symbol we define its Weyl quantization as the operator acting on a \(2\pi\)-periodic function \(u(x)\) (written as in (2.2)) as
\[\operatorname{Op}^{W}\left[a\right]u=\sum_{k\in\mathbb{Z}}\left(\sum_{j\in\mathbb{Z}}\hat{a}\left(k-j,\frac{k+j}{2}\right)\hat{u}\left(j\right)\right)e^{\mathrm{i}kx} \tag{2.16}\]
where \(\hat{a}(k,\xi)\) is the \(k\)-th Fourier coefficient of the \(2\pi\)-periodic function \(x\mapsto a(x,\xi)\).
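As a quick check of (2.16), if the symbol does not depend on \(\xi\), say \(a(x,\xi)=b(x)\), then \(\hat{a}\left(k-j,\tfrac{k+j}{2}\right)=\hat{b}(k-j)\) and

\[\operatorname{Op}^{W}\left[b\right]u=\sum_{k\in\mathbb{Z}}\Big{(}\sum_{j\in\mathbb{Z}}\hat{b}\left(k-j\right)\hat{u}\left(j\right)\Big{)}e^{\mathrm{i}kx}=b\,u\,,\]

i.e. the Weyl quantization of a function is the corresponding multiplication operator; its Bony-Weyl version \(\operatorname{Op}^{BW}[b]\) defined below retains only the low-high frequency interactions of this product, in accordance with the paraproduct decomposition recalled in Lemma 2.22.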
**Definition 2.10** (Bony-Weyl quantization).: If \(a\) is a symbol in \(\widetilde{\Gamma}_{p}^{m}\), respectively in \(\Gamma_{K,K^{\prime},p}^{m}[\epsilon_{0}]\), we set
\[a_{\psi_{p}}(\mathcal{U};x,\xi) :=\sum_{\tilde{n}\in\mathbb{N}^{p}}\psi_{p}\left(\tilde{n},\xi\right)\ a\left(\Pi_{\tilde{n}}\mathcal{U};x,\xi\right)\,,\] \[a_{\psi}(u;t,x,\xi) :=\frac{1}{2\pi}\int_{\mathbb{R}}\psi(\xi^{\prime},\xi)\hat{a}\left(u;t,\xi^{\prime},\xi\right)e^{\mathrm{i}\xi^{\prime}x}\mathrm{d}\xi^{\prime}\,,\]
where \(\hat{a}\) stands for the Fourier transform with respect to the \(x\) variable, and we define the _Bony-Weyl_ quantization of \(a\) as
\[\operatorname{Op}^{BW}\left[a(\mathcal{U};\cdot)\right]=\operatorname{Op}^{W} \left[a_{\psi_{p}}\left(\mathcal{U};\cdot\right)\right],\qquad\operatorname{Op}^{ BW}\left[a(u;t,\cdot)\right]=\operatorname{Op}^{W}\left[a_{\psi}\left(u;t,\cdot \right)\right]\,. \tag{2.17}\]
If \(a\) is a symbol in \(\Sigma\Gamma^{m}_{K,K^{\prime},p}[e_{0},N]\), we define its _Bony-Weyl_ quantization
\[\operatorname{Op}^{BW}\left[a(u;t,\cdot)\right]=\sum_{q=p}^{N}\operatorname{Op }^{BW}\left[a_{q}(u,\ldots,u;\cdot)\right]+\operatorname{Op}^{BW}\left[a_{>N}(u ;t,\cdot)\right]\,.\]
**Remark 2.11**.: \(\bullet\) The operator \(\operatorname{Op}^{BW}\left[a\right]\) maps functions with zero average to functions with zero average, and \(\Pi_{0}^{\perp}\operatorname{Op}^{BW}\left[a\right]=\operatorname{Op}^{BW}\left[a\right]\Pi_{0}^{\perp}\).
\(\bullet\) If \(a\) is a homogeneous symbol, the two definitions of quantization in (2.17) differ by a smoothing operator according to Definition 2.17 below.
\(\bullet\) Definition 2.10 is independent of the cut-off functions \(\psi_{p}\), \(\psi\), up to smoothing operators (Definition 2.17).
\(\bullet\) The action of \(\operatorname{Op}^{BW}\left[a\right]\) on the spaces \(H^{s}_{0}\) only depends on the values of the symbol \(a(u;t,x,\xi)\) for \(|\xi|\geq 1\). Therefore, we may identify two symbols \(a(u;t,x,\xi)\) and \(b(u;t,x,\xi)\) if they agree for \(|\xi|\geq 1/2\). In particular, whenever we encounter a symbol that is not smooth at \(\xi=0\), such as, for example, \(a=g(x)|\xi|^{m}\) for \(m\in\mathbb{R}\setminus\{0\}\), or \(\operatorname{sign}(\xi)\), we will consider its smoothed out version \(\chi(\xi)a\), where \(\chi\in C^{\infty}(\mathbb{R};\mathbb{R})\) is an even and positive cut-off function satisfying
\[\chi(\xi)=0\,\,\,\text{if}\,\,|\xi|\leq\tfrac{1}{8}\,,\quad\chi(\xi)=1\,\,\, \text{if}\,\,|\xi|>\tfrac{1}{4}\,,\quad\partial_{\xi}\chi(\xi)>0\quad\forall \xi\in\left(\tfrac{1}{8},\tfrac{1}{4}\right). \tag{2.18}\]
**Remark 2.12**.: Given a paradifferential operator \(A=\operatorname{Op}^{BW}\left[a(x,\xi)\right]\) it results
\[\overline{A}=\operatorname{Op}^{BW}\left[\overline{a(x,-\xi)}\right]\,,\quad A^{\intercal}=\operatorname{Op}^{BW}\left[a(x,-\xi)\right]\,,\quad A^{*}=\operatorname{Op}^{BW}\left[\overline{a(x,\xi)}\right]\,, \tag{2.19}\]
where \(A^{\intercal}\) is the transposed operator with respect to the real scalar product \(\langle u,v\rangle_{r}=\int_{\mathbb{T}}u(x)\,v(x)\,\mathrm{d}x\), and \(A^{*}\) denotes the adjoint operator with respect to the complex scalar product of \(L^{2}_{0}\) in (2.4). It results \(A^{*}=\overline{A}^{\intercal}\).
\(\bullet\) A paradifferential operator \(A=\operatorname{Op}^{BW}\left[a(x,\xi)\right]\) is _real_ (i.e. \(A=\overline{A}\)) if
\[\overline{a(x,\xi)}=a(x,-\xi)\,. \tag{2.20}\]
It is _symmetric_ (i.e. \(A=A^{\intercal}\)) if \(a(x,\xi)=a(x,-\xi)\). An operator \(\partial_{x}\operatorname{Op}^{BW}\left[a(x,\xi)\right]\) is Hamiltonian if and only if
\[a(x,\xi)\in\mathbb{R}\quad\quad\text{and}\quad\quad a(x,\xi)=a(x,-\xi)\quad \text{is even}\,\,\,\text{in}\,\,\,\xi\,. \tag{2.21}\]
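For instance, the principal part of (1.13) fits this framework: the symbol \(\left(1+\nu\left(f;x\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)\) is real valued and even in \(\xi\) (it depends on \(\xi\) only through \(\left|\xi\right|\)), so that by (2.21) the operator \(\partial_{x}\circ\operatorname{Op}^{BW}\left[\left(1+\nu\left(f;x\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)\right]\) is linear Hamiltonian; this is the structure exploited to obtain the \(H^{s}\) energy estimates discussed in the introduction.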
We now provide the action of a paradifferential operator on Sobolev spaces, cf. [5, Prop. 3.8].
**Lemma 2.13** (Action of a paradifferential operator).: _Let \(m\in\mathbb{R}\)._
1. _If_ \(p\in\mathbb{N}\)_, there is_ \(s_{0}>0\) _such that for any symbol_ \(a\) _in_ \(\tilde{\Gamma}^{m}_{p}\)_, there is a constant_ \(C>0\)_, depending only on_ \(s\) _and on (_2.7_) with_ \(\gamma=\beta=0\)_, such that, for any_ \((u_{1},\ldots,u_{p})\)_, for_ \(p\geq 1\)_,_ \[\left\|\operatorname{Op}^{BW}\left[a(u_{1},\ldots,u_{p};\cdot)\right]\,u_{p+1} \right\|_{H^{s-m}_{0}}\leq C\left\|u_{1}\right\|_{H^{s_{0}}_{0}}\cdots\left\| u_{p}\right\|_{H^{s_{0}}_{0}}\left\|u_{p+1}\right\|_{H^{s}_{0}}.\] _If_ \(p=0\) _the above bound holds replacing the right hand side with_ \(C\left\|u_{p+1}\right\|_{H^{s}_{0}}\)_._
2. _Let_ \(e_{0}>0\)_,_ \(p\in\mathbb{N}\)_,_ \(K^{\prime}\leq K\in\mathbb{N}\)_,_ \(a\) _in_ \(\Gamma^{m}_{K,K^{\prime},p}[e_{0}]\)_. There is_ \(s_{0}>0\)_, and a constant_ \(C\)_, depending only on_ \(s,e_{0}\)_, and on (_2.9_) with_ \(0\leq\gamma\leq 2\)_,_ \(\beta=0\)_, such that, for any_ \(t\) _in_ \(I\)_, any_ \(0\leq k\leq K-K^{\prime}\)_, any_ \(u\) _in_ \(B^{K}_{s_{0}}(I;e_{0})\)_,_ \[\left\|\operatorname{Op}^{BW}\left[\partial_{t}^{k}a(u;t,\cdot)\right]\right\|_{ \mathcal{L}(H^{s}_{0},H^{s-m}_{0})}\leq C\left\|u(t,\cdot)\right\|_{k+K^{\prime },s_{0}}^{p},\] _so that_ \(\left\|\operatorname{Op}^{BW}\left[a(u;t,\cdot)\right]v(t)\right\|_{K-K^{ \prime},s-m}\leq C\left\|u(t,\cdot)\right\|_{K,s_{0}}^{p}\left\|v(t)\right\|_{ K-K^{\prime},s}\)_._
Classes of \(m\)-Operators and smoothing Operators. Given integers \((n_{1},\ldots,n_{p+1})\in\mathbb{N}^{p+1}\), we denote by \(\max_{2}(n_{1},\ldots,n_{p+1})\) the second largest among \(n_{1},\ldots,n_{p+1}\).
We now define \(m\)-operators. The class \(\widetilde{\mathcal{M}}^{m}_{p}\) denotes multilinear operators that lose \(m\) derivatives and are \(p\)-homogeneous in \(u\), while the class \(\mathcal{M}^{m}_{K,K^{\prime},p}\) contains non-homogeneous operators which lose \(m\) derivatives, vanish at degree at least \(p\) in \(u\), satisfy tame estimates and are \((K-K^{\prime})\)-times differentiable in \(t\). The constant \(\mu\) in (2.23) takes into account possible loss of derivatives in the "low" frequencies. The following definition is a small adaptation of [10, Def. 2.5] as it defines \(m\)-operators acting on \(H^{\infty}(\mathbb{T};\mathbb{C})\) and not on \(\dot{H}^{\infty}(\mathbb{T};\mathbb{C}^{2})\) (and we state it directly in Fourier series representation).
**Definition 2.14** (Classes of \(m\)-operators).: Let \(m\in\mathbb{R}\), \(p,N\in\mathbb{N}_{0}\), \(K,K^{\prime}\in\mathbb{N}_{0}\) with \(K^{\prime}\leq K\), and \(\epsilon_{0}>0\).
1. \(p\)**-homogeneous \(m\)-operators.** We denote by \(\widetilde{\mathcal{M}}^{m}_{p}\) the space of \((p+1)\)-linear translation invariant operators from \((H^{\infty}(\mathbb{T};\mathbb{C}))^{p}\times H^{\infty}(\mathbb{T};\mathbb{C})\) to \(H^{\infty}(\mathbb{T};\mathbb{C})\), symmetric in \((u_{1},\ldots,u_{p})\), with Fourier expansion \[M(u)\,v:=M\,(u,\ldots,u)\,v=\sum_{\begin{subarray}{c}(j_{1},\ldots,j_{p},j,k)\in\mathbb{Z}^{p+2}\\ j_{1}+\ldots+j_{p}+j=k\end{subarray}}M_{j_{1},\ldots,j_{p},j,k}\;u_{j_{1}}\ldots u_{j_{p}}v_{j}e^{\mathrm{i}k\,x}\,,\] (2.22) with coefficients \(M_{j_{1},\ldots,j_{p},j,k}\) symmetric in \(j_{1},\ldots,j_{p}\), satisfying the following: there are \(\mu\geq 0\), \(C>0\) such that, for any \((j_{1},\ldots,j_{p},j,k)\in\mathbb{Z}^{p+2}\), it results \[\Big{|}\,M_{j_{1},\ldots,j_{p},j,k}\Big{|}\leq C\,\max_{2}\,\{\langle j_{1}\rangle,\ldots,\langle j_{p}\rangle,\langle j\rangle\}^{\mu}\,\max\{\langle j_{1}\rangle,\ldots,\langle j_{p}\rangle,\langle j\rangle\}^{m}\,,\] (2.23) and the reality condition holds: \[\overline{M_{\overline{J}_{p},j,k}}=M_{-\overline{J}_{p},-j,-k}\,,\qquad\forall\,\overline{J}_{p}=\big{(}j_{1},\ldots,j_{p}\big{)}\in\mathbb{Z}^{p},\ \big{(}j,k\big{)}\in\mathbb{Z}^{2}\,.\] (2.24) If \(p=0\) the right hand side of (2.22) must be substituted with \(\sum_{j\in\mathbb{Z}}M_{j}\,v_{j}e^{\mathrm{i}jx}\) with \(\big{|}M_{j}\big{|}\leq C\,\big{\langle}j\big{\rangle}^{m}\).
2. **Non-homogeneous \(m\)-operators.** We denote by \(\mathcal{M}^{m}_{K,K^{\prime},p}[\epsilon_{0}]\) the space of operators \((u,t,v)\mapsto M(u;t)\,v\) defined on \(B_{C^{K^{\prime}}_{*}(I,H^{s_{0}}(\mathbb{T};\mathbb{C}))}\,(0;\epsilon_{0})\times I\times C^{0}_{*}(I,H^{\infty}(\mathbb{T};\mathbb{C}))\) for some \(s_{0}>0\), which are linear in the variable \(v\) and such that the following holds true. For any \(s\geq s_{0}\) there are \(C>0\) and \(\epsilon_{0}(s)\in]0,\epsilon_{0}[\) such that for any \(u\in B_{C^{K^{\prime}}_{*}(I,H^{s_{0}}(\mathbb{T};\mathbb{C}))}\,(0;\epsilon_{0}(s))\cap C^{K}_{*}(I,H^{s}(\mathbb{T};\mathbb{C}))\), any \(v\in C^{K-K^{\prime}}_{*}(I,H^{s}(\mathbb{T};\mathbb{C}))\), any \(0\leq k\leq K-K^{\prime}\), \(t\in I\), we have that \[\Big{\|}\partial_{t}^{k}\,(M(u;t)\,v)\Big{\|}_{s-\alpha k-m}\leq C\sum_{k^{\prime}+k^{\prime\prime}=k}\Big{(}\|v\|_{k^{\prime},s}\|u\|_{k^{\prime\prime}+K^{\prime},s_{0}}^{p}+\|v\|_{k^{\prime},s_{0}}\|u\|_{k^{\prime\prime}+K^{\prime},s_{0}}^{p-1}\|u\|_{k^{\prime\prime}+K^{\prime},s}\Big{)}\,.\] (2.25) In case \(p=0\) we require the estimate \(\|\partial_{t}^{k}\,(M(u;t)\,v)\|_{s-\alpha k-m}\leq C\|v\|_{k,s}\). We say that a non-homogeneous \(m\)-operator \(M(u;t)\) is _real_ if it is real valued for any \(u\in B_{C^{K^{\prime}}_{*}(I,H^{s_{0}}(\mathbb{T};\mathbb{R}))}\,(0;\epsilon_{0})\).
3. \(m\)**-Operators.** We denote by \(\Sigma\mathcal{M}^{m}_{K,K^{\prime},p}[\epsilon_{0},N]\) the space of operators \[M(u;t)\,v=\sum_{q=p}^{N}M_{q}(u,\ldots,u)\,v+M_{>N}(u;t)\,v\] (2.26) where \(M_{q}\) are homogeneous \(m\)-operators in \(\widetilde{\mathcal{M}}^{m}_{q}\), \(q=p,\ldots,N\), and \(M_{>N}\) is a non-homogeneous \(m\)-operator in \(\mathcal{M}^{m}_{K,K^{\prime},N+1}[\epsilon_{0}]\). We say that an \(m\)-operator \(M(u;t)\) is _real_ if it is real valued for any \(u\in B_{C^{K^{\prime}}_{*}(I,H^{s_{0}}(\mathbb{T};\mathbb{R}))}\,(0;\epsilon_{0})\).
4. **Pluri-homogeneous \(m\)-Operator.** We denote by \(\Sigma^{N}_{p}\widetilde{\mathcal{M}}^{m}_{q}\) the pluri-homogeneous \(m\)-operators of the form (2.26) with \(M_{>N}=0\).
We denote with \(\dot{\widetilde{\mathcal{M}}}^{m}_{p}\), \(\dot{\mathcal{M}}^{m}_{K,K^{\prime},p}[\epsilon_{0}]\) and \(\Sigma\dot{\mathcal{M}}^{m}_{K,K^{\prime},p}\,[\epsilon_{0},N]\) the subspaces of \(m\)-operators in \(\widetilde{\mathcal{M}}^{m}_{p}\), respectively \(\mathcal{M}^{m}_{K,K^{\prime},p}\,[\epsilon_{0}]\) and \(\Sigma\mathcal{M}^{m}_{K,K^{\prime},p}[\epsilon_{0},N]\), defined on zero-average functions and taking values \(M(u)\,v\) in zero-average functions.
**Remark 2.15**.: By [10, Lemma 2.8], if \(M(u_{1},\ldots,u_{p})\) is a \(p\)-homogeneous \(m\)-operator in \(\widetilde{\mathcal{M}}_{p}^{m}\) then \(M(u)=M(u,\ldots,u)\) is a non-homogeneous \(m\)-operator in \(\mathcal{M}_{K,0,p}^{m}[\epsilon_{0}]\) for any \(\epsilon_{0}>0\) and \(K\in\mathbb{N}_{0}\). We shall simply say that \(M(u)\) is in \(\widetilde{\mathcal{M}}_{p}^{m}\).
**Remark 2.16**.: The multiplication operator \(v\mapsto\frac{1}{1+2f}v\) belongs to \(\Sigma\mathcal{M}_{K,0,0}^{0}[\epsilon_{0},N]\).
If \(m\leq 0\) the operators in \(\Sigma\mathcal{M}_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\) are referred to as smoothing operators.
**Definition 2.17** (Smoothing operators).: Let \(\rho\geq 0\). A \((-\rho)\)-operator \(R(u)\) belonging to \(\Sigma\mathcal{M}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0},N]\) is called a smoothing operator. We also denote
\[\widetilde{\mathcal{R}}_{p}^{-\rho}:=\widetilde{\mathcal{M}}_{p}^{-\rho}\,,\qquad\qquad\mathcal{R}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0}]:=\mathcal{M}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0}]\,,\qquad\qquad\Sigma\mathcal{R}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0},N]:=\Sigma\mathcal{M}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0},N]\,.\]
We define \(\dot{\widetilde{\mathcal{R}}}_{p}^{-\rho}=\dot{\widetilde{\mathcal{M}}}_{p}^{-\rho}\), \(\dot{\mathcal{R}}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0}]=\dot{\mathcal{M}}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0}]\) and \(\Sigma\dot{\mathcal{R}}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0},N]=\Sigma\dot{\mathcal{M}}_{K,K^{\prime},p}^{-\rho}[\epsilon_{0},N]\) as in Definition 2.14.
If \(R(u)\) is a homogeneous smoothing operator in \(\widetilde{\mathcal{R}}_{p}^{-\rho}\) then \(\Pi_{0}^{\perp}R(u)\), where \(\Pi_{0}^{\perp}\) is defined in (2.3), restricted to zero average functions \(u\), belongs to \(\dot{\widetilde{\mathcal{R}}}_{p}^{-\rho}\).
**Remark 2.18**.: \(\bullet\) Lemma 2.13 implies that, if \(a(u;t,\cdot)\) is in \(\Sigma\Gamma_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\), \(m\in\mathbb{R}\), then \(\operatorname{Op}^{BW}[a(u;t,\cdot)]\) defines an \(m\)-operator in \(\Sigma\mathcal{M}_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\).
\(\bullet\) The composition of smoothing operators \(R_{1}\in\Sigma\mathcal{R}_{K,K^{\prime},p_{1}}^{-\rho}[\epsilon_{0},N]\) and \(R_{2}\in\Sigma\mathcal{R}_{K,K^{\prime},p_{2}}^{-\rho}[\epsilon_{0},N]\) is a smoothing operator \(R_{1}R_{2}\) in \(\Sigma\mathcal{R}_{K,K^{\prime},p_{1}+p_{2}}^{-\rho}[\epsilon_{0},N]\). This is a particular case of Proposition 2.23-(_i_).
**Lemma 2.19**.: _Let \(m\in\mathbb{R}\), \(\epsilon_{0}>0\), \(K,K^{\prime}\in\mathbb{N}_{0}\), \(K^{\prime}\leq K\), \(N,p\in\mathbb{N}_{0}\), \(u\in B_{s,\mathbb{R}}^{K}(I;\epsilon_{0})\) and \(M(u;t)\) be a real operator in \(\Sigma\mathcal{M}_{K,K^{\prime},p}^{m}[\epsilon_{0},N]\). Then \(M(u;t)u\) is a real function in \(\Sigma\mathcal{F}_{K,K^{\prime},p+1}^{\mathbb{R}}[\epsilon_{0},N+1]\) according to Definition 2.7._
Proof.: We decompose \(M(u;t)=\sum_{q=p}^{N}M_{q}(u)+M_{>N}(u;t)\) in the usual homogeneous and non-homogeneous components. We assume \(u\) is in \(B_{s,\mathbb{R}}^{K}(I;\epsilon_{0})\) so that \(u\) has zero average. We now prove that \(M_{q}(u)u\) is a function in \(\widetilde{\mathcal{F}}_{q+1}^{\mathbb{R}}\). For any zero average function \(u\), according to (2.22) we have
\[(M_{q}(u)u)\,(x)=\sum_{(j_{1},\ldots,j_{q},j)\in(\mathbb{Z}\setminus\{0\})^{q+1}}M_{j_{1},\ldots,j_{q},j,k}\;u_{j_{1}}\ldots u_{j_{q}}u_{j}\;e^{\mathrm{i}kx}\,,\qquad k:=j_{1}+\ldots+j_{q}+j\,.\]
Moreover, by (2.23), for any \((j_{1},\ldots,j_{q},j)\in(\mathbb{Z}\setminus\{0\})^{q+1}\), we have
\[\left|M_{j_{1},\ldots,j_{q},j,k}\right| \lesssim\max_{2}\left\{\langle j_{1}\rangle,\ldots,\langle j_{q}\rangle,\langle j\rangle\right\}^{\mu}\max\left\{\langle j_{1}\rangle,\ldots,\langle j_{q}\rangle,\langle j\rangle\right\}^{m}\] \[\lesssim\max\left\{\langle j_{1}\rangle,\ldots,\langle j_{q}\rangle,\langle j\rangle\right\}^{2\max\{\mu,m\}}\lesssim\left|\left(\mathcal{J}_{q},j\right)\right|^{2\max\{\mu,m\}},\]
and, in view of (2.11), we thus obtain that \(M_{q}(u)u\) is a function in \(\widetilde{\mathcal{F}}_{q+1}\). In view of (2.24) the function \(M_{q}(u)u\) is real.
We now prove that \((M_{>N}(u;t)\,u)\,(t,x)\) is a function in \(\mathcal{F}_{K,K^{\prime},N+2}^{\mathbb{R}}[\epsilon_{0}]\). Let \(s_{0}:=1+\alpha\left(K-K^{\prime}\right)+m\). For any \(0\leq k\leq K-K^{\prime}\), for any \(s\geq s_{0}\), and \(0\leq\gamma\leq s-s_{0}\) we have that \(s-\alpha k-m>\gamma+1\), and
\[\left|\partial_{t}^{k}\partial_{x}^{\gamma}(M_{>N}(u;t)\,u)\right|\lesssim\left\|\partial_{t}^{k}(M_{>N}(u;t)\,u)\right\|_{\gamma+1}\lesssim\left\|\partial_{t}^{k}(M_{>N}(u;t)\,u)\right\|_{s-\alpha k-m}\overset{(2.25)}{\lesssim}\|u\|_{k+K^{\prime},s_{0}}^{N+1}\|u\|_{k+K^{\prime},s}\]
proving, in view of Definitions 2.2 and 2.7, that \(M_{>N}(u;t)\,u\) is a function in \(\mathcal{F}_{K,K^{\prime},N+2}[\epsilon_{0}]\). The reality condition is verified since \(M_{>N}\) is a real \(m\)-operator by hypothesis.
Symbolic calculus. Let \(\sigma(D_{x},D_{\xi},D_{y},D_{\eta}):=D_{\xi}D_{y}-D_{x}D_{\eta}\) where \(D_{x}:=\frac{1}{\mathrm{i}}\partial_{x}\) and \(D_{\xi},D_{y},D_{\eta}\) are similarly defined. The following is Definition 3.11 in [5].
**Definition 2.20** (Asymptotic expansion of composition symbol).: Let \(p\), \(p^{\prime}\) in \(\mathbb{N}_{0}\), \(K,K^{\prime}\in\mathbb{N}_{0}\) with \(K^{\prime}\leq K\), \(\rho\geq 0\), \(m,m^{\prime}\in\mathbb{R}\), \(\epsilon_{0}>0\). Consider symbols \(a\in\Sigma\Gamma^{m}_{K,K^{\prime},p}[\epsilon_{0},N]\) and \(b\in\Sigma\Gamma^{m^{\prime}}_{K,K^{\prime},p^{\prime}}[\epsilon_{0},N]\). For \(u\) in \(B^{K}_{\sigma}(I;\epsilon_{0})\) we define, for \(\rho<\sigma-s_{0}\), the symbol
\[(a\#_{\rho}b)\,(u;t,x,\xi):=\sum_{k=0}^{\rho}\frac{1}{k!}\left(\frac{\mathrm{i }}{2}\sigma\left(D_{x},D_{\xi},D_{y},D_{\eta}\right)\right)^{k}\left[a(u;t,x, \xi)b(u;t,y,\eta)\right]_{|_{x=y,\xi=\eta}} \tag{2.27}\]
modulo symbols in \(\Sigma\Gamma^{m+m^{\prime}-\rho}_{K,K^{\prime},p+p^{\prime}}[\epsilon_{0},N]\).
The symbol \(a\#_{\rho}b\) belongs to \(\Sigma\Gamma^{m+m^{\prime}}_{K,K^{\prime},p+p^{\prime}}[\epsilon_{0},N]\). Moreover
\[a\#_{\rho}b=ab+\frac{1}{2\mathrm{i}}\{a,b\} \tag{2.28}\]
up to a symbol in \(\Sigma\Gamma^{m+m^{\prime}-2}_{K,K^{\prime},p+p^{\prime}}[\epsilon_{0},N]\), where
\[\{a,b\}:=\partial_{\xi}a\,\partial_{x}b-\partial_{x}a\,\partial_{\xi}b\]
denotes the Poisson bracket. The following result is proved in Proposition 3.12 in [5].
**Proposition 2.21** (Composition of Bony-Weyl operators).: _Let \(p,q,N,K,K^{\prime}\in\mathbb{N}_{0}\) with \(K^{\prime}\leq K\), \(\rho\geq 0\), \(m,m^{\prime}\in\mathbb{R}\), \(\epsilon_{0}>0\). Consider symbols \(a\in\Sigma\Gamma^{m}_{K,K^{\prime},p}[\epsilon_{0},N]\) and \(b\in\Sigma\Gamma^{m^{\prime}}_{K,K^{\prime},q}[\epsilon_{0},N]\). Then_
\[\mathrm{Op}^{BW}\left[a(u;t,x,\xi)\right]\circ\mathrm{Op}^{BW}\left[b(u;t,x, \xi)\right]-\mathrm{Op}^{BW}\left[(a\#_{\rho}b)(u;t,x,\xi)\right] \tag{2.29}\]
_is a smoothing operator in \(\Sigma\mathcal{R}^{-\rho+m+m^{\prime}}_{K,K^{\prime},p+q}[\epsilon_{0},N]\)._
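As an elementary illustration (a sketch, with \(a=a(u;x)\) a real function as in Definition 2.7, viewed as a symbol of order \(0\), and \(b(\xi)=\xi\), so that \(\operatorname{Op}^{BW}[b]=D\)), the expansion (2.27) terminates after one step, because \(a\) is \(\xi\)-independent and \(b\) is linear in \(\xi\):

\[a\#_{\rho}\,\xi=a\,\xi+\frac{\mathrm{i}}{2}\,\partial_{x}a\qquad\text{for every }\rho\geq 1\,,\]

in agreement with (2.28) since \(\{a,\xi\}=-\partial_{x}a\). Proposition 2.21 then says that \(\operatorname{Op}^{BW}[a]\circ D\) coincides with \(\operatorname{Op}^{BW}\left[a\,\xi+\frac{\mathrm{i}}{2}\partial_{x}a\right]\) up to an arbitrarily smoothing remainder.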
We have the following result, see e.g. Lemma 7.2 in [5].
**Lemma 2.22** (Bony paraproduct decomposition).: _Let \(u_{1},u_{2}\) be functions in \(H^{\sigma}(\mathbb{T};\mathbb{C})\) with \(\sigma>\frac{1}{2}\). Then_
\[u_{1}u_{2}=\mathrm{Op}^{BW}\left[u_{1}\right]u_{2}+\mathrm{Op}^{BW}\left[u_{2 }\right]u_{1}+R_{1}(u_{1})u_{2}+R_{2}(u_{2})u_{1} \tag{2.30}\]
_where for \(\mathrm{j}=1,2\), \(R_{\mathrm{j}}\) is a homogeneous smoothing operator in \(\widetilde{\mathcal{R}}^{-\rho}_{1}\) for any \(\rho\geq 0\)._
We now state other composition results for \(m\)-operators which follow as in [10, Proposition 2.15].
**Proposition 2.23** (Compositions of \(m\)-operators).: _Let \(p,p^{\prime},N,K,K^{\prime}\in\mathbb{N}_{0}\) with \(K^{\prime}\leq K\) and \(\epsilon_{0}>0\). Let \(m,m^{\prime}\in\mathbb{R}\). Then_
* _If_ \(M(u;t)\) _is in_ \(\Sigma\mathcal{M}^{m}_{K,K^{\prime},p}[\epsilon_{0},N]\) _and_ \(M^{\prime}(u;t)\) _is in_ \(\Sigma\mathcal{M}^{m^{\prime}}_{K,K^{\prime},p^{\prime}}[\epsilon_{0},N]\) _then the composition_ \(M(u;t)\circ M^{\prime}(u;t)\) _is in_ \(\Sigma\mathcal{M}^{m+\max(m^{\prime},0)}_{K,K^{\prime},p+p^{\prime}}[\epsilon_ {0},N]\)_._
* _If_ \(M(u)\) _is a homogeneous_ \(m\)_-operator in_ \(\widetilde{\mathcal{M}}^{m}_{p}\) _and_ \(M^{(\ell)}(u;t)\)_,_ \(\ell=1,\ldots,p+1\)_, are matrices of_ \(m_{\ell}\)_-operators in_ \(\Sigma\mathcal{M}^{m_{\ell}}_{K,K^{\prime},q_{\ell}}[\epsilon_{0},N]\) _with_ \(m_{\ell}\in\mathbb{R}\)_,_ \(q_{\ell}\in\mathbb{N}_{0}\)_, then_ \[M\left(M^{(1)}(u;t)u,\ldots,M^{(p)}(u;t)u\right)M^{(p+1)}(u;t)\] _belongs to_ \(\Sigma\mathcal{M}^{m+\hat{m}}_{K,K^{\prime},p+\hat{q}}[\epsilon_{0},N]\) _with_ \(\hat{m}:=\sum_{\ell=1}^{p+1}\max(m_{\ell},0)\) _and_ \(\hat{q}:=\sum_{\ell=1}^{p+1}q_{\ell}\)_._
* _If_ \(M(u;t)\) _is in_ \(\mathcal{M}^{m}_{K,0,p}[\epsilon_{0}]\) _for any_ \(\epsilon_{0}\in\mathbb{R}^{+}\) _and_ \(\mathbb{M}_{0}(u;t)\) _belongs to_ \(\mathcal{M}^{0}_{K,K^{\prime},0}[\epsilon_{0}]\)_, then_ \(M(\mathbb{M}_{0}(u;t)u;t)\) _is in_ \(\mathcal{M}^{m}_{K,K^{\prime},p}[\epsilon_{0}]\)_._
* _Let_ \(a\) _be a symbol in_ \(\Sigma\Gamma^{m}_{K,K^{\prime},p}[\epsilon_{0},N]\) _with_ \(m\geq 0\) _and_ \(R\) _a smoothing operator in_ \(\Sigma\mathcal{R}^{-\rho}_{K,K^{\prime},p^{\prime}}[\epsilon_{0},N]\)_. Then the compositions_ \(\operatorname{Op}^{BW}[a]\circ R\) _and_ \(R\circ\operatorname{Op}^{BW}[a]\) _are smoothing operators in_ \(\Sigma\mathcal{R}^{-\rho+m}_{K,K^{\prime},p+p^{\prime}}[\epsilon_{0},N]\)_._
**Notation 2.24**.: In the sequel if \(K^{\prime}=0\) we denote a symbol \(a(u;t,x,\xi)\) in \(\Gamma^{m}_{K,0,p}[\epsilon_{0}]\) simply as \(a(u;x,\xi)\), and a smoothing operator \(R(u;t)\) in \(\Sigma\mathcal{R}^{-\rho}_{K,0,p}[\epsilon_{0},N]\) simply as \(R(u)\), without writing the \(t\)-dependence.
We finally provide the Bony paralinearization formula of the composition operator.
**Lemma 2.25** (Bony Paralinearization formula).: _Let \(F\) be a smooth \(\mathbb{C}\)-valued function defined on a neighborhood of zero in \(\mathbb{C}\), vanishing at zero at order \(q\in\mathbb{N}\). Then there is \(\epsilon_{0}>0\) and a smoothing operator \(R(u)\) in \(\Sigma\mathcal{R}^{-\rho}_{K,0,q^{\prime}}[\epsilon_{0},N]\), \(q^{\prime}:=\max(q-1,1)\), for any \(\rho\), such that_
\[F(u)=\,\operatorname{Op}^{BW}\left[F^{\prime}(u)\right]u+R(u)\,u\,. \tag{2.31}\]
Proof.: The formula follows by combination of [5, Lemmata 3.19 and 7.2].
### \(z\)-dependent paradifferential calculus
Along the paralinearization process of the \(\alpha\)-SQG equation in Section 4 we shall encounter parameter dependent paradifferential operators depending on a \(2\pi\)-periodic variable \(z\). The following "Kernel-functions" have to be considered as Taylor remainders of maps of the form \(F(u;x,z)\) at \(z=0\) which are smooth in \(u\) and with finite regularity in \(x\) and \(z\). We are interested in the behavior of such functions close to \(z=0\).
**Definition 2.26** (Kernel functions).: Let \(n\in\mathbb{R}\), \(p,N\in\mathbb{N}_{0}\), \(K\in\mathbb{N}_{0}\), and \(\epsilon_{0}>0\).
1. \(p\)**-homogeneous Kernel-functions.** If \(p\in\mathbb{N}\) we denote \(\widetilde{K\mathcal{F}}^{n}_{p}\) the space of \(z\)-dependent, \(p\)-homogeneous maps from \(H^{\infty}_{0}(\mathbb{T};\mathbb{C})\) to the space of \(x\)-translation invariant real functions \(\varrho(u;x,z)\) of class \(\mathcal{C}^{\infty}\) in \((x,z)\in\mathbb{T}^{2}\) with Fourier expansion \[\varrho(u;x,z)=\sum_{j_{1},\ldots,j_{p}\in\mathbb{Z}\setminus\{0\}}\varrho_{ j_{1},\ldots,j_{p}}(z)\,u_{j_{1}}\cdots u_{j_{p}}e^{\mathrm{i}\{j_{1}+\cdots+j_{p}\}x},\quad z\in\mathbb{T}\setminus\{0\}\,,\] (2.32) with coefficients \(\varrho_{j_{1},\ldots,j_{p}}(z)\) of class \(\mathcal{C}^{\infty}(\mathbb{T};\mathbb{C})\), symmetric in \((j_{1},\ldots,j_{p})\), satisfying the reality condition \(\overline{\varrho_{j_{1},\ldots,j_{p}}}(z)=\varrho_{-j_{1},\ldots,-j_{p}}(z)\) and the following: for any \(l\in\mathbb{N}_{0}\), there exist \(\mu>0\) and a constant \(C>0\) such that \[\left|\partial_{z}^{l}\varrho_{j_{1},\ldots,j_{p}}(z)\right|\leq C\left|J \right|^{\mu}\,|z|_{\mathbb{T}}^{n-l}\,,\quad\forall\,\mathcal{J}=(j_{1}, \ldots,j_{p})\in(\mathbb{Z}\setminus\{0\})^{p}\,.\] (2.33) For \(p=0\) we denote by \(\widetilde{K\mathcal{F}}^{n}_{0}\) the space of maps \(z\mapsto\varrho(z)\) which satisfy \(\left|\partial_{z}^{l}\varrho(z)\right|\leq C\left|z\right|_{\mathbb{T}}^{n-l}\).
2. **Non-homogeneous Kernel-functions.** We denote by \(K\mathcal{F}^{n}_{K,0,p}[e_{0}]\) the space of \(z\)-dependent, real functions \(\varrho(u;x,z)\), defined for \(u\in B^{0}_{s_{0}}(I;e_{0})\) for some \(s_{0}\) large enough, such that for any \(0\leq k\leq K\) and \(l\leq\max\left\{0,\lceil 1+n\rceil\right\}\), any \(s\geq s_{0}\), there are \(C>0\), \(0<\epsilon_{0}(s)<\epsilon_{0}\) and for any \(u\in B^{K}_{s_{0}}(I;e_{0}(s))\cap C^{k}_{*}\left(I,H^{s}_{0}(\mathbb{T}; \mathbb{C})\right)\) and any \(\gamma\in\mathbb{N}_{0}\), with \(\gamma\leq s-s_{0}\), one has the estimate \[\left|\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{z}^{l}\varrho(u;x,z) \right|\leq C\left|u\right|_{k,s_{0}}^{p-1}\left|u\right|_{k,s}\,|z|_{\mathbb{T }}^{n-l}\,,\quad\quad z\in\mathbb{T}\setminus\{0\}\,.\] (2.34) If \(p=0\) the right hand side in (2.34) has to be replaced by \(|z|_{\mathbb{T}}^{n-l}\).
3. **Kernel-functions.** We denote by \(\Sigma K\mathcal{F}^{n}_{K,0,p}[e_{0},N]\) the space of real functions of the form \[\varrho(u;x,z)=\sum_{q=p}^{N}\varrho_{q}\left(u;x,z\right)+\varrho_{>N}(u;x,z)\] (2.35) where \(\varrho_{q}\left(u;x,z\right)\), \(q=p,\ldots,N\) are homogeneous Kernel functions in \(\widetilde{K\mathcal{F}}^{n}_{q}\), and \(\varrho_{>N}(u;x,z)\) is a non-homogeneous Kernel function in \(K\mathcal{F}^{n}_{K,0,N+1}[e_{0}]\). A Kernel function \(\varrho(u;x,z)\) is _real_ if it is real valued for any \(u\in B^{0}_{s_{0},\mathbb{R}}(I;e_{0})\).
In view of Remark 2.4, a homogeneous Kernel function \(\varrho(u;x,z)\) in \(\widetilde{K\mathcal{F}}^{n}_{p}\) defines a non-homogeneous Kernel function in \(K\mathcal{F}^{n}_{K,0,p}[\epsilon_{0}]\) for any \(\epsilon_{0}>0\).
**Remark 2.27**.: Let \(\varrho(u;x,z)\) be a Kernel function in \(\Sigma K\mathcal{F}^{n}_{K,0,p}\left[\epsilon_{0},N\right]\) with \(n\geq 0\), which admits a continuous extension at \(z=0\). Then its trace \(\varrho(u;x,0)\) is a function in \(\Sigma\mathcal{F}^{n}_{K,0,p}\left[\epsilon_{0},N\right]\).
**Remark 2.28**.: If \(\varrho(u;x,z)\) is a homogeneous Kernel function in \(\widetilde{K\mathcal{F}}^{n}_{p}\), the two definitions of quantization in (2.17) differ by a Kernel smoothing operator in \(\overline{K}\mathcal{R}^{-\rho,n}_{p}\), for any \(\rho>0\), according to Definition 2.33 below.
**Remark 2.29**.: If \(\varrho_{1}(u;x,z)\) is a Kernel function in \(\Sigma K\mathcal{F}^{n_{1}}_{K,0,p_{1}}\left[\epsilon_{0},N\right]\) and \(\varrho_{2}(u;x,z)\) in \(\Sigma K\mathcal{F}^{n_{2}}_{K,0,p_{2}}\left[\epsilon_{0},N\right]\), then the sum \((\varrho_{1}+\varrho_{2})(u;x,z)\) is a Kernel function in \(\Sigma K\mathcal{F}^{\min(n_{1},n_{2})}_{K,0,\min(p_{1},p_{2})}\left[\epsilon_{0},N\right]\) and the product \((\varrho_{1}\varrho_{2})(u;x,z)\) is a Kernel function in \(\Sigma K\mathcal{F}^{n_{1}+n_{2}}_{K,0,p_{1}+p_{2}}\left[\epsilon_{0},N\right]\).
**Remark 2.30**.: Let \(\varrho(u;x,z)\) be a Kernel function in \(\Sigma K\mathcal{F}^{n}_{K,0,p}\left[\epsilon_{0},N\right]\) with \(n>-1\). Then \(\int\varrho(u;x,z)\,\mathrm{d}z\) is a function in \(\Sigma\mathcal{F}^{n}_{K,0,p}\left[\epsilon_{0},N\right]\). This follows directly by integrating (2.33) and (2.34) in \(z\).
The \(m\)-Kernel-operators defined below are a \(z\)-dependent family of \(m\)-operators whose coefficients are of size \(|z|_{\mathbb{T}}^{n}\). They appear for example as smoothing operators in the composition of Bony-Weyl quantizations of Kernel-functions.
**Definition 2.31**.: Let \(m,n\in\mathbb{R}\), \(p,N\in\mathbb{N}_{0}\), \(K\in\mathbb{N}_{0}\) and \(\epsilon_{0}>0\).
1. \(p\)**-homogeneous \(m\)-Kernel-operator.** We denote by \(\overline{K}\mathcal{M}^{m,n}_{p}\) the space of \(z\)-dependent, \(x\)-translation invariant homogeneous \(m\)-operators according to Definition 2.14, Item i, in which the constant \(C\) is substituted with \(C|z|_{\mathbb{T}}^{n}\), equivalently \[M(u;z)\,v(x)=\sum_{\begin{subarray}{c}(\vec{\jmath}_{p},j,k)\in\mathbb{Z}^{p+2}\\ j_{1}+\ldots+j_{p}+j=k\end{subarray}}M_{\vec{\jmath}_{p},j,k}\left(z\right)u_{j_{1}}\ldots u_{j_{p}}\,v_{j}\,e^{\mathrm{i}kx}\,,\qquad z\in\mathbb{T}\setminus\left\{0\right\},\] (2.36) with coefficients satisfying \[|M_{\vec{\jmath}_{p},j,k}\left(z\right)|\leq C\max_{2}\left\{\langle j_{1}\rangle,\ldots,\langle j_{p}\rangle,\langle j\rangle\right\}^{\mu}\,\max\left\{\langle j_{1}\rangle,\ldots,\langle j_{p}\rangle,\langle j\rangle\right\}^{m}|z|_{\mathbb{T}}^{n}\,.\] (2.37) If \(p=0\) the right hand side of (2.36) is replaced by \(\sum_{j\in\mathbb{Z}}M_{j}\left(z\right)v_{j}\,e^{\mathrm{i}jx}\) with \(|M_{j}\left(z\right)|\leq C\left\langle j\right\rangle^{m}|z|_{\mathbb{T}}^{n}\).
2. **Non-homogeneous \(m\)-Kernel-operator.** We denote by \(K\mathcal{M}^{m,n}_{K,0,p}[\epsilon_{0}]\) the space of \(z\)-dependent, non-homogeneous operators \(M(u;z)\,v\) defined for any \(z\in\mathbb{T}\setminus\left\{0\right\}\), such that for any \(0\leq k\leq K\) \[\left\|\partial_{t}^{k}\left(M(u;z)\,v\right)\right\|_{s-ak-m}\leq C\,|z|_{\mathbb{T}}^{n}\,\sum_{k^{\prime}+k^{\prime\prime}=k}\left(\|\,v\|_{k^{\prime\prime},s}\|u\|_{k^{\prime},s_{0}}^{p}+\|\,v\|_{k^{\prime\prime},s_{0}}\|u\|_{k^{\prime},s_{0}}^{p-1}\|u\|_{k^{\prime},s}\right).\] (2.38)
3. \(m\)**-Kernel-Operator.** We denote by \(\Sigma K\mathcal{M}^{m,n}_{K,0,p}[\epsilon_{0},N]\) the space of operators of the form \[M(u;z)\,v=\sum_{q=p}^{N}M_{q}(u,\ldots,u;z)\,v+M_{>N}(u;z)\,v\] (2.39) where \(M_{q}\) are homogeneous \(m\)-Kernel operators in \(\overline{K}\mathcal{M}^{m,n}_{q}\), \(q=p,\ldots,N\), and \(M_{>N}\) is a non-homogeneous \(m\)-Kernel-operator in \(K\mathcal{M}^{m,n}_{K,0,N+1}[\epsilon_{0}]\).
4. **Pluri-homogeneous \(m\)-Kernel-Operator.** We denote by \(\Sigma_{p}^{N}\overline{K}\mathcal{M}^{m,n}_{q}\) the space of pluri-homogeneous \(m\)-Kernel-operators of the form (2.39) with \(M_{>N}=0\).
**Remark 2.32**.: Given \(\varrho(u;x,z)\in\Sigma K\mathcal{F}^{n}_{K,0,p}\left[\epsilon_{0},N\right]\) then \(\mathrm{Op}^{BW}\left[\varrho(u;x,z)\right]\in\Sigma K\mathcal{M}^{0,n}_{K,0, p}\left[\epsilon_{0},N\right]\).
**Definition 2.33** (Kernel-smoothing operators).: Given \(\rho>0\) we define the homogeneous and non-homogeneous Kernel-smoothing operators as
\[\overline{K}\mathcal{R}^{-\rho,n}_{p}:=\overline{K}\mathcal{M}^{-\rho,n}_{p}, \qquad K\mathcal{R}^{-\rho,n}_{K,0,p}\left[\epsilon_{0}\right]:=K\mathcal{M}^{- \rho,n}_{K,0,p}\left[\epsilon_{0}\right],\qquad\Sigma K\mathcal{R}^{-\rho,n}_{K,0,p}\left[\epsilon_{0},N\right]:=\Sigma K\mathcal{M}^{-\rho,n}_{K,0,p}\left[ \epsilon_{0},N\right].\]
In view of [10, Lemma 2.8], if \(M(u,\ldots,u;z)\) is a homogeneous \(m\)-Kernel operator in \(\overline{K}\mathcal{M}^{m,n}_{p}\) then \(M(u,\ldots,u;z)\) defines a non-homogeneous \(m\)-Kernel operator in \(K\mathcal{M}^{m,n}_{K,0,p}\left[\epsilon_{0}\right]\) for any \(\epsilon_{0}>0\) and \(K\in\mathbb{N}_{0}\).
**Proposition 2.34** (Composition of \(z\)-dependent operators).: _Let \(m,n,m^{\prime},n^{\prime}\in\mathbb{R}\), and integers \(K,p,p^{\prime},N\in\mathbb{N}_{0}\) with \(p,p^{\prime}\leq N\)._
1. _Let_ \(\varrho\left(u;x,z\right)\in\Sigma K\mathcal{F}_{K,0,p}^{n}\left[\epsilon_{0},N\right]\) _and_ \(\varrho^{\prime}\left(u;x,z\right)\in\Sigma K\mathcal{F}_{K,0,p^{\prime}}^{n^{\prime}}\left[\epsilon_{0},N\right]\) _be Kernel functions. Then_ \[\operatorname{Op}^{BW}\left[\varrho\left(u;x,z\right)\right]\circ\operatorname{Op}^{BW}\left[\varrho^{\prime}\left(u;x,z\right)\right]=\operatorname{Op}^{BW}\left[\varrho\,\varrho^{\prime}\left(u;x,z\right)\right]+R\left(u;z\right)\] _where_ \(R\left(u;z\right)\) _is a Kernel-smoothing operator in_ \(\Sigma K\mathcal{R}_{K,0,p+p^{\prime}}^{-\rho,n+n^{\prime}}\left[\epsilon_{0},N\right]\) _for any_ \(\rho\geq 0\)_;_
2. _Let_ \(M\left(u;z\right)\) _be a_ \(m\)_-operator in_ \(\Sigma K\mathcal{M}_{K,0,p}^{m,n}\left[\epsilon_{0},N\right]\) _and_ \(M^{\prime}\left(u;z\right)\) _be an_ \(m^{\prime}\)_-operator in_ \(\Sigma K\mathcal{M}_{K,0,p^{\prime}}^{m^{\prime},n^{\prime}}\left[\epsilon_{0},N\right]\)_. Then_ \(M\left(u;z\right)\circ M^{\prime}\left(u;z\right)\) _belongs to_ \(\Sigma K\mathcal{M}_{K,0,p+p^{\prime}}^{m+\max\left(m^{\prime},0\right),n+n^{\prime}}\left[\epsilon_{0},N\right]\)_;_
3. _Let_ \(\varrho\left(u;x,z\right)\) _be a Kernel function in_ \(\Sigma K\mathcal{F}_{K,0,p}^{n}\left[\epsilon_{0},N\right]\) _and_ \(R\left(u;z\right)\) _be a Kernel smoothing operator in_ \(\Sigma K\mathcal{R}_{K,0,p^{\prime}}^{-\rho,n^{\prime}}\left[\epsilon_{0},N\right]\)_; then_ \(\operatorname{Op}^{BW}\left[\varrho\left(u;x,z\right)\right]\circ R\left(u;z\right)\) _and_ \(R\left(u;z\right)\circ\operatorname{Op}^{BW}\left[\varrho\left(u;x,z\right)\right]\) _are Kernel smoothing operators in_ \(\Sigma K\mathcal{R}_{K,0,p+p^{\prime}}^{-\rho,n+n^{\prime}}\left[\epsilon_{0},N\right]\)_;_
4. _Let_ \(M\left(u;z\right)\) _be an homogeneous_ \(m\)_-Kernel operator in_ \(\overline{K\mathcal{M}_{1}^{m,n}}\)_, and_ \(M^{\prime}\left(u;z\right)\) _in_ \(\Sigma K\mathcal{M}_{K,0,0}^{0,0}\left[\epsilon_{0},N\right]\) _then_ \(M\left(M^{\prime}\left(u;z\right)u;z\right)\in\Sigma K\mathcal{M}_{K,0,1}^{m,n }\left[\epsilon_{0},N\right]\)_._
Proof.: The proof of item 1 is performed in [5, Proposition 3.12] keeping track of the dependence in the variable \(z\) of the symbols as in (2.33), (2.34) when \(\gamma=0\). More precisely \(\varrho\) and \(\varrho^{\prime}\) satisfy \(z\)-dependent inequalities (cf. (2.7), (2.9))
\[\left|\partial_{x}^{\alpha}\varrho_{q}\left(\Pi_{\overline{n}}\mathcal{U};x,z\right)\right|\leq C\left|z\right|_{\mathbb{T}}^{n}\left|\bar{n}\right|^{\mu+\alpha}\prod_{j=1}^{p}\left\|\Pi_{n_{j}}u_{j}\right\|_{L^{2}},\qquad\left|\partial_{t}^{k}\partial_{x}^{\alpha}\varrho\left(u;x,z\right)\right|\leq C\left|z\right|_{\mathbb{T}}^{n}\left\|u\right\|_{k,s_{0}}^{p-1}\left\|u\right\|_{k,s}\,,\]
and, in the proof of [5, Proposition 3.12], the seminorm of the composed symbol always appears as a product of the seminorms of the factor symbols. The proof of item 2 is the same as in [10, Proposition 2.15-i], keeping track of the dependence in \(z\) of the \(m\)-operators. For item 3, see [10, Proposition 2.19-i] factoring the dependence on \(z\). Item 4 is a consequence of [10, Proposition 2.15-ii] factoring the dependence on \(z\).
Finally integrating (2.37) and (2.38) in \(z\) we deduce the following lemma.
**Lemma 2.35**.: _Let \(R\left(u;z\right)\) be a Kernel smoothing operator in \(\Sigma K\mathcal{R}_{K,0,p}^{-\rho,n}\left[\epsilon_{0},N\right]\) with \(n>-1\). Then_
\[\int R\left(u;z\right)g\left(x-z\right)\mathrm{d}z=R_{1}\left(u\right)g,\qquad \int R\left(u;z\right)\mathrm{d}z=R_{2}\left(u\right)\,,\]
_where \(R_{1}\left(u\right)\), \(R_{2}\left(u\right)\) are smoothing operators in \(\Sigma\mathcal{R}_{K,0,p}^{-\rho}\left[\epsilon_{0},N\right]\)._
The following proposition will be crucial in Section 4.
**Proposition 2.36**.: _Let \(n>-1\) and \(\varrho\left(u;x,z\right)\) be a Kernel-function in \(\Sigma K\mathcal{F}_{K,0,p}^{n}\left[\epsilon_{0},N\right]\). Let us define the operator, for any \(g\in H_{0}^{s}(\mathbb{T})\), \(s\in\mathbb{R}\),_
\[\left(\mathcal{T}_{\varrho}g\right)\left(x\right):=\int\operatorname{Op}^{BW} \left[\varrho\left(u;\bullet,z\right)\right]g\left(x-z\right)\mathrm{d}z\,. \tag{2.40}\]
_Then there exist_
* _a symbol_ \(a\left(u;x,\xi\right)\) _in_ \(\Sigma\Gamma_{K,0,p}^{-\left(1+n\right)}\left[\epsilon_{0},N\right]\) _satisfying_ (_2.20_)_;_
* _a pluri-homogeneous smoothing operator_ \(R\left(u\right)\) _in_ \(\Sigma_{p}^{N}\overline{\mathcal{R}_{q}^{-\rho}}\) _for any_ \(\rho>0\)_;_
_such that \(\mathcal{T}_{\varrho}g=\operatorname{Op}^{BW}\left[a\left(u;x,\xi\right)\right]g+ R\left(u\right)g\)._
Proof.: In view of Definition 2.10 and Remark 2.28 we have that
\[\operatorname{Op}^{BW}\left[\varrho\left(u;x,z\right)\right]-\operatorname{Op}^ {W}\left[\varrho_{\psi}\left(u;x,z\right)\right]=:R\left(u;z\right) \tag{2.41}\]
is a pluri-homogeneous Kernel smoothing operator in \(\Sigma_{p}^{N}\overline{\mathcal{K}\mathcal{R}}_{q}^{-\rho,n}\) for any \(\rho\). Since \(n>-1\), integrating in \(z\), we deduce that \(\int R\left(u;z\right)g(x-z)\mathrm{d}z=R\left(u\right)g\) where \(R\left(u\right)\) is a pluri-homogeneous smoothing operator in \(\Sigma_{p}^{N}\overline{\mathcal{R}}_{q}^{-\rho}\) (cf. Lemma 2.35).
In view of (2.17) and (2.16) we compute for any \(\nu\in\mathbb{Z}\)
\[\mathcal{F}_{x\to\nu}\left(\int\operatorname{Op}^{W}\left[\varrho_{\psi}\left(u;x,z\right)\right]g\left(x-z\right)\mathrm{d}z\right)\left(\nu\right)=\sum_{k\in\mathbb{Z}}\psi\left(\nu-k,\frac{\nu+k}{2}\right)\int\hat{\varrho}\left(u;\nu-k,z\right)e^{-\mathrm{i}kz}\mathrm{d}z\;\hat{g}\left(k\right)\]
where \(\psi(\xi^{\prime},\xi)\) is an admissible cut-off function, namely satisfying (2.14)-(2.15). Introducing another admissible cut-off function \(\bar{\psi}(\xi^{\prime},\xi)\) identically equal to one on the support of \(\psi(\xi^{\prime},\xi)\), and since \(\widehat{g}(0)=0\),
\[\mathcal{F}_{x\to\nu}\left(\int\operatorname{Op}^{W}\left[\varrho_{\psi}\left(u;x,z\right)\right]g\left(x-z\right)\mathrm{d}z\right)\left(\nu\right)\\ =\sum_{k\in\mathbb{Z}}\psi\left(\nu-k,\frac{\nu+k}{2}\right)\bar{\psi}\left(\nu-k,\frac{\nu+k}{2}\right)\chi\left(2k\right)\int\hat{\varrho}\left(u;\nu-k,z\right)e^{-\mathrm{i}kz}\mathrm{d}z\;\hat{g}\left(k\right) \tag{2.42}\]
where \(\chi(\cdot)\) is the \(\mathcal{C}^{\infty}\) function defined in (2.18). Introducing a \(\mathcal{C}^{\infty}\) function \(\eta:\mathbb{R}\to[0,1]\) with compact support such that
\[\eta(z)=1,\;\forall|z|\leq\frac{\pi}{2}\,,\qquad\qquad\eta(z)=0,\;\forall|z| \geq\frac{3\pi}{2}\,,\qquad\qquad\sum_{j\in\mathbb{Z}}\eta(z+2\pi j)=1,\; \forall z\in\mathbb{R}, \tag{2.43}\]
we may write the integral on \(\mathbb{T}\) as
\[\int\hat{\varrho}\left(u;\nu-k,z\right)e^{-\mathrm{i}kz}\mathrm{d}z=\frac{1}{2\pi}\int_{\mathbb{R}}\hat{\varrho}\left(u;j,z\right)\eta(z)e^{-\mathrm{i}\xi z}\mathrm{d}z\;\bigg{|}_{\left(j,\xi\right)=\left(\nu-k,k\right)}\,. \tag{2.44}\]
Therefore by (2.41), (2.42) and (2.44) the operator \(\mathcal{T}_{\varrho}\) in (2.40) is equal to
\[\mathcal{T}_{\varrho}=\operatorname{Op}^{W}\left[a_{\psi}\left(u;x,\xi\right)\right]=\operatorname{Op}^{BW}\left[a\left(u;x,\xi\right)\right]+R\left(u\right)\qquad\text{where}\qquad R\left(u\right)\in\Sigma_{p}^{N}\overline{\mathcal{R}}_{q}^{-\rho} \tag{2.45}\]
and
\[a\left(u;x,\xi\right)=\sum_{j\in\mathbb{Z}}\hat{a}\left(u;j,\xi\right)e^{\mathrm{i}jx},\quad\hat{a}\left(u;j,\xi\right):=\bar{\psi}\left(j,\xi\right)\chi\left(2\xi-j\right)\frac{1}{2\pi}\int_{\mathbb{R}}\hat{\varrho}\left(u;j,z\right)\eta(z)e^{-\mathrm{i}\left(\xi-\frac{j}{2}\right)z}\mathrm{d}z\,. \tag{2.46}\]
In order to prove the proposition, in view of (2.45), it is sufficient to show that \(a\left(u;x,\xi\right)\) defined in (2.46) is a symbol in \(\Sigma\Gamma_{K,0,p}^{-(1+n)}\left[\epsilon_{0},N\right]\) according to Definition 2.2. Notice that \(a\left(u;x,\xi\right)\) satisfies the reality condition (2.20). Moreover, in view of the support properties of \(\bar{\psi}\left(j,\xi\right)\) and \(\chi\left(2\xi-j\right)\), it results that
\[\hat{a}\left(u;j,\xi\right)\neq 0\qquad\Longrightarrow\qquad\left|\xi-\frac{j}{2 }\right|\sim|\xi|\;,\quad|\xi|\gtrsim 1,\quad|\xi|\sim\left\langle\xi\right\rangle\,. \tag{2.47}\]
We decompose the Kernel function
\[\varrho\left(u;x,z\right)\in\Sigma K\mathcal{F}_{K,0,p}^{n}\left[\epsilon_{0},N \right]\qquad\text{as}\qquad\varrho\left(u;x,z\right)=\sum_{q=p}^{N}\varrho_{q }\left(u;x,z\right)+\varrho_{>N}\left(u;x,z\right)\,,\]
where \(\varrho_{q}\left(u;x,z\right)\) are homogeneous Kernel functions in \(\widetilde{K\mathcal{F}}^{n}_{q}\) and \(\varrho_{>N}\left(u;x,z\right)\) is a non-homogeneous Kernel function in \(K\mathcal{F}_{K,0,N+1}^{n}\left[\epsilon_{0}\right]\). Accordingly we decompose the symbol \(a\left(u;x,\xi\right)\) in (2.46) as
\[a\left(u;x,\xi\right)=\sum_{q=p}^{N}a_{q}\left(u;x,\xi\right)+a_{>N}\left(u;x, \xi\right)\]
where
\[a_{q}\left(u;x,\xi\right)=\sum_{j\in\mathbb{Z}}\hat{a}_{q}\left(u;j,\xi\right)e^{\mathrm{i}jx},\quad\hat{a}_{q}\left(u;j,\xi\right):=\bar{\psi}\left(j,\xi\right)\chi\left(2\xi-j\right)\frac{1}{2\pi}\int_{\mathbb{R}}\hat{\varrho}_{q}\left(u;j,z\right)\eta\left(z\right)e^{-\mathrm{i}\left(\xi-\frac{j}{2}\right)z}\mathrm{d}z, \tag{2.48}\] \[a_{>N}\left(u;x,\xi\right)=\sum_{j\in\mathbb{Z}}\hat{a}_{>N}\left(u;j,\xi\right)e^{\mathrm{i}jx},\quad\hat{a}_{>N}\left(u;j,\xi\right):=\bar{\psi}\left(j,\xi\right)\chi\left(2\xi-j\right)\frac{1}{2\pi}\int_{\mathbb{R}}\hat{\varrho}_{>N}\left(u;j,z\right)\eta\left(z\right)e^{-\mathrm{i}\left(\xi-\frac{j}{2}\right)z}\mathrm{d}z.\]
We now prove that, according to Definition 2.2,
\[a_{q}\in\widetilde{\Gamma}_{q}^{-\left(1+n\right)},\quad\forall q =p,\ldots,N\,, \tag{2.49}\] \[a_{>N}\in\Gamma_{K,0,N+1}^{-\left(1+n\right)}\left[\epsilon_{0} \right]. \tag{2.50}\]
**Step 1** (Proof of (2.49)).: In view of (2.32) the \(q\)-homogeneous component \(a_{q}(u;x,\zeta)\) in (2.48) has an expansion as in (2.10) (recall the notation \(\bar{J}_{q}=\left(j_{1},\ldots,j_{q}\right)\))
\[a_{\bar{J}_{q}}(\xi)=\bar{\psi}\left(j,\xi\right)\chi\left(2\xi-j\right)\frac{1}{2\pi}\int_{\mathbb{R}}\varrho_{j_{1},\ldots,j_{q}}(z)\,\eta(z)\,e^{-\mathrm{i}\left(\xi-\frac{j}{2}\right)z}\mathrm{d}z,\qquad\qquad\qquad j=j_{1}+\ldots+j_{q}\,.\]
Let us prove it satisfies (2.11) with \(m=-(1+n)\). Decomposing \(1=\chi_{1}(\cdot)+\chi_{2}(\cdot)\) where \(\chi_{1}:\mathbb{R}\to[0,1]\) is a smooth cutoff function supported and equal to \(1\) near \(0\), we decompose
\[a_{\bar{J}_{q}}(\xi)=a_{\bar{J}_{q}}^{\left(1\right)}(\xi)+a_{\bar{J}_{q}}^{\left(2\right)}(\xi)=\sum_{\mathrm{j}=1}^{2}\bar{\psi}\left(j,\xi\right)\chi\left(2\xi-j\right)\frac{1}{2\pi}\int_{\mathbb{R}}\chi_{\mathrm{j}}\left(\langle\xi\rangle z\right)\varrho_{j_{1},\ldots,j_{q}}(z)\,\eta(z)\,e^{-\mathrm{i}\left(\xi-\frac{j}{2}\right)z}\mathrm{d}z. \tag{2.51}\]
By (2.33) (with \(l=0\)) and since \(n>-1\) we deduce
\[\left|a_{\bar{J}_{q}}^{\left(1\right)}(\xi)\right|\lesssim\int_{|z|\lesssim 1 /\left\langle\xi\right\rangle}\left|\bar{J}_{q}\right|^{\mu}\left|z\right|^{n }\mathrm{d}z\lesssim\left|\bar{J}_{q}\right|^{\mu}\left\langle\xi\right\rangle ^{-\left(1+n\right)}\,. \tag{2.52}\]
We now estimate \(a_{\bar{J}_{q}}^{\left(2\right)}(\xi)\). For any \(l\in\mathbb{N}_{0}\) we obtain, by an integration by parts (use (2.43) and that \(\chi_{2}\left(\left\langle\xi\right\rangle z\right)\) vanishes near zero), that
\[a_{\bar{J}_{q}}^{\left(2\right)}(\xi)=\left[-\mathrm{i}\left(\xi-\frac{j}{2}\right)\right]^{-l}\bar{\psi}\left(j,\xi\right)\chi\left(2\xi-j\right)\sum_{l_{1}+l_{2}+l_{3}=l}c_{l_{1},l_{2},l_{3}}\frac{1}{2\pi}\int_{\mathbb{R}}e^{-\mathrm{i}\left(\xi-\frac{j}{2}\right)z}\,Y_{l_{1},l_{2},l_{3}}(z)\,\mathrm{d}z\\ \text{where}\quad Y_{l_{1},l_{2},l_{3}}(z):=\left\langle\xi\right\rangle^{l_{1}}\left(\partial_{z}^{l_{1}}\chi_{2}\right)\left(\langle\xi\rangle z\right)\,\partial_{z}^{l_{2}}\eta(z)\,\partial_{z}^{l_{3}}\varrho_{\bar{J}_{q}}(z)\,. \tag{2.53}\]
Since \(\varrho_{q}\left(u;x,z\right)\) is a Kernel function in \(\widetilde{K\mathcal{F}}^{n}_{q}\), using (2.33) and exploiting that \(\left\langle\xi\right\rangle^{-1}\sim|z|\) on the support of \(\chi_{2}^{\left(l_{1}\right)}\left(\left\langle\xi\right\rangle z\right)\) for any \(l_{1}\geq 1\), we get
\[\int_{\mathbb{R}}\left|Y_{l_{1},l_{2},l_{3}}(z)\right|\mathrm{d}z\lesssim\left|\bar{J}_{q}\right|^{\mu}\left\langle\xi\right\rangle^{l_{1}}\int_{|z|\sim\frac{1}{\left\langle\xi\right\rangle}}|z|^{n-l_{3}}\,\mathrm{d}z\lesssim\left|\bar{J}_{q}\right|^{\mu}\left\langle\xi\right\rangle^{l_{1}+l_{3}-\left(n+1\right)}. \tag{2.54}\]
When \(l_{1}=0\) we have that
\[\int_{\mathbb{R}}\left|Y_{0,l_{2},l_{3}}(z)\right|\mathrm{d}z\lesssim\left|\bar{J}_{q}\right|^{\mu}\int_{\frac{1}{\left\langle\xi\right\rangle}\lesssim|z|\leq\frac{3\pi}{2}}|z|^{n-l_{3}}\,\mathrm{d}z\lesssim\left|\bar{J}_{q}\right|^{\mu}\left(1+\left\langle\xi\right\rangle^{l_{3}-\left(n+1\right)}\right). \tag{2.55}\]
Then by (2.53), (2.54), (2.55) and (2.47), we deduce, for \(l>n+1\),
\[|a_{\bar{J}_{q}}^{\left(2\right)}(\xi)|\lesssim_{l}\left|\bar{J}_{q}\right|^{ \mu}\left\langle\xi\right\rangle^{-\left(1+n\right)}. \tag{2.56}\]
The bounds (2.52) and (2.56) prove that \(a_{\bar{J}_{q}}(\xi)\) in (2.51) satisfies the estimate (2.11) (for \(\beta=0\) and \(m=-(1+n)\)). Since, for any \(\beta\in\mathbb{N}\),
\[\partial_{\xi}^{\beta}a_{\bar{J}_{q}}(\xi)=\sum_{\beta_{1}+\beta_{2}+\beta_{3}=\beta}C_{\beta_{1},\beta_{2},\beta_{3}}\partial_{\xi}^{\beta_{1}}\bar{\psi}\left(j,\xi\right)\partial_{\xi}^{\beta_{2}}\chi\left(2\xi-j\right)\frac{1}{2\pi}\int_{\mathbb{R}}\varrho_{\bar{J}_{q}}(z)\,\eta(z)\left(-\mathrm{i}z\right)^{\beta_{3}}e^{-\mathrm{i}\left(\xi-\frac{j}{2}\right)z}\mathrm{d}z, \tag{2.57}\]
using (2.15), (2.47), the fact that \(\chi\) is supported near \(0\), that \(z^{\beta_{3}}\varrho_{\bar{J}_{q}}(z)\) satisfies (2.33) with \(n\) replaced by \(n+\beta_{3}\) (cf. Remark 2.29), and repeating the bound obtained for \(\beta_{3}=0\) for the integral term in Eq. (2.57), we obtain
\[\left|\partial_{\xi}^{\beta}a_{J_{q}}(\xi)\right|\lesssim_{\beta}\sum_{\beta_ {1}+\beta_{2}+\beta_{3}=\beta}\langle\xi\rangle^{-\beta_{1}}\,\langle\xi \rangle^{-\beta_{2}}\left|J_{q}\right|^{\mu}\langle\xi\rangle^{-(1+n+\beta_{ 3})}\lesssim_{\beta}\left|\bar{J}_{q}\right|^{\mu}\langle\xi\rangle^{-(1+n+ \beta)}\,.\]
Note that actually for any \(j\in\mathbb{Z}\), \(\beta_{2}\geq 1\), the derivative \(\partial_{\xi}^{\beta_{2}}\chi\left(2\xi-j\right)=0\), for any \(|\xi|\geq 2\). This concludes the proof of (2.49).
**Step 2** (Proof of (2.50)).: We argue similarly to the previous step. Recalling (2.48), for any \(0\leq k\leq K\) and \(\gamma\in\mathbb{N}_{0}\), we decompose, with \(\chi_{\mathrm{j}}\); \(\mathrm{j}=1,2\) defined as in (2.51),
\[\begin{split}&\partial_{t}^{k}\partial_{x}^{\gamma}a_{>N}(u;x,\xi)=\,I_{1}+I_{2}\qquad\text{where}\\ & I_{\mathrm{j}}:=\sum_{j\in\mathbb{Z}}\bar{\psi}\left(j,\xi\right)\chi\left(2\xi-j\right)\frac{1}{2\pi}\int_{\mathbb{R}}\chi_{\mathrm{j}}\left(z\langle\xi\rangle\right)\widehat{\partial_{t}^{k}\partial_{x}^{\gamma}\varrho_{>N}}\left(u;j,z\right)\eta\left(z\right)e^{-\mathrm{i}\left(\xi-\frac{j}{2}\right)z}\mathrm{d}z\ e^{\mathrm{i}jx}\,.\end{split} \tag{2.58}\]
Fix \(\mu_{0}>1\), let \(s_{0}>0\) be associated with \(\varrho_{>N}\) as per Definition 2.26, and let \(\gamma\leq s-\left(s_{0}+\mu_{0}\right)\). The term \(I_{1}\) can be estimated using (2.34), the fact that \(\chi_{1}(z\langle\xi\rangle)\) is supported for \(|z|\lesssim 1/\langle\xi\rangle\) and \(n>-1\), as
\[|I_{1}|\lesssim\sum_{j\in\mathbb{Z}}\left\langle j\right\rangle^{-\mu_{0}}\int _{|z|\lesssim 1/\langle\xi\rangle}\left|\partial_{t}^{k}\partial_{x}^{ \gamma}\left(1-\partial_{x}^{2}\right)^{\frac{\mu_{0}}{2}}\rho_{>N}\left(u;j, z\right)\right|\mathrm{d}z\lesssim\|u\|_{k,s_{0}+\mu_{0}}^{N}\|u\|_{k,s}\langle\xi \rangle^{-(1+n)}\,. \tag{2.59}\]
Next we estimate \(I_{2}\). After an integration by parts, setting \(l=\max\left\{0,\lceil 1+n\rceil\right\}\), we have from Eq. (2.58)
\[|I_{2}|\lesssim\sum_{j\in\mathbb{Z}}\left|\bar{\psi}\left(j,\xi\right)\chi\left(2\xi-j\right)\right|\left|\xi-\frac{j}{2}\right|^{-l}\sum_{l_{1}+l_{2}+l_{3}=l}\int_{\mathbb{R}}\left|Z_{l_{1},l_{2},l_{3}}\left(z\right)\right|\mathrm{d}z\,. \tag{2.60}\]
where
\[Z_{l_{1},l_{2},l_{3}}\left(z\right):=\langle\xi\rangle^{l_{1}}\left(\partial_{z}^{l_{1}}\chi_{2}\right)\left(z\langle\xi\rangle\right)\,\partial_{z}^{l_{2}}\eta\left(z\right)\,\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{z}^{l_{3}}\varrho_{>N}\left(u;j,z\right)\,. \tag{2.61}\]
For any \(j\in\mathbb{Z}\)
\[\left|\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{z}^{l_{3}}\varrho_{>N}\left(u;j,z\right)\right|\lesssim\langle j\rangle^{-\mu_{0}}\sup_{x\in\mathbb{T}}\left|\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{z}^{l_{3}}\left(1-\partial_{x}^{2}\right)^{\frac{\mu_{0}}{2}}\varrho_{>N}\left(u;x,z\right)\right|\lesssim\langle j\rangle^{-\mu_{0}}\left\|u\right\|_{k,s_{0}+\mu_{0}}^{N}\|u\|_{k,s}|z|^{n-l_{3}}\,. \tag{2.62}\]
With computations analogous to the ones performed in Eqs. (2.54) and (2.55) we obtain using Eqs. (2.61) and (2.62), that
\[\begin{split}&\int_{\mathbb{R}}\left|Z_{l_{1},l_{2},l_{3}}\left(z\right)\right|\mathrm{d}z\lesssim\langle j\rangle^{-\mu_{0}}\left\|u\right\|_{k,s_{0}+\mu_{0}}^{N}\left\|u\right\|_{k,s}\langle\xi\rangle^{(l_{1}+l_{3})-(n+1)}\,,\qquad\text{if }l_{1}\neq 0,\\ &\int_{\mathbb{R}}\left|Z_{0,l_{2},l_{3}}\left(z\right)\right|\mathrm{d}z\lesssim\langle j\rangle^{-\mu_{0}}\left\|u\right\|_{k,s_{0}+\mu_{0}}^{N}\left\|u\right\|_{k,s}\left(1+\langle\xi\rangle^{l_{3}-(n+1)}\right)\,.\end{split} \tag{2.63}\]
Since \(l=l_{1}+l_{2}+l_{3}>1+n\) and \(\mu_{0}>1\) we obtain, by Eqs. (2.47), (2.60) and (2.63) that
\[|I_{2}|\lesssim\|u\|_{k,s_{0}+\mu_{0}}^{N}\left\|u\right\|_{k,s}\langle\xi \rangle^{-(1+n)}\sum_{j\in\mathbb{Z}}\left\langle j\right\rangle^{-\mu_{0}}\,. \tag{2.64}\]
Inserting Eqs. (2.59) and (2.64) in (2.58) we conclude that
\[\left|\partial_{t}^{k}\partial_{x}^{\gamma}a_{>N}(u;x,\xi)\right|\lesssim\|u\|_{k,s_{0}+\mu_{0}}^{N}\left\|u\right\|_{k,s}\langle\xi\rangle^{-(1+n)}\,.\]
Arguing as in (2.57) we thus obtain that, for any \(\beta\in\mathbb{N}_{0}\), \(\left|\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{\xi}^{\beta}a_{>N}\left(u;x, \xi\right)\right|\lesssim\|u\|_{k,s_{0}+\mu_{0}}^{N}\|u\|_{k,s}\langle\xi \rangle^{-(1+n+\beta)}\) concluding the proof of (2.50).
## The linearized problem at \(f=0\)
The linearized equation (1.11) at \(f=0\) is
\[\partial_{t}f=\partial_{x}\,\mathrm{d}\nabla E_{\alpha}(0)f\,. \tag{3.1}\]
In this section we prove that the linear Hamiltonian operator \(\partial_{x}\,\mathrm{d}\nabla E_{\alpha}(0)\) is a Fourier multiplier with symbol \(-\mathrm{i}\omega_{\alpha}\left(j\right)\) and we provide its asymptotic expansion, see Lemmas 3.1 and 3.6. We also prove a convexity property of the frequency map \(j\mapsto\omega_{\alpha}\left(j\right)\) and that the \(\omega_{\alpha}\left(j\right)\) are positive for any \(j\geq 2\), whereas \(\omega_{\alpha}(0)=\omega_{\alpha}(1)=0\), cf. Remark 3.3. These latter results do not rely on oscillatory integral expansions and enable us to prove the absence of three-wave resonances in Lemma 3.5.
The following result extends the computations in [42, Section 3], valid for \(\alpha\in(0,1)\), to the whole range \(\alpha\in(0,2)\).
**Lemma 3.1** (Linearization of \(\nabla E_{\alpha}\) at zero).: _For any \(\alpha\in(0,2)\), it results that_
\[\mathrm{d}\nabla E_{\alpha}(0)=-L_{\alpha}\left(\left|D\right|\right)\,, \tag{3.2}\]
_where \(L_{\alpha}\left(\left|D\right|\right)\) is the Fourier multiplier operator_
\[L_{\alpha}\left(\left|D\right|\right):=\frac{c_{\alpha}}{2\left(1-\frac{ \alpha}{2}\right)}\left[\mathsf{T}_{\alpha}^{1}\left(\left|D\right|\right)- \mathsf{T}_{\alpha}^{2}\left(\left|D\right|\right)-\frac{\Gamma\left(2- \alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}\right] \tag{3.3}\]
_with_
\[\mathsf{T}_{\alpha}^{1}\left(\left|j\right|\right) :=\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha} {2}\right)\Gamma\left(\frac{\alpha}{2}\right)}\sum_{k=0}^{\left|j\right|-1} \frac{\Gamma\left(\frac{\alpha}{2}+k\right)}{\Gamma\left(1-\frac{\alpha}{2}+k \right)}\frac{1}{1-\frac{\alpha}{2}+k}\,, \mathsf{T}_{\alpha}^{1}\left(0\right)=0\,, \tag{3.4}\] \[\mathsf{T}_{\alpha}^{2}\left(\left|\xi\right|\right) :=\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha} {2}\right)\Gamma\left(\frac{\alpha}{2}\right)}\frac{\Gamma\left(\frac{\alpha} {2}+\left|\xi\right|\right)}{\Gamma\left(1-\frac{\alpha}{2}+\left|\xi\right| \right)} =\left[\left|\xi\right|^{2}-\left(1-\frac{\alpha}{2}\right)^{2} \right]\mathsf{M}_{\alpha}\left(\left|\xi\right|\right),\] (3.5) \[\mathsf{M}_{\alpha}\left(\left|\xi\right|\right) :=\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha} {2}\right)\Gamma\left(\frac{\alpha}{2}\right)}\frac{1}{\left|\xi\right|^{2}- \left(1-\frac{\alpha}{2}\right)^{2}}\frac{\Gamma\left(\frac{\alpha}{2}+\left| \xi\right|\right)}{\Gamma\left(1-\frac{\alpha}{2}+\left|\xi\right|\right)}. \tag{3.6}\]
_The map \(j\mapsto\mathsf{T}_{\alpha}^{1}\left(\left|j\right|\right)\) is a Fourier multiplier in \(\widetilde{\Gamma}_{0}^{\max(0,\alpha-1)}\) and \(j\mapsto\mathsf{T}_{\alpha}^{2}\left(\left|j\right|\right)\) is a Fourier multiplier in \(\widetilde{\Gamma}_{0}^{\alpha-1}\)._
By the previous lemma, in Fourier, the linear equation (3.1) amounts to the decoupled scalar equations
\[\partial_{t}\hat{f}\left(j\right)+\mathrm{i}\omega_{\alpha}\left(j\right)\hat{f }\left(j\right)=0\,,\qquad j\in\mathbb{Z}\setminus\left\{0\right\}, \tag{3.7}\]
with linear frequencies of oscillations \(\omega_{\alpha}\left(j\right):=jL_{\alpha}\left(\left|j\right|\right)\).
Proof of Lemma 3.1.: By differentiating \(\nabla E_{\alpha}\left(f\right)\) in (1.12) we deduce that
\[\mathrm{d}\nabla E_{\alpha}(0)\,\phi =\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\!\!\int\frac{2\phi\left(y\right)-\left(\phi\left(x\right)+\phi\left(y\right)\right)\cos\left(x-y\right)}{\left[2\left(1-\cos\left(x-y\right)\right)\right]^{\frac{\alpha}{2}}}\,\mathrm{d}y \tag{3.8}\] \[\quad+\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\!\!\int\frac{\phi^{\prime}\left(y\right)\sin\left(x-y\right)}{\left[2\left(1-\cos\left(x-y\right)\right)\right]^{\frac{\alpha}{2}}}\,\mathrm{d}y-\frac{\alpha}{4}\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\!\!\int\frac{\phi\left(x\right)+\phi\left(y\right)}{\left[2\left(1-\cos\left(x-y\right)\right)\right]^{\frac{\alpha}{2}-1}}\,\mathrm{d}y\] \[=\,-\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\!\!\int\frac{\phi\left(x\right)-\phi\left(y\right)}{\left[2\left(1-\cos\left(x-y\right)\right)\right]^{\frac{\alpha}{2}}}\,\mathrm{d}y+\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\!\!\int\frac{\phi^{\prime}\left(y\right)\sin\left(x-y\right)}{\left[2\left(1-\cos\left(x-y\right)\right)\right]^{\frac{\alpha}{2}}}\,\mathrm{d}y\] \[\quad+\frac{c_{\alpha}}{4}\!\!\int\frac{\phi\left(y\right)}{\left[2\left(1-\cos\left(x-y\right)\right)\right]^{\frac{\alpha}{2}-1}}\,\mathrm{d}y+\frac{c_{\alpha}}{4}\!\!\int\frac{\mathrm{d}y}{\left[2\left(1-\cos\left(x-y\right)\right)\right]^{\frac{\alpha}{2}-1}}\,\phi\left(x\right)=:\,\sum_{k=1}^{4}L_{\nabla E_{\alpha},k}\phi\,.\]
We now compute these operators.
**Step 1** (Evaluation of \(L_{\nabla E_{\alpha},1}\)).: We claim that
\[\left(L_{\nabla E_{\alpha},1}\phi\right)(x):=-\frac{c_{\alpha}}{2\left(1-\frac{ \alpha}{2}\right)}\!\!\!\int\frac{\phi\left(x\right)-\phi\left(y\right)}{\left[ 2\left(1-\cos\left(x-y\right)\right)\right]^{\frac{\alpha}{2}}}\,\mathrm{d}y=- \frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\mathsf{T}_{\alpha}^{1} \left(\left|D\right|\right)\phi. \tag{3.9}\]
Indeed, setting \(y=x-z\) we have
\[L_{\nabla E_{\alpha},1}\phi=-\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\!\!\int\frac{\phi\left(x\right)-\phi\left(x-z\right)}{\left[2\left(1-\cos z\right)\right]^{\alpha/2}}\mathrm{d}z=-\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\!\!\int\frac{\phi\left(x\right)-\phi\left(x-z\right)}{\left|2\sin\left(\frac{z}{2}\right)\right|^{\alpha}}\mathrm{d}z. \tag{3.10}\]
We compute the action of \(L_{\nabla E_{\alpha},1}\) on \(\phi(x)=\sum_{j\in\mathbb{Z}}\hat{\phi}\left(j\right)e^{\mathrm{i}jx}=\hat{\phi}(0)+\phi_{+}(x)+\phi_{-}(x)\), where \(\phi_{+}(x):=\sum_{j\geq 1}\hat{\phi}\left(j\right)e^{\mathrm{i}jx}\) and \(\phi_{-}(x):=\sum_{j\geq 1}\hat{\phi}\left(-j\right)e^{-\mathrm{i}jx}\).
By (3.10) we immediately get \(L_{\nabla E_{\alpha},1}\hat{\phi}(0)=0\). Moreover, by (3.10),
\[L_{\nabla E_{\alpha},1}\phi_{+}\left(x\right)=-\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\sum_{j\geq 1}\hat{\phi}\left(j\right)e^{\mathrm{i}jx}\!\!\int\frac{1-e^{-\mathrm{i}jz}}{\left|4\sin^{2}\left(z/2\right)\right|^{\alpha/2}}\mathrm{d}z. \tag{3.11}\]
We compute
\[\frac{1}{2\pi}\!\int_{0}^{2\pi}\frac{1-e^{-\mathrm{i}jz}}{\left|4\sin^{2}\left(z/2\right)\right|^{\alpha/2}}\mathrm{d}z =\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1-e^{-\mathrm{i}jz}}{\left|1-e^{-\mathrm{i}z}\right|^{\alpha}}\mathrm{d}z =\frac{1}{\pi}\int_{0}^{\pi}\frac{1-e^{-2\mathrm{i}jz}}{\left|1-e^{-2\mathrm{i}z}\right|}\left|1-e^{-2\mathrm{i}z}\right|^{1-\alpha}\mathrm{d}z=-\frac{2^{1-\alpha}}{\mathrm{i}\pi}\sum_{k=0}^{j-1}\int_{0}^{\pi}e^{-\mathrm{i}z\left(2k+1\right)}\left(\sin z\right)^{1-\alpha}\mathrm{d}z \tag{3.12}\]
having also written \(\left|1-e^{-2\mathrm{i}z}\right|=-\mathrm{i}\left(1-e^{-2\mathrm{i}z}\right)e^ {\mathrm{i}z}\), for any \(z\in[0,\pi]\). We use now the identity (cf. [64, p. 8])
\[\int_{0}^{\pi}\sin^{X}\left(z\right)e^{\mathrm{i}Yz}\mathrm{d}z =\frac{\pi e^{\mathrm{i}Y\frac{\pi}{2}}\Gamma\left(X+1\right)}{2^{X}\Gamma\left(1+\frac{X+Y}{2}\right)\Gamma\left(1+\frac{X-Y}{2}\right)},\qquad\qquad\qquad\left(X,Y\right)\in(-1,\infty)\times\mathbb{R}. \tag{3.13}\]
Setting \(X=1-\alpha\) and \(Y=-\left(2k+1\right)\), and using \(e^{-\mathrm{i}\left(2k+1\right)\frac{\pi}{2}}=-\mathrm{i}\left(-1\right)^{k}\), we obtain
\[\int_{0}^{\pi}e^{-\mathrm{i}z\left(2k+1\right)}\left(\sin z\right)^{1-\alpha }\mathrm{d}z=\frac{-\mathrm{i}\left(-1\right)^{k}\pi\Gamma\left(2-\alpha\right) }{2^{1-\alpha}\Gamma\left(1-k-\frac{\alpha}{2}\right)\Gamma\left(2+k-\frac{ \alpha}{2}\right)}. \tag{3.14}\]
The following consequence of Euler's reflection formula (cf. [67])
\[\Gamma\left(z-j\right)=(-1)^{j-1}\frac{\Gamma\left(-z\right)\Gamma\left(1+z \right)}{\Gamma\left(j+1-z\right)}\,,\qquad z\in\mathbb{R}\setminus\mathbb{Z},\quad j\in\mathbb{Z}, \tag{3.15}\]
implies, setting \(z=1-\frac{\alpha}{2}\), \(j=k\), and since \(\Gamma\left(1+y\right)=y\,\Gamma\left(y\right)\),
\[\Gamma\left(1-k-\frac{\alpha}{2}\right)=(-1)^{k-1}\frac{\Gamma\left(\frac{ \alpha}{2}-1\right)\Gamma\left(2-\frac{\alpha}{2}\right)}{\Gamma\left(\frac{ \alpha}{2}+k\right)}=(-1)^{k}\frac{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma \left(\frac{\alpha}{2}\right)}{\Gamma\left(\frac{\alpha}{2}+k\right)}. \tag{3.16}\]
By (3.14)-(3.16) we deduce
\[\int_{0}^{\pi}e^{-\mathrm{i}z\left(2k+1\right)}\left(\sin z\right)^{1-\alpha }\mathrm{d}z=\frac{-\mathrm{i}\pi}{2^{1-\alpha}}\frac{\Gamma\left(2-\alpha \right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2} \right)}\frac{\Gamma\left(\frac{\alpha}{2}+k\right)}{\Gamma\left(2-\frac{\alpha} {2}+k\right)}. \tag{3.17}\]
Consequently, by (3.12) and (3.17), we conclude that for any \(j\geq 1\)
\[\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1-e^{-\mathrm{i}jz}}{\left|4\sin^{2}\left(z/2 \right)\right|^{\alpha/2}}\mathrm{d}z=\frac{\Gamma\left(2-\alpha\right)}{\Gamma \left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2}\right)}\sum_{k=0}^{j -1}\frac{\Gamma\left(\frac{\alpha}{2}+k\right)}{\Gamma\left(1-\frac{\alpha}{2}+k \right)}\frac{1}{1-\frac{\alpha}{2}+k}=\mathsf{T}_{\alpha}^{1}\left(j\right)\]
defined in (3.4), which in turn, recalling (3.11), implies that \(L_{\nabla E_{\alpha},1}\phi_{+}\left(x\right)=-\frac{c_{\alpha}}{2\left(1-\frac{ \alpha}{2}\right)}\sum_{j\geq 1}\mathsf{T}_{\alpha}^{1}\left(j\right)\hat{\phi} \left(j\right)e^{\mathrm{i}jx}\). Since \(L_{\nabla E_{\alpha},1}\) is a real operator \(\mathsf{T}_{\alpha}^{1}\left(-j\right)=\overline{\mathsf{T}_{\alpha}^{1}\left(j \right)}=\mathsf{T}_{\alpha}^{1}\left(j\right)\) which, combined with \(L_{\nabla E_{\alpha},1}\hat{\phi}(0)=0\), gives us (3.9).
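The computation above is easy to check numerically. The following sketch (not part of the original argument; it assumes Python with SciPy available, and the helper names `T1_alpha` and `multiplier_by_quadrature` are ours) compares the Gamma-function expression (3.4) with a direct quadrature of the integral in (3.11), whose imaginary part vanishes by symmetry.

```python
# Illustrative numerical check of Step 1 (not part of the proof).
import math
from scipy.integrate import quad  # assumes SciPy is installed

def T1_alpha(alpha, j):
    """The Fourier multiplier T^1_alpha(j) of (3.4), via log-Gamma for stability."""
    pref = math.gamma(2 - alpha) / (math.gamma(1 - alpha / 2) * math.gamma(alpha / 2))
    s = sum(math.exp(math.lgamma(alpha / 2 + k) - math.lgamma(1 - alpha / 2 + k))
            / (1 - alpha / 2 + k) for k in range(j))
    return pref * s

def multiplier_by_quadrature(alpha, j):
    """(1/2pi) * int_0^{2pi} (1 - cos(j z)) / (2 sin(z/2))^alpha dz,
    i.e. the real part of the integral in (3.11); the integrand extends by 0 at the endpoints."""
    def f(z):
        if z <= 0.0 or z >= 2.0 * math.pi:
            return 0.0
        return (1.0 - math.cos(j * z)) / (2.0 * math.sin(z / 2.0)) ** alpha
    val, _ = quad(f, 0.0, 2.0 * math.pi, limit=400)
    return val / (2.0 * math.pi)

for alpha in (0.5, 1.0, 1.5):
    for j in (1, 2, 5):
        print(alpha, j, T1_alpha(alpha, j), multiplier_by_quadrature(alpha, j))
```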
**Step 2** (Evaluation of \(L_{\nabla E_{\alpha},2}\)).: We claim that
\[\left(L_{\nabla E_{\alpha},2}\phi\right)(x)=\frac{c_{\alpha}}{2\left(1-\frac{ \alpha}{2}\right)}\ |D|^{2}\ \left(M_{\alpha}\left(|D|\right)\phi\right)(x). \tag{3.18}\]
Setting \(y=x-z\) we have \((L_{\nabla E_{\alpha},2}\phi)(x)=\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2} \right)}\int\frac{\phi^{\prime}(x-z)\sin z}{\left[2\left(1-\cos z\right)\right] ^{\frac{\alpha}{2}}}\ \mathrm{d}z.\) As \(\frac{\sin z}{\left[2\left(1-\cos z\right)\right]^{\frac{\alpha}{2}}}=\frac{1 }{2\left(1-\frac{\alpha}{2}\right)}\partial_{z}\left(\left[2\left(1-\cos z \right)\right]^{1-\frac{\alpha}{2}}\right)\), integrating by parts
\[(L_{\nabla E_{\alpha},2}\phi)(x)=-\frac{c_{\alpha}}{\left[2\left(1-\frac{ \alpha}{2}\right)\right]^{2}}\!\!\int\frac{\partial_{z}\left[\phi^{\prime}(x- z)\right]}{\left[2\left(1-\cos z\right)\right]^{\frac{\alpha}{2}-1}}\mathrm{d}z= \frac{c_{\alpha}}{\left[2\left(1-\frac{\alpha}{2}\right)\right]^{2}}\!\! \int\frac{\phi^{\prime\prime}(x-z)}{\left[2\left(1-\cos z\right)\right]^{\frac {\alpha}{2}-1}}\mathrm{d}z. \tag{3.19}\]
For \(j\geq 0\) we compute
\[I_{j}:=\!\!\int\frac{e^{-\mathrm{i}jz}}{\left[2\left(1-\cos z\right)\right]^{\frac{\alpha}{2}-1}}\ \mathrm{d}z=\frac{1}{2\pi}\int_{0}^{2\pi}e^{-\mathrm{i}jz}\left[2\left(1-\cos z\right)\right]^{1-\frac{\alpha}{2}}\ \mathrm{d}z=\frac{2^{2-\alpha}}{\pi}\!\int_{0}^{\pi}e^{-\mathrm{i}\,2\,jz}\left(\sin z\right)^{2-\alpha}\mathrm{d}z\]
and applying Eq. (3.13) with \(X=2-\alpha\), \(Y=-2\,j\), and using \(\Gamma(x+1)=x\Gamma(x)\), we obtain
\[\int\frac{e^{-\mathrm{i}jz}}{\left[2\left(1-\cos z\right)\right]^{\frac{\alpha }{2}-1}}\ \mathrm{d}z=(-1)^{j}\frac{\left(2-\alpha\right)\Gamma\left(2-\alpha \right)}{\Gamma\left(2-j-\frac{\alpha}{2}\right)\Gamma\left(2+j-\frac{\alpha} {2}\right)}\,.\]
We use now the identities
\[\Gamma\left(2-\frac{\alpha}{2}+j\right)=\left(1-\frac{\alpha}{2}+j\right)\ \Gamma\left(1-\frac{\alpha}{2}+j\right)\,\qquad\quad\Gamma \left(2-\frac{\alpha}{2}-j\right)=\left(1-\frac{\alpha}{2}-j\right)\left(-1 \right)^{j}\frac{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{ \alpha}{2}\right)}{\Gamma\left(\frac{\alpha}{2}+j\right)}\,,\]
which follows by \(\Gamma(1+z)=z\ \Gamma\left(z\right)\) and (3.15) (with \(z=-\alpha/2\) and \(j\rightsquigarrow j-1\)), to deduce, for any \(j\geq 0\),
\[I_{j}=\!\!\int\frac{e^{-\mathrm{i}jz}}{\left[2\left(1-\cos z\right)\right]^{ \frac{\alpha}{2}-1}}\ \mathrm{d}z=\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2} \right)\Gamma\left(\frac{\alpha}{2}\right)}\frac{2-\alpha}{\left(1-\frac{ \alpha}{2}\right)^{2}-j^{2}}\frac{\Gamma\left(\frac{\alpha}{2}+j\right)}{ \Gamma\left(1-\frac{\alpha}{2}+j\right)}=-2\left(1-\frac{\alpha}{2}\right) \mathsf{M}_{\alpha}\left(j\right)\]
with \(\mathsf{M}_{\alpha}\) defined in (3.6). Since \(I_{j}=I_{-j}\) we conclude that
\[\int\frac{e^{-\mathrm{i}jz}}{\left[2\left(1-\cos z\right)\right]^{\frac{\alpha} {2}-1}}\ \mathrm{d}z=-2\left(1-\frac{\alpha}{2}\right)\mathsf{M}_{\alpha}\left(|j| \right)\,. \tag{3.20}\]
By (3.19) and (3.20) we deduce (3.18).
**Step 3** (Evaluation of \(L_{\nabla E_{\alpha},3}\)).: The action of the operator \(L_{\nabla E_{\alpha},3}\) in (3.8) on a function \(\phi(x)=\sum_{j\in\mathbb{Z}}\hat{\phi}\left(j\right)\ e^{\mathrm{i}jx}\) is, setting \(y=x-z\) and using (3.20),
\[\left(L_{\nabla E_{\alpha},3}\phi\right)(x)=\frac{c_{\alpha}}{4}\sum_{j\in \mathbb{Z}}\hat{\phi}\left(j\right)\ e^{\mathrm{i}jx}\int\frac{e^{-\mathrm{i} jz}}{\left[2\left(1-\cos z\right)\right]^{\frac{\alpha}{2}-1}}\ \mathrm{d}z=-\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\left(1-\frac{ \alpha}{2}\right)^{2}\ \left(\mathsf{M}_{\alpha}\left(|D|\right)\phi\right)(x)\,. \tag{3.21}\]
**Step 4** (Evaluation of \(L_{\nabla E_{\alpha},4}\)).: By (3.20) with \(j=0\) and (3.6) it results that \(L_{\nabla E_{\alpha},4}\) in (3.8) is
\[\left(L_{\nabla E_{\alpha},4}\phi\right)(x)=\frac{c_{\alpha}}{4}\!\int\frac{ \phi\left(x\right)}{\left[2\left(1-\cos z\right)\right]^{\frac{\alpha}{2}-1}}\ \mathrm{d}z=\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\frac{\Gamma \left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}\ \phi\left(x\right)\,. \tag{3.22}\]
In conclusion, by (3.8), (3.9), (3.18), (3.21), (3.22) we deduce that \(\mathrm{d}\nabla E_{\alpha}(0)\) is equal to \(-L_{\alpha}\left(|D|\right)\), with \(L_{\alpha}\left(|D|\right)\) defined in (3.3), proving (3.2).
**Step 5** (\(\mathsf{T}_{\alpha}^{1}\in\widetilde{\Gamma}_{0}^{\max(0,\alpha-1)}\) and \(\mathsf{T}_{\alpha}^{2}\in\widetilde{\Gamma}_{0}^{\alpha-1}\)).: We start with \(\mathsf{T}_{\alpha}^{2}(|\xi|)\) in (3.5) which is defined on \(\mathbb{R}\). We recall the asymptotic expansion for \(|\xi|\to\infty\), see [67, Eq. (5.11.13)],
\[\frac{\Gamma\left(\xi+a\right)}{\Gamma\left(\xi+b\right)}-\xi^{a-b }\left(\sum_{\kappa=0}^{N}\frac{G_{\kappa}\left(a,b\right)}{\xi^{\kappa}} \right)=\mathcal{O}\left(\left|\xi\right|^{\left(a-b\right)-\left(1+N\right)}\right) \\ G_{0}\left(a,b\right)=1\,,\ G_{1}\left(a,b\right)=\frac{ \left(a-b\right)\left(a+b-1\right)}{2}\,,\quad\forall N\in\mathbb{N}\,,\ \left|\arg\xi\right|<\pi\,, \tag{3.23}\]
which involves holomorphic functions. We can focus on the case \(\operatorname{Re}(\xi)>0\). We claim that formula (3.23) implies automatically the estimates for the derivatives
\[\left|\partial_{\xi}^{\mu}\left(\frac{\Gamma\left(\xi+a\right)}{\Gamma\left(\xi+b\right)}\right)\right|\lesssim_{\mu}\xi^{a-b-\mu}\quad\text{for large $\xi>0$ and for any $\mu\in\mathbb{N}$}. \tag{3.24}\]
Case \(\mu=0\) of (3.24) follows trivially from (3.23). For any \(\mu\in\mathbb{N}\setminus\{0\}\) and for \(N_{1}\gg\mu\geq 1\), let us set
\[M_{1}\left(\xi\right):=\xi^{a-b}\sum_{\kappa=0}^{N}\frac{G_{\kappa}\left(a,b\right)}{\xi^{\kappa}},\quad\quad M_{2}\left(\xi\right):=\xi^{a-b}\sum_{\kappa=N+1}^{N+N_{1}}\frac{G_{\kappa}\left(a,b\right)}{\xi^{\kappa}},\quad\quad E\left(\xi\right):=\frac{\Gamma\left(\xi+a\right)}{\Gamma\left(\xi+b\right)}-\left(M_{1}\left(\xi\right)+M_{2}\left(\xi\right)\right)\,.\]
Obviously \(M_{1}\left(\xi\right)\in\widetilde{\Gamma}_{0}^{a-b}\) and \(M_{2}\left(\xi\right)\in\widetilde{\Gamma}_{0}^{a-b-\left(N+1\right)}\). For \(\xi\gg 1\), \(E\) is holomorphic in \(B\left(\xi,2\right)\). Thus \(\partial_{\xi}^{\mu}E\left(\xi\right)=\frac{c_{\mu}}{2\pi\mathrm{i}}\int_{\partial B\left(\xi,1\right)}\frac{E\left(\zeta\right)}{\left(\zeta-\xi\right)^{1+\mu}}\,\mathrm{d}\zeta\) by the Cauchy formula. Moreover (3.23) is true in \(B\left(\xi,2\right)\) and so, by \(\left|\zeta\right|\sim\left|\xi\right|\),
\[\left|\partial_{\xi}^{\mu}E\left(\xi\right)\right|\lesssim_{\mu}\int_{\partial B\left(\xi,1\right)}\frac{\left|\zeta\right|^{a-b-\left(1+N+N_{1}\right)}}{\left|\zeta-\xi\right|^{1+\mu}}\,\left|\mathrm{d}\zeta\right|\lesssim_{\mu}\left|\xi\right|^{a-b-\left(1+N+N_{1}\right)}\lesssim_{\mu}\left|\xi\right|^{a-b-\left(1+N+\mu\right)},\]
which implies
\[\left|\partial_{\xi}^{\mu}\left(\frac{\Gamma\left(\xi+a\right)}{\Gamma\left(\xi+b\right)}-M_{1}\left(\xi\right)\right)\right|\leq\left|\partial_{\xi}^{\mu}M_{2}\left(\xi\right)\right|+\left|\partial_{\xi}^{\mu}E\left(\xi\right)\right|\lesssim_{\mu}\left|\xi\right|^{a-b-\left(1+N+\mu\right)}.\]
This proves (3.24). From (3.24) we conclude that \(\mathsf{T}_{\alpha}^{2}\left(\left|j\right|\right)\) in (3.5) is a Fourier multiplier of order \(\alpha-1\).
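As a side remark, the expansion (3.23) with \(G_{0}=1\) and \(G_{1}(a,b)=\frac{(a-b)(a+b-1)}{2}\) is easy to test numerically; the short sketch below (illustrative only, with arbitrarily chosen values of \(a,b\)) displays that the error after the \(G_{1}\) term decays like \(\xi^{a-b-2}\).

```python
# Illustrative check of the asymptotic expansion (3.23), truncated after G_1.
import math

a, b = 0.35, 1.2
G1 = (a - b) * (a + b - 1) / 2.0
for xi in (10.0, 40.0, 160.0, 640.0):
    ratio = math.exp(math.lgamma(xi + a) - math.lgamma(xi + b))
    err = ratio - xi ** (a - b) * (1.0 + G1 / xi)
    # the rescaled error should be roughly constant, i.e. err = O(xi^(a-b-2))
    print(xi, err, err * xi ** (2.0 - (a - b)))
```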
We now consider \(\mathsf{T}_{\alpha}^{1}\left(\left|j\right|\right)\) defined in (3.4). For any \(j\in\mathbb{N}_{0}\), the discrete derivative of \(\mathsf{T}_{\alpha}^{1}\left(j\right)\) is
\[\left(\Delta\mathsf{T}_{\alpha}^{1}\right)\left(j\right):=\mathsf{T}_{\alpha }^{1}\left(j+1\right)-\mathsf{T}_{\alpha}^{1}\left(j\right)=\frac{\Gamma\left( 2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha }{2}\right)}\frac{\Gamma\left(\frac{\alpha}{2}+j\right)}{\Gamma\left(1-\frac{ \alpha}{2}+j\right)}\frac{1}{1-\frac{\alpha}{2}+j}=\frac{\mathsf{T}_{\alpha}^{ 2}\left(j\right)}{1-\frac{\alpha}{2}+j}\,.\]
Since \(\mathsf{T}_{\alpha}^{2}\) is a symbol of order \(\alpha-1\) we deduce that \(\left|\mathsf{T}_{\alpha}^{1}\left(j\right)\right|\lesssim 1+j^{\alpha-1}\) and, for any \(\ell\in\mathbb{N}\), the discrete derivatives satisfy \(\left|\left(\Delta^{\ell}\mathsf{T}_{\alpha}^{1}\right)\left(j\right)\right| \lesssim j^{\alpha-1-\ell}\). By [72, Lemma 7.1.1] there exists a \(C^{\infty}\) extension of \(\mathsf{T}_{\alpha}^{1}\) to the whole \(\mathbb{R}\) which is a symbol of order \(\max\left(\alpha-1,0\right)\).
The proof of Lemma 3.1 is complete.
**Remark 3.2**.: For \(\alpha\neq 1\) the Fourier multiplier \(\mathsf{T}_{\alpha}^{1}\left(\left|j\right|\right)\) in (3.4) is equal to
\[\mathsf{T}_{\alpha}^{1}\left(\left|j\right|\right)=\frac{\Gamma\left(2-\alpha \right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2} \right)}\frac{1}{\alpha-1}\left(\frac{\Gamma\left(\frac{\alpha}{2}+\left|j \right|\right)}{\Gamma\left(1+\left|j\right|-\frac{\alpha}{2}\right)}-\frac{ \Gamma\left(\frac{\alpha}{2}\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)}\right)\]
as follows by induction.
**Remark 3.3**.: The first linear frequency \(\omega_{\alpha}(1)=0\). This is equivalent to proving that \(L_{\alpha}(1)=0\) which, in view of (3.3)-(3.5), amounts to showing that
\[\mathsf{T}_{\alpha}^{1}\left(1\right)-\mathsf{T}_{\alpha}^{2}\left(1\right)- \frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}= \frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)} \left[\frac{\Gamma\left(\frac{\alpha}{2}\right)}{\Gamma\left(\frac{\alpha}{2} \right)\Gamma\left(1-\frac{\alpha}{2}\right)\left(1-\frac{\alpha}{2}\right)}- \frac{\Gamma\left(\frac{\alpha}{2}+1\right)}{\Gamma\left(\frac{\alpha}{2}\right) \Gamma\left(2-\frac{\alpha}{2}\right)}-\frac{1}{\Gamma\left(1-\frac{\alpha}{2} \right)}\right]=0\,.\]
This holds true because, using the identity \(\Gamma\left(y+1\right)=y\,\Gamma\left(y\right)\),
\[\frac{1}{\Gamma\left(1-\frac{\alpha}{2}\right)\left(1-\frac{\alpha}{2}\right)}- \frac{\Gamma\left(\frac{\alpha}{2}\right)\frac{\alpha}{2}}{\Gamma\left(\frac{ \alpha}{2}\right)\Gamma\left(1-\frac{\alpha}{2}\right)\left(1-\frac{\alpha}{2} \right)}-\frac{1}{\Gamma\left(1-\frac{\alpha}{2}\right)}=\frac{1}{\Gamma\left(1- \frac{\alpha}{2}\right)\left(1-\frac{\alpha}{2}\right)}\left[1-\frac{\alpha}{2} -\left(1-\frac{\alpha}{2}\right)\right]=0\,.\]
The fact that the first frequency \(\omega_{\alpha}(1)\) is zero also has a dynamical proof. Indeed, in view of the translation invariance of the problem, the patch equation (1.11) possesses the vector prime integral
\[\int_{\mathbb{T}}\left(\sqrt{1+2f\left(x\right)}-1\right)\widetilde{\gamma}\left( x\right)\,\mathrm{d}x=\int_{\mathbb{T}}f\left(x\right)\left(\cos x,\sin x\right)\, \mathrm{d}x+O\left(\left\|f\right\|^{2}\right). \tag{3.25}\]
Let us consider a dynamical system \(\dot{f}=Y\left(f\right)\) with \(Y(0)=0\) and \(A:=\mathrm{d}Y(0)\). If \(b\left(f\right)\) is a prime integral then \(\nabla b\left(f\right)\cdot Y\left(f\right)=0\), \(\forall f\). Hence, differentiating and since \(Y(0)=0\), we obtain \(\nabla b(0)\cdot Af=0\), \(\forall f\). If \(A\) is non singular then \(\nabla b(0)=0\), i.e. the prime integral \(b\) is quadratic at \(f=0\). Here the linear operator \(A\) (cf. (3.7)) is degenerate in the one-Fourier mode on which (3.25) has a linear component in \(f\).
The other linear frequencies \(\omega_{\alpha}\left(j\right)\), \(j\neq 0,\pm 1\), are all different from zero.
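For the reader's convenience, the following pure-Python sketch (illustrative only; the helper names `L_alpha` and `omega` are ours) assembles \(\omega_{\alpha}(j)=jL_{\alpha}(|j|)\) from (3.3)-(3.6) and (1.5) and displays that \(\omega_{\alpha}(0)=\omega_{\alpha}(1)=0\) while \(\omega_{\alpha}(j)\neq 0\) (indeed \(>0\)) for \(j\geq 2\), consistently with Lemma 3.4 below.

```python
# Illustrative computation of the linear frequencies omega_alpha(j) (not a proof).
import math

def gamma_ratio(a, b):
    # Gamma(a)/Gamma(b) for positive arguments, via log-Gamma.
    return math.exp(math.lgamma(a) - math.lgamma(b))

def L_alpha(alpha, j):
    """The symbol L_alpha(|j|) of (3.3)-(3.6), for an integer j >= 0."""
    c_alpha = math.gamma(alpha / 2) / (2 ** (1 - alpha) * math.gamma(1 - alpha / 2))  # (1.5)
    pref = math.gamma(2 - alpha) / (math.gamma(1 - alpha / 2) * math.gamma(alpha / 2))
    T1 = pref * sum(gamma_ratio(alpha / 2 + k, 1 - alpha / 2 + k) / (1 - alpha / 2 + k)
                    for k in range(j))                                  # (3.4)
    T2 = pref * gamma_ratio(alpha / 2 + j, 1 - alpha / 2 + j)           # (3.5)
    const = math.gamma(2 - alpha) / math.gamma(1 - alpha / 2) ** 2
    return c_alpha / (2 - alpha) * (T1 - T2 - const)

def omega(alpha, j):
    return j * L_alpha(alpha, abs(j))

for alpha in (0.5, 1.0, 1.5):
    # omega_alpha(0) = omega_alpha(1) = 0 up to rounding; the later entries are positive.
    print(alpha, [round(omega(alpha, j), 8) for j in range(0, 7)])
```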
**Lemma 3.4** (Convexity of \(\omega_{\alpha}\left(j\right)\)).: _Let \(\alpha\in\left(0,2\right)\). The frequency map \(j\mapsto\omega_{\alpha}\left(j\right)=j\,L_{\alpha}\left(\left|j\right|\right)\), \(j\in\mathbb{Z}\), where \(L_{\alpha}\) is computed in Lemma 3.1, is odd and satisfies the convexity property_
\[\Delta^{2}\omega_{\alpha}\left(j\right):=\omega_{\alpha}\left(j+1\right)+ \omega_{\alpha}\left(j-1\right)-2\,\omega_{\alpha}\left(j\right)=\frac{\Gamma \left(2-\alpha\right)}{2^{1-\alpha}\Gamma^{2}\left(1-\frac{\alpha}{2}\right)} \frac{\Gamma\left(\frac{\alpha}{2}-1+j\right)}{\Gamma\left(2-\frac{\alpha}{2} +j\right)}\alpha j>0\,,\quad\forall\ j\geq 1\,. \tag{3.26}\]
_The linear frequencies \(\omega_{\alpha}\left(j\right)\) are different from zero for any \(\left|j\right|\geq 2\), in particular \(\omega_{\alpha}\left(j\right)>0\) and increasing for any \(j\geq 2\)._
Proof.: In view of Lemma 3.1, for any \(j\geq 1\), and the identity \(\Gamma(1+y)=y\Gamma(y)\), the second discrete derivative \(\Delta^{2}\omega_{\alpha}\left(j\right)\) is equal to
\[\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\frac{\Gamma \left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac {\alpha}{2}\right)}\left\{\left(j+1\right)\sum_{k=0}^{j}\frac{\Gamma\left( \frac{\alpha}{2}+k\right)}{\Gamma\left(2-\frac{\alpha}{2}+k\right)}+\left(j-1 \right)\sum_{k=0}^{j-2}\frac{\Gamma\left(\frac{\alpha}{2}+k\right)}{\Gamma \left(2-\frac{\alpha}{2}+k\right)}-2j\sum_{k=0}^{j-1}\frac{\Gamma\left(\frac {\alpha}{2}+k\right)}{\Gamma\left(2-\frac{\alpha}{2}+k\right)}\\ -\left(j+1\right)\frac{\Gamma\left(\frac{\alpha}{2}+j+1\right)} {\Gamma\left(1-\frac{\alpha}{2}+j+1\right)}-\left(j-1\right)\frac{\Gamma\left( \frac{\alpha}{2}+j-1\right)}{\Gamma\left(1-\frac{\alpha}{2}+j-1\right)}+2j \frac{\Gamma\left(\frac{\alpha}{2}+j\right)}{\Gamma\left(1-\frac{\alpha}{2}+j \right)}\right\}\,. \tag{3.27}\]
The first term inside the above bracket is equal to
\[I =\left(j+1\right)\frac{\Gamma\left(\frac{\alpha}{2}+j\right)}{ \Gamma\left(2-\frac{\alpha}{2}+j\right)}-\left(j-1\right)\frac{\Gamma\left( \frac{\alpha}{2}+j-1\right)}{\Gamma\left(2-\frac{\alpha}{2}+j-1\right)} \tag{3.28}\] \[=\frac{\Gamma\left(\frac{\alpha}{2}+j-1\right)}{\Gamma\left(2- \frac{\alpha}{2}+j\right)}\left[\left(j+1\right)\left(\frac{\alpha}{2}+j-1 \right)-\left(j-1\right)\left(1-\frac{\alpha}{2}+j\right)\right]=\frac{\Gamma \left(\frac{\alpha}{2}+j-1\right)}{\Gamma\left(2-\frac{\alpha}{2}+j\right)} \,\alpha j\,.\]
Writing the terms in the 2nd line of the bracket in (3.27) as
\[-\left(j+1\right)\frac{\Gamma\left(\frac{\alpha}{2}+j+1\right)}{ \Gamma\left(1-\frac{\alpha}{2}+j+1\right)} =-\frac{\Gamma\left(\frac{\alpha}{2}+j-1\right)}{\Gamma\left(2- \frac{\alpha}{2}+j\right)}\left(j+1\right)\left(\frac{\alpha}{2}+j\right) \left(\frac{\alpha}{2}+j-1\right), \tag{3.29}\] \[-\left(j-1\right)\frac{\Gamma\left(\frac{\alpha}{2}+j-1\right)} {\Gamma\left(1-\frac{\alpha}{2}+j-1\right)} =-\frac{\Gamma\left(\frac{\alpha}{2}+j-1\right)}{\Gamma\left(2- \frac{\alpha}{2}+j\right)}\left(j-1\right)\left(-\frac{\alpha}{2}+j\right) \left(1-\frac{\alpha}{2}+j\right),\] \[2j\frac{\Gamma\left(\frac{\alpha}{2}+j\right)}{\Gamma\left(1- \frac{\alpha}{2}+j\right)} =\frac{\Gamma\left(\frac{\alpha}{2}+j-1\right)}{\Gamma\left(2- \frac{\alpha}{2}+j\right)}\,2j\left(j^{2}-\left(1-\frac{\alpha}{2}\right)^{2} \right),\]
we conclude by Eqs. (3.27) to (3.29) and since \(c_{\alpha}=\frac{\Gamma\left(\frac{\alpha}{2}\right)}{2^{1-\alpha}\Gamma \left(1-\frac{\alpha}{2}\right)}\) (cf. (1.5)), that
\[\Delta^{2}\omega_{\alpha}\left(j\right)=\frac{1}{2^{1-\alpha}\left(2-\alpha \right)}\frac{\Gamma\left(2-\alpha\right)}{\Gamma^{2}\left(1-\frac{\alpha}{2} \right)}\frac{\Gamma\left(\frac{\alpha}{2}-1+j\right)}{\Gamma\left(2-\frac{ \alpha}{2}+j\right)}\,X_{\alpha}\left(j\right)\]
where
\[X_{\alpha}\left(j\right):=\alpha j-\left(j+1\right)\left(\frac{\alpha}{2}+j \right)\left(\frac{\alpha}{2}-1+j\right)-\left(j-1\right)\left(-\frac{\alpha} {2}+j\right)\left(1-\frac{\alpha}{2}+j\right)+2j\left(j^{2}-\left(1-\frac{ \alpha}{2}\right)^{2}\right)=\left(2-\alpha\right)\alpha j\,.\]
This proves (3.26). The positivity of \(\Delta^{2}\omega_{\alpha}\left(j\right)\) in (3.26) follows because the function \(\Gamma\) is positive on positive numbers. Finally, the convexity property (3.26) and \(\omega_{\alpha}\left(0\right)=\omega_{\alpha}\left(1\right)=0\) (cf. Remark 3.3) imply that \(\omega_{\alpha}\left(j\right)>0\) and increasing for any \(j\geq 2\).
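A quick numerical check (illustrative only, a sketch with our own helper names) confirms that the discrete second difference of \(\omega_{\alpha}\) agrees with the closed Gamma-function expression in (3.26) and is positive.

```python
# Illustrative check of the convexity identity (3.26).
import math

def omega(alpha, j):
    # omega_alpha(j) = j * L_alpha(j) assembled from (3.3)-(3.6) and (1.5), j >= 0.
    g = math.lgamma
    pref = math.gamma(2 - alpha) / (math.gamma(1 - alpha / 2) * math.gamma(alpha / 2))
    T1 = pref * sum(math.exp(g(alpha / 2 + k) - g(1 - alpha / 2 + k)) / (1 - alpha / 2 + k)
                    for k in range(j))
    T2 = pref * math.exp(g(alpha / 2 + j) - g(1 - alpha / 2 + j))
    c_alpha = math.gamma(alpha / 2) / (2 ** (1 - alpha) * math.gamma(1 - alpha / 2))
    return j * c_alpha / (2 - alpha) * (T1 - T2 - math.gamma(2 - alpha) / math.gamma(1 - alpha / 2) ** 2)

def second_difference(alpha, j):
    return omega(alpha, j + 1) + omega(alpha, j - 1) - 2 * omega(alpha, j)

def rhs_of_326(alpha, j):
    # Right-hand side of (3.26).
    g = math.lgamma
    return (math.gamma(2 - alpha) / (2 ** (1 - alpha) * math.gamma(1 - alpha / 2) ** 2)
            * math.exp(g(alpha / 2 - 1 + j) - g(2 - alpha / 2 + j)) * alpha * j)

for alpha in (0.3, 1.0, 1.7):
    for j in (1, 2, 5, 10):
        # the two columns agree and are positive
        print(alpha, j, second_difference(alpha, j), rhs_of_326(alpha, j))
```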
The next lemma is crucial for the normal form construction of Section 5.
**Lemma 3.5** (Absence of three wave interactions).: _Let \(\alpha\in\left(0,2\right)\). For any \(n,j,k\in\mathbb{Z}\setminus\left\{0\right\}\) satisfying \(k=j+n\), it results_
\[\left|\omega_{\alpha}\left(k\right)-\omega_{\alpha}\left(j\right)-\omega_{\alpha }\left(n\right)\right|\geq\omega_{\alpha}\left(2\right)>0\,. \tag{3.30}\]
Proof.: Since the map \(j\mapsto\omega_{\alpha}(j)\) is odd and strictly increasing for \(j\in\mathbb{N}\), it is sufficient to consider the case \(k\geq j\geq n\geq 1\), \(k=j+n\). Then, using that \(\omega_{\alpha}(0)=\omega_{\alpha}(1)=0\), defining \(A_{\alpha}(\ell):=\omega_{\alpha}(\ell)-\omega_{\alpha}(\ell-1)\), we write by a telescoping expansion,
\[\omega_{\alpha}\left(k\right)-\omega_{\alpha}\left(j\right)- \omega_{\alpha}\left(n\right) =\sum_{q=1}^{j+n}\left(\omega_{\alpha}\left(q\right)-\omega_{ \alpha}\left(q-1\right)\right)-\sum_{q=1}^{j}\left(\omega_{\alpha}\left(q \right)-\omega_{\alpha}\left(q-1\right)\right)-\sum_{q=1}^{n}\left(\omega_{ \alpha}\left(q\right)-\omega_{\alpha}\left(q-1\right)\right)\] \[=\sum_{q=1}^{n}\left(A_{\alpha}\left(q+j\right)-A_{\alpha}\left(q \right)\right)=\sum_{q=1}^{n}\sum_{q^{\prime}=1}^{j}\left(A_{\alpha}\left(q+q ^{\prime}\right)-A_{\alpha}\left(q+q^{\prime}-1\right)\right)\] \[=\sum_{q=1}^{n}\sum_{q^{\prime}=1}^{j}\triangle^{2}\omega_{ \alpha}\left(q+q^{\prime}-1\right)\geq\triangle^{2}\omega_{\alpha}\left(1 \right)=\omega_{\alpha}\left(2\right)>0\]
by (3.26) and Lemma 3.4. This proves (3.30).
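The lower bound (3.30) can also be observed numerically; the following sketch (illustrative only, not a substitute for the proof) scans a range of nonzero integers \(j,n\) with \(k=j+n\neq 0\) and checks that the minimum of \(|\omega_{\alpha}(k)-\omega_{\alpha}(j)-\omega_{\alpha}(n)|\) equals \(\omega_{\alpha}(2)\), attained e.g. at \(j=n=1\).

```python
# Illustrative check of the three-wave lower bound (3.30) on a finite range.
import math

def omega(alpha, j):
    # omega_alpha(j) for j in Z, using oddness; assembled from (3.3)-(3.6) and (1.5).
    sign, a = (1 if j >= 0 else -1), abs(j)
    g = math.lgamma
    pref = math.gamma(2 - alpha) / (math.gamma(1 - alpha / 2) * math.gamma(alpha / 2))
    T1 = pref * sum(math.exp(g(alpha / 2 + k) - g(1 - alpha / 2 + k)) / (1 - alpha / 2 + k)
                    for k in range(a))
    T2 = pref * math.exp(g(alpha / 2 + a) - g(1 - alpha / 2 + a))
    c_alpha = math.gamma(alpha / 2) / (2 ** (1 - alpha) * math.gamma(1 - alpha / 2))
    L = c_alpha / (2 - alpha) * (T1 - T2 - math.gamma(2 - alpha) / math.gamma(1 - alpha / 2) ** 2)
    return sign * a * L

alpha = 1.3
worst = min(abs(omega(alpha, j + n) - omega(alpha, j) - omega(alpha, n))
            for j in range(-40, 41) if j != 0
            for n in range(-40, 41) if n != 0 and j + n != 0)
print(worst, ">=", omega(alpha, 2))
```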
We finally prove an asymptotic expansion of the frequencies \(\omega_{\alpha}\left(j\right)\). We use the notation \(\sum_{j=p_{1}}^{p_{2}}a_{j}\equiv 0\) if \(p_{2}<p_{1}\). We denote by \(m_{\beta}\) a real Fourier multiplier of order \(\beta\in\mathbb{R}\), and by \(c_{\alpha}^{\kappa}\) real constants, which may vary from line to line.
**Lemma 3.6** (Asymptotic behavior of \(L_{\alpha}\left(\left|j\right|\right)\)).: _Let_
\[\mathbb{V}_{\alpha}:=\begin{cases}\dfrac{\alpha c_{\alpha}}{2-\alpha}\dfrac{ \Gamma\left(1-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}&\alpha \neq 1\,,\\ \dfrac{1}{\pi}\left\{\left(\gamma_{\mathrm{EM}}-\dfrac{\pi^{2}}{12}-2\right)+ \sum_{k=1}^{\infty}\left[\dfrac{1}{\frac{1}{2}+k}-\dfrac{1}{k}\left(1-\dfrac {1}{2k}\right)\right]\right\}&\alpha=1\,,\end{cases} \tag{3.31}\]
_where \(\gamma_{\mathrm{EM}}:=\left(\lim_{n\to+\infty}\sum_{k=1}^{n}\frac{1}{k} \right)-\log n\) is the Euler-Mascheroni constant._
_Then the symbol \(L_{\alpha}\left(\left|j\right|\right)\) in Lemma 3.1 has the following asymptotic expansion: for any \(\mathcal{K}\in\mathbb{N}\), \(\mathcal{K}\geq 3\),_
* _If_ \(\alpha\in(0,1)\cup(1,2)\) _there exists real constants_ \(c_{\alpha}^{\kappa}\)_,_ \(\kappa\in\{3,\ldots,\mathcal{K}-1\}\) _and a Fourier multiplier_ \(m_{\alpha-\mathcal{K}}\) _of order_ \(\alpha-\mathcal{K}\) _such that_ \[L_{\alpha}\left(\left|j\right|\right)=\mathbb{V}_{\alpha}+\underbrace{\dfrac{c_ {\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\dfrac{\Gamma\left(3-\alpha\right)} {\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2}\right)} \dfrac{1}{\alpha-1}}_{:=c_{\alpha}^{\kappa}}\left|j\right|^{\alpha-1}+\sum_{ \kappa=3}^{\mathcal{K}-1}c_{\alpha}^{\kappa}\left|j\right|^{\alpha-\kappa}+m_{ \alpha-\mathcal{K}}\left(\left|j\right|\right)\,.\] (3.32)
* _If_ \(\alpha=1\) _there exist real constants_ \(c_{1}^{\kappa}\)_,_ \(\kappa\in\{3,\ldots,\mathcal{K}-1\}\) _and a Fourier multiplier_ \(m_{1-\mathcal{K}}\) _of order_ \(1-\mathcal{K}\) _such that_ \[L_{1}\left(\left|j\right|\right)=\mathbb{V}_{1}+\dfrac{1}{\pi}\log\left|j\right|+\sum_{\kappa=3}^{\mathcal{K}-1}c_{1}^{\kappa}\left|j\right|^{1-\kappa}+m_{1-\mathcal{K}}\left(\left|j\right|\right)\,.\]
Note that the expansion (3.32) contains no term of the form \(c_{\alpha}^{2}\left|j\right|^{\alpha-2}\), and that \(\frac{1}{\alpha-1}\left|j\right|^{\alpha-1}\) is positive and tends to infinity for \(\alpha\in(1,2)\), whereas it is negative and tends to zero for \(\alpha\in(0,1)\).
For completeness we provide the expansion also in the cases \(\alpha=1\) and \(\alpha\in(0,1)\), although it is not needed for the proof of Theorem 1.1.
Lemma 3.6 is a direct consequence of (3.5), (3.3) and (1.5) and the following lemma.
**Lemma 3.7**.: _For any \(\mathcal{K}\in\mathbb{N},\mathcal{K}\geq 3\), the following holds:_
* _if_ \(\alpha\in(0,1)\cup(1,2)\)_, there exist real constants_ \(c_{\alpha}^{\kappa}\)_,_ \(\kappa\in\{3,\ldots,\mathcal{K}-1\}\) _such that_ \[\mathsf{T}_{\alpha}^{1}\left(\left|j\right|\right)=\dfrac{\Gamma\left(1-\alpha \right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}+\dfrac{\Gamma\left(2- \alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha }{2}\right)}\dfrac{1}{\alpha-1}\left|j\right|^{\alpha-1}+\sum_{\kappa=3}^{ \mathcal{K}-1}c_{\alpha}^{\kappa}\left|j\right|^{\alpha-\kappa}+m_{\alpha- \mathcal{K}}\left(\left|j\right|\right)\,.\] (3.33)
* _if_ \(\alpha=1\) _there exist real constants_ \(c_{1}^{\kappa}\)_,_ \(\kappa\in\{3,\ldots,\mathcal{K}-1\}\) _such that_ \[\mathsf{T}_{1}^{1}\left(\left|j\right|\right)=\dfrac{1}{\pi}\left\{\log\left|j \right|+\left(\gamma_{\mathrm{EM}}-\dfrac{\pi^{2}}{12}\right)+\sum_{k=1}^{ \infty}\left[\dfrac{1}{\frac{1}{2}+k}-\dfrac{1}{k}\left(1-\dfrac{1}{2k}\right) \right]\right\}+\sum_{\kappa=3}^{\mathcal{K}-1}c_{\alpha}^{\kappa}\left|j \right|^{-\kappa}+m_{1-\mathcal{K}}\left(\left|j\right|\right);\] (3.34)
* _if_ \(\alpha\in(0,2)\) _there exist real constants_ \(c_{\alpha}^{\kappa}\)_,_ \(\kappa\in\{3,\ldots,\mathcal{K}-1\}\) _such that_ \[\mathsf{M}_{\alpha}\left(\left|j\right|\right)=\frac{\Gamma\left(2-\alpha \right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2} \right)}\frac{1}{\left|j\right|^{2}-\left(1-\frac{\alpha}{2}\right)^{2}}\ \left[\left|j\right|^{ \alpha-1}+\sum_{\kappa=3}^{\mathcal{K}-1}c_{\alpha}^{\kappa}\ \left|j\right|^{ \alpha-\kappa}+m_{\alpha-\mathcal{K}}\left(\left|j\right|\right)\right].\] (3.35)
Proof.: By the proof of Lemma 3.1 (below (3.23)) we know that
\[\frac{\Gamma\left(\xi+a\right)}{\Gamma\left(\xi+b\right)}-\xi^{a-b}\sum_{ \kappa=0}^{N}\frac{G_{\kappa}\left(a,b\right)}{\xi^{\kappa}}=m_{a-b-\left(1+N \right)}\left(\xi\right)\,,\]
where \(m_{a-b-\left(1+N\right)}\left(\xi\right)\) is a Fourier multiplier in \(\widetilde{\Gamma}_{0}^{a-b-\left(1+N\right)}\), and therefore, for any \(\mathcal{K}\geq 3\),
\[\frac{\Gamma\left(\frac{\alpha}{2}+\left|j\right|\right)}{\Gamma\left(1-\frac {\alpha}{2}+\left|j\right|\right)}=\left|j\right|^{\alpha-1}+\sum_{\kappa=2}^ {\mathcal{K}-1}G_{\kappa}\left(\frac{\alpha}{2},1-\frac{\alpha}{2}\right) \left|j\right|^{\alpha-\left(1+\kappa\right)}+m_{\alpha-\mathcal{K}}\left( \left|j\right|\right)\,, \tag{3.36}\]
where we exploited that \(G_{0}(a,b)=1\) and \(G_{1}\left(\frac{\alpha}{2},1-\frac{\alpha}{2}\right)=0\), by (3.23). By Remark 3.2 and (3.36) we deduce (3.33) for \(\mathcal{K}=3\). Finally (3.34) for \(\alpha=1\) follows by the asymptotic estimate of the harmonic numbers \(\sum_{k=1}^{j}k^{-1}=\gamma_{\mathrm{EM}}+\log\left(j\right)+\frac{1}{2j}+m_{ -2}\left(j\right)\).
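Both ingredients of the proof, the ratio asymptotics (3.36) and the harmonic-number expansion, are elementary to check numerically. The following Python sketch is a sanity check only; the values of \(a,b\) and the sample points are arbitrary choices.

```python
# Hedged numerical check of the two asymptotics used above:
#   Gamma(xi + a) / Gamma(xi + b) ~ xi^(a - b)        (leading term of (3.36))
#   sum_{k=1}^{j} 1/k = gamma_EM + log j + 1/(2j) + O(j^-2)
import math

a, b = 0.75, 0.25          # hypothetical values playing the roles of alpha/2 and 1 - alpha/2
for xi in (10.0, 100.0, 1000.0):
    ratio = math.exp(math.lgamma(xi + a) - math.lgamma(xi + b))
    lead = xi ** (a - b)
    print(f"xi={xi:7.1f}  ratio/lead = {ratio / lead:.8f}")   # tends to 1 as xi grows

gamma_em = 0.5772156649015329   # Euler-Mascheroni constant
for j in (10, 100, 1000):
    harmonic = sum(1.0 / k for k in range(1, j + 1))
    approx = gamma_em + math.log(j) + 1.0 / (2 * j)
    print(f"j={j:5d}  |H_j - approx| = {abs(harmonic - approx):.2e}")   # O(j^-2)
```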
## 4 Paralinearization of the Hamiltonian scalar field
The main result of this section is the following.
**Theorem 4.1** (Paralinearization of the \(\alpha\)-SQG patch equation).: _Let \(\alpha\in(0,1)\cup(1,2)\). Let \(N\in\mathbb{N}\) and \(\rho\geq 0\). For any \(K\in\mathbb{N}_{0}\), there exist \(s_{0}>0\), \(\epsilon_{0}>0\) such that, if \(f\in B_{s_{0},\mathbb{R}}^{K}\left(I;\epsilon_{0}\right)\) solves Eq. (1.11) then_
\[\partial_{t}f+\partial_{x}\circ\mathrm{Op}^{BW}\left[\left(1+\nu\left(f;x \right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)+P\left( f;x,\xi\right)\right]\ f=R\left(f\right)f \tag{4.1}\]
_where_
* \(L_{\alpha}\left(\left|\xi\right|\right)\) _is the real valued Fourier multiplier of order_ \(\max\{0,\alpha-1\}\) _defined in Lemma_ 3.1_;_
* \(\nu\left(f;x\right),V\left(f;x\right)\) _are real valued functions in_ \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) _(see Definition_ 2.7_);_
* \(P\left(f;x,\xi\right)\) _is a symbol in_ \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\) _(see Definition_ 2.2_) satisfying (_2.20_);_
* \(R\left(f\right)\) _is a real smoothing operator in_ \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\) _(see Definition_ 2.17_)._
Note that, since the symbol \(\left(1+\nu\left(f;x\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+V \left(f;x\right)\) is real, the vector field in (4.1) is linearly Hamiltonian up to zero order operators.
### Isolating the integral terms
**Notation.** In this section we use the following auxiliary functions
\[r=r\left(f;x\right):=\sqrt{1+2f\left(x\right)}\,,\qquad\delta_{z}f:=f\left(x \right)-f\left(x-z\right),\qquad\Delta_{z}f:=\frac{\delta_{z}f}{2\sin\left(z/2 \right)},\ \forall z\in\mathbb{T}\setminus\left\{0\right\}. \tag{4.2}\]
We shall denote by \(P\left(f;x,\xi\right)\) a symbol in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\) (see Definition 2.2) by \(R\left(f\right)\) a smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\) (see Definition 2.17) and by \(R\left(f;z\right)\) a Kernel-smoothing operator in \(\Sigma K\mathcal{R}_{K,0,1}^{-\rho,0}\left[\epsilon_{0},N\right]\) (see Definition 2.33), whose explicit expression may vary from line to line.
Note that \(r\left(f;x\right)\) is a function in \(\Sigma\mathcal{F}_{K,0,0}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) and, according to Definition 2.31,
\[\delta_{z}\in\overline{K\mathcal{M}}_{0}^{1,1}\,. \tag{4.3}\]
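As a quick illustration of the notation (4.2), the following Python sketch (with an arbitrary smooth test profile, an assumption made only for the example) evaluates \(\delta_{z}f\) and \(\Delta_{z}f\) and shows that \(\Delta_{z}f\) remains bounded as \(z\to 0\), since \(\delta_{z}f=zf^{\prime}\left(x\right)+O\left(z^{2}\right)\) while \(2\sin\left(z/2\right)=z+O\left(z^{3}\right)\).

```python
# Hedged illustration of the auxiliary quantities in (4.2) for a toy profile f.
import math

def f(x):            # arbitrary smooth 2*pi-periodic test profile (an assumption)
    return 0.1 * math.cos(x) + 0.05 * math.sin(2 * x)

def delta_z(x, z):   # delta_z f = f(x) - f(x - z)
    return f(x) - f(x - z)

def Delta_z(x, z):   # Delta_z f = delta_z f / (2 sin(z/2))
    return delta_z(x, z) / (2.0 * math.sin(z / 2.0))

x0 = 0.7
fprime = -0.1 * math.sin(x0) + 0.1 * math.cos(2 * x0)   # exact f'(x0)
for z in (1.0, 0.1, 0.01, 0.001):
    print(f"z={z:6.3f}  Delta_z f = {Delta_z(x0, z):+.6f}   f'(x0) = {fprime:+.6f}")
# Delta_z f(x0) -> f'(x0) as z -> 0, so the quotient has a removable singularity at z = 0.
```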
In view of (1.12) and performing the change of variable \(y=x-z\), the gradient \(\nabla E_{\alpha}\left(f\right)\) can be decomposed as
\[\begin{split}\nabla E_{\alpha}\left(f\right)&=\nabla E_{\alpha}^{(1)}\left(f\right)+\nabla E_{\alpha}^{(2)}\left(f\right),\\ \nabla E_{\alpha}^{(1)}\left(f\right)&:=\!\frac{c_{\alpha}}{2\!\left(1\!-\!\frac{\alpha}{2}\right)}\!\int\frac{1+2f\left(x-z\right)-\sqrt{1+2f\left(x\right)}\,\sqrt{1+2f\left(x-z\right)}\cos z}{\left[2\left(1+f\left(x\right)+f\left(x-z\right)-\sqrt{1+2f\left(x\right)}\sqrt{1+2f\left(x-z\right)}\cos z\right)\right]^{\frac{\alpha}{2}}}\,\mathrm{d}z,\\ \nabla E_{\alpha}^{(2)}\left(f\right)&:=\!\frac{c_{\alpha}}{2\!\left(1\!-\!\frac{\alpha}{2}\right)}\!\int\frac{\sqrt{\frac{1+2f\left(x\right)}{1+2f\left(x-z\right)}}\,f^{\prime}\left(x-z\right)\sin z}{\left[2\left(1+f\left(x\right)+f\left(x-z\right)-\sqrt{1+2f\left(x\right)}\sqrt{1+2f\left(x-z\right)}\cos z\right)\right]^{\frac{\alpha}{2}}}\,\mathrm{d}z.\end{split} \tag{4.4}\]
Then, recalling the notation in (4.2), we write
\[\nabla E_{\alpha}^{(1)}\left(f\right)=\,\frac{c_{\alpha}}{2\!\left(1\!-\! \frac{\alpha}{2}\right)}\!\int\frac{r^{2}-2\delta_{z}f-r\,\sqrt{r^{2}-2\delta_ {z}f}\cos z}{\left[2\left(r^{2}-\delta_{z}f-r\sqrt{r^{2}-2\delta_{z}f}\cos z \right)\right]^{\frac{\alpha}{2}}}\,\mathrm{d}z=\,\frac{c_{\alpha}}{2\left(1\! -\!\frac{\alpha}{2}\right)}\,r^{2-\alpha}\!\!\int\mathrm{G}_{\alpha,z}^{1} \left(\frac{\delta_{z}f}{r^{2}}\right)\,\mathrm{d}z \tag{4.5}\]
with
\[\mathrm{G}_{\alpha,z}^{1}\left(\mathcal{X}\right):=\,\frac{1-2\mathcal{X}- \sqrt{1-2\mathcal{X}}\cos z}{\left[2\left(1\!-\!\mathcal{X}-\sqrt{1-2\mathcal{ X}}\cos z\right)\right]^{\frac{\alpha}{2}}}, \tag{4.6}\]
and
\[\nabla E_{\alpha}^{(2)}\left(f\right)=\,\frac{c_{\alpha}}{2\!\left(1\!-\!\frac{\alpha}{2}\right)}\,\frac{1}{r^{\alpha}}\underbrace{\int\mathrm{G}_{\alpha,z}^{2}\left(\frac{\delta_{z}f}{r^{2}}\right)f^{\prime}\left(x-z\right)\sin z\,\mathrm{d}z}_{=:\mathcal{J}\left(f\right)}=\frac{c_{\alpha}}{2\!\left(1\!-\!\frac{\alpha}{2}\right)}\,\frac{1}{r^{\alpha}}\,\mathcal{J}\left(f\right) \tag{4.7}\]
with
\[\mathrm{G}_{\alpha,z}^{2}\left(\mathcal{X}\right):=\frac{\frac{1}{\sqrt{1-2 \mathcal{X}}}}{\left[2\left(1\!-\!\mathcal{X}\!-\!\sqrt{1-2\mathcal{X}}\!\cos z \right)\right]^{\frac{\alpha}{2}}}. \tag{4.8}\]
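The factorizations in (4.5) and (4.7) are pure algebra: writing \(1+2f\left(x\right)=r^{2}\) and \(1+2f\left(x-z\right)=r^{2}-2\delta_{z}f\), a power of \(r\) scales out of the integrands of (4.4). The following Python sketch checks both identities numerically at randomly sampled values (the sampling ranges are arbitrary choices made only for the check).

```python
# Hedged numerical check of the rescalings behind (4.5) and (4.7).
import math, random

def check(alpha, r, d, z):
    # d plays the role of delta_z f; X = delta_z f / r^2
    X = d / r**2
    den = (2 * (r**2 - d - r * math.sqrt(r**2 - 2*d) * math.cos(z)))**(alpha / 2)
    lhs1 = (r**2 - 2*d - r * math.sqrt(r**2 - 2*d) * math.cos(z)) / den
    G1 = (1 - 2*X - math.sqrt(1 - 2*X) * math.cos(z)) / (2 * (1 - X - math.sqrt(1 - 2*X) * math.cos(z)))**(alpha / 2)
    lhs2 = (r / math.sqrt(r**2 - 2*d)) / den
    G2 = (1 / math.sqrt(1 - 2*X)) / (2 * (1 - X - math.sqrt(1 - 2*X) * math.cos(z)))**(alpha / 2)
    return abs(lhs1 - r**(2 - alpha) * G1), abs(lhs2 - r**(-alpha) * G2)

random.seed(0)
for _ in range(5):
    alpha = random.uniform(0.1, 1.9)
    r = random.uniform(0.8, 1.2)        # r = sqrt(1 + 2 f(x)), f small
    d = random.uniform(-0.05, 0.05)     # small delta_z f
    z = random.uniform(0.3, 3.0)        # away from z = 0 to avoid the singularity
    print([f"{e:.2e}" for e in check(alpha, r, d, z)])   # both errors of rounding size
```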
By (4.4), recalling (2.3) and that \(\nabla E_{\alpha}^{(1)}\left(0\right)\) is a constant, using (4.5), (4.7), the equation (1.11) can be written as
\[\begin{split}\partial_{t}f&=\partial_{x}\left[\left( \nabla E_{\alpha}^{(1)}\left(f\right)-\nabla E_{\alpha}^{(1)}\left(0\right) \right)+\nabla E_{\alpha}^{(2)}\left(f\right)\right]\\ &=\frac{c_{\alpha}}{2\!\left(1\!-\!\frac{\alpha}{2}\right)}\, \partial_{x}\left[\left(r^{2-\alpha}\Delta I\left(f\right)\right)+\!\int \mathrm{G}_{\alpha,z}^{1}\left(0\right)\mathrm{d}z\,\left(r^{2-\alpha}-1\right) +\frac{1}{r^{\alpha}}\,\mathcal{J}\left(f\right)\right]\end{split} \tag{4.9}\]
where
\[\Delta I\left(f\right):=\!\int\mathrm{G}_{\alpha,z}^{1}\left(\frac{\delta_{z}f }{r^{2}}\right)-\mathrm{G}_{\alpha,z}^{1}\left(0\right)\mathrm{d}z. \tag{4.10}\]
By (3.22), (4.6) we get
\[\int\mathrm{G}_{\alpha,z}^{1}\left(0\right)\mathrm{d}z=\frac{1}{2}\!\int\left[2\left(1-\cos z\right)\right]^{1-\frac{\alpha}{2}}\mathrm{d}z=\frac{1}{1-\frac{\alpha}{2}}\,\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}. \tag{4.11}\]
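Identity (4.11) can be verified by direct quadrature. The sketch below assumes that \(\int\) denotes the averaged integral \(\frac{1}{2\pi}\int_{-\pi}^{\pi}\mathrm{d}z\) over the torus (an assumption about the normalization used here); the quadrature rule and the sample values of \(\alpha\) are arbitrary.

```python
# Hedged quadrature check of (4.11), assuming the integral over the torus is
# normalized as the average (1/(2*pi)) * int_{-pi}^{pi} ... dz.
import math

def lhs(alpha, n=200000):
    # (1/2) * average of [2(1 - cos z)]^(1 - alpha/2) over the torus (midpoint rule)
    h = 2 * math.pi / n
    total = sum((2 * (1 - math.cos(-math.pi + (k + 0.5) * h)))**(1 - alpha / 2) for k in range(n))
    return 0.5 * total * h / (2 * math.pi)

def rhs(alpha):
    return math.gamma(2 - alpha) / ((1 - alpha / 2) * math.gamma(1 - alpha / 2)**2)

for alpha in (0.5, 1.0, 1.5):
    print(f"alpha={alpha}:  lhs={lhs(alpha):.6f}  rhs={rhs(alpha):.6f}")
```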
The terms \(\Delta I\left(f\right)\) and \(\mathcal{J}\left(f\right)\) are not yet in a suitable form to be paralinearized, since the nonlinear convolution kernels need to be desingularized at \(z=0\).
**Lemma 4.2**.: _The term \(\Delta I\left(f\right)\) in (4.10) can be written as_
\[\Delta I\left(f\right)=\mathcal{I}\left(f\right)+R\left(f\right)f \tag{4.12}\]
_where \(R\left(f\right)\) is a real smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\), and_
\[\mathcal{I}\left(f\right):=\!\int\mathrm{Op}^{BW}\left[\kappa_{\alpha,z}^{1} \left(\frac{\Delta_{z}f}{r^{2}}\right)\right]\frac{\delta_{z}f}{r^{2}\left|2 \sin\left(z/2\right)\right|^{\alpha}}\mathrm{d}z \tag{4.13}\]
_with_
\[\begin{split}\mathsf{K}^{1}_{\alpha,z}(\mathsf{X})&:= \left(\mathsf{G}^{1}_{\alpha,z}\right)^{\prime}\left(2\mathsf{X}\sin\left(z/2 \right)\right)|2\sin\left(z/2\right)|^{\alpha}\\ &=\left[-\frac{2-\frac{\cos z}{\sqrt{1-4\mathsf{X}\sin\left(z/2 \right)}}}{\left[2\left(1-2\mathsf{X}\sin\left(z/2\right)-\sqrt{1-4\mathsf{X} \sin\left(z/2\right)}\cos z\right)\right]^{\frac{\alpha}{2}}}\right.\\ &\quad\quad\left.+\alpha\frac{\left(1-\frac{\cos z}{\sqrt{1-4 \mathsf{X}\sin\left(z/2\right)}}\right)\left(1-4\mathsf{X}\sin\left(z/2 \right)-\sqrt{1-4\mathsf{X}\sin\left(z/2\right)}\cos z\right)}{\left[2\left(1 -2\mathsf{X}\sin\left(z/2\right)-\sqrt{1-4\mathsf{X}\sin\left(z/2\right)} \cos z\right)\right]^{\frac{\alpha}{2}+1}}\right]\left|2\sin\left(z/2\right) \right|^{\alpha}\,.\end{split} \tag{4.14}\]
_The term \(\mathcal{J}\left(f\right)\) in (4.7) can be written as_
\[\mathcal{J}\left(f\right)=\int\mathsf{K}^{2}_{\alpha,z}\left(\frac{\Delta_{z }f}{r^{2}}\right)\,f^{\prime}\left(x-z\right)\frac{\sin z}{\left|2\sin\left(z /2\right)\right|^{\alpha}}\,\mathrm{d}z \tag{4.15}\]
_where_
\[\mathsf{K}^{2}_{\alpha,z}(\mathsf{X}):=\mathsf{G}^{2}_{\alpha,z}\left( \mathsf{X}\,2\sin\left(z/2\right)\right)\,\left|2\sin\left(z/2\right)\right|^ {\alpha}=\frac{\frac{1}{\sqrt{1-4\mathsf{X}\sin\left(z/2\right)}}\left|2\sin \left(z/2\right)\right|^{\alpha}}{\left[2\left(1-2\mathsf{X}\sin\left(z/2 \right)-\sqrt{1-4\mathsf{X}\sin\left(z/2\right)}\cos z\right)\right]^{\frac{ \alpha}{2}}}. \tag{4.16}\]
_The functions \(z\mapsto\mathsf{K}^{\mathrm{j}}_{\alpha,z}\left(\frac{\Delta_{z}f}{r^{2}}\right)\), \(\mathrm{j}=1,2\), are \(2\pi\)-periodic._
Proof.: Applying Lemma 2.25 to (4.10) we get
\[\Delta I\left(f\right)=\int\mathsf{Op}^{BW}\left[\left(\mathsf{G}^{1}_{\alpha,z}\right)^{\prime}\left(\frac{\delta_{z}f}{r^{2}}\right)\right]\,\frac{ \delta_{z}f}{r^{2}}\mathrm{d}z+\int R\left(\frac{\delta_{z}f}{r^{2}}\right) \frac{\delta_{z}f}{r^{2}}\mathrm{d}z \tag{4.17}\]
where \(R\) is a smoothing operator in \(\Sigma\mathcal{R}^{-\rho}_{K,0,1}\left[\epsilon_{0},N\right]\) and, recalling (4.6),
\[\left(\mathsf{G}^{1}_{\alpha,z}\right)^{\prime}(\mathsf{X})=-\frac{2-\frac{ \cos z}{\sqrt{1-2\mathsf{X}}}}{\left[2\left(1-\mathsf{X}-\sqrt{1-2\mathsf{X} }\cos z\right)\right]^{\frac{\alpha}{2}}}+\alpha\frac{\left(1-\frac{\cos z}{ \sqrt{1-2\mathsf{X}}}\right)\left(1-2\mathsf{X}-\sqrt{1-2\mathsf{X}}\cos z \right)}{\left[2\left(1-\mathsf{X}-\sqrt{1-2\mathsf{X}}\cos z\right)\right]^ {\frac{\alpha}{2}+1}}. \tag{4.18}\]
In view of (4.18), (4.14) we have that
\[\mathsf{K}^{1}_{\alpha,z}(\mathsf{X})=\left(\mathsf{G}^{1}_{\alpha,z}\right)^{ \prime}\left(2\mathsf{X}\sin\left(z/2\right)\right)\left|2\sin\left(z/2\right) \right|^{\alpha}\]
so that the first term on the right hand side of (4.17) is equal to \(\mathcal{I}\left(f\right)\) in (4.13). Notice that, since \(\Delta_{z+2\pi}f=-\Delta_{z}f\) and \(\mathsf{K}^{1}_{\alpha,z+2\pi}\left(-\mathsf{X}\right)=\mathsf{K}^{1}_{\alpha,z }(\mathsf{X})\) (cf. (4.14)), the map \(z\mapsto\mathsf{K}^{1}_{\alpha,z}\left(\frac{\Delta_{z}f}{r^{2}}\right)\) is \(2\pi\)-periodic. Similarly \(z\mapsto\mathsf{K}^{2}_{\alpha,z}\left(\frac{\Delta_{z}f}{r^{2}}\right)\) is \(2\pi\)-periodic.
We now prove that
\[\int R\left(\frac{\delta_{z}f}{1+2f}\right)\left(\frac{\delta_{z}f}{1+2f} \right)\mathrm{d}z=R\left(f\right)f\qquad\text{where}\qquad R\left(f\right) \in\Sigma\mathcal{R}^{-\rho}_{K,0,1}\left[\epsilon_{0},N\right]\,. \tag{4.19}\]
We write
\[R\left(\frac{\delta_{z}f}{1+2f}\right)\left(\frac{\delta_{z}f}{1+2f}\right)=R \left(M\left(f;z\right)f\right)M\left(f;z\right)f\]
where
\[M\left(f;z\right)=M_{1}\left(f\right)+M_{2}\left(f;z\right),\quad M_{1}\left(f \right):=\frac{1}{1+2f},\quad M_{2}\left(f;z\right)=-\frac{\mathsf{t}_{-z}}{1+2f }\,.\]
Remark 2.16 shows that \(M_{1}\left(f\right)\in\Sigma\mathcal{M}^{0,0}_{K,0,0}\left[\epsilon_{0},N\right]\) and Proposition 2.34, Item 2 proves that \(M_{2}\left(f;z\right)\) belongs to \(\Sigma K\mathcal{M}^{0,0}_{K,0,0}\left[\epsilon_{0},N\right]\). Thus \(M\left(f;z\right)\in\Sigma K\mathcal{M}^{0,0}_{K,0,0}\left[\epsilon_{0},N\right]\) and Proposition 2.34, Items 2 and 4 give that
\[R\left(\frac{\delta_{z}f}{1+2f}\right)\left(\frac{\delta_{z}f}{1+2f}\right)=R \left(M\left(f;z\right)f\right)M\left(f;z\right)f=R\left(f;z\right)f\]
for some Kernel-smoothing operator \(R\left(f;z\right)\) in \(\Sigma K\mathcal{R}^{-\rho,0}_{K,0,1}\left[\epsilon_{0},N\right]\). Finally Lemma 2.35 implies (4.19).
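The periodicity statement at the end of Lemma 4.2 can also be checked numerically: although \(\Delta_{z}f\) is only \(2\pi\)-antiperiodic, the symmetry \(\mathsf{K}^{1}_{\alpha,z+2\pi}\left(-\mathsf{X}\right)=\mathsf{K}^{1}_{\alpha,z}\left(\mathsf{X}\right)\) restores \(2\pi\)-periodicity of the composed map. A minimal Python sketch (with an arbitrary small test profile, an assumption made only for the example) is given below.

```python
# Hedged numerical check that z -> K^1_{alpha,z}(Delta_z f / r^2) is 2*pi-periodic,
# via K^1_{alpha,z+2pi}(-X) = K^1_{alpha,z}(X) and Delta_{z+2pi} f = -Delta_z f.
import math

def K1(alpha, z, X):                      # formula (4.14)
    s = math.sin(z / 2)
    sq = math.sqrt(1 - 4 * X * s)
    base = 2 * (1 - 2 * X * s - sq * math.cos(z))
    term1 = -(2 - math.cos(z) / sq) / base**(alpha / 2)
    term2 = alpha * (1 - math.cos(z) / sq) * (1 - 4 * X * s - sq * math.cos(z)) / base**(alpha / 2 + 1)
    return (term1 + term2) * abs(2 * s)**alpha

def f(x):                                 # arbitrary small 2*pi-periodic profile (an assumption)
    return 0.05 * math.cos(x)

alpha, x = 1.3, 0.4
r2 = 1 + 2 * f(x)
for z in (0.7, 1.9, 2.8):
    Dz  = (f(x) - f(x - z)) / (2 * math.sin(z / 2))
    Dz2 = (f(x) - f(x - z - 2 * math.pi)) / (2 * math.sin((z + 2 * math.pi) / 2))   # equals -Dz
    print(abs(K1(alpha, z + 2 * math.pi, Dz2 / r2) - K1(alpha, z, Dz / r2)))        # rounding size
```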
Plugging (4.12) in (4.9) we obtain
\[\partial_{t}f=\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\,\partial_{x} \left[r^{2-\alpha}\left(\mathcal{I}\left(f\right)+R\left(f\right)f\right)+ \int\mathsf{G}^{1}_{\alpha,z}\left(0\right)\mathrm{d}z\,\left(r^{2-\alpha}-1 \right)+\frac{1}{r^{\alpha}}\,\mathcal{J}\left(f\right)\right]. \tag{4.20}\]
### Analysis of the nonlinear convolution kernels
The goal of this section is to represent the nonlinear convolution kernels in (4.13) and (4.15) as Kernel-functions according to Definition 2.26. In Section 4.4 we shall consider as well the convolution kernel
\[\mathsf{K}^{3}_{\alpha,z}(\mathsf{X}):=\big{(}\mathsf{G}^{2}_{\alpha,z}\big{)}^{\prime}(\mathsf{X}\,2\sin\left(z/2\right))\ \sin z\left|2\sin(z/2)\right|^{\alpha}=\left[\frac{\frac{1}{\left(1-4\mathsf{X}\sin\left(z/2\right)\right)^{3/2}}}{\left[2\left(1-2\mathsf{X}\sin\left(z/2\right)-\sqrt{1-4\mathsf{X}\sin\left(z/2\right)}\cos z\right)\right]^{\frac{\alpha}{2}}}\right.\\ \left.+\alpha\frac{\left(1-\frac{\cos z}{\sqrt{1-4\mathsf{X}\sin\left(z/2\right)}}\right)\frac{1}{\sqrt{1-4\mathsf{X}\sin\left(z/2\right)}}}{\left[2\left(1-2\mathsf{X}\sin\left(z/2\right)-\sqrt{1-4\mathsf{X}\sin\left(z/2\right)}\cos z\right)\right]^{\frac{\alpha}{2}+1}}\right]\left|2\sin\left(z/2\right)\right|^{\alpha}\sin z\,. \tag{4.21}\]
**Lemma 4.3**.: _Let \(\mathsf{K}^{\mathrm{j}}_{\alpha,z}(\mathsf{X})\), \(\alpha\in(0,2)\), \(\mathrm{j}=1,2,3\), be the functions defined in (4.14), (4.16) and (4.21). Then_
\[\mathsf{K}^{\mathrm{j}}_{\alpha,z}\left(\frac{\Delta_{z}f}{r^{2}}\right)=\mathsf{K}^{\mathrm{j}}_{\alpha,z}\left(\frac{\Delta_{z}f}{1+2f}\right)\in\Sigma K\mathcal{F}^{0}_{K,0,0}\left[\epsilon_{0},N\right] \tag{4.22}\]
_is a Kernel function, which admits the expansion_
\[\mathsf{K}^{\mathrm{j}}_{\alpha,z}\left(\frac{\Delta_{z}f}{r^{2}}\right)=\mathsf{K}^{\mathrm{j},0}_{\alpha}\left(f;x\right)+\mathsf{K}^{\mathrm{j},1}_{\alpha}\left(f;x\right)\,\sin z+\mathsf{K}^{\mathrm{j},2}_{\alpha}\left(f;x\right)\left(2\sin\left(z/2\right)\right)^{2}+\varrho^{\mathrm{j},3}_{\alpha}\left(f;x,z\right)\,, \tag{4.23}\]
_where_
\[\mathsf{K}^{\mathrm{j},l}_{\alpha}\left(f;x\right)\in\Sigma\mathcal{F}^{\mathrm{R}}_{K,0,\underline{p}(\mathrm{j},l)}\left[\epsilon_{0},N\right]\,,\qquad\varrho^{\mathrm{j},3}_{\alpha}\left(f;x,z\right)\in\Sigma K\mathcal{F}^{3}_{K,0,\underline{q}(\mathrm{j})}\left[\epsilon_{0},N\right]\,,\qquad\underline{q}\left(\mathrm{j}\right):=\begin{cases}1&\text{if }\mathrm{j}=1,2\,,\\ 0&\text{if }\mathrm{j}=3\,,\end{cases} \tag{4.24}\]
_with \(\underline{p}\left(\mathrm{j},\mathrm{l}\right)\in\left\{0,1\right\}\) and constant functions_
\[\left(\begin{array}{ccc}\mathsf{K}^{1,0}_{\alpha}\left(0;x\right)&\mathsf{K}^{2,0}_{\alpha}\left(0;x\right)&\mathsf{K}^{3,0}_{\alpha}\left(0;x\right)\\ \mathsf{K}^{1,1}_{\alpha}\left(0;x\right)&\mathsf{K}^{2,1}_{\alpha}\left(0;x\right)&\mathsf{K}^{3,1}_{\alpha}\left(0;x\right)\\ \mathsf{K}^{1,2}_{\alpha}\left(0;x\right)&\mathsf{K}^{2,2}_{\alpha}\left(0;x\right)&\mathsf{K}^{3,2}_{\alpha}\left(0;x\right)\end{array}\right)=\left(\begin{array}{ccc}-1&1&0\\ 0&0&1+\frac{\alpha}{2}\\ -\frac{1}{2}\left(1-\frac{\alpha}{2}\right)&0&0\end{array}\right). \tag{4.25}\]
Proof.: The statement (4.22) follows from (4.23)-(4.24), which we now prove. We first claim that for any \(R>0\), there exists \(\varepsilon_{R}>0\) such that the functions
\[\mathsf{J}^{i}_{\alpha,w}\left(\mathsf{x},\mathsf{y}\right):=\mathsf{K}^{i}_{ \alpha,z}\left(\frac{\mathsf{y}}{1+2\mathsf{x}}\right)\,,\qquad w:=2\sin\left( z/2\right)\,, \tag{4.26}\]
where \(\mathsf{K}^{i}_{\alpha,z}\left(\cdot\right),\mathrm{j}=1,2,3\) are defined in (4.14), (4.16) and (4.21), are analytic in \((\mathsf{x},\mathsf{y},w)\) in the domain
\[\left|\mathsf{x}\right|\leq\varepsilon_{R}\,,\qquad\left|\mathsf{y}\right|\leq \varepsilon_{R}\,,\qquad\left|w\right|\leq R\,, \tag{4.27}\]
and there exists \(C_{R}>0\) such that \(\left|\mathsf{J}^{\mathrm{j}}_{\alpha,w}\left(\mathsf{x},\mathsf{y}\right)\right|\leq C_{R}\) in this domain. Let us prove the analyticity of \(\mathsf{J}^{1}_{\alpha,w}\left(\mathsf{x},\mathsf{y}\right)\). Substituting \(\mathsf{X}=\frac{\mathsf{y}}{1+2\mathsf{x}}\), \(w=2\sin\left(z/2\right)\) and \(\cos(z)=1-\frac{w^{2}}{2}\) in (4.14) we have
\[\mathsf{J}^{1}_{\alpha,w}\left(\mathsf{x},\mathsf{y}\right) =-\frac{2-\frac{1-\frac{w^{2}}{2}}{\sqrt{1-2\mathsf{X}w}}}{\left[2 \left(1-\mathsf{X}w-\sqrt{1-2\mathsf{X}w}+\sqrt{1-2\mathsf{X}w}\frac{w^{2}}{2} \right)\right]^{\frac{\alpha}{2}}}\left|w\right|^{\alpha} \tag{4.28a}\] \[\quad+\alpha\frac{1}{\sqrt{1-2\mathsf{X}w}}\frac{\left(\sqrt{1-2 \mathsf{X}w}-1+\frac{w^{2}}{2}\right)\left(1-2\mathsf{X}w-\sqrt{1-2\mathsf{X}w }+\sqrt{1-2\mathsf{X}w}\frac{w^{2}}{2}\right)}{\left[2\left(1-\mathsf{X}w- \sqrt{1-2\mathsf{X}w}+\sqrt{1-2\mathsf{X}w}\frac{w^{2}}{2}\right)\right]^{ \frac{\alpha+2}{2}}}\left|w\right|^{\alpha}\,. \tag{4.28b}\]
Since the function \(1-\mathsf{X}w-\sqrt{1-2\mathsf{X}w}=\frac{1}{2}(\mathsf{X}w)^{2}+O\left((\mathsf{X}w)^{3}\right)\) is analytic in \(\mathsf{X}w\) small, the function in (4.28a) is analytic in the domain (4.27) for \(\varepsilon_{R}\) small enough. Furthermore, noting that the functions
\(\sqrt{1-2\mathsf{X}w}-1=-\mathsf{X}w+O\left((\mathsf{X}w)^{2}\right)\) and \(1-2\mathsf{X}w-\sqrt{1-2\mathsf{X}w}=-\mathsf{X}w+O\left((\mathsf{X}w)^{2}\right)\) are analytic in \(\mathsf{X}w\) small, we deduce that also (4.28b) is analytic in \((\mathsf{x},\mathsf{y},w)\) in the domain (4.27). The analyticity of \(\mathsf{J}_{\alpha,w}^{2}\left(\mathsf{x},\mathsf{y}\right)\) and \(\mathsf{J}_{\alpha,w}^{3}\left(\mathsf{x},\mathsf{y}\right)\) follows similarly.
Then by Cauchy integral formula,
\[\mathsf{J}^{\mathrm{j}}_{\alpha,w}\left(\mathsf{x},\mathsf{y}\right)=\sum_{p_{1},p_{2}=0}^{\infty}\frac{1}{p_{1}!\,p_{2}!}\,\partial_{\mathsf{x}}^{p_{1}}\partial_{\mathsf{y}}^{p_{2}}\mathsf{J}^{\mathrm{j}}_{\alpha,w}\left(0,0\right)\,\mathsf{x}^{p_{1}}\mathsf{y}^{p_{2}}\,,\qquad\Big{|}\frac{1}{p_{1}!\,p_{2}!}\,\partial_{\mathsf{x}}^{p_{1}}\partial_{\mathsf{y}}^{p_{2}}\mathsf{J}^{\mathrm{j}}_{\alpha,w}\left(0,0\right)\Big{|}\leq C_{R}\,\varepsilon_{R}^{-p_{1}-p_{2}}\,,\]
uniformly in \(\left|w\right|\leq R\). Substituting \(\mathsf{x}=f\left(x\right)\) and \(\mathsf{y}=\Delta_{z}f\) and collecting the terms which are homogeneous of degree \(p\) in \(f\), we obtain the expansion (4.33)
\[\mathsf{K}^{\mathrm{j}}_{\alpha,z}\left(\frac{\Delta_{z}f}{r^{2}}\right)=\mathsf{K}^{\mathrm{j}}_{\alpha,z}\left(0\right)+\sum_{p\geq 1}\widetilde{\mathsf{K}}^{\mathrm{j},p}_{\alpha}\left(f;x,z\right)\,,\]
where each \(\widetilde{\mathsf{K}}^{\mathrm{j},p}_{\alpha}\left(f;x,z\right)\) is homogeneous of degree \(p\) in \(f\) and, expanding \(f\) in Fourier series, can be written as in (4.36) in terms of coefficients \(\hat{\mathsf{K}}^{\mathrm{j},p}_{J_{p}}\left(z\right)\), \(J_{p}=\left(j_{1},\ldots,j_{p}\right)\in\left(\mathbb{Z}\setminus\{0\}\right)^{p}\). We claim that
\[\widetilde{\mathsf{K}}^{\mathrm{j},p}_{\alpha}\left(f;x,z\right)\in\overline{K}\widetilde{\mathcal{F}}^{0}_{p}\,,\qquad p\geq 1\,, \tag{4.34}\]
and that the tail of the series is a non-homogeneous Kernel function,
\[\widetilde{\mathsf{K}}^{\mathrm{j}>N}_{\alpha}\left(f;x,z\right):=\sum_{p>N}\widetilde{\mathsf{K}}^{\mathrm{j},p}_{\alpha}\left(f;x,z\right)\in K\mathcal{F}^{0}_{K,0,N+1}\left[\epsilon_{0}\right]\,. \tag{4.35}\]
**Step 1** (Proof of (4.34)).: By the Cauchy bounds above, the coefficients \(\hat{\mathsf{K}}^{\mathrm{j},p}_{J_{p}}\left(z\right)\) and their \(z\)-derivatives are controlled as follows.
By (4.32), (4.38), we estimate (4.37) as (the constant \(\varepsilon_{4}\) is the one in (4.32) for \(R=4\))
\[\begin{split}\left|\partial_{z}^{l}\hat{\mathsf{K}}_{J_{p}}^{\mathrm{j},p}\left(z\right)\right|&\leq\sum_{l_{2}+l_{1,1}+\ldots+l_{1,p_{1}}=l}\sum_{\begin{subarray}{c}p_{1}\geq 1\\ p_{1}+p_{2}=p\end{subarray}}C_{l}\,\varepsilon_{4}^{-p_{1}-p_{2}}\prod_{q=1}^{p_{1}}\left|j_{q}\right|^{l_{1,q}+1}\\ &\lesssim_{l}\,p^{2}\left(\frac{C_{l}}{\varepsilon_{4}}\right)^{p}\left|J_{p}\right|^{l}\prod_{q=1}^{p}\left|j_{q}\right|\leq C_{l}^{p}\left|J_{p}\right|^{l}\prod_{q=1}^{p}\left|j_{q}\right|\end{split} \tag{4.39}\]
for some constant \(C_{l}>0\), for any \(z\in\mathbb{T}\). The bound (4.39) implies, recalling Definition 2.26, the claim (4.34).
**Step 2** (Proof of (4.35)).: Recalling (4.33) and (4.36), we have, for any \(0\leq k\leq K\), \(l,\gamma\in\mathbb{N}_{0}\),
\[\begin{split}\left|\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{z}^{l}\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j}>N}\left(f;x,z\right)\right|&\leq\sum_{p>N}\sum_{J_{p}\in(\mathbb{Z}\setminus\{0\})^{p}}\left|J_{p}\right|^{\gamma}\left|\partial_{z}^{l}\hat{\mathsf{K}}_{J_{p}}^{\mathrm{j},p}\left(z\right)\right|\left|\partial_{t}^{k}\left(f_{j_{1}}\cdots f_{j_{p}}\right)\right|\\ &\leq\sum_{p>N}\sum_{J_{p}\in(\mathbb{Z}\setminus\{0\})^{p}}\sum_{k_{1}+\ldots+k_{p}=k}\left|J_{p}\right|^{\gamma}\left|\partial_{z}^{l}\hat{\mathsf{K}}_{J_{p}}^{\mathrm{j},p}\left(z\right)\right|\prod_{q=1}^{p}\left|\partial_{t}^{k_{q}}f_{j_{q}}\right|\\ &\leq\sum_{p>N}\sum_{J_{p}\in(\mathbb{Z}\setminus\{0\})^{p}}\sum_{k_{1}+\ldots+k_{p}=k}C_{l}^{p}\left|J_{p}\right|^{\gamma+l}\prod_{q=1}^{p}\left|j_{q}\right|\left|\partial_{t}^{k_{q}}f_{j_{q}}\right|\end{split}\]
using (4.39). Then, assuming with no loss of generality that \(\left|J_{p}\right|=\max\{\left|j_{1}\right|,\ldots,\left|j_{p}\right|\}=\left|j _{1}\right|\) we have
\[\begin{split}\left|\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{z}^{l}\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j}>N}\left(f;x,z\right)\right|&\leq\sum_{p>N}\sum_{J_{p}\in(\mathbb{Z}\setminus\{0\})^{p}}\sum_{k_{1}+\ldots+k_{p}=k}C_{l}^{p}\left(\prod_{q=2}^{p}\left|j_{q}\right|^{-2}\left|\partial_{t}^{k_{q}}f\right|_{3}\right)\left|j_{1}\right|^{-2}\left|\partial_{t}^{k_{1}}f\right|_{3+\gamma+l}\\ &\leq\sum_{p>N}C_{l}^{p}\,\sum_{k_{1},\ldots,k_{p}=0}^{k}\left(\prod_{q=2}^{p}\left|\partial_{t}^{k_{q}}f\right|_{3}\right)\left|\partial_{t}^{k_{1}}f\right|_{3+\gamma+l}\\ &\leq\sum_{p>N}\left(C_{l}k\right)^{p}\,{|\!|\!|f|\!|\!|}_{k,3+\alpha k}^{p-1}\,{|\!|\!|f|\!|\!|}_{k,3+\gamma+l+\alpha k}\end{split}\]
recalling (2.5). Summing in \(p\) and setting \(s_{0}:=11+\alpha k\), we get, for any \(l\leq 8\), for any \(0\leq\gamma\leq s-s_{0}\),
\[\left|\partial_{t}^{k}\partial_{x}^{\gamma}\partial_{z}^{l}\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j}>N}\left(f;x,z\right)\right|\leq C_{k}^{N+1}\,{|\!|\!|f|\!|\!|}_{k,s}\,,\quad\forall x,z\in\mathbb{T}\,,\]
which, recalling Definition 2.26, proves the claim in (4.35).
Equations (4.34) and (4.35) thus prove (4.22).
**Step 3** (Proof of (4.23)-(4.25)).: In view of (4.33), in order to expand \(\mathsf{K}_{\alpha,z}^{\mathrm{j}}\left(\frac{\Delta_{z}f}{r^{2}}\right)\) as in (4.23), we perform a Taylor expansion in \(z\) of the functions \(\mathsf{K}_{\alpha,z}^{\mathrm{j}}\left(0\right)\) and \(\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j},p}\left(f;x,z\right)\), for any \(p\geq 1\). By (4.31) we have
\[\mathrm{K}_{\alpha,z}^{\mathrm{i}}\left(0\right)=\mathrm{K}_{\alpha}^{\mathrm{i },0}\left(0;x\right)+\mathrm{K}_{\alpha}^{\mathrm{i},1}\left(0;x\right)\sin z+ \mathrm{K}_{\alpha}^{\mathrm{i},2}\left(0;x\right)\left(2\sin\left(z/2\right) \right)^{2}+\varrho_{\alpha}^{\mathrm{i},3}\left(0;x,z\right) \tag{4.40}\]
where \(\mathsf{K}_{\alpha}^{\mathrm{j},l}\left(0;x\right)\), \(\mathrm{j}=1,2,3\), \(l=0,1,2\), are the constants computed in (4.25) and
\[\varrho_{\alpha}^{1,3}\left(0;x,z\right)=\varrho_{\alpha}^{2,3}\left(0;x,z\right)=0\qquad\text{and}\qquad\varrho_{\alpha}^{3,3}\left(0;x,z\right)\in\overline{K}\widetilde{\mathcal{F}}_{0}^{3} \tag{4.41}\]
is \(x\)-independent. Then, for any \(p\geq 1\), we expand
\[\begin{split}\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j},p}\left(f;x,z\right)&=\sum_{l=0}^{2}\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j},p,l}\left(f;x\right)z^{l}+\mathsf{R}_{\alpha}^{\mathrm{j},p,3}\left(f;x,z\right)\\ &=\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j},p,0}\left(f;x\right)+\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j},p,1}\left(f;x\right)\sin z+\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j},p,2}\left(f;x\right)\left(2\sin\left(z/2\right)\right)^{2}+\varrho_{\alpha}^{\mathrm{j},p,3}\left(f;x,z\right),\end{split} \tag{4.42}\]
where, for \(l=0,1,2\),
\[\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j},p,l}\left(f;x\right):=\left.\frac{1}{l!}\,\partial_{z}^{l}\widetilde{\mathsf{K}}_{\alpha}^{\mathrm{j},p}\left(f;x,z\right)\right|_{z=0} \tag{4.43}\]
\[\begin{split}\varrho^{\mathrm{j},p,3}_{\alpha}\left(f;x,z\right)&:=\mathsf{R}^{\mathrm{j},p,3}_{\alpha}\left(f;x,z\right)+\widetilde{\mathsf{R}}^{\mathrm{j},p,3}_{\alpha}\left(f;x,z\right)\,,\\ \mathsf{R}^{\mathrm{j},p,3}_{\alpha}\left(f;x,z\right)&:=\frac{1}{2!}\int_{0}^{1}\left(1-\theta\right)^{2}\partial_{z}^{3}\widetilde{\mathsf{K}}^{\mathrm{j},p}_{\alpha}\left(f;x,\theta z\right)\mathrm{d}\theta\;z^{3}\,,\\ \widetilde{\mathsf{R}}^{\mathrm{j},p,3}_{\alpha}\left(f;x,z\right)&:=\widetilde{\mathsf{K}}^{\mathrm{j},p,1}_{\alpha}\left(f;x\right)\left(z-\sin z\right)+\widetilde{\mathsf{K}}^{\mathrm{j},p,2}_{\alpha}\left(f;x\right)\left(z^{2}-\left(2\sin\left(z/2\right)\right)^{2}\right).\end{split} \tag{4.44}\]
Notice that \(z\to\varrho^{\mathrm{i},p,3}_{\alpha}\left(f;x,z\right)\) is \(2\pi\)-periodic thanks to (4.42). In view of (4.40) and (4.42), we obtain the expansion (4.23) with, for any \(\mathrm{j}=1,2,3\),
\[\begin{split}\mathsf{K}^{\mathrm{i},l}_{\alpha}\left(f;x\right) :=\mathsf{K}^{\mathrm{i},l}_{\alpha}\left(0;x\right)+\sum_{p=1}^{N} \widetilde{\mathsf{K}}^{\mathrm{i},p,l}_{\alpha}\left(f;x\right)+\widetilde{ \mathsf{K}}^{\mathrm{i},j>N,l}_{\alpha}\left(f;x\right),\qquad l=0,1,2\,,\\ \varrho^{\mathrm{i},3}_{\alpha}\left(f;x,z\right)&:= \varrho^{\mathrm{i},3}_{\alpha}\left(0;x,z\right)+\sum_{p=1}^{N}\varrho^{ \mathrm{i},p,3}_{\alpha}\left(f;x,z\right)+\varrho^{\mathrm{i},>N,3}_{\alpha} \left(f;x,z\right)\,,\end{split} \tag{4.45}\]
and
\[\widetilde{\mathsf{K}}^{\mathrm{i}>N,l}_{\alpha}\left(f;x\right):=\sum_{p>N} \widetilde{\mathsf{K}}^{\mathrm{i},p,l}_{\alpha}\left(f;x\right),\qquad\varrho ^{\mathrm{i}>N,3}_{\alpha}\left(f;x,z\right):=\sum_{p>N}\varrho^{\mathrm{i},p,3}_{\alpha}\left(f;x,z\right)\,. \tag{4.46}\]
Let us prove (4.24). Each \(\widetilde{\mathsf{K}}^{\mathrm{j},p,l}_{\alpha}\left(f;x\right)=\frac{1}{l!}\partial^{l}_{z}\widetilde{\mathsf{K}}^{\mathrm{j},p}_{\alpha}\left(f;x,0\right)\), \(p\geq 1\), is a homogeneous function in \(\widetilde{\mathcal{F}}^{\mathrm{R}}_{p}\) by (4.34) and Remark 2.27. Analogously the non-homogeneous term \(\widetilde{\mathsf{K}}^{\mathrm{j}>N,l}_{\alpha}\left(f;x\right)\) is in \(\mathcal{F}^{\mathrm{R}}_{K,0,N+1}\left[\epsilon_{0}\right]\) by (4.35). Next, by (4.34) an integration in \(z\) gives that \(\varrho^{\mathrm{j},p,3}_{\alpha}\left(f;x,z\right)\), \(p\geq 1\), defined in (4.44) is a homogeneous Kernel-function in \(\overline{K}\widetilde{\mathcal{F}}^{3}_{p}\) and, by (4.35), the non-homogeneous term \(\varrho^{\mathrm{j}>N,3}_{\alpha}\left(f;x,z\right)\) in (4.46) is a Kernel function in \(K\mathcal{F}^{3}_{K,0,N+1}\left[\epsilon_{0}\right]\).
Finally the zero-homogeneous functions \(\mathsf{K}^{\mathrm{j},l}_{\alpha}\left(0;x\right)\) are the constants in (4.25) (cf. (4.40)) and the Kernel functions \(\varrho^{\mathrm{j},3}_{\alpha}\left(0;x,z\right)\) are given in (4.41).
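The constants in (4.25) can be cross-checked numerically: evaluating (4.14), (4.16) and (4.21) at \(\mathsf{X}=0\) and comparing with the combinations \(-1-\frac{1}{2}\left(1-\frac{\alpha}{2}\right)\left(2\sin\left(z/2\right)\right)^{2}\), \(1\) and \(\left(1+\frac{\alpha}{2}\right)\sin z\), the residuals (which would constitute the cubic remainders \(\varrho^{\mathrm{j},3}_{\alpha}\left(0;x,z\right)\)) turn out to be of the size of rounding errors. The Python sketch below performs this check for a few arbitrary values of \(\alpha\) and \(z\).

```python
# Hedged numerical check of the f = 0 column values in (4.25).
import math

def K1(alpha, z):  # (4.14) at X = 0
    c, w2 = math.cos(z), 2 * (1 - math.cos(z))        # w2 = (2 sin(z/2))^2
    return (-(2 - c) / w2**(alpha/2) + alpha * (1 - c)**2 / w2**(alpha/2 + 1)) * w2**(alpha/2)

def K2(alpha, z):  # (4.16) at X = 0
    w2 = 2 * (1 - math.cos(z))
    return (1 / w2**(alpha/2)) * w2**(alpha/2)

def K3(alpha, z):  # (4.21) at X = 0
    c, w2 = math.cos(z), 2 * (1 - math.cos(z))
    return (1 / w2**(alpha/2) + alpha * (1 - c) / w2**(alpha/2 + 1)) * w2**(alpha/2) * math.sin(z)

for alpha in (0.5, 1.5):
    for z in (0.6, 1.7, 2.9):
        w2 = 2 * (1 - math.cos(z))
        e1 = K1(alpha, z) - (-1 - 0.5 * (1 - alpha/2) * w2)
        e2 = K2(alpha, z) - 1.0
        e3 = K3(alpha, z) - (1 + alpha/2) * math.sin(z)
        print(f"alpha={alpha} z={z}: {e1:.1e} {e2:.1e} {e3:.1e}")   # all of rounding size
```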
### Paralinearization of the quasilinear integral term \(\mathcal{I}\left(f\right)\)
In this section we paralinearize \(\mathcal{I}\left(f\right)\).
**Lemma 4.4**.: _The term \(\mathcal{I}\left(f\right)\) defined in (4.13) can be written as_
\[\mathcal{I}\left(f\right)=\mathrm{Op}^{BW}\left[-\left(1+\nu_{\mathcal{I}} \left(f;x\right)\right)L_{\mathcal{I}}\left(\left|\xi\right|\right)+\mathrm{i}S_ {\mathcal{I},\alpha-2}\left(f;x,\xi\right)+V\left[\mathcal{I}\right]\left(f; x\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f \tag{4.47}\]
_where_
* \(\nu_{\mathcal{I}}\left(f;x\right)\) _is the real function_ \[\nu_{\mathcal{I}}\left(f;x\right):=-\left(r^{-2}\mathsf{K}^{1,0}_{\alpha}\left( f;x\right)+1\right)\in\Sigma\mathcal{F}^{\mathrm{R}}_{K,0,1}\left[\epsilon_{0},N \right];\] (4.48)
* \(L_{\mathcal{I}}\left(\left|\xi\right|\right):=\ \mathsf{T}^{1}_{\alpha}\left(\left|\xi\right|\right)+\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}+\left(1-\frac{\alpha}{2}\right)^{2}\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)\) _is a real Fourier multiplier in_ \(\widetilde{\Gamma}^{\max\left\{0,\alpha-1\right\}}_{0}\) _(the Fourier multipliers_ \(\mathsf{T}^{1}_{\alpha}\left(\left|\xi\right|\right)\) _and_ \(\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)\) _are defined in Lemma_ 3.1_);_
* \(V\left[\mathcal{I}\right]\left(f;x\right)\) _is a function in_ \(\Sigma\mathcal{F}^{\mathrm{R}}_{K,0,1}\left[\epsilon_{0},N\right]\)_;_
* \(P\left(f;x,\xi\right)\) _is a symbol in_ \(\Sigma\Gamma^{-1}_{K,0,1}\left[\epsilon_{0},N\right]\)_;_
* \(R\left(f\right)\) _is a real smoothing operator in_ \(\Sigma\mathcal{R}^{-\rho}_{K,0,1}\left[\epsilon_{0},N\right]\)_._
The rest of this section is devoted to prove Lemma 4.4.
By Lemma 2.22 we have
\[\frac{\delta_{z}f}{r^{2}}=\mathrm{Op}^{BW}\left[r^{-2}\right]\delta_{z}f+ \mathrm{Op}^{BW}\left[\delta_{z}f\right]\left(r^{-2}-1\right)+R_{1}\left(r^{-2} -1\right)\delta_{z}f+R_{2}\left(\delta_{z}f\right)\left(r^{-2}-1\right)\]
with smoothing operators \(R_{1},R_{2}\) in \(\widehat{\mathcal{R}}_{1}^{-\rho}\) for any \(\rho\geq 0\). Hence, recalling the definition of \(\mathcal{I}\big{(}f\big{)}\) in (4.13), we write
\[\begin{split}\mathcal{I}\big{(}f\big{)}&=\sum_{\mathrm{j}=1}^{4}\mathcal{I}_{\mathrm{j}}\big{(}f\big{)}\,,\\ \mathcal{I}_{1}\big{(}f\big{)}&:=\int\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]\operatorname{Op}^{BW}\big{[}r^{-2}\big{]}\frac{\delta_{z}f}{|2\sin\left(z/2\right)|^{\alpha}}\mathrm{d}z\,,\\ \mathcal{I}_{2}\big{(}f\big{)}&:=\int\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]\operatorname{Op}^{BW}\big{[}\delta_{z}f\big{]}\frac{1}{\left|2\sin\left(z/2\right)\right|^{\alpha}}\mathrm{d}z\,\left(r^{-2}-1\right)\,,\\ \mathcal{I}_{3}\big{(}f\big{)}&:=\int\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]R_{1}\left(r^{-2}-1\right)\frac{\delta_{z}f}{|2\sin\left(z/2\right)|^{\alpha}}\mathrm{d}z\,,\\ \mathcal{I}_{4}\big{(}f\big{)}&:=\int\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]R_{2}\left(\frac{\delta_{z}f}{|2\sin\left(z/2\right)|^{\alpha}}\right)\mathrm{d}z\,\left(r^{-2}-1\right)\,.\end{split} \tag{4.49}\]
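The splitting used above (Lemma 2.22) is a Bony-type decomposition of a product into two paraproducts plus a smoothing remainder. The sketch below illustrates the idea with crude sharp dyadic Fourier cutoffs on the torus; it is not the Weyl para-differential quantization \(\operatorname{Op}^{BW}\) of the paper, and the test functions are arbitrary trigonometric polynomials.

```python
# Hedged illustration of a Bony-type splitting  u*v = T_u v + T_v u + R(u,v)
# built from sharp dyadic Fourier blocks (NOT the paper's Weyl quantization Op^BW).
import numpy as np

N = 512
x = 2 * np.pi * np.arange(N) / N
u = np.cos(x) + 0.5 * np.sin(7 * x) + 0.2 * np.cos(33 * x)
v = np.sin(2 * x) + 0.3 * np.cos(21 * x) + 0.1 * np.sin(90 * x)

k = np.abs(np.fft.fftfreq(N, d=1.0 / N))        # integer frequencies |k|

def block(w, j):
    """Sharp dyadic block Delta_j w: j = -1 keeps |k| < 1, otherwise 2^j <= |k| < 2^(j+1)."""
    W = np.fft.fft(w)
    lo, hi = (0, 1) if j == -1 else (2 ** j, 2 ** (j + 1))
    W[~((k >= lo) & (k < hi))] = 0.0
    return np.real(np.fft.ifft(W))

blocks_u = [block(u, j) for j in range(-1, 9)]   # covers all discrete frequencies |k| <= 256
blocks_v = [block(v, j) for j in range(-1, 9)]

# paraproducts: low frequencies of one factor times a dyadic block of the other
Tu_v = sum(sum(blocks_u[:i - 1]) * blocks_v[i] for i in range(2, len(blocks_v)))
Tv_u = sum(sum(blocks_v[:i - 1]) * blocks_u[i] for i in range(2, len(blocks_u)))
# remainder: the high-high interactions (the smoothing part in Bony's decomposition)
R_hh = sum(blocks_u[i] * blocks_v[ip]
           for i in range(len(blocks_u)) for ip in range(len(blocks_v))
           if abs(i - ip) <= 1)

print("decomposition error:", np.max(np.abs(u * v - (Tu_v + Tv_u + R_hh))))   # of rounding size
```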
**Step 1** (Paralinearization of \(\mathcal{I}_{1}\) in (4.49)).: By (4.23) and isolating by (4.25) the zero-order components in \(f\), we write
\[\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{1}\left( \frac{\Delta_{z}f}{r^{2}}\right)\right]\operatorname{Op}^{BW}\big{[}r^{-2} \big{]}=\] \[=-1-\frac{1}{2}\left(1-\frac{\alpha}{2}\right)\left(2\sin\left( z/2\right)\right)^{2}\] \[+\,\operatorname{Op}^{BW}\left[\left(\mathsf{K}_{\alpha}^{1,0} \big{(}f;x\big{)}+1\right)+\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\sin z+ \left(\mathsf{K}_{\alpha}^{1,2}\big{(}f;x\big{)}+\frac{1}{2}\Big{(}1-\frac{ \alpha}{2}\Big{)}\right)\left(2\sin\left(z/2\right)\right)^{2}+\varrho_{\alpha }^{1,3}\left(f;x,z\right)\right]\] \[+\,\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha}^{1,0}\left(f; x\right)+\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\sin z+\mathsf{K}_{\alpha}^{1,2} \left(f;x\right)\left(2\sin\left(z/2\right)\right)^{2}+\varrho_{\alpha}^{1,3} \left(f;x,z\right)\right]\operatorname{Op}^{BW}\big{[}r^{-2}-1\big{]} \tag{4.50}\]
where \(\varrho_{\alpha}^{1,3}\big{(}f;x,z\big{)}\) is a kernel function in \(\Sigma K\mathcal{F}_{K,0,1}^{3}\left[\epsilon_{0},N\right]\) by (4.24). We now expand the last line of (4.50). By Proposition 2.21 there exists a smoothing operator \(R\big{(}f\big{)}\) in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\) such that
\[\operatorname{Op}^{BW}\big{[}\mathsf{K}_{\alpha}^{1,0}\left(f;x\right)+\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\sin z+\mathsf{K}_{\alpha}^{1,2}\left(f;x\right)\left(2\sin\left(z/2\right)\right)^{2}\big{]}\operatorname{Op}^{BW}\big{[}r^{-2}-1\big{]}\\ =\operatorname{Op}^{BW}\big{[}\big{(}r^{-2}-1\big{)}\left(\mathsf{K}_{\alpha}^{1,0}\left(f;x\right)+\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\sin z+\mathsf{K}_{\alpha}^{1,2}\left(f;x\right)\left(2\sin\left(z/2\right)\right)^{2}\right)\big{]}\\ +R\left(f\right)+\underbrace{\left(\sin\left(z\right)+\left(2\sin\left(z/2\right)\right)^{2}\right)R\left(f\right)}_{=R_{(1)}\left(f;z\right)\in\Sigma K\mathcal{R}_{K,0,1}^{-\rho,1}\left[\epsilon_{0},N\right]}\,. \tag{4.51}\]
Moreover due to Proposition 2.34, Item 1, there exists a Kernel-smoothing operator \(R_{1}\left(f;z\right)\) in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho,3}\left[\epsilon_{0},N\right]\) such that
\[\operatorname{Op}^{BW}\big{[}\varrho_{\alpha}^{1,3}\left(f;x,z\right)\big{]} \operatorname{Op}^{BW}\big{[}r^{-2}-1\big{]}=\operatorname{Op}^{BW}\big{[} \big{(}r^{-2}-1\big{)}\varrho_{\alpha}^{1,3}\left(f;x,z\right)\big{]}+R_{1} \left(f;z\right)\,. \tag{4.52}\]
Plugging (4.51) and (4.52) in (4.50) we get
\[\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{1}\left(\frac{ \Delta_{z}f}{r^{2}}\right)\right]\operatorname{Op}^{BW}\big{[}r^{-2}\big{]}=-1- \frac{1}{2}\left(1-\frac{\alpha}{2}\right)\left(2\sin\left(z/2\right)\right)^{2}\] \[+\,\operatorname{Op}^{BW}\left[\big{(}r^{-2}\mathsf{K}_{\alpha}^{1,0 }\left(f;x\right)+1\big{)}+r^{-2}\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\sin z +\left(r^{-2}\mathsf{K}_{\alpha}^{1,2}\left(f;x\right)+\frac{1}{2}\left(1-\frac{ \alpha}{2}\right)\right)\left(2\sin\left(z/2\right)\right)^{2}\right]\] \[+\,\operatorname{Op}^{BW}\big{[}r^{-2}\varrho_{\alpha}^{1,3}\left(f; x,z\right)\big{]}+R\left(f\right)+R_{(1)}\left(f;z\right) \tag{4.53}\]
where \(R_{(1)}\big{(}f;z\big{)}\) is a Kernel smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho,1}\left[\epsilon_{0},N\right]\). Inserting the decomposition (4.53) in the expression of \(\mathcal{I}_{1}\big{(}f\big{)}\) in (4.49) we obtain that
\[\mathcal{I}_{1}\big{(}f\big{)}=-\int\frac{\delta_{z}f}{|2\sin\left(z/2\right)|^{ \alpha}}\mathrm{d}z-\frac{1}{2}\Big{(}1-\frac{\alpha}{2}\Big{)}\!\!\!\int\frac{ \delta_{z}f}{|2\sin\left(z/2\right)|^{\alpha-2}}\,\mathrm{d}z+\sum_{j=1}^{5} \mathcal{I}_{1,j}\big{(}f\big{)} \tag{4.54}\]
where
\[\begin{split}\mathcal{I}_{1,1}\left(f\right)&:=\operatorname{ Op}^{BW}\left[r^{-2}\mathsf{K}_{\alpha}^{1,0}\left(f;x\right)+1\right]\int\frac{ \delta_{z}f}{\left[2\sin\left(z/2\right)\right]^{\alpha}}\mathrm{d}z,\\ \mathcal{I}_{1,2}\left(f\right)&:=\operatorname{ Op}^{BW}\left[r^{-2}\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\right]\int\frac{ \sin z}{\left[2\sin\left(z/2\right)\right]^{\alpha}}\,\delta_{z}f\,\mathrm{d} z,\\ \mathcal{I}_{1,3}\left(f\right)&:=\operatorname{ Op}^{BW}\left[\left(r^{-2}\mathsf{K}_{\alpha}^{1,2}\left(f;x\right)+\frac{1}{2} \left(1-\frac{\alpha}{2}\right)\right)\right]\int\frac{\delta_{z}f}{\left[2 \sin\left(z/2\right)\right]^{\alpha-2}}\,\mathrm{d}z,\\ \mathcal{I}_{1,4}\left(f\right)&:=\int\operatorname{ Op}^{BW}\left[r^{-2}\varrho_{\alpha}^{1,3}\left(f;x,z\right)\right]\frac{ \delta_{z}f}{\left[2\sin\left(z/2\right)\right]^{\alpha-2}}\,\mathrm{d}z,\\ \mathcal{I}_{1,5}\left(f\right)&:=\int\left(R\left( f\right)+R_{\left(1\right)}\left(f;z\right)\right)\frac{\delta_{z}f}{\left|2\sin \left(z/2\right)\right|^{\alpha}}\,\mathrm{d}z.\end{split} \tag{4.55}\]
By recalling (4.2) and (3.9) we have
\[\int\frac{\delta_{z}f}{\left|2\sin\left(z/2\right)\right|^{\alpha}}\mathrm{d} z=\operatorname{Op}^{BW}\left[\mathsf{T}_{\alpha}^{1}\left(\left|\xi\right| \right)\right]f. \tag{4.56}\]
Next, by Eqs. (3.21) and (3.22) we deduce that
\[\frac{1}{2}\left(1-\frac{\alpha}{2}\right)\int\frac{\delta_{z}f}{\left|2\sin \left(z/2\right)\right|^{\alpha-2}}\,\mathrm{d}z=\frac{\Gamma\left(2-\alpha \right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}f\left(x\right)+\left(1- \frac{\alpha}{2}\right)^{2}\mathsf{M}_{\alpha}\left(\left|D\right|\right)f. \tag{4.57}\]
By (4.56), using also Proposition 2.21 and (2.28), and that \(\mathsf{T}_{\alpha}^{1}\left(\left|\xi\right|\right)\) is a symbol of order \(\alpha-1\), we have
\[\mathcal{I}_{1,1}\left(f\right) =\operatorname{Op}^{BW}\left[r^{-2}\mathsf{K}_{\alpha}^{1,0} \left(f;x\right)+1\right]\operatorname{Op}^{BW}\left[\mathsf{T}_{\alpha}^{1} \left(\left|\xi\right|\right)\right]f \tag{4.58}\] \[=\operatorname{Op}^{BW}\left[\left(r^{-2}\mathsf{K}_{\alpha}^{1,0 }\left(f;x\right)+1\right)\mathsf{T}_{\alpha}^{1}\left(\left|\xi\right|\right) +\frac{\mathrm{i}}{2}\partial_{x}\left(r^{-2}\mathsf{K}_{\alpha}^{1,0}\left( f;x\right)+1\right)\,\partial_{\xi}\mathsf{T}_{\alpha}^{1}\left(\left|\xi \right|\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f\]
where \(P\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\).
In order to compute \(\mathcal{I}_{1,2}\left(f\right)\) in (4.55) we need the following lemma.
**Lemma 4.5**.: _We have_
\[\int\frac{\sin z}{\left|2\sin\left(z/2\right)\right|^{\alpha}}\delta_{z}\phi\, \mathrm{d}z=\mathrm{i}\,\mathsf{M}_{\alpha}\left(\left|D\right|\right)D\,\phi, \tag{4.59}\]
_where \(\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)\) is defined in (3.6)._
Proof.: By oddness, \(\int\frac{\sin z}{\left|2\sin\left(z/2\right)\right|^{\alpha}}\,\mathrm{d}z=0\) and thus, integrating by parts,
\[\begin{split}\int\frac{\sin z}{\left[2\sin\left(z/2\right) \right]^{\alpha}}\delta_{z}\phi\,\mathrm{d}z&=-\int\frac{\sin z}{ \left[2\left(1-\cos z\right)\right]^{\alpha/2}}\phi\left(x-z\right)\,\mathrm{d} z=-\int\partial_{z}\left(\frac{\left[2\left(1-\cos z\right)\right]^{1- \frac{\alpha}{2}}}{2\left(1-\frac{\alpha}{2}\right)}\right)\phi\left(x-z \right)\,\mathrm{d}z\\ &=-\frac{1}{2\left(1-\frac{\alpha}{2}\right)}\int\frac{\phi^{ \prime}\left(x-z\right)}{\left[2\left(1-\cos z\right)\right]^{\frac{\alpha}{2} -1}}\,\mathrm{d}z=\mathrm{i}\,\mathsf{M}_{\alpha}\left(\left|D\right|\right)D \,\phi\end{split}\]
using (3.20). This proves (4.59).
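The integration by parts above rests on the elementary identity \(\partial_{z}\big{(}\left[2\left(1-\cos z\right)\right]^{1-\frac{\alpha}{2}}\big{)}=2\left(1-\frac{\alpha}{2}\right)\sin z\left[2\left(1-\cos z\right)\right]^{-\frac{\alpha}{2}}\); a short symbolic check (with \(\alpha\) kept symbolic) is given below.

```python
# Hedged symbolic check of the antiderivative used in the proof of Lemma 4.5.
import sympy as sp

z, alpha = sp.symbols('z alpha', positive=True)
F = (2 * (1 - sp.cos(z)))**(1 - alpha / 2) / (2 * (1 - alpha / 2))
target = sp.sin(z) / (2 * (1 - sp.cos(z)))**(alpha / 2)
print(sp.simplify(sp.diff(F, z) - target))   # expected output: 0
```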
Lemma 4.5, Proposition 2.21 and the fact that \(\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)\) is a symbol of order \(\alpha-3\) give that
\[\mathcal{I}_{1,2}\left(f\right)=\operatorname{Op}^{BW}\left[\mathrm{i}\,r^{-2}\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)\xi+P\left(f;x,\xi\right)\right]f+R\left(f\right)f \tag{4.60}\]
where \(P\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\) satisfying (2.20).
Let us now compute \(\mathcal{I}_{1,3}\left(f\right)\) in (4.55). Applying Proposition 2.36 we deduce that
\[\mathcal{I}_{1,3}\left(f\right)=\operatorname{Op}^{BW}\left[V\left[\mathcal{I} _{1,3}\right]\left(f;x\right)+P\left(f;x,\xi\right)\right]f \tag{4.61}\]
where
\[V\left[\mathcal{I}_{1,3}\right]\left(f;x\right):=\left(r^{-2}\mathsf{K}_{ \alpha}^{1,2}\left(f;x\right)+\frac{1}{2}\left(1-\frac{\alpha}{2}\right) \right)\int\frac{1}{\left|2\sin\left(z/2\right)\right|^{\alpha-2}}\,\mathrm{d}z \tag{4.62}\]
is a function in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) and \(P\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\), being \(\alpha\in(0,2)\).
Similarly, applying Proposition 2.36,
\[\mathcal{I}_{1,4}\left(f\right)=\operatorname{Op}^{BW}\left[V\left[\mathcal{I}_{1,4}\right]\left(f;x\right)+P\left(f;x,\xi\right)\right]f \tag{4.63}\]
where
\[V\left[\mathcal{I}_{1,4}\right]\left(f;x\right):=\int\frac{r^{-2}\varrho_{ \alpha}^{1,3}\left(f;x,z\right)}{|2\sin\left(z/2\right)|^{\alpha-2}}\,\mathrm{ d}z \tag{4.64}\]
is a function in \(\Sigma\mathcal{F}_{K,0,1}^{R}\left[\epsilon_{0},N\right]\) by Remark 2.30, and \(P\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\).
Finally, the last term in (4.55) is, applying Lemma 2.35 since \(\frac{R_{(1)}\left(f;z\right)}{|2\sin\left(z/2\right)|^{\alpha}}\in\Sigma K\mathcal{R}_{K,0,1}^{-\rho,1-\alpha}\left[\epsilon_{0},N\right]\),
\[\mathcal{I}_{1,5}\left(f\right)=R\left(f\right)\operatorname{Op}^{BW}\left[\mathsf{T}_{\alpha}^{1}\left(\left|\xi\right|\right)\right]f+\tilde{R}\left(f\right)f\,,\qquad R\left(f\right),\tilde{R}\left(f\right)\in\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\,. \tag{4.65}\]
We thus plug (4.56), (4.57) (4.58), (4.60), (4.61), (4.63), (4.65), in Equation (4.54) and obtain
\[\mathcal{I}_{1}\left(f\right)=-\mathsf{T}_{\alpha}^{1}\left(|D|\right)f-\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)^{2}}f-\left(1-\frac{\alpha}{2}\right)^{2}\mathsf{M}_{\alpha}\left(|D|\right)f\\ +\operatorname{Op}^{BW}\left[\left(r^{-2}\mathsf{K}_{\alpha}^{1,0}\left(f;x\right)+1\right)\mathsf{T}_{\alpha}^{1}\left(\left|\xi\right|\right)+\frac{\mathrm{i}}{2}\partial_{x}\left(r^{-2}\mathsf{K}_{\alpha}^{1,0}\left(f;x\right)+1\right)\partial_{\xi}\mathsf{T}_{\alpha}^{1}\left(\left|\xi\right|\right)+\mathrm{i}\,r^{-2}\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)\xi\right]f\\ +\operatorname{Op}^{BW}\left[V\left[\mathcal{I}_{1}\right]\left(f;x\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f \tag{4.66}\]
where \(V\left[\mathcal{I}_{1}\right]\left(f;x\right)\) is the function (cf. Eqs. (4.62) and (4.64))
\[V\left[\mathcal{I}_{1}\right]\left(f;x\right):=V\left[\mathcal{I}_{1,3}\right] \left(f;x\right)+V\left[\mathcal{I}_{1,4}\right]\left(f;x\right)\in\Sigma \mathcal{F}_{K,0,1}^{R}\left[\epsilon_{0},N\right]\,. \tag{4.67}\]
**Step 2** (Paralinearization of \(\mathcal{I}_{2}\) in (4.49)).: Since \(\mathsf{K}_{\alpha,z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\in\Sigma K\mathcal{F}_{K,0,0}^{0}\left[\epsilon_{0},N\right]\) (cf. Lemma 4.3) and \(\delta_{z}f\in\overline{K}\widetilde{\mathcal{F}}_{1}^{1}\) we apply Proposition 2.34, Item 1 and obtain that, for some \(R_{2}\left(f;z\right)\in\Sigma K\mathcal{R}_{K,0,1}^{-\rho,1-\alpha}\left[\epsilon_{0},N\right]\),
\[\int\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]\operatorname{Op}^{BW}\left[\delta_{z}f\right]\frac{\mathrm{d}z}{|2\sin\left(z/2\right)|^{\alpha}}=\int\operatorname{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\delta_{z}f\right]\frac{\mathrm{d}z}{|2\sin\left(z/2\right)|^{\alpha}}+\int R_{2}\left(f;z\right)\mathrm{d}z=\operatorname{Op}^{BW}\left[\tilde{V}\left[\mathcal{I}_{2}\right]\right]+R\left(f\right)\]
where \(R\left(f\right)\) is a smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\) (by Lemma 2.35) and
\[\tilde{V}\left[\mathcal{I}_{2}\right]\left(f;x\right):=\int\mathrm{K}_{\alpha, z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\delta_{z}f\frac{1}{|2\sin\left(z/2 \right)|^{\alpha}}\mathrm{d}z\]
is a function in \(\Sigma\mathcal{F}_{K,0,1}^{R}\left[\epsilon_{0},N\right]\), by Remark 2.30 and since \(\mathsf{K}_{\alpha,z}^{1}\left(\frac{\Delta_{z}f}{r^{2}}\right)\delta_{z}f\frac{1}{|2\sin\left(z/2\right)|^{\alpha}}\) is in \(\Sigma K\mathcal{F}_{K,0,1}^{1-\alpha}\left[\epsilon_{0},N\right]\).
Finally, using the identity (cf. Lemma 2.25)
\[r^{\beta}-1=\operatorname{Op}^{BW}\left[\beta r^{\beta-2}\right]f+R\left(f\right)f\,,\qquad\forall\beta\in\mathbb{R}\,, \tag{4.68}\]
we write the term \(\mathcal{I}_{2}\left(f\right)\) in (4.49) as, using Propositions 2.21 and 2.23,
\[\mathcal{I}_{2}\left(f\right) =\left(\operatorname{Op}^{BW}\left[\tilde{V}\left[\mathcal{I}_{2} \right]\left(f;x\right)\right]+R\left(f\right)\right)\left(\operatorname{ Op}^{BW}\left[-2r^{-4}\right]f+R\left(f\right)f\right)\] \[=\operatorname{Op}^{BW}\left[V\left[\mathcal{I}_{2}\right] \left(f;x\right)\right]f+R\left(f\right)f \tag{4.69}\]
where
\[V\left[\mathcal{I}_{2}\right]\left(f;x\right):=-2r^{-4}\tilde{V}\left[\mathcal{I} _{2}\right]\left(f;x\right)\in\Sigma\mathcal{F}_{K,0,1}^{R}\left[\epsilon_{0},N \right]\,. \tag{4.70}\]
**Step 3** (Paralinearization of \(\mathcal{I}_{3}\) in (4.49)).: We first note that, in view of (4.68), the fact that \(\operatorname{Op}^{BW}\left[\beta r^{\beta-2}\right]\) and \(R\left(f\right)\) are operators of order \(0\), and Proposition 2.23-(ii),
\[R_{1}\left(r^{-2}-1\right)=\tilde{R}\left(f\right)\in\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\]
is a smoothing operator, that we may also regard as a Kernel-smoothing operator in \(\Sigma K\mathcal{R}_{K,0,1}^{-p,0}\left[\epsilon_{0},N\right]\). Furthermore by (4.3) \(\frac{\delta_{z}}{|2\sin\left(z/2\right)|^{\alpha}}\) is in \(\overline{\mathcal{K}\mathcal{M}_{0}^{1,1-\alpha}}\) and \(\mathrm{K}_{\alpha,z}^{i}\left(\frac{\Delta_{z}f}{r^{2}}\right)\) is a Kernel function in \(\Sigma K\mathcal{F}_{K,0,0}^{0}\left[\epsilon_{0},N\right]\) by (4.22). Therefore by Proposition 2.34 Items 2 and 3 and Lemma 2.35 we obtain that
\[\mathcal{I}_{3}\left(f\right)=\int R\left(f;z\right)f\mathrm{d}z=R\left(f \right)f \tag{4.71}\]
where \(R\left(f\right)\) is a smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\).
**Step 4** (Paralinearization of \(\mathcal{I}_{4}\) in (4.49)).: Reasoning as in the previous step, there is a smoothing operator \(\tilde{R}\big{(}f\big{)}\) in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\) such that
\[\mathcal{I}_{4}\big{(}f\big{)}=\tilde{R}\big{(}f\big{)}\big{(}r^{-2}-1\big{)}=R \big{(}f\big{)}f \tag{4.72}\]
(use (4.68)) where \(R\big{(}f\big{)}\) is a smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\).
**Step 5** (Conclusion).: Inserting Eqs. (4.66), (4.69), (4.71) and (4.72) in Eq. (4.49), recalling the definition of \(L_{\mathcal{I}}(|\xi|)\) in Lemma 4.4, and that \(\nu_{\mathcal{I}}\big{(}f;x\big{)}:=-\big{(}r^{-2}\mathsf{K}_{\alpha}^{1,0}\big{(}f;x\big{)}+1\big{)}\) (cf. Eq. (4.48)) we obtain
\[\mathcal{I}\big{(}f\big{)}=-L_{\mathcal{I}}\left(|D|\right)f+\mathrm{Op}^{BW}\left[-\nu_{\mathcal{I}}\big{(}f;x\big{)}\,\mathsf{T}_{\alpha}^{1}\left(|\xi|\right)-\frac{\mathrm{i}}{2}\left(\nu_{\mathcal{I}}\big{(}f;x\big{)}\right)_{x}\,\partial_{\xi}\mathsf{T}_{\alpha}^{1}\left(|\xi|\right)+\mathrm{i}\,r^{-2}\mathsf{K}_{\alpha}^{1,1}\left(f;x\right)\mathsf{M}_{\alpha}\left(|\xi|\right)\xi\right]f\\ +\mathrm{Op}^{BW}\left[V\left[\mathcal{I}_{1}\right]\big{(}f;x\big{)}+V\left[\mathcal{I}_{2}\right]\big{(}f;x\big{)}+P\left(f;x,\xi\right)\right]f+R\left(f\right)f\,. \tag{4.73}\]
Finally, substituting \(\mathsf{T}_{\alpha}^{1}\left(|\xi|\right)=L_{\mathcal{I}}(|\xi|)-\frac{\Gamma( 2-\alpha)}{\Gamma(1-\frac{\alpha}{2})^{2}}-(1-\frac{\alpha}{2})^{2}M_{\alpha} (|\xi|)\), we deduce that (4.73) is the paralinearization (4.47) with (cf. Eqs. (4.67) and (4.70))
\[V\left[\mathcal{I}\right]\big{(}f;x\big{)}:=V\left[\mathcal{I}_{1}\right] \big{(}f;x\big{)}+V\left[\mathcal{I}_{2}\right]\big{(}f;x\big{)}+\nu_{ \mathcal{I}}\left(f;x\right)\frac{\Gamma(2-\alpha)}{\Gamma(1-\frac{\alpha}{2 })^{2}}\in\Sigma\mathcal{F}_{K,0,1}^{\mathrm{R}}\left[\epsilon_{0},N\right] \tag{4.74}\]
and another symbol \(P\left(f;x,\xi\right)\) in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\) satisfying (2.20).
### Paralinearization of the quasilinear integral term \(\mathcal{J}\big{(}f\big{)}\)
In this section we paralinearize \(\mathcal{J}\big{(}f\big{)}\).
**Lemma 4.6**.: _The term \(\mathcal{J}\big{(}f\big{)}\) defined in (4.15) can be written as_
\[\mathcal{J}\big{(}f\big{)}=\mathrm{Op}^{BW}\left[-\big{(}1+\nu_{\mathcal{J}} \big{(}f;x\big{)}\big{)}\,L_{\mathcal{J}}\left(|\xi|\right)+\mathrm{i}\,S_{ \mathcal{J},\alpha-2}\big{(}f;x,\xi\big{)}+V\left[\mathcal{J}\right]\big{(}f; x\big{)}+P\left(f;x,\xi\right)\right]f+R\big{(}f\big{)}f \tag{4.75}\]
_where_
* \(\nu_{\mathcal{J}}\big{(}f;x\big{)}\) _is the real function_ \[\nu_{\mathcal{J}}\big{(}f;x\big{)}:=\big{(}\mathsf{K}_{\alpha}^{2,0}\big{(}f;x \big{)}-1\big{)}+\frac{1}{\alpha-1}\frac{f^{\prime}(x)}{r^{2}}\mathsf{K}_{ \alpha}^{3,0}\big{(}f;x\big{)}\,\in\Sigma\mathcal{F}_{K,0,1}^{\mathrm{R}}\left[ \epsilon_{0},N\right];\] (4.76)
* \(L_{\mathcal{J}}\left(|\xi|\right):=-|\xi|^{2}\,\mathsf{M}_{\alpha}\left(|\xi|\right)\) _is a real Fourier multiplier in_ \(\Gamma_{0}^{\alpha-1}\) _(the Fourier multiplier_ \(\mathsf{M}_{\alpha}\left(|\xi|\right)\) _is defined in Lemma_ 3.1_);_
* \(S_{\mathcal{J},\alpha-2}\big{(}f;x,\xi\big{)}:=-\frac{1}{2}\,\partial_{x}\big{(}\nu_{\mathcal{J}}\big{(}f;x\big{)}\big{)}\,\partial_{\xi}L_{\mathcal{J}}\left(|\xi|\right)+\Big{(}(\alpha-2)\,\mathsf{K}_{\alpha}^{2,1}\left(f;x\right)+\frac{1}{r^{2}}\left(f^{\prime}\mathsf{K}_{\alpha}^{3,1}\left(f;x\right)-f^{\prime\prime}\mathsf{K}_{\alpha}^{3,0}\big{(}f;x\big{)}\right)\Big{)}\,\mathsf{M}_{\alpha}\left(|\xi|\right)\xi\) _is a real symbol in_ \(\Sigma\Gamma_{K,0,1}^{\alpha-2}\left[\epsilon_{0},N\right]\)_;_
* \(V\left[\mathcal{J}\right]\big{(}f;x\big{)}\) _is a real function in_ \(\Sigma\mathcal{F}_{K,0,1}^{\mathrm{R}}\left[\epsilon_{0},N\right]\)_;_
* \(P\big{(}f;x,\xi\big{)}\) _is a symbol in_ \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\) _satisfying (_2.20_);_
* \(R\big{(}f\big{)}\) _is a real smoothing operator in_ \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\)_._
The rest of this section is devoted to the proof of Lemma 4.6.
By Lemma 2.22 we obtain that
\[\mathsf{K}_{\alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{2}}\right)f^{ \prime}\left(x-z\right)=\,\mathrm{Op}^{BW}\left[\mathsf{K}_{\alpha,z}^{2} \left(\frac{\Delta_{z}f}{r^{2}}\right)\right]f^{\prime}\left(x-z\right)+ \mathrm{Op}^{BW}\left[f^{\prime}\left(x-z\right)\right]\,\left(\mathsf{K}_{ \alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{2}}\right)-\mathsf{K}_{\alpha,z}^{2} \left(0\right)\right)\\ +R_{1}\left(\mathsf{K}_{\alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{ 2}}\right)-\mathsf{K}_{\alpha,z}^{2}\left(0\right)\right)f^{\prime}\left(x-z \right)+R_{2}\left(f^{\prime}\left(x-z\right)\right)\left(\mathsf{K}_{\alpha,z}^{2 }\left(\frac{\Delta_{z}f}{r^{2}}\right)-\mathsf{K}_{\alpha,z}^{2}\left(0 \right)\right) \tag{4.77}\]
where \(R_{1},R_{2}\) are smoothing operators in \(\widehat{\mathcal{R}}_{1}^{-\rho}\). Hence, recalling the definition of \(\mathcal{J}\left(f\right)\) in (4.15), we have
\[\mathcal{J}\left(f\right) :=\sum_{\mathrm{j}=1}^{4}\mathcal{J}_{\mathrm{j}}\left(f\right), \tag{4.78}\] \[\mathcal{J}_{1}\left(f\right) :=\int\mathrm{Op}^{BW}\left[\mathcal{K}_{\alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]f^{\prime}\left(x-z\right)\frac{\sin z}{\left|2\sin\left(z/2\right)\right|^{\alpha}}\,\mathrm{d}z,\] \[\mathcal{J}_{2}\left(f\right) :=\int\mathrm{Op}^{BW}\left[f^{\prime}\left(x-z\right)\right]\left(\mathcal{K}_{\alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{2}}\right)-\mathcal{K}_{\alpha,z}^{2}\left(0\right)\right)\frac{\sin z}{\left|2\sin\left(z/2\right)\right|^{\alpha}}\,\mathrm{d}z,\] \[\mathcal{J}_{3}\left(f\right) :=\int R_{1}\left(\mathcal{K}_{\alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{2}}\right)-\mathcal{K}_{\alpha,z}^{2}\left(0\right)\right)f^{\prime}\left(x-z\right)\frac{\sin z}{\left|2\sin\left(z/2\right)\right|^{\alpha}}\,\mathrm{d}z,\] \[\mathcal{J}_{4}\left(f\right) :=\int R_{2}\left(f^{\prime}\left(x-z\right)\right)\left(\mathcal{K}_{\alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{2}}\right)-\mathcal{K}_{\alpha,z}^{2}\left(0\right)\right)\frac{\sin z}{\left|2\sin\left(z/2\right)\right|^{\alpha}}\,\mathrm{d}z.\]
**Step 1** (Paralinearization of \(\mathcal{J}_{1}\) in (4.78)).: By (4.23) and (4.25) we obtain that
\[\mathcal{J}_{1}\left(f\right) :=\int f^{\prime}\left(x-z\right)\frac{\sin z}{\left|2\sin\left( z/2\right)\right|^{\alpha}}\,\mathrm{d}z+\sum_{j=1}^{3}\mathcal{J}_{1,j}\left(f \right), \tag{4.79}\] \[\mathcal{J}_{1,1}\left(f\right) :=\mathrm{Op}^{BW}\left[\mathcal{K}_{\alpha}^{2,0}\left(f;x \right)-1\right]\int f^{\prime}\left(x-z\right)\frac{\sin z}{\left|2\sin\left( z/2\right)\right|^{\alpha}}\,\mathrm{d}z,\] \[\mathcal{J}_{1,2}\left(f\right) :=\mathrm{Op}^{BW}\left[\mathcal{K}_{\alpha}^{2,1}\left(f;x \right)\right]\int f^{\prime}\left(x-z\right)\frac{\sin^{2}z}{\left|2\sin \left(z/2\right)\right|^{\alpha}}\,\mathrm{d}z,\] \[\mathcal{J}_{1,3}\left(f\right) :=\int\mathrm{Op}^{BW}\left[\rho_{\alpha}^{\left[3-\alpha\right]} \left(f;x,z\right)\right]f^{\prime}\left(x-z\right)\,\mathrm{d}z,\]
where, by (4.24), (4.25) and Remark 2.29,
\[\rho_{\alpha}^{\left[3-\alpha\right]}\left(f;x,z\right):=\left(\mathcal{K}_{ \alpha}^{2,2}\left(f;x\right)\left(2\sin\left(z/2\right)\right)^{2}+\rho_{ \alpha}^{2,3}\left(f;x,z\right)\right)\frac{\sin z}{\left|2\sin\left(z/2 \right)\right|^{\alpha}}\in\Sigma\,K\mathcal{F}_{K,0,1}^{3-\alpha}\left[e_{0}, N\right]. \tag{4.80}\]
Now, by (3.18), the first term in (4.79) is
\[\int f^{\prime}\left(x-z\right)\frac{\sin z}{\left|2\sin\left(z/2\right)\right| ^{\alpha}}\,\mathrm{d}z=\left|D\right|^{2}\,\mathrm{M}_{\alpha}\left(\left|D \right|\right)f \tag{4.81}\]
and, using Proposition 2.21,
\[\mathcal{J}_{1,1}\left(f\right) =\,\mathrm{Op}^{BW}\left[\left(\mathcal{K}_{\alpha}^{2,0}\left(f;x\right)-1\right)\right]\left|D\right|^{2}\,\mathrm{M}_{\alpha}\left(\left|D\right|\right)f \tag{4.82}\] \[=\,\mathrm{Op}^{BW}\left[\left(\mathcal{K}_{\alpha}^{2,0}\left(f;x\right)-1\right)\,\left|\xi\right|^{2}\,\mathrm{M}_{\alpha}\left(\left|\xi\right|\right)+\frac{\mathrm{i}}{2}\partial_{x}\left(\mathcal{K}_{\alpha}^{2,0}\left(f;x\right)\right)\partial_{\xi}\left(\left|\xi\right|^{2}\,\mathrm{M}_{\alpha}\left(\left|\xi\right|\right)\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f\]
where \(P\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{-1}\left[e_{0},N\right]\). In order to expand \(\mathcal{J}_{1,2}\left(f\right)\) in (4.79) we write
\[\frac{\sin^{2}z}{\left|2\sin\left(z/2\right)\right|^{\alpha}}=\frac{\cos^{2}\left(z/2\right)}{\left|2\sin\left(z/2\right)\right|^{\alpha-2}}=\frac{1}{\left|2\left(1-\cos z\right)\right|^{\frac{\alpha}{2}-1}}+\varrho_{1,2}\left(z\right),\qquad\varrho_{1,2}\left(z\right)\in\overline{\mathcal{K}}\mathcal{F}_{0}^{3-\alpha}\,. \tag{4.83}\]
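For the reader's convenience we spell out the elementary computation behind (4.83): since \(\sin z=2\sin\left(z/2\right)\cos\left(z/2\right)\) and \(\left(2\sin\left(z/2\right)\right)^{2}=2\left(1-\cos z\right)\), one has

\[\frac{\sin^{2}z}{\left|2\sin\left(z/2\right)\right|^{\alpha}}=\frac{\cos^{2}\left(z/2\right)}{\left|2\sin\left(z/2\right)\right|^{\alpha-2}}=\frac{1}{\left|2\left(1-\cos z\right)\right|^{\frac{\alpha}{2}-1}}-\sin^{2}\left(z/2\right)\left|2\sin\left(z/2\right)\right|^{2-\alpha},\]

so that \(\varrho_{1,2}\left(z\right)=-\sin^{2}\left(z/2\right)\left|2\sin\left(z/2\right)\right|^{2-\alpha}\) vanishes at \(z=0\) at the rate \(\left|z\right|^{4-\alpha}\), consistently with the class indicated in (4.83).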
As a consequence of (4.83), using also (3.21), Propositions 2.36 and 2.21, for any \(\alpha\in\left(0,2\right)\), we get
\[\mathcal{J}_{1,2}\left(f\right) =\,\mathrm{Op}^{BW}\left[\mathcal{K}_{\alpha}^{2,1}\left(f;x\right)\right]\int f^{\prime}\left(x-z\right)\frac{\mathrm{d}z}{\left|2\left(1-\cos z\right)\right|^{\frac{\alpha}{2}-1}}+\int\mathrm{Op}^{BW}\left[\mathcal{K}_{\alpha}^{2,1}\left(f;x\right)\varrho_{1,2}\left(z\right)\right]f^{\prime}\left(x-z\right)\,\mathrm{d}z\] \[=\,\mathrm{Op}^{BW}\left[\mathcal{K}_{\alpha}^{2,1}\left(f;x\right)\right]\mathrm{i}\left(\alpha-2\right)M_{\alpha}\left(\left|D\right|\right)Df+\mathrm{Op}^{BW}\left[a\left(f;x,\xi\right)\right]\partial_{x}f\] \[=\,\mathrm{Op}^{BW}\left[\mathrm{i}\left(\alpha-2\right)\mathcal{K}_{\alpha}^{2,1}\left(f;x\right)\,\mathrm{M}_{\alpha}\left(\left|\xi\right|\right)\xi+P\left(f;x,\xi\right)\right]f+R\left(f\right)f \tag{4.84}\]
where \(a\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{4-\alpha}\left[e_{0},N\right]\) and \(P\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{-1}\left[e_{0},N\right]\) satisfying (2.20). Furthermore, by (4.80), Propositions 2.36 and 2.21, for any \(\alpha\in\left(0,2\right)\), the last term in (4.79) is
\[\mathcal{J}_{1,3}\left(f\right)=\,\mathrm{Op}^{BW}\left[P\left(f;x,\xi\right) \right]f+R\left(f\right)f. \tag{4.85}\]
In conclusion, by Eqs. (4.25), (4.81), (4.82), (4.84) and (4.85) defining
\[\nu_{2}\left(f;x\right):=K_{\alpha}^{2,0}\left(f;x\right)-1\in\Sigma\mathcal{F}_ {K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\,, \tag{4.86}\]
the term \(\mathcal{J}_{1}\left(f\right)\) in (4.79) is
\[\mathcal{J}_{1}\left(f\right) =\operatorname{Op}^{BW}\left[\left(1+\nu_{2}\left(f;x\right)\right)|\xi|^{2}\operatorname{M}_{\alpha}\left(|\xi|\right)\right]f\] \[\quad+\operatorname{Op}^{BW}\left[\mathrm{i}\left(\frac{\partial_{x}}{2}\big{(}\nu_{2}\left(f;x\right)\big{)}\,\partial_{\xi}\left(|\xi|^{2}\operatorname{M}_{\alpha}\left(|\xi|\right)\right)+\left(\alpha-2\right)K_{\alpha}^{2,1}\left(f;x\right)\,\operatorname{M}_{\alpha}\left(|\xi|\right)\xi\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f\,. \tag{4.87}\]
**Step 2** (Paralinearization of \(\mathcal{J}_{2}\) in (4.78)).: Using (4.16), (4.2), the paralinearization formula (2.25) and (4.21), we write
\[K_{\alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{2}}\right)-K_{\alpha,z}^{2}\left(0\right) =\left(G_{\alpha,z}^{2}\left(\frac{\delta_{z}f}{r^{2}}\right)-G_{ \alpha,z}^{2}\left(0\right)\right)|2\sin\left(z/2\right)|^{\alpha}\] \[=\operatorname{Op}^{BW}\left[\left(G_{\alpha,z}^{2}\right)^{ \prime}\left(\frac{\delta_{z}f}{r^{2}}\right)|2\sin\left(z/2\right)|^{\alpha} \right]\frac{\delta_{z}f}{r^{2}}+R\left(\frac{\delta_{z}f}{r^{2}}\right)\frac{ \delta_{z}f}{r^{2}}\ |2\sin\left(z/2\right)|^{\alpha} \tag{4.88}\] \[=\operatorname{Op}^{BW}\left[K_{\alpha,z}^{3}\left(\frac{\Delta_ {z}f}{r^{2}}\right)\right]\frac{\delta_{z}f}{r^{2}\sin\left(z\right)}+R\left( \frac{\delta_{z}f}{r^{2}}\right)\frac{\delta_{z}f}{r^{2}}\ |2\sin\left(z/2\right)|^{\alpha}\]
where \(R\) is a smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\) for any \(\rho\). By Eq. (4.21) it results that \(K_{\alpha,z}^{3}\left(\mathsf{X}\right)=K_{\alpha,z+2\pi}^{3}\left(-\mathsf{X}\right)\) and the map \(z\mapsto K_{\alpha,z}^{3}\left(\frac{\Delta_{z}f}{r^{2}}\right)\) is \(2\pi\)-periodic. Therefore, by (4.78) and (4.88) we obtain that
\[\mathcal{J}_{2}\left(f\right) =\int\operatorname{Op}^{BW}\left[f^{\prime}\left(x-z\right) \right]\operatorname{Op}^{BW}\left[K_{\alpha,z}^{3}\left(\frac{\Delta_{z}f}{r ^{2}}\right)\right]\ \frac{\delta_{z}f}{r^{2}|2\sin\left(z/2\right)|^{\alpha}}\,\mathrm{d}z \tag{4.89a}\] \[\quad+\int\operatorname{Op}^{BW}\left[f^{\prime}\left(x-z\right) \right]R\left[\frac{\delta_{z}f}{r^{2}}\right]\frac{\delta_{z}f}{r^{2}}\,\sin z \,\mathrm{d}z\,. \tag{4.89b}\]
By (4.3) and Remark 2.16 we deduce that \(M_{2}\left(f;z\right):=r^{-2}\delta_{z}\) is an operator in \(\Sigma K\mathcal{M}_{K,0,0}^{1,1}\left[\epsilon_{0},N\right]\). As a consequence by Proposition 2.34 we obtain that
\[R\left(\frac{\delta_{z}f}{r^{2}}\right)\frac{\delta_{z}f}{r^{2}}=R\left(f;z \right)f\qquad\text{where}\qquad R\left(f;z\right)\in\Sigma K\mathcal{R}_{K,0,1} ^{-\rho,1}\left[\epsilon_{0},N\right]\,, \tag{4.90}\]
and, by Proposition 2.34, Item 3, and Lemma 2.35, since \(\alpha\in(0,2)\), we deduce that the integral (4.89b) is
\[\int\operatorname{Op}^{BW}\left[f^{\prime}\left(x-z\right)\sin z\right]R\left( \frac{\delta_{z}f}{r^{2}}\right)\frac{\delta_{z}f}{r^{2}}\mathrm{d}z=R\left(f \right)f \tag{4.91}\]
where \(R\left(f\right)\) is a smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\).
We now consider the term (4.89a). By Lemma 2.22 we write
\[\frac{\delta_{z}f}{r^{2}}=\operatorname{Op}^{BW}\left[r^{-2}\right]\delta_{z }f+\operatorname{Op}^{BW}\left[\delta_{z}f\right]\left(r^{-2}-1\right)+R_{1} \left(r^{-2}-1\right)\delta_{z}f+R_{2}\left(\delta_{z}f\right)\left[r^{-2}-1\right]\]
where \(R_{1},R_{2}\) are smoothing operators in \(\widehat{\mathcal{R}}_{1}^{-\rho}\) for any \(\rho\geq 0\), and thus
\[(4.89\mathrm{a})=\int\operatorname{Op}^{BW}\left[f^{\prime}\left(x-z\right)\right]\operatorname{Op}^{BW}\left[K_{\alpha,z}^{3}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]\operatorname{Op}^{BW}\left[r^{-2}\right]\delta_{z}f\,\frac{\mathrm{d}z}{|2\sin\left(z/2\right)|^{\alpha}} \tag{4.92a}\] \[\quad+\int\operatorname{Op}^{BW}\left[f^{\prime}\left(x-z\right)\right]\operatorname{Op}^{BW}\left[K_{\alpha,z}^{3}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]\operatorname{Op}^{BW}\left[\delta_{z}f\right]\left(r^{-2}-1\right)\frac{\mathrm{d}z}{|2\sin\left(z/2\right)|^{\alpha}} \tag{4.92b}\] \[\quad+\int\operatorname{Op}^{BW}\left[f^{\prime}\left(x-z\right)\right]\operatorname{Op}^{BW}\left[K_{\alpha,z}^{3}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]\left(R_{1}\left(r^{-2}-1\right)\delta_{z}f+R_{2}\left(\delta_{z}f\right)\left(r^{-2}-1\right)\right)\frac{\mathrm{d}z}{|2\sin\left(z/2\right)|^{\alpha}}. \tag{4.92c}\]
Proposition 2.34 gives that
\[(4.92\mathrm{a})=\int\operatorname{Op}^{BW}\left[r^{-2}f^{\prime}\left(x-z\right)K_{\alpha,z}^{3}\left(\frac{\Delta_{z}f}{r^{2}}\right)\right]\frac{\delta_{z}f}{|2\sin\left(z/2\right)|^{\alpha}}\,\mathrm{d}z+\underbrace{\int R\left(f;z\right)f\,\mathrm{d}z}_{=R\left(f\right)f}\]
where \(R\left(f;z\right)\) is a kernel-smoothing operator in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho,1-\alpha}\left[\epsilon_{0},N\right]\) and, since \(\alpha\in\left(0,2\right)\), the operator \(R\left(f\right)\) is in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\) by Lemma 2.35. Then by Proposition 2.34, Eq. (4.68), and Remark 2.30, we get
\[\eqref{eq:
**Step 3** (Paralinearization of \(\mathcal{J}_{3}\) in (4.78)).: Furthermore \(f^{\prime}(x-z)=\partial_{x}\circ\mathfrak{t}_{-z}f\) and \(\partial_{x}\circ\mathfrak{t}_{-z}\) is in \(\overline{\mathcal{KM}}_{0}^{1,0}\). By Remark 2.29 and Proposition 2.34 we obtain (after relabeling \(\rho\)) that
\[R_{1}\left[\mathsf{K}_{\alpha,z}^{2}\left(\frac{\Delta_{z}f}{r^{2}}\right)- \mathsf{K}_{\alpha,z}^{2}(0)\right]\frac{\sin z}{|2\sin\left(z/2\right)|^{ \alpha}}\partial_{x}\circ\mathfrak{t}_{-z}:=R^{\star}\left(f;z\right)\in \Sigma K\mathcal{R}_{K,0,1}^{-\rho,1-\alpha}\left[\epsilon_{0},N\right]\;.\]
Finally Lemma 2.35 implies that
\[\mathcal{J}_{3}\left(f\right)=\int R^{\star}\left(f;z\right)f\;\mathrm{d}z=R\left(f\right)f\qquad\text{where}\qquad R\left(f\right)\in\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\;. \tag{4.99}\]
**Step 4** (Paralinearization of \(\mathcal{J}_{4}\) in (4.78)).: By similar arguments one obtains
\[\mathcal{J}_{4}\left(f\right)=R\left(f\right)f\qquad\text{where}\qquad R\left(f\right)\in\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\;. \tag{4.100}\]
**Step 5** (Conclusion).: We plug Equations (4.87), (4.97), (4.99) and (4.100) in Eq. (4.78) and, recalling that \(L_{\mathcal{J}}\left(\left|\xi\right|\right)=-\left|\xi\right|^{2}\mathsf{M}_{ \alpha}\left(\left|\xi\right|\right)\), defining the real functions \(V\left[\mathcal{J}\right]:=V\left[\mathcal{J}_{2}\right]\) in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) and \(\nu_{\mathcal{J}}:=\nu_{2}+\nu_{3}\) in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) (cf. Eqs. (4.86) and (4.95)) we obtain the paralinearization formula (4.75) stated in Lemma 4.6.
### Proof of Theorem 4.1
We now paralinearize the scalar field in Equation (4.20). We apply Lemma 2.22
\[r^{2-\alpha}\mathcal{I}\left(f\right)= \mathrm{Op}^{BW}\left[r^{2-\alpha}\right]\mathcal{I}\left(f\right) +\mathrm{Op}^{BW}\left[\mathcal{I}\left(f\right)\right]\left(r^{2-\alpha}-1 \right)+R_{1}\left(r^{2-\alpha}-1\right)\mathcal{I}\left(f\right)+R_{2}\left( \mathcal{I}\left(f\right)\right)\left(r^{2-\alpha}-1\right),\] \[r^{-\alpha}\mathcal{J}\left(f\right)= \mathrm{Op}^{BW}\left[r^{-\alpha}\right]\mathcal{J}\left(f\right) +\mathrm{Op}^{BW}\left[\mathcal{J}\left(f\right)\right]\left(r^{-\alpha}-1 \right)+R_{1}\left(r^{-\alpha}-1\right)\mathcal{J}\left(f\right)+R_{2}\left( \mathcal{J}\left(f\right)\right)\left(r^{-\alpha}-1\right).\]
We thus apply (4.68), Lemmas 2.19, 4.4 and 4.6 and Propositions 2.21 and 2.23 and obtain that there exist real functions \(\tilde{V}_{\mathcal{I}},\tilde{V}_{\mathcal{J}}\) in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) such that
\[r^{2-\alpha}\mathcal{I}\left(f\right) =\mathrm{Op}^{BW}\left[r^{2-\alpha}\right]\mathcal{I}\left(f \right)+\mathrm{Op}^{BW}\left[\tilde{V}_{\mathcal{I}}\left(f;x\right)\right] f+R\left(f\right)f, \tag{4.101}\] \[r^{-\alpha}\mathcal{J}\left(f\right) =\mathrm{Op}^{BW}\left[r^{-\alpha}\right]\mathcal{J}\left(f \right)+\mathrm{Op}^{BW}\left[\tilde{V}_{\mathcal{J}}\left(f;x\right)\right] f+R\left(f\right)f,\]
for some smoothing operator \(R\left(f\right)\) in \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\).
A key fact proved in the next lemma is that the imaginary part of the symbol in (4.102) has order at most \(-1\). This is actually an effect of the linear Hamiltonian structure, see Remark 4.8.
**Lemma 4.7**.: _It results_
\[\mathrm{Op}^{BW}\left[r^{2-\alpha}\right]\mathcal{I}\left(f\right) +\mathrm{Op}^{BW}\left[r^{-\alpha}\right]\mathcal{J}\left(f\right)=- \mathrm{Op}^{BW}\left[\left(1+\tilde{\nu}_{\mathcal{I}}\left(f;x\right) \right)L_{\mathcal{I}}\left(\left|\xi\right|\right)+\left(1+\tilde{\nu}_{ \mathcal{J}}\left(f;x\right)\right)L_{\mathcal{J}}\left(\left|\xi\right| \right)\right]f\\ +\mathrm{Op}^{BW}\left[V_{\mathcal{I}}\left(f;x\right)+V_{\mathcal{J }}\left(f;x\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f \tag{4.102}\]
_where_
* \(L_{\mathcal{I}}\left(\left|\xi\right|\right)\) _and_ \(L_{\mathcal{J}}\left(\left|\xi\right|\right)\) _are the real Fourier multipliers defined in Lemmas_ 4.4 _and_ 4.6_;_
* \(\tilde{\nu}_{\mathcal{I}}\left(f;x\right)\)_,_ \(\tilde{\nu}_{\mathcal{J}}\left(f;x\right)\)_,_ \(V_{\mathcal{I}}\left(f;x\right)\)_,_ \(V_{\mathcal{J}}\left(f;x\right)\) _are real functions in_ \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\)_;_
* \(P\left(f;x,\xi\right)\) _is a symbol in_ \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\)_;_
* \(R\left(f\right)\) _is a smoothing operator in_ \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\)_._
Proof.: Proposition 2.21 and Lemmas 4.4 and 4.6 give that
\[\mathrm{Op}^{BW}\left[r^{2-\alpha}\right]\mathcal{I}\left(f\right) =\mathrm{Op}^{BW}\left[r^{2-\alpha}\left(-\left(1+\nu_{\mathcal{I}}\left(f;x\right)\right)L_{\mathcal{I}}\left(\left|\xi\right|\right)+\mathrm{i}\,S_{\mathcal{I},\alpha-2}\left(f;x,\xi\right)\right)\right]f\] \[\quad+\mathrm{Op}^{BW}\left[\frac{1}{2\mathrm{i}}\left(r^{2-\alpha}\right)_{x}\left(1+\nu_{\mathcal{I}}\left(f;x\right)\right)\partial_{\xi}L_{\mathcal{I}}\left(\left|\xi\right|\right)\right]f\] \[\quad+\mathrm{Op}^{BW}\left[V_{\mathcal{I}}\left(f;x\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f,\] \[\mathrm{Op}^{BW}\left[r^{-\alpha}\right]\mathcal{J}\left(f\right) =\mathrm{Op}^{BW}\left[r^{-\alpha}\left(-\left(1+\nu_{\mathcal{J}}\left(f;x\right)\right)L_{\mathcal{J}}\left(\left|\xi\right|\right)+\mathrm{i}\,S_{\mathcal{J},\alpha-2}\left(f;x,\xi\right)\right)\right]f\] \[\quad+\mathrm{Op}^{BW}\left[\frac{1}{2\mathrm{i}}\left(r^{-\alpha}\right)_{x}\left(1+\nu_{\mathcal{J}}\left(f;x\right)\right)\partial_{\xi}L_{\mathcal{J}}\left(\left|\xi\right|\right)\right]f\] \[\quad+\mathrm{Op}^{BW}\left[V_{\mathcal{J}}\left(f;x\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f\,.\]
We now prove that the sum of (4.103a) and (4.103b) gives a paradifferential term of order \(-1\). We first note that, by Lemma 3.7, we have the asymptotic expansions
\[\left|\xi\right|^{2}\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)=\,\check{\xi}_{\alpha}\left|\xi\right|^{\alpha-1}+m_{\alpha-3}\left(\left|\xi\right|\right),\qquad\xi\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)=\,\check{\xi}_{\alpha}\left|\xi\right|^{\alpha-3}\xi+m_{\alpha-4}\left(\left|\xi\right|\right), \tag{4.104}\] \[\mathsf{T}_{\alpha}^{1}\left(\left|\xi\right|\right)=\frac{1}{\alpha-1}\check{\xi}_{\alpha}\left|\xi\right|^{\alpha-1}+\check{\mathbb{V}}_{\alpha}+m_{\alpha-3}\left(\left|\xi\right|\right),\;\;\;\;\text{where}\;\;\;\check{\xi}_{\alpha}:=\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2}\right)}\,,\]
so that
\[\partial_{\xi}L_{\mathcal{I}}\left(\left|\xi\right|\right)=\check{\xi}_{ \alpha}\left|\xi\right|^{\alpha-3}\xi+m_{\alpha-4}\left(\left|\xi\right| \right),\qquad\partial_{\xi}L_{\mathcal{J}}\left(\left|\xi\right|\right)=- \left(\alpha-1\right)\check{\xi}_{\alpha}\left|\xi\right|^{\alpha-3}\xi+m_{ \alpha-4}\left(\left|\xi\right|\right)\,.\]
By the explicit definition of the symbols \(S_{\mathcal{I},\alpha-2}\) and \(S_{\mathcal{J},\alpha-2}\) in Lemmas 4.4 and 4.6 and (4.104) we have the expansion of the symbol in (4.103a)
\[\mathrm{i}\left(r^{2-\alpha}\;S_{\mathcal{I},\alpha-2}\left(f;x, \xi\right)+r^{-\alpha}\;S_{\mathcal{J},\alpha-2}\left(f;x,\xi\right)\right) \tag{4.105}\] \[=\mathrm{i}\left[-r^{2-\alpha}\;\frac{1}{2}\left(\nu_{\mathcal{ I}}\right)_{x}\left(f;x\right)+\left(\alpha-1\right)r^{-\alpha}\;\frac{1}{2} \left(\nu_{\mathcal{J}}\right)_{x}\left(f;x\right)+A_{\alpha,1}\left(f;x\right) \right]\check{\xi}_{\alpha}\left|\xi\right|^{\alpha-3}\xi+\mathrm{i}P\left(f; x,\xi\right),\]
where
\[A_{\alpha,1}\left(f;x\right):=\frac{1}{r^{\alpha}}\left[\mathsf{K}_{\alpha}^{ 1,1}\left(f;x\right)+\left(\alpha-2\right)\mathsf{K}_{\alpha}^{2,1}\left(f;x \right)+\frac{1}{r^{2}}\left(f^{\prime}\;\mathsf{K}_{\alpha}^{3,1}\left(f;x \right)-f^{\prime\prime}\;\mathsf{K}_{\alpha}^{3,0}\left(f;x\right)\right)\right] \tag{4.106}\]
is a function in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\), recalling (4.25). Then the sum of (4.103a) and (4.103b) gives
\[r^{2-\alpha}\;S_{\mathcal{I},\alpha-2}\left(f;x,\xi\right)+r^{- \alpha}\;S_{\mathcal{J},\alpha-2}\left(f;x,\xi\right)\\ -\frac{1}{2}\left[\left(r^{2-\alpha}\right)_{x}\left(1+\nu_{ \mathcal{I}}\left(f;x\right)\right)\partial_{\xi}L_{\mathcal{I}}\left(\left| \xi\right|\right)+\left(r^{-\alpha}\right)_{x}\left(1+\nu_{\mathcal{J}}\left( f;x\right)\right)\partial_{\xi}L_{\mathcal{J}}\left(\left|\xi\right|\right)\right]\\ =\left\{\frac{1}{2}\underbrace{\left[-r^{2-\alpha}\left(1+\nu_{ \mathcal{I}}\left(f;x\right)\right)+\left(\alpha-1\right)r^{-\alpha}\left(1+\nu _{\mathcal{J}}\left(f;x\right)\right)\right]_{x}}_{=\left(A_{\alpha,0}\left(f ;x\right)\right)_{x}}+A_{\alpha,1}\left(f;x\right)\right\}\check{\xi}_{\alpha} \left|\xi\right|^{\alpha-3}\xi+P\left(f;x,\xi\right), \tag{4.107}\]
where, after substituting the explicit values of \(\nu_{\mathcal{I}}\), \(\nu_{\mathcal{J}}\) given in (4.48), (4.76), we define
\[A_{\alpha,0}\left(f;x\right):=\frac{1}{r^{\alpha}}\left[\mathsf{K}_{\alpha}^{ 1,0}\left(f;x\right)+\left(\alpha-1\right)\mathsf{K}_{\alpha}^{2,0}\left(f;x \right)+\frac{f^{\prime}}{r^{2}}\;\mathsf{K}_{\alpha}^{3,0}\left(f;x\right) \right]+\left(2-\alpha\right) \tag{4.108}\]
which is a function in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\). We finally write
\[\frac{1}{2}\big{(}A_{\alpha,0}\big{)}_{x}\left(f;x\right)+A_{\alpha,1}\left(f;x\right)=0\,, \tag{4.110}\] so that the symbol in (4.107) is in fact of order \(-1\); this proves (4.102).
**Remark 4.8**.: The algebraic reason for the cancellation (4.110) is that a symbol of the form \(\mathrm{i}g\left(f;x\right)|\xi|^{\alpha-3}\xi\), as in (4.105), with a real function \(g\left(f;x\right)\), does not respect the Hamiltonicity condition (2.21).
The next lemma highlights the quasilinear structure of the vector field in (4.20).
**Lemma 4.9**.: _It results_
\[\mathrm{Op}^{BW}\left[r^{2-\alpha}\right]\mathcal{I}\left(f \right)+\mathrm{Op}^{BW}\left[r^{-\alpha}\right]\mathcal{J}\left(f\right)+ \int\mathsf{G}_{\alpha,z}^{1}\left(0\right)\mathrm{d}z\,\left(r^{2-\alpha}-1\right)\] \[\qquad\qquad=-\left(\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2} \right)}\right)^{-1}\mathrm{Op}^{BW}\left[\left(1+\nu\left(f;x\right)\right)L _{\alpha}\left(\left|\xi\right|\right)\right]f+\mathrm{Op}^{BW}\left[\tilde{V }\left(f;x\right)+P\left(f;x,\xi\right)\right]f+R\left(f\right)f \tag{4.111}\]
_where \(L_{\alpha}\left(\left|\xi\right|\right)\) is the Fourier multiplier defined in Lemma 3.1 and_
* \(\nu\left(f;x\right),\tilde{V}\left(f;x\right)\) _are real functions in_ \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\)_;_
* \(P\left(f;x,\xi\right)\) _is a symbol in_ \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\)_;_
* \(R\left(f\right)\) _is a smoothing operator in_ \(\Sigma\mathcal{R}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\)_._
Proof.: By (4.11) and (4.68) we have
\[\int\mathsf{G}_{\alpha,z}^{1}\left(0\right)\mathrm{d}z\,\left(r^{2-\alpha}-1 \right)=2\,\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2} \right)^{2}}\left(f+\mathrm{Op}^{BW}\left[r^{-\alpha}-1\right]f\right)+R\left( f\right)f. \tag{4.112}\]
Notice now, from Lemmas 4.4 and 4.6 and Lemma 3.1, that
\[L_{\mathcal{I}}\left(\left|\xi\right|\right)+L_{\mathcal{J}}\left(\left|\xi \right|\right)-2\,\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{ \alpha}{2}\right)^{2}}=\left(\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2} \right)}\right)^{-1}L_{\alpha}\left(\left|\xi\right|\right)\,. \tag{4.113}\]
Now we claim that
\[\tilde{\nu}_{\mathcal{I}}\left(f;x\right)L_{\mathcal{I}}\left(\left|\xi \right|\right)+\tilde{\nu}_{\mathcal{J}}\left(f;x\right)L_{\mathcal{J}}\left( \left|\xi\right|\right)=\left(\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2} \right)}\right)^{-1}\nu\left(f;x\right)L_{\alpha}\left(\left|\xi\right| \right)+\tilde{V}\left(f;x\right)+P\left(f;x,\xi\right), \tag{4.114}\]
for suitable real functions \(\nu,\tilde{V}\) in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) and a symbol \(P\) in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\). From Lemmas 4.4 and 4.6 and the asymptotic decomposition of \(\Gamma_{\alpha}^{1}\) and \(\mathsf{M}_{\alpha}\) in Lemma 3.7, we have that
\[\mathrm{l.h.s.\ of\ (4.114)} =\tilde{\nu}_{\mathcal{I}}\left(f;x\right)\Gamma_{\alpha}^{1}\left(\left|\xi\right|\right)-\tilde{\nu}_{\mathcal{J}}\left(f;x\right)\left|\xi\right|^{2}\mathsf{M}_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)+P\left(f;x,\xi\right)\] \[=\frac{\Gamma\left(2-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2}\right)}\frac{\left|\xi\right|^{\alpha-1}}{\alpha-1}\,\left(\tilde{\nu}_{\mathcal{I}}\left(f;x\right)-\left(\alpha-1\right)\tilde{\nu}_{\mathcal{J}}\left(f;x\right)\right)+V\left(f;x\right)+P\left(f;x,\xi\right).\]
Defining \(\nu\left(f;x\right):=\frac{\tilde{\nu}_{\mathcal{I}}\left(f;x\right)-\left( \alpha-1\right)\tilde{\nu}_{\mathcal{J}}\left(f;x\right)}{2-\alpha}\) and using the identity \(\Gamma\left(3-\alpha\right)=\left(2-\alpha\right)\Gamma\left(2-\alpha\right)\), we get
\[\mathrm{l.h.s.\ of\ (4.114)}=\frac{\Gamma\left(3-\alpha\right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2}\right)}\frac{\left|\xi\right|^{\alpha-1}}{\alpha-1}\,\nu\left(f;x\right)+V\left(f;x\right)+P\left(f;x,\xi\right)\,. \tag{4.115}\]
By Lemma 3.6 we have
\[\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)}\frac{\Gamma\left(3-\alpha \right)}{\Gamma\left(1-\frac{\alpha}{2}\right)\Gamma\left(\frac{\alpha}{2} \right)}\frac{\left|\xi\right|^{\alpha-1}}{\alpha-1}\,\nu\left(f;x\right)=\nu \left(f;x\right)L_{\alpha}\left(\left|\xi\right|\right)+V\left(f;x\right)+P \left(f;x,\xi\right). \tag{4.116}\]
Finally plugging (4.116) in (4.115) we deduce (4.114).
Equations (4.102) and (4.112) to (4.114) give, for suitable \(\nu,\tilde{V}\in\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\), the desired decomposition (4.111).
We can finally paralinearize Equation (4.20). Using (4.101) and (4.111) we have
\[r^{2-\alpha}\left(\mathcal{I}\left(f\right)+R\left(f\right)f \right)+\int\mathsf{G}_{\alpha,z}^{1}\left(0\right)\mathrm{d}z\,\left(r^{2- \alpha}-1\right)+r^{-\alpha}\mathcal{J}\left(f\right)\\ =-\left(\frac{c_{\alpha}}{2\left(1-\frac{\alpha}{2}\right)} \right)^{-1}\mathrm{Op}^{BW}\left[\left(1+\nu\left(f;x\right)\right)L_{\alpha} \left(\left|\xi\right|\right)+V\left(f;x\right)+P\left(f;x,\xi\right)\right]f+R \left(f\right)f\]
where \(V\left(f;x\right)\) is a real function in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\). This, combined with the observation that \(\partial_{x}\circ R\left(f\right)\in\Sigma\mathcal{R}_{K,0,1}^{-p+1}\left[ \epsilon_{0},N\right]\), proves that Equation (4.20) has the form (4.1).
## Birkhoff normal form reduction up to cubic terms
In this section we reduce the equation (4.1) to its Birkhoff normal form up to a cubic smoothing vector field, from which Theorem 1.1 easily follows. From now on we consider \(\alpha\in(1,2)\).
**Proposition 5.1** (Cubic Birkhoff normal form).: _Let \(\alpha\in(1,2)\) and \(N\in\mathbb{N}\). There exists \(\underline{\rho}:=\underline{\rho}\left(N,\alpha\right)\), such that for any \(\rho\geq\underline{\rho}\) there exists \(\underline{K}^{\prime}:=\underline{K}^{\prime}\left(\rho,\alpha\right)>0\) such that for any \(K\geq\underline{K}^{\prime}\) there is \(s_{0}>0\) such that for any \(s\geq s_{0}\), there is \(\underline{\epsilon_{0}}(s)>0\) such that for any \(0<\epsilon_{0}\leq\underline{\epsilon_{0}}(s)\) and any solution \(f\in B_{s_{0},\mathbb{R}}^{K}\left(I;\epsilon_{0}\right)\cap C_{*}^{K}\left(I;H_{0}^{s}\left(\mathbb{T};\mathbb{R}\right)\right)\) of the equation (4.1) the following holds:_
* _there exists a real invertible operator_ \(\underline{\Psi}\left(f;t\right)\) _on_ \(H_{0}^{s}\left(\mathbb{T},\mathbb{R}\right)\) _satisfying the following: for any_ \(s\in\mathbb{R}\) _there are_ \(C:=C(s,\epsilon_{0},K)\) _and_ \(\epsilon_{0}^{\prime}(s)\in(0,\epsilon_{0})\)_, such that for any_ \(f\in B_{\underline{s},\mathbb{R}}^{K}\left(I;\epsilon_{0}^{\prime}(s)\right)\) _and_ \(v\in C_{*}^{K-\underline{K}^{\prime}}\left(I;H_{0}^{s}\left(\mathbb{T}, \mathbb{R}\right)\right)\)_, for any_ \(0\leq k\leq K-\underline{K}^{\prime}\)_,_ \(t\in I\)_,_ \[\left\|\partial_{t}^{k}\left\{\underline{\Psi}\left(f;t\right)v\right\}\right\| _{s-k}+\left\|\partial_{t}^{k}\left\{\underline{\Psi}\left(f;t\right)^{-1}v \right\}\right\|_{s-k}\leq C\left\|v\right\|_{k,s};\] (5.1)
* _the variable_ \(y:=\underline{\Psi}\left(f;t\right)f\) _solves the equation_ \[\partial_{t}y+\mathrm{i}\,\omega_{\alpha}\left(D\right)y+\mathrm{i}\,\mathrm{ Op}^{BW}\left[d\left(f;t,\xi\right)\right]y=R_{\geq 2}\left(f;t\right)y\] (5.2) _where_ \(\bullet\)__\(\omega_{\alpha}\left(\xi\right)=\xi L_{\alpha}\left(\left|\xi\right|\right)\)_, with_ \(L_{\alpha}\left(\left|\xi\right|\right)\) _defined in Lemma_ 3.1_, is a Fourier multiplier of order_ \(\alpha\)_;_ \(\bullet\)__\(d\left(f;t,\xi\right)\) _is a symbol in_ \(\Sigma\Gamma_{K,\underline{K}^{\prime},2}^{\alpha}\left[\epsilon_{0},N\right]\) _independent of_ \(x\)_, satisfying (_2.20_), with_ \(\mathrm{lm}\,d\left(f;t,\xi\right)\) _in the space_ \(\Sigma\Gamma_{K,\underline{K}^{\prime},2}^{0}\left[\epsilon_{0},N\right]\)_;_ \(\bullet\)__\(R_{\geq 2}\left(f;t\right)\) _is a real smoothing operator in_ \(\Sigma\tilde{\mathcal{K}}_{K,\underline{K}^{\prime},2}^{-\left(\rho-\rho-\alpha \right)}\left[\epsilon_{0},N\right]\)_._
The bounds (5.1) imply in particular that for any \(s\geq s_{0}\), there exists \(C:=C_{s,K,\alpha}>0\) such that
\[C^{-1}\left\|f\left(t\right)\right\|_{s}\leq\left\|y\left(t\right)\right\|_{s} \leq C\left\|f\left(t\right)\right\|_{s},\quad\forall t\in I. \tag{5.3}\]
Note that the \(x\)-independent symbol \(d\left(f;t,\xi\right)\) in (5.2) has homogeneity at least \(2\) by Remark 2.6.
**Reduction to constant coefficients up to a smoothing operator.** The first step is to reduce the symbol of the paradifferential operator in (4.1) to a constant coefficient one, up to a smoothing operator.
**Proposition 5.2** (Reduction to constant coefficients up to smoothing operators).: _Let \(\alpha\in(1,2)\) and \(N\in\mathbb{N}\). There exists \(\underline{\rho}:=\underline{\rho}\left(N,\alpha\right)\), such that for any \(\rho\geq\underline{\rho}\) there exists \(\underline{K}^{\prime}:=\underline{K}^{\prime}\left(\rho,\alpha\right)>0\) such that for any \(K\geq\underline{K}^{\prime}\) there are \(s_{0}>0\), \(\epsilon_{0}>0\) such that for any solution \(f\in B_{\underline{s}_{0},\mathbb{R}}^{K}\left(I;\epsilon_{0}\right)\) of (4.1) the following holds:_
* _there exists a real invertible operator_ \(\Psi\left(f;t\right)\) _on_ \(H_{0}^{s}\left(\mathbb{T},\mathbb{R}\right)\) _satisfying (_5.1_);_
* _the variable_ \(g:=\Psi\left(f;t\right)f\) _solves the equation_ \[\partial_{t}g+\partial_{x}\circ\mathrm{Op}^{BW}\left[\left(1+\alpha_{0}\left(f\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+\mathsf{H}_{\alpha}\left(f;t,\xi\right)\right]g=R\left(f;t\right)g\] (5.4) _where_ \(\bullet\)__\(L_{\alpha}\left(\left|\xi\right|\right)\) _is the Fourier multiplier of order_ \(\alpha-1\) _defined in Lemma_ 3.1_;_ \(\bullet\)__\(\alpha_{0}\left(f\right)\) _is an_ \(x\)_-independent real function in_ \(\Sigma\mathcal{F}_{K,0,2}^{\mathbb{R}}\left[\epsilon_{0},N\right]\)_;_ \(\bullet\)__\(\mathsf{H}_{\alpha}\left(f;t,\xi\right)\) _is an_ \(x\)_-independent symbol in_ \(\Sigma\Gamma_{K,\underline{K}^{\prime},2}^{0}\left[\epsilon_{0},N\right]\) _satisfying (_2.20_), with_ \(\mathrm{Im}\,\mathsf{H}_{\alpha}\left(f;t,\xi\right)\) _in_ \(\Sigma\Gamma_{K,\underline{K}^{\prime},2}^{-1}\left[\epsilon_{0},N\right]\)_;_ \(\bullet\)__\(R\left(f;t\right)\) _is a real smoothing operator in_ \(\Sigma\tilde{\mathcal{K}}_{K,\underline{K}^{\prime},1}^{-\left(\rho-\underline{\rho}\right)}\left[\epsilon_{0},N\right]\)_._
Proposition 5.2 relies on general results (given in Appendix B) that describe how paradifferential operators are conjugated under the flow generated by a paradifferential operator, which is Hamiltonian up to zero order operators. We shall use repeatedly the following result.
**Lemma 5.3** (Flows of Hamiltonian operators up to order zero).: _Let \(p,N\in\mathbb{N}\), \(0\leq K^{\prime}\leq K\) and \(\delta\geq 0\). Let us consider a "Hamiltonian operator up to order zero"_
\[\Lambda\left(f,\tau;t\right):=\partial_{x}\circ\operatorname{Op}^{BW}\left[ \lambda\left(f,\tau;t,x,\xi\right)\right]\]
_where \(\lambda\left(f,\tau;t,x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,K^{\prime},p}^{-\delta}\left[\epsilon_{0},N\right]\), uniformly in \(\left|\tau\right|\leq 1\), with \(\operatorname{Im}\lambda\left(f,\tau;t,x,\xi\right)\in\Sigma\Gamma_{K,K^{ \prime},p}^{-1}\left[\epsilon_{0},N\right]\) satisfying (2.20). Then there exists \(s_{0}>0\) such that, for any \(f\in B_{s_{0},\mathbb{R}}^{K^{\prime}}\left(I;\epsilon_{0}\right)\), the equation_
\[\frac{\mathrm{d}}{\mathrm{d}\tau}\Phi_{\Lambda}\left(f,\tau;t\right)=\Lambda \left(f,\tau;t\right)\,\Phi_{\Lambda}\left(f,\tau;t\right)\,,\qquad\Phi_{ \Lambda}\left(f,0;t\right)=\mathrm{Id}\, \tag{5.5}\]
_has a unique solution \(\Phi_{\Lambda}\left(f,\tau\right):=\Phi_{\Lambda}\left(f,\tau;t\right)\) satisfying the following properties: for any \(s\in\mathbb{R}\) the linear map \(\Phi_{\Lambda}\left(f,\tau;t\right)\) is bounded and invertible on \(H_{0}^{s}(\mathbb{T},\mathbb{R})\) and there are a constant \(C:=C(s,\epsilon_{0},K)\) and \(\epsilon_{0}^{\prime}(s)\in(0,\epsilon_{0})\) such that, for any \(f\in B_{s_{0},\mathbb{R}}^{K}\left(I;\epsilon_{0}^{\prime}(s)\right)\), for any \(0\leq k\leq K-K^{\prime}\), \(v\in C_{\epsilon}^{K-K^{\prime}}\left(I;H_{0}^{s}(\mathbb{T},\mathbb{R}) \right)\), \(t\in I\),_
\[\left\|\partial_{t}^{k}\left(\Phi_{\Lambda}\left(f,\tau;t\right)v\right)\right\| _{s-k}+\left\|\partial_{t}^{k}\left(\Phi_{\Lambda}\left(f,\tau;t\right)^{-1} v\right)\right\|_{s-k}\leq C\left\|v\right\|_{k,s} \tag{5.6}\]
_uniformly in \(\left|\tau\right|\leq 1\)._
Proof.: Since the imaginary part of the symbol \(\lambda\) has order \(-1\), the flow \(\Phi_{\Lambda}\) of (5.5) is well-posed and satisfies (5.6) arguing as in [5, Lemma 3.22]. Moreover it preserves the subspace of real functions since \(\lambda\left(f,\tau;t,x,\xi\right)\) satisfies (2.20).
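As a heuristic for the bound (5.6) with \(k=0\) (a sketch only; the rigorous argument is the one of [5, Lemma 3.22] quoted above): since \(\lambda\) is real up to a symbol of order \(-1\) and has order \(\leq 0\), the operator \(\Lambda\left(f,\tau;t\right)=\partial_{x}\circ\mathrm{Op}^{BW}\left[\lambda\right]\) is skew-symmetric on \(L^{2}\) up to a bounded remainder, and the same holds after conjugation with \(\langle D\rangle^{s}\) up to operators of order \(0\). Hence, formally,

\[\frac{\mathrm{d}}{\mathrm{d}\tau}\left\|\Phi_{\Lambda}\left(f,\tau;t\right)v\right\|_{s}^{2}=2\,\mathrm{Re}\left\langle\langle D\rangle^{s}\Lambda\left(f,\tau;t\right)\Phi_{\Lambda}\left(f,\tau;t\right)v,\langle D\rangle^{s}\Phi_{\Lambda}\left(f,\tau;t\right)v\right\rangle_{L^{2}}\leq C\left\|\Phi_{\Lambda}\left(f,\tau;t\right)v\right\|_{s}^{2}\,,\]

and Gronwall's inequality on \(\left|\tau\right|\leq 1\) gives the estimate for \(\Phi_{\Lambda}\), and analogously for its inverse.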
In the proof of Proposition 5.2 it is convenient to preserve the linear Hamiltonian structure of (4.1) up to order zero along the reduction which leads to (5.4), since it guarantees that the symbol \(\left(1+\alpha_{0}\left(f\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+\mathsf{H}_{\alpha}\left(f;t,\xi\right)\), as well as those obtained in the intermediate reduction steps, is real at least up to order \(-1\).
**Reduction to constant coefficients at principal order.** We first reduce to constant coefficients the highest order paradifferential operator in (4.1). We conjugate (4.1) via the transformation
\[f^{\left[1\right]}:=\Phi_{B}\left(f,1\right)f \tag{5.7}\]
where \(\Phi_{B}\left(f,\tau\right)\) is the flow generated as in Lemma 5.3 by the Hamiltonian operator
\[B\left(f,\tau\right):=\partial_{x}\circ\operatorname{Op}^{BW}\left[b\left(f, \tau;x\right)\right]\,,\qquad b\left(f,\tau;x\right):=\frac{\beta\left(f;x \right)}{1+\tau\,\partial_{x}\left(\beta\left(f;x\right)\right)}\,, \tag{5.8}\]
where \(\beta\left(f;x\right)\) is a real function to be chosen.
**Lemma 5.4** (Reduction to constant coefficients at principal order).: _Let \(\beta\left(f;x\right)\in\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) be the periodic function such that the diffeomorphism \(x\mapsto x+\beta\left(f;x\right)\) of \(\mathbb{T}\) has inverse diffeomorphism \(y\mapsto y+\tilde{\beta}\left(f;y\right)\), where_
\[\tilde{\beta}\left(f;y\right):=\partial_{y}^{-1}\left[\left(\frac{1+\alpha_{0}\left(f\right)}{1+\nu\left(f;y\right)}\right)^{\frac{1}{\alpha}}-1\right]\in\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\,,\qquad\alpha_{0}\left(f\right):=\left(\int\left(1+\nu\left(f;y\right)\right)^{-\frac{1}{\alpha}}\mathrm{d}y\right)^{-\alpha}-1\,, \tag{5.9}\]
_and \(\nu\left(f;y\right)\) is the real function defined in Theorem 4.1. Then, if \(f\) solves (4.1), the variable \(f^{\left[1\right]}\) defined in (5.7) satisfies the equation_
\[\partial_{t}f^{\left[1\right]}+\partial_{x}\circ\operatorname{Op}^{BW}\left[ \left(1+\alpha_{0}\left(f\right)\right)\,L_{\alpha}\left(\left|\xi\right|\right)+V ^{1}\left(f;t,x\right)+P\left(f;x,\xi\right)\,\right]f^{\left[1\right]}=R\left(f ;t\right)f^{\left[1\right]} \tag{5.10}\]
_where_
* \(\alpha_{0}\left(f\right)\) _is the_ \(x\)_-independent real function in_ \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\) _defined in (_5.9_);_
* \(V^{1}\left(f;t,x\right)\) _is a real function in_ \(\Sigma\mathcal{F}_{K,1,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\)_;_
* \(P\left(f;x,\xi\right)\) _is a symbol in_ \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\) _satisfying (_2.20_);_
* \(R\left(f;t\right)\) _is a real smoothing operator in_ \(\Sigma\hat{\mathcal{R}}_{K,1,1}^{-\left(\rho-N\right)}\left[\epsilon_{0},N\right]\)_._
Proof.: If \(f\) solves (4.1) then, the variable \(f^{\left[1\right]}:=\Phi_{B}\left(f,1\right)f:=\Phi_{B}\left(1\right)f\) satisfies, using also the expansion \(L_{\alpha}\left(\left|\xi\right|\right)=\mathbb{V}_{\alpha}+c_{\alpha}^{1} \left|\xi\right|^{\alpha-1}+m_{\alpha-3}\left(\left|\xi\right|\right)\) in (3.32), and \(\partial_{t}\Phi_{B}\left(1\right)\circ\Phi_{B}\left(1\right)^{-1}=-\Phi_{B} \left(1\right)\circ\left(\partial_{t}\Phi_{B}\left(1\right)^{-1}\right)\), the equation
\[\partial_{t}f^{\left[1\right]}+\Phi_{B}\left(1\right)\circ \partial_{x}\circ\operatorname{Op}^{BW}\left[\left(1+\nu\left(f;x\right) \right)\left(c_{\alpha}^{1}\left|\xi\right|^{\alpha-1}+\mathbb{V}_{\alpha}+m _{\alpha-3}\left(\left|\xi\right|\right)\right)+V\left(f;x\right)+P\left(f;x, \xi\right)\right]\circ\Phi_{B}\left(1\right)^{-1}f^{\left[1\right]}\\ +\Phi_{B}\left(1\right)\circ\left(\partial_{t}\Phi_{B}\left(1 \right)^{-1}\right)f^{\left[1\right]}=\Phi_{B}\left(1\right)\circ R\left(f \right)\circ\Phi_{B}\left(1\right)^{-1}f^{\left[1\right]}. \tag{5.11}\]
By (B.1), (B.2) the principal order operator in (5.11) is
\[\Phi_{B}\left(1\right)\circ\partial_{x}\circ\operatorname{Op}^{BW}\left[\left(1+\nu\left(f;x\right)\right)c_{\alpha}^{1}\left|\xi\right|^{\alpha-1}\right]\circ\Phi_{B}\left(1\right)^{-1}\\ =\partial_{x}\circ\operatorname{Op}^{BW}\left[c_{\alpha}^{1}\left\{\left(1+\nu\left(f;y\right)\right)\left(1+\partial_{y}\tilde{\beta}\left(f;y\right)\right)^{\alpha}\right\}\Big{|}_{y=x+\beta\left(f;x\right)}\left|\xi\right|^{\alpha-1}+P_{1}\left(f;x,\xi\right)\right]+R\left(f\right) \tag{5.12}\]
where \(y\mapsto y+\tilde{\beta}\left(f;y\right)\) is the inverse diffeomorphism of \(x\mapsto x+\beta\left(f;x\right)\) given by Lemma 2.9, \(P_{1}\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{\alpha-3}\left[\epsilon_{0},N\right]\) and \(R\left(f\right)\) is a smoothing operator in \(\Sigma\hat{\mathcal{R}}_{K,0,1}^{-\left(\rho-N\right)}\left[\epsilon_{0},N\right]\). By (5.9) we deduce that the symbol of highest order in (5.12) is independent of the variable \(x\), that is
\[\Phi_{B}\left(1\right)\partial_{x}\operatorname{Op}^{BW}\left[\left(1+\nu\left(f;x\right)\right)c_{\alpha}^{1}\left|\xi\right|^{\alpha-1}\right]\Phi_{B}\left(1\right)^{-1}=\partial_{x}\operatorname{Op}^{BW}\left[c_{\alpha}^{1}\left(1+\alpha_{0}\left(f\right)\right)\left|\xi\right|^{\alpha-1}+P_{1}\left(f;x,\xi\right)\right]+R_{1}\left(f\right). \tag{5.13}\]
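For the reader's convenience, we spell out the computation behind this step, using only (5.9). Since \(1+\partial_{y}\tilde{\beta}\left(f;y\right)=\left(\frac{1+\alpha_{0}\left(f\right)}{1+\nu\left(f;y\right)}\right)^{\frac{1}{\alpha}}\), the coefficient appearing in (5.12) is

\[\left(1+\nu\left(f;y\right)\right)\left(1+\partial_{y}\tilde{\beta}\left(f;y\right)\right)^{\alpha}=\left(1+\nu\left(f;y\right)\right)\frac{1+\alpha_{0}\left(f\right)}{1+\nu\left(f;y\right)}=1+\alpha_{0}\left(f\right)\,,\]

which does not depend on \(y\), and hence not on \(x\) after the substitution \(y=x+\beta\left(f;x\right)\). Moreover the value of \(\alpha_{0}\left(f\right)\) in (5.9) is chosen precisely so that \(\partial_{y}\tilde{\beta}\left(f;y\right)\) has zero average on \(\mathbb{T}\), which makes \(\tilde{\beta}\left(f;y\right)\) a well defined periodic function.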
The lower order conjugated operator in (5.11) is, by (B.1) and Lemma 2.8,
\[\Phi_{B}\left(1\right)\circ\partial_{x}\circ\operatorname{Op}^{BW }\left[\left(1+\nu\left(f;x\right)\right)\left(\mathbb{V}_{\alpha}+m_{\alpha-3} \left(\left|\xi\right|\right)\right)+V\left(f;x\right)+P\left(f;x,\xi\right) \right]\circ\Phi_{B}\left(1\right)^{-1}\\ =\partial_{x}\circ\operatorname{Op}^{BW}\left[\mathbb{V}_{\alpha }+m_{\alpha-3}\left(\left|\xi\right|\right)+\bar{V}^{1}\left(f;x\right)+P_{2} \left(f;x,\xi\right)\right]+R\left(f\right) \tag{5.14}\]
where \(\bar{V}^{1}\left(f;x\right)\) is a function in \(\Sigma\mathcal{F}_{K,0,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\), \(P_{2}\left(f;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\), since \(\alpha<2\) (note that \(m_{\alpha-3}\left(\left|\xi\right|\left(1+\partial_{y}\tilde{\beta}\left(f;y\right)\right)\big{|}_{y=x+\beta\left(f;x\right)}\right)-m_{\alpha-3}\left(\left|\xi\right|\right)\) is a symbol in \(\Sigma\Gamma_{K,0,1}^{-3}\left[\epsilon_{0},N\right]\)), and \(R\left(f\right)\) is a smoothing operator in \(\Sigma\hat{\mathcal{R}}_{K,0,1}^{-\rho}\left[\epsilon_{0},N\right]\), by renaming \(\rho\).
\[\Phi_{B}\left(1\right)\circ\left(\partial_{t}\Phi_{B}\left(1\right)^{-1}\right) =\partial_{x}\circ\operatorname{Op}^{BW}\left[\mathfrak{V}\left(f;t,x\right) \right]+R\left(f;t\right). \tag{5.15}\]
Lemma 5.4 follows by (5.11), (5.13), (5.14) and (5.15) with \(V^{1}\left(f;t,x\right):=\bar{V}^{1}\left(f;x\right)+\mathfrak{V}\left(f;t,x\right)-\alpha_{0}\left(f\right)\mathbb{V}_{\alpha}\), which belongs to \(\Sigma\mathcal{F}_{K,1,1}^{\mathbb{R}}\left[\epsilon_{0},N\right]\), and \(P\left(f;x,\xi\right):=\left(P_{1}+P_{2}\right)\left(f;x,\xi\right)-\alpha_{0}\left(f\right)m_{\alpha-3}\left(\left|\xi\right|\right)\) in \(\Sigma\Gamma_{K,0,1}^{-1}\left[\epsilon_{0},N\right]\).
**Reduction to constant coefficients at arbitrary order.** We now reduce (5.10) to constant coefficients up to a smoothing operator, implementing an inductive process which, at each step, regularizes the symbol by \(\delta:=\alpha-1>0\). We distinguish two regimes.
**Lemma 5.5** (Reduction to constant coefficients up to order \(0\)).: _Let \(\delta:=\alpha-1\) and \(\mathrm{j}_{*}:=\left\lceil 1/\delta\right\rceil+1\). For any \(\mathrm{j}\in\{1,\ldots,\mathrm{j}_{*}-1\}\), there exist \(\rho_{\mathrm{j}}\) defined inductively as \(\rho_{1}:=N\) and \(\rho_{\mathrm{j}+1}:=\rho_{\mathrm{j}}+N\left(1-\mathrm{j}\delta\right)\) such that for any \(K\geq\mathrm{j}\) there exist \(s_{0}>0\) and a_
* _symbol_ \(d^{\left[\mathrm{j}\right]}\left(f;t,\xi\right):=\left(1+\alpha_{0}\left(f\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+\mathrm{H}_{\alpha}^{\left[\mathrm{j}\right]}\left(f;t,\xi\right)\) _where_ \(\mathrm{H}_{\alpha}^{\left[\mathrm{j}\right]}\left(f;t,\xi\right)\in\Sigma\Gamma_{K,\mathrm{j}-1,2}^{0}\left[\epsilon_{0},N\right]\)_, independent of_ \(x\)_, real, even in_ \(\xi\)_;_
* _symbol_ \(r^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\) _in_ \(\Sigma\Gamma_{K,\mathrm{j}-1,1}^{-\left(\mathrm{j}-1\right)\delta}\left[\epsilon_{0},N\right]\) _and a symbol_ \(P^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\) _in_ \(\Sigma\Gamma_{K,\mathrm{j}-1,1}^{-1}\left[\epsilon_{0},N\right]\)_;_
* _real smoothing operator_ \(R^{\left[\mathrm{j}\right]}\left(f;t\right)\) _in_ \(\Sigma\mathcal{R}^{-\left(\rho-\rho_{\mathrm{j}}\right)}_{K,\mathrm{j},1}\left[\epsilon_{0},N\right]\)_;_
* _Hamiltonian operator_ \(W^{\left[\mathrm{j}\right]}\left(f\right):=\partial_{x}\circ\mathrm{Op}^{BW}\left[w^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\right]\) _where_ \(w^{\left[\mathrm{j}\right]}\) _is the real and even in_ \(\xi\) _symbol_ \[w^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right):=-\partial_{x}^{-1}\left[\frac{r^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)-\fint r^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\,\mathrm{d}x}{\left(1+\alpha_{0}\left(f\right)\right)\,c_{\alpha}^{1}\alpha\left|\xi\right|^{\alpha-1}}\right]\in\Sigma\Gamma^{-\mathrm{j}\delta}_{K,\mathrm{j},1}\left[\epsilon_{0},N\right];\] (5.16)
_such that if \(f\in B^{K}_{s_{0},\mathbb{R}}\left(I;\epsilon_{0}\right)\) is a solution of (4.1) then \(f^{\left[\mathrm{j}\right]}:=\prod_{\mathrm{i}=1}^{\mathrm{j}-1}\Phi_{W^{\left[\mathrm{i}\right]}}\left(f;1\right)^{-1}\circ\Phi_{B}\left(f;1\right)f\) solves_
\[\partial_{t}f^{\left[\mathrm{j}\right]}+\partial_{x}\circ\mathrm{Op}^{BW}\left[d^{\left[\mathrm{j}\right]}\left(f;t,\xi\right)+r^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)+P^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\right]f^{\left[\mathrm{j}\right]}=R^{\left[\mathrm{j}\right]}\left(f;t\right)f^{\left[\mathrm{j}\right]}. \tag{5.17}\]
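To orient the reader, we illustrate the mechanism at the first step \(\mathrm{j}=1\), where \(r^{[1]}=V^{1}\left(f;t,x\right)\) and \(d^{[1]}=\left(1+\alpha_{0}\left(f\right)\right)L_{\alpha}\left(\left|\xi\right|\right)\); the computation below is only heuristic, at the level of principal symbols. Conjugating by the flow of \(W^{[1]}=\partial_{x}\circ\mathrm{Op}^{BW}\left[w^{[1]}\right]\) produces, at leading order, the commutator correction \(w_{x}^{[1]}\,\partial_{\xi}\left(\xi\,d^{[1]}\left(f;t,\xi\right)\right)\approx w_{x}^{[1]}\left(1+\alpha_{0}\left(f\right)\right)c_{\alpha}^{1}\alpha\left|\xi\right|^{\alpha-1}\), and the choice (5.16) gives

\[r^{[1]}\left(f;t,x\right)+w_{x}^{[1]}\left(f;t,x,\xi\right)\left(1+\alpha_{0}\left(f\right)\right)c_{\alpha}^{1}\alpha\left|\xi\right|^{\alpha-1}=\fint r^{[1]}\left(f;t,x\right)\mathrm{d}x\,,\]

an \(x\)-independent function which is absorbed into \(\mathrm{H}_{\alpha}^{[2]}\); the terms left over have order \(\leq-\delta\), so that the new variable-coefficient symbol \(r^{[2]}\) has gained \(\delta\) with respect to \(r^{[1]}\), and the procedure can be iterated.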
Proof.: Note that (5.10) has the form (5.17) for \(\mathrm{j}=1\) with \(\mathrm{H}_{\alpha}^{[1]}\left(f;t,\xi\right):=0\), \(d^{[1]}\left(f;t,\xi\right):=\left(1+\alpha_{0}\left(f\right)\right)L_{\alpha}\left(\left|\xi\right|\right)\), \(r^{[1]}\left(f;t,x,\xi\right):=V^{1}\left(f;t,x\right)\), \(P^{[1]}\left(f;t,x,\xi\right):=P\left(f;t,x,\xi\right)\) and \(R^{[1]}\left(f;t\right):=R\left(f;t\right)\). We now prove that, if \(f^{\left[\mathrm{j}\right]}\) solves (5.17), then
\[f^{\left[\mathrm{j}+1\right]}:=\Phi_{W^{\left[\mathrm{j}\right]}}\left(f,1\right)^{-1}f^{\left[\mathrm{j}\right]} \tag{5.18}\]
solves (5.17) with \(\mathrm{j}+1\) instead of \(\mathrm{j}\). By conjugation, from (5.18), setting \(\Phi_{W^{\left[\mathrm{j}\right]}}\left(1\right):=\Phi_{W^{\left[\mathrm{j}\right]}}\left(f,1\right)\), we have
\[\partial_{t}f^{[\bar{\mathrm{ij}}+1]}+\Phi_{W^{[\bar{\mathrm{ ij}}]}}\left(1\right)^{-1}\circ\partial_{x}\circ\mathrm{Op}^{BW}\left[ d^{[\bar{\mathrm{ij}}]}\left(f;t,\xi\right)+r^{[\bar{\mathrm{ij}}]}\left(f;t,x,\xi \right)+P^{[\bar{\mathrm{ij}}]}\left(f;t,x,\xi\right)\right]\circ\Phi_{W^{[ \bar{\mathrm{ij}}]}}\left(1\right)f^{[\bar{\mathrm{ij}}+1]}\\ -\partial_{t}\Phi_{W^{[\bar{\mathrm{ij}}]}}\left(1\right)^{-1} \circ\Phi_{W^{[\bar{\mathrm{ij}}]}}\left(1\right)f^{[\bar{\mathrm{ij}}+1]}= \Phi_{W^{[\bar{\mathrm{ij}}]}}\left(1\right)^{-1}\circ R^{[\bar{\mathrm{ij}}]} \left(f;t\right)\circ\Phi_{W^{[\bar{\mathrm{ij}}]}}\left(1\right)f^{[\bar{ \mathrm{ij}}+1]}. \tag{5.19}\]
Using (B.6) we expand the highest order operator in (5.19) as
\[\Phi_{W^{[\bar{\mathrm{ij}}]}}\left(1\right)^{-1}\circ\partial_{x} \circ\mathrm{Op}^{BW}\left[d^{[\bar{\mathrm{ij}}]}\left(f;t,\xi\right)\right] \circ\Phi_{W^{[\bar{\mathrm{ij}}]}}\left(1\right)=\partial_{x}\circ\mathrm{ Op}^{BW}\left[d^{[\bar{\mathrm{ij}}]}\left(f;t,\xi\right)\right]\\ -\left[\partial_{x}\circ\mathrm{Op}^{BW}\left[w^{[\bar{\mathrm{ij }}]}\left(f;t,x,\xi\right)\right],\,\partial_{x}\circ\mathrm{Op}^{BW}\left[d^{ [\bar{\mathrm{ij}}]}\left(f;t,\xi\right)\right]\right]+\partial_{x}\circ\mathrm{ Op}^{BW}\left[Q_{-\left(2\mathrm{j}-1\right)\delta}\left(f;t,x,\xi\right) \right]+R\left(f;t\right) \tag{5.20}\]
where, in view of (5.16), \(Q_{-\left(2\mathrm{j}-1\right)\delta}\) is a real and even in \(\xi\) valued symbol in \(\Sigma\Gamma^{-\left(2\mathrm{j}-1\right)\delta}_{K,\downarrow,2}\left[\epsilon_{0},N\right]\) and \(R\left(f;t\right)\) is a smoothing operator in \(\Sigma\hat{\mathcal{R}}^{-\rho}_{K,\downarrow,2}\left[\epsilon_{0},N\right]\). By symbolic calculus, (5.16) and since \(d^{[\bar{\mathrm{ij}}]}\) is \(x\)-independent we have
\[\left[\partial_{x}\circ\mathrm{Op}^{BW}\left[w^{[\bar{\mathrm{ ij}}]}\left(f;t,x,\xi\right)\right],\,\partial_{x}\circ\mathrm{Op}^{BW}\left[d^{[\bar{ \mathrm{ij}}]}\left(f;t,\xi\right)\right]\right]=\partial_{x}\circ\left[ \mathrm{Op}^{BW}\left[w^{[\bar{\mathrm{ij}}]}\left(f;t,x,\xi\right)\right],\, \mathrm{Op}^{BW}\left[\xi d^{[\bar{\mathrm{ij}}]}\left(f;t,\xi\right)\right]\right] \right]\\ =\partial_{x}\circ\mathrm{Op}^{BW}\left[-w^{[\bar{\mathrm{ij}}]} _{x}\left(f;t,x,\xi\right)\,\partial_{\xi}\left(\xi\,\,d^{[\bar{\mathrm{ij}}]} \left(f;t,\xi\right)\right)+Q_{-2-\left(\mathrm{j}-1\right)\delta}\left(f;t,x, \xi\right)\right]+R\left(f;t\right) \tag{5.21}\]
where \(Q_{-2-\left(\mathrm{j}-1\right)\delta}\left(f;t,x,\xi\right)\) is a real and even symbol in \(\Sigma\Gamma^{-2-\left(\mathrm{j}-1\right)\delta}_{K,\downarrow,1}\left[\epsilon_{0},N\right]\) and \(R\left(f;t\right)\) is a smoothing operator in \(\Sigma\hat{\mathcal{R}}^{-\rho}_{K,\downarrow,2}\left[\epsilon_{0},N\right]\). Using the asymptotic expansion (3.32) we have that
\[\partial_{\xi}\left\{\xi\,\,d^{[\bar{\mathrm{ij}}]}\left(f;t,\xi\right)\right\}= \left(1+\iota_{0}\left(f\right)\right)\,c_{\alpha}^{1}\alpha\left|\xi\right|^{ \alpha-1}+\bar{Q}^{[\bar{\mathrm{ij}}]}\left(f;t,\xi\right)\qquad\text{ where}\qquad\bar{Q}^{[\bar{\mathrm{ij}}]}\left(f;t,\xi\right)\in\Sigma\Gamma^{0}_{K, \downarrow-1,0}\left[\epsilon_{0},N\right]. \tag{5.22}\]
So, by (5.20), (5.21), (5.22), (B.6), the definition of \(w^{[\bar{\mathrm{ij}}]}\left(f;t,x,\xi\right)\in\Sigma\Gamma^{-\mathrm{j} \delta}_{K,\downarrow,1}\left[\epsilon_{0},N\right]\) provided in Eq. (5.16), (B.29), we obtain
\[\Phi_{W^{[\bar{\mathrm{ij}}]}}\left(1\right)^{-1}\circ\partial_{x} \circ\mathrm{Op}^{BW}\left[d^{[\bar{\mathrm{ij}}]}\left(f;t,\xi\right)+r^{[ \bar{\mathrm{ij}}]}\left(f;t,x,\xi\right)\right]\circ\Phi_{W^{[\bar{\mathrm{ ij}}]}}\left(1\right)\] \[=\partial_{x}\circ\mathrm{Op}^{BW}\left[d^{[\bar{\mathrm{ij}}]}\left(f;t,x,\xi \right)+w^{[\bar{\mathrm{ij}}]}_{x}\left(f;t,x,\xi\right)\right]+R\left(f;t \right)\] \[=\partial_{x}\circ\mathrm{Op}^{BW}\left[d^{[\bar{\mathrm{ij}}]}\left(f;t,x,\xi\right
Now, implementing an analogous algorithmic procedure for the symbols of order \(\leq-1\), we reduce the equation (5.17) for \(\mathrm{j}=\mathrm{j}_{*}\) to constant coefficients up to a smoothing operator.
**Lemma 5.6** (Reduction to constant coefficients up to smoothing operators).: _For any integer \(\mathrm{j}\geq\mathrm{j}_{*}\), for any \(K\geq\mathrm{j}\) there exist a_
* _a symbol_ \(d^{\left[\mathrm{j}\right]}\left(f;t,\xi\right):=\left(1+\alpha_{0}\left(f\right)\right)L_{\alpha}\left(\left|\xi\right|\right)+\mathrm{H}_{\alpha}^{\left[\mathrm{j}\right]}\left(f;t,\xi\right)\) _with_ \(\mathrm{H}_{\alpha}^{\left[\mathrm{j}\right]}\left(f;t,\xi\right)\in\Sigma\Gamma_{K,\mathrm{j}-1,2}^{0}\left[\epsilon_{0},N\right]\) _and_ \(\mathrm{Im}\,\mathrm{H}_{\alpha}^{\left[\mathrm{j}\right]}\left(f;t,\xi\right)\) _in_ \(\Sigma\Gamma_{K,\mathrm{j}-1,2}^{-1}\left[\epsilon_{0},N\right]\)_, independent of_ \(x\) _and satisfying (_2.20_);_
* _a symbol_ \(P^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\) _in_ \(\Sigma\Gamma_{K,\mathrm{j},1}^{-1-\left(\mathrm{j}-\mathrm{j}_{*}\right)\delta}\left[\epsilon_{0},N\right]\) _satisfying (_2.20_);_
* _a real smoothing operator_ \(R^{\left[\mathrm{j}\right]}\left(f;t\right)\) _in_ \(\Sigma\mathcal{R}_{K,\mathrm{j},1}^{-\left(\rho-\rho_{\mathrm{j}_{*}}\right)}\left[\epsilon_{0},N\right]\)_;_
* _bounded linear operators_ \(W^{\left[\mathrm{j}\right]}\left(f\right):=\partial_{x}\circ\mathrm{Op}^{BW}\left[w^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\right]\) _where_ \[w^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right):=-\partial_{x}^{-1}\left[\frac{P^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)-\fint P^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\,\mathrm{d}x}{\left(1+\alpha_{0}\left(f\right)\right)\,c_{\alpha}^{1}\alpha\left|\xi\right|^{\alpha-1}}\right]\in\Sigma\Gamma_{K,\mathrm{j},1}^{-1-\left(\mathrm{j}-\mathrm{j}_{*}+1\right)\delta}\left[\epsilon_{0},N\right];\] (5.26)
_and \(s_{0}>0\), such that if \(f\in B_{s_{0},\mathbb{R}}^{K}\left(I;\epsilon_{0}\right)\) is a solution of (4.1) then \(f^{\left[\mathrm{j}\right]}:=\prod_{\mathrm{i}=1}^{\mathrm{j}-1}\Phi_{W^{\left[\mathrm{i}\right]}}\left(f;1\right)^{-1}\circ\Phi_{B}\left(f;1\right)f\) solves_
\[\partial_{t}f^{\left[\mathrm{j}\right]}+\partial_{x}\circ\mathrm{Op}^{BW}\left[d^{\left[\mathrm{j}\right]}\left(f;t,\xi\right)+P^{\left[\mathrm{j}\right]}\left(f;t,x,\xi\right)\right]f^{\left[\mathrm{j}\right]}=R^{\left[\mathrm{j}\right]}\left(f;t\right)f^{\left[\mathrm{j}\right]}. \tag{5.27}\]
We now conclude the proof of Proposition 5.2. Let \(\mathrm{j}^{*}:=\mathrm{j}^{*}\left(\rho\right):=\min\left\{\mathrm{j}\in\mathbb{N}_{0}\mid\left(\mathrm{j}-\mathrm{j}_{*}\right)\delta>\rho-\rho_{\mathrm{j}_{*}}\right\}\), which is explicitly \(\mathrm{j}^{*}:=\left[\frac{\rho-\rho_{\mathrm{j}_{*}}}{\alpha-1}\right]+\mathrm{j}_{*}=\left[\frac{\rho-\rho_{\mathrm{j}_{*}}}{\alpha-1}\right]+\left[\frac{1}{\alpha-1}\right]+1\), so that \(\mathrm{Op}^{BW}\left[P^{\left[\mathrm{j}^{*}\right]}\left(f;t,x,\xi\right)\right]\) is a smoothing operator in \(\Sigma\mathcal{R}_{K,\mathrm{j}^{*},1}^{-\left(\rho-\rho_{\mathrm{j}_{*}}\right)}\left[\epsilon_{0},N\right]\) by Remark 2.18. Then the equation (5.27) with \(\mathrm{j}=\mathrm{j}^{*}\) has the form (5.4) with
\[g=f^{\left[\mathrm{j}^{*}\right]}=\Psi\left(f;t\right)f\,,\qquad\Psi\left(f;t\right):=\prod_{\mathrm{j}=1}^{\mathrm{j}^{*}-1}\Phi_{W^{\left[\mathrm{j}\right]}}\left(f,1\right)^{-1}\circ\Phi_{B}\left(f,1\right)\,,\]
symbol \(\mathrm{H}_{\alpha}\left(f;t,\xi\right):=\mathrm{H}_{\alpha}^{\left[\mathrm{j}^{*}\right]}\left(f;t,\xi\right)\), smoothing operator \(R\left(f;t\right):=R^{\left[\mathrm{j}^{*}\right]}\left(f;t\right)+\mathrm{Op}^{BW}\left[P^{\left[\mathrm{j}^{*}\right]}\left(f;t,x,\xi\right)\right]\), and defining \(\underline{\rho}\left(N,\alpha\right):=\rho_{\mathrm{j}_{*}}\) and \(\underline{K}^{\prime}\left(\rho,\alpha\right):=\mathrm{j}^{*}\).
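As a concrete illustration of this counting (with sample values, chosen only to fix ideas): for \(\alpha=\frac{3}{2}\) one has \(\delta=\frac{1}{2}\), hence \(\mathrm{j}_{*}=\lceil 1/\delta\rceil+1=3\), and, with \(N=1\), the recursion of Lemma 5.5 gives \(\rho_{1}=1\), \(\rho_{2}=\frac{3}{2}\), \(\rho_{3}=\rho_{\mathrm{j}_{*}}=\frac{3}{2}\); one then needs of the order of \(2\left(\rho-\rho_{\mathrm{j}_{*}}\right)\) further steps of Lemma 5.6, so that \(\mathrm{j}^{*}\), and hence \(\underline{K}^{\prime}\) in Proposition 5.2, grows linearly in \(\rho\) and depends on \(\alpha\) through \(\delta\).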
**Birkhoff normal form step.** We now perform one step of Birkhoff normal form to cancel out the quadratic term in (5.4) which, since \(\alpha_{0}\left(f\right)\) and \(\mathsf{H}_{\alpha}\left(f;t,\xi\right)\) vanish quadratically at \(f=0\), comes only from \(R\left(f;t\right)g\).
By Proposition 5.2 and using Proposition 2.21 we first rewrite (5.4) as
\[\partial_{t}g+\mathrm{i}\omega_{\alpha}\left(D\right)g+\mathrm{i}\mathrm{Op}^{ BW}\left[d\left(f;t,\xi\right)\right]g=R_{1}\left(f\right)g+R_{\geq 2}\left(f;t\right)g \tag{5.28}\]
where
* \(d\left(f;t,\xi\right):=\alpha_{0}\left(f\right)\omega_{\alpha}\left(\xi\right) +\xi\)\(\mathrm{H}_{\alpha}\left(f;t,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,\underline{K}^{\prime},2}^{\alpha}\left[\epsilon_{0},N\right]\) independent of \(x\), with imaginary part \(\mathrm{Im}\,d\left(f;t,\xi\right)\) in \(\Sigma\Gamma_{K,\underline{K}^{\prime},2}^{0}\left[\epsilon_{0},N\right]\);
* \(R_{1}\left(f\right)\) is a real homogeneous smoothing operator in \(\hat{\mathcal{R}}_{1}^{-(\rho-\underline{\rho})}\), that we expand (cf. (2.22)) as \[R_{1}\left(f\right)v=\sum_{\begin{subarray}{c}n,j,k\in\mathbb{Z}\setminus\{0\},\\ n+j=k\end{subarray}}\left(r_{1}\right)_{n,j,k}f_{n}v_{j}e^{\mathrm{i}kx}\,, \qquad\left(r_{1}\right)_{n,j,k}\in\mathbb{C}\,,\] (5.29) and \(R_{\geq 2}\left(f;t\right)\) is a real smoothing operator in \(\Sigma\hat{\mathcal{R}}_{K,\underline{K}^{\prime},2}^{-(\rho-\underline{\rho})}\left[\epsilon_{0},N\right]\).
In order to remove \(R_{1}\left(f\right)\) we conjugate (5.28) with the flow
\[\partial_{\tau}\Phi_{Q}^{\tau}\left(f\right)=Q\left(f\right)\Phi_{Q}^{\tau}\left(f\right)\,,\qquad\Phi_{Q}^{0}\left(f\right)=\mathrm{Id}\,, \tag{5.30}\]
generated by the \(1\)-homogenous smoothing operator
\[Q\left(f\right)v=\sum_{\begin{subarray}{c}n,j,k\in\mathbb{Z}\setminus\{0\},\\ n+j=k\end{subarray}}q_{n,j,k}f_{n}v_{j}e^{\mathrm{i}kx},\quad q_{n,j,k}:=\frac{-(r_{1})_{n,j,k}}{\mathrm{i}\left[\omega_{\alpha}(k)-\omega_{\alpha}\left(j\right)-\omega_{\alpha}(n)\right]}\,, \tag{5.31}\]
which is well-defined by Lemma 3.5. Note also that by (3.30) and since (cf. (2.24), (2.23))
\[\overline{(r_{1})_{n,j,k}}=(r_{1})_{-n,-j,-k},\qquad|(r_{1})_{n,j,k}|\leq C\frac{\max_{2}\left(|n|,|j|\right)^{\mu}}{\max\left(|n|,|j|\right)^{\rho-\underline{\rho}}}\,, \tag{5.32}\]
also \(Q\left(f\right)\) is a real smoothing operator in \(\dot{\mathcal{R}}_{1}^{-\rho+\underline{\rho}}\), as \(R_{1}\left(f\right)\).
**Lemma 5.7** (Birkhoff step).: _If \(g\) solves (5.28) then the variable \(y:=\Phi_{Q}^{1}\left(f\right)g\) solves the equation (5.2)._
Proof.: To conjugate (5.28) we apply a Lie expansion (similarly to Proposition B.2). We have
\[-\mathrm{i}\Phi_{Q}^{1}\left(f\right)\omega_{\alpha}(D)\left( \Phi_{Q}^{1}\right)^{-1} =-\mathrm{i}\omega_{\alpha}(D)+\left[Q\left(f\right),-\mathrm{i} \omega_{\alpha}(D)\right]\] \[+\int_{0}^{1}(1-\tau)\Phi_{Q}^{\tau}\left(f\right)\left[Q\left(f \right),\,\left[Q\left(f\right),\,-\mathrm{i}\omega_{\alpha}(D)\right]\right] \left(\Phi_{Q}^{\tau}\left(f\right)\right)^{-1}d\tau\,. \tag{5.33}\]
Using that \(Q\left(f\right)\) belongs to \(\dot{\mathcal{R}}_{1}^{-\rho+\underline{\rho}}\) the term in (5.33) is a smoothing operator in \(\Sigma\dot{\mathcal{R}}_{K,0,2}^{-\rho+\underline{\rho}+\alpha}\left[\epsilon_{0},N\right]\). Similarly we obtain
\[-\mathrm{i}\Phi_{Q}^{1}\left(f\right)\mathrm{Op}^{BW}\left[d\left(f;t,\xi \right)\right]\left(\Phi_{Q}^{1}\left(f\right)\right)^{-1}=-\mathrm{i}\mathrm{ Op}^{BW}\left[d\left(f;t,\xi\right)\right] \tag{5.34}\]
up to a smoothing operator in \(\Sigma\dot{\mathcal{R}}_{K,\underline{K}^{\prime},2}^{-\rho+\underline{\rho}+\alpha}\left[\epsilon_{0},N\right]\), and
\[\Phi_{Q}^{1}\left(f\right)\left(R_{1}\left(f\right)+R_{2\geq}\left(f;t\right) \right)\left(\Phi_{Q}^{1}\left(f\right)\right)^{-1}=R_{1}\left(f\right) \tag{5.35}\]
plus a smoothing operator in \(\Sigma\dot{\mathcal{R}}_{K,\underline{K}^{\prime},2}^{-\rho+\underline{\rho}+\alpha}\left[\epsilon_{0},N\right]\). Next we consider the contribution coming from the conjugation of \(\partial_{t}\). By a Lie expansion (similarly to Proposition B.2) we get
\[\partial_{t}\Phi_{Q}^{1}\left(f\right)\left(\Phi_{Q}^{1}\left(f \right)\right)^{-1} =\partial_{t}Q\left(f\right)\] \[+\frac{1}{2}\left[Q\left(f\right),\partial_{t}Q\left(f\right) \right]+\frac{1}{2}\int_{0}^{1}(1-\tau)^{2}\Phi_{Q}^{\tau}\left(f\right)\left[ Q\left(f\right),\,\left[Q\left(f\right),\,\partial_{t}Q\left(f\right)\right] \right]\left(\Phi_{Q}^{\tau}\left(f\right)\right)^{-1}d\tau\,. \tag{5.36}\]
Since Eq. (4.1) can be written as \(\partial_{t}f=-\mathrm{i}\omega_{\alpha}(D)f+M\left(f\right)f\) where \(M\left(f\right)\) is a real \(\alpha\)-operator in \(\Sigma\dot{\mathcal{M}}_{K,0,1}^{\alpha}\) by Remarks 2.18 and 2.11, we deduce by Proposition 2.23 that
\[\partial_{t}Q\left(f\right)=Q\left(-\mathrm{i}\omega_{\alpha}(D)f+M\left(f \right)f\right)=Q\left(-\mathrm{i}\omega_{\alpha}(D)f\right) \tag{5.37}\]
up to a smoothing operator in \(\Sigma\dot{\mathcal{R}}_{K,0,2}^{-\rho+\underline{\rho}+\alpha}\left[\epsilon_{0},N\right]\). Since \(Q\left(-\mathrm{i}\omega_{\alpha}(D)f\right)\) is in \(\dot{\mathcal{R}}_{1}^{-\rho+\underline{\rho}+\alpha}\) we have that the line (5.36) belongs to \(\Sigma\dot{\mathcal{R}}_{K,0,2}^{-\rho+\underline{\rho}+\alpha}\left[\epsilon_{0},N\right]\).
We now prove that \(Q\left(f\right)\) solves the homological equation
\[Q\left(-\mathrm{i}\omega_{\alpha}(D)f\right)+\left[Q\left(f\right),\,-\mathrm{i }\omega_{\alpha}(D)\right]+R_{1}\left(f\right)=0\,. \tag{5.38}\]
Writing (5.31) as \(Q\left(f\right)v=\sum_{k,j\in\mathbb{Z}\setminus 0}\left[Q\left(f\right) \right]_{k}^{j}v_{j}e^{\mathrm{i}kx}\) with \(\left[Q\left(f\right)\right]_{k}^{j}:=q_{n,j,k}f_{n}\), we see that the homological equation (5.38) amounts to \(\left[Q(-\mathrm{i}\omega_{\alpha}(D)f)\right]_{k}^{j}+\left[Q\left(f\right) \right]_{k}^{j}\left(\mathrm{i}\omega_{\alpha}(k)-\mathrm{i}\omega_{\alpha} \left(j\right)\right)+\left[R_{1}\left(f\right)\right]_{k}^{j}=0\), for any \(j,k\in\mathbb{Z}\setminus\left\{0\right\}\), and then, recalling (5.29), to \(q_{n,j,k}\mathrm{i}\left[\omega_{\alpha}(k)-\omega_{\alpha}(j)-\omega_{\alpha} (n)\right]+\left(r_{1}\right)_{n,j,k}=0\). This proves (5.38).
In conclusion, by (5.33), (5.34), (5.35), (5.36), (5.37) and (5.38) we deduce (5.2) (after renaming \(\rho\)). The bound (5.3) follows from the standard theory of ODEs in Banach spaces applied to the flow (5.30).
In view of Lemma 5.7, Proposition 5.1 follows by defining \(\underline{\Psi}\left(f;t\right):=\Phi_{Q}^{1}\left(f\right)\circ\Psi\left(f;t\right)\), where \(\Psi\left(f;t\right)\) is defined in Proposition 5.2 and \(\Phi_{Q}^{1}\left(f\right)\) in (5.31). We now easily deduce Theorem 1.1.
Proof of Theorem 1.1. The following result, analogous to [10, Lemma 8.2], enables us to control the time derivatives \(\left\|\partial_{t}^{k}f(t)\right\|_{s-k\alpha}\) of a solution \(f(t)\) of (4.1) via \(\left\|f(t)\right\|_{s}\).
**Lemma 5.8**.: _Let \(K\in\mathbb{N}\). There exists \(s_{0}>0\) such that for any \(s\geq s_{0}\), any \(\epsilon\in\left(0,\overline{\epsilon_{0}}(s)\right)\) small, if \(f\) belongs to \(B_{s_{0},R}^{0}\left(I;\epsilon\right)\cap C_{*}^{0}\left(I;H_{0}^{s}\left( \mathbb{T};\mathbb{R}\right)\right)\) and solves (4.1) then \(f\in C_{*}^{K}\left(I;H_{0}^{s}\left(\mathbb{T};\mathbb{R}\right)\right)\) and there exists \(C_{1}:=C_{1}\left(s,\alpha,K\right)\geq 1\) such that \(\left\|f\left(t\right)\right\|_{s}\leq\left\|f\left(t\right)\right\|_{K,s} \leq C_{1}\left\|f\left(t\right)\right\|_{s}\) for any \(t\in I\)._
The first step is to choose the parameters in Proposition 5.1. Let \(N:=1\). In the statement of Proposition 5.1 we fix \(\rho:=\rho(1,\alpha)+\alpha\) and \(K:=\underline{K^{\prime}}\left(\rho,\alpha\right)\). Then Proposition 5.1 gives us \(s_{0}>0\). For any \(s\geq s_{0}\) we fix \(0<\epsilon_{0}\leq\min\left\{\underline{\epsilon_{0}}\left(s\right),\overline{\epsilon_{0}}\left(s\right)\right\}\) where \(\underline{\epsilon_{0}}\left(s\right)\) is defined in Proposition 5.1 and \(\overline{\epsilon_{0}}\left(s\right)\) in Lemma 5.8.
The key corollary of Proposition 5.1 is the following energy estimate where by the time-reversibility of \(\alpha\)-SQG we may restrict to positive times \(t>0\).
**Lemma 5.9** (Quartic energy estimate).: _Let \(f(t)\) be a solution of equation (4.1) in \(B_{s_{0},\mathbb{R}}^{K}\left(I;\epsilon_{0}\right)\cap C_{*}^{K}\left(I;H_{0}^{s}\left(\mathbb{T};\mathbb{R}\right)\right)\). Then there exists \(\tilde{C}_{2}\left(s,\alpha\right)>1\) such that_
\[\left\|f\left(t\right)\right\|_{s}^{2}\leq\tilde{C}_{2}\left(s,\alpha\right) \left(\left\|f\left(0\right)\right\|_{s}^{2}+\int_{0}^{t}\left\|f\left(\tau \right)\right\|_{s}^{4}\mathrm{d}\tau\right),\quad\forall 0<t<T\,. \tag{5.39}\]
Proof.: The variable \(y:=\underline{\Psi}\left(f;t\right)f\) defined in Proposition 5.1 solves the equation (5.2), where \(\mathrm{Im}\,d\left(f;t,\xi\right)\) is a symbol in \(\Gamma_{K,K^{\prime},2}^{0}\left[\epsilon_{0}\right]\) and, being \(x\)-independent, \(\mathrm{Op}^{BW}\left[d\left(f;t,\xi\right)\right]\) commutes with \(\langle D\rangle^{s}\). Furthermore, for the above choice of \(\rho\), the operator \(R_{\geq 2}\left(f;t\right)\) is in \(\mathcal{R}_{K,K^{\prime},2}^{0}\left[\epsilon_{0}\right]\). Then, by (2.25) and Lemmata 2.13 and 5.8, we deduce
\[\left\|y\left(t\right)\right\|_{s}^{2}\leq\left\|y\left(0\right)\right\|_{s}^ {2}+\tilde{C}_{1}\left(s,\alpha\right)\int_{0}^{t}\left\|y\left(\tau\right) \right\|_{s}^{4}\mathrm{d}\tau\,,\quad\forall 0<t<T\,,\]
and, by (5.3), we deduce (5.39).
The energy estimate (5.39), (1.10) and the local existence result in [21] (which amounts to a local existence result for the equation (4.1)), imply, by a standard bootstrap argument, Theorem 1.1.
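For the reader's convenience we record the bootstrap step in a standard, schematic form (the constants below are illustrative): assuming a datum of size \(\left\|f\left(0\right)\right\|_{s}\leq\epsilon\) (cf. (1.10)), as long as \(\sup_{\tau\in[0,t]}\left\|f\left(\tau\right)\right\|_{s}^{2}\leq 4\tilde{C}_{2}\epsilon^{2}\) the estimate (5.39) gives

\[\left\|f\left(t\right)\right\|_{s}^{2}\leq\tilde{C}_{2}\left(\epsilon^{2}+16\,\tilde{C}_{2}^{2}\,\epsilon^{4}\,t\right)<4\tilde{C}_{2}\epsilon^{2}\qquad\text{whenever}\qquad t<\frac{3}{16\,\tilde{C}_{2}^{2}\,\epsilon^{2}}\,,\]

so the a priori bound is strictly self-improving and a continuity-in-time argument yields a solution, with the same bound, on a time interval of length of order \(\epsilon^{-2}\).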
**Acknowledgments.** We thank A. Maspero, R. Montalto, and E. Murgante for many discussions. M.B. and S.C. were supported by PRIN 2020XR3SFL, _Hamiltonian and Dispersive PDEs._ F. G. was partially supported by the MICINN (Spain), grants EUR2020-112271 and PID2020-114703GB-100, by RED2022-134784-T funded by MCIN/AEI/10.13039/501100011033, by the Junta de Andalucia, grant P20-00566, and by the Fundacion de Investigacion de la Universidad de Sevilla, grant FIUS23/0207. F. G. acknowledges support from IMAG, funded by MICINN through Maria de Maeztu Excellence Grant CEX2020-001105-M/AEI/10.13039/501100011033. S.S. is supported by PRIN 2022HSSYPN - _Turbulent Effects vs Stability in Equations from Oceanography_, MTM PID2022-141187NB-100 and FIUS23/0207.
## Appendix A Proof of Equation (4.110)
We now prove the identity Eq. (4.110), where the functions \(A_{\alpha,0},A_{\alpha,1}\) are defined in (4.108), (4.106), and \(K_{\alpha}^{j,l},\ j=1,2,3,\ l=0,1\), is the coefficient of order \(l\) in the Taylor expansion in \(z\) of the function \(z\mapsto K_{\alpha,z}^{j}\left(\frac{\Delta_{z}f}{r^{2}}\right)\), where the kernel functions \(K_{\alpha,z}^{j}\left(\cdot\right)\) are defined in (4.14), (4.16), (4.21). The verification of Eq. (4.110) can be _automated_: the following short program in SageMath, a Python-based, open-source computer algebra system, verifies Eq. (4.110).
G2(X,z,a) = (1/sqrt(1-2*X)) / ((2*(1-X-sqrt(1-2*X)*cos(z)))^(a/2))
K2(X,z,a) = G2(2*X*sin(z/2), z, a) * (2*(1-cos(z)))^(a/2)
DXG2(X,z,a) = diff(G2(X,z,a), X)
K3(X,z,a) = DXG2(2*X*sin(z/2), z, a) * (2*(1-cos(z)))^(a/2) * sin(z)
expansionf_K1(x,z,a) = taylor(K1( Deltaf(x,z) / (1+2*f(x)), z, a ), z, 0, 1)
expansionf_K2(x,z,a) = taylor(K2( Deltaf(x,z) / (1+2*f(x)), z, a ), z, 0, 1)
expansionf_K3(x,z,a) = taylor(K3( Deltaf(x,z) / (1+2*f(x)), z, a ), z, 0, 1)
C10(x,a) = expansionf_K1.coefficient(z, n=0)
C11(x,a) = expansionf_K1.coefficient(z, n=1)
C20(x,a) = expansionf_K2.coefficient(z, n=0)
C21(x,a) = expansionf_K2.coefficient(z, n=1)
C30(x,a) = expansionf_K3.coefficient(z, n=0)
C31(x,a) = expansionf_K3.coefficient(z, n=1)
A0(x,a) = ((1+2*f(x))^(-a/2)) * ( C10(x,a) + (a-1)*C20(x,a) + (diff(f(x),x)/(1+2*f(x))) * C30(x,a) )
A1(x,a) = ((1+2*f(x))^(-a/2)) * ( C11(x,a) + (a-2)*C21(x,a) + (1/(1+2*f(x))) * ( diff(f(x),x)*C31(x,a) - diff(f(x),x,x)*C30(x,a) ) )
bool(A1(x,a) + 1/2 * diff(A0(x,a), x) == 0)
Here we comment the lines of code above.
1. Several symbolic variables are defined, so that (x, X, z, a) correspond to \(x,X,z,\alpha\) in the notation of the present manuscript. The variable a, which represents the parameter \(\alpha\), is restricted to the range \((0,2)\).
2. We define \(f\) as an implicit function depending on the variable \(x\) only, next we define Deltaf as the periodic finite difference \(\Delta_{z}f\) defined in (4.2).
3. The function G1 is the function \(G^{1}_{\alpha,z}\) defined in (4.6), the function DXG1 is the function \(\left(G^{1}_{\alpha,z}\right)^{\prime}\) defined in (4.18) and finally we define K1 as the function \(K^{1}_{\alpha,z}\) defined in Eq. (4.14).
4. We perform the same computations as for \(K^{1}_{\alpha,z}\) for the kernels \(K^{2}_{\alpha,z}\), \(K^{3}_{\alpha,z}\) defined in Eqs. (4.16) and (4.21).
5. The asymptotic expansion in Eq. (4.23) is computed for the three kernels.
6. We ask the computer to extract the coefficients of the expansions in Eq. (4.23) so that \(Cj\,l\,(x,\alpha)=K^{j,l}_{\alpha}\left(f;x\right)\), for any \(j=1,2,3,\ l=0,1\).
7. We define the functions A0 and A1 as in Eqs. (4.106) and (4.108).
8. The last line, line 23, is a truth statement: it asks the computer whether, using algebraic simplifications, it can prove that Eq. (4.110) holds.
## Appendix B Conjugation of paradifferential operators under flows
The main results of this section concern transformation rules of paradifferential operators of the form \(\partial_{x}\circ\mathrm{Op}^{BW}\left[\alpha\right]\) under the flow generated by paradifferential operators which are Hamiltonian, or Hamiltonian up to order zero.
**Proposition B.1**.: _Let \(q\in\mathbb{N},\ K^{\prime}\leq K,\ N\in\mathbb{N}\) with \(q\leq N\), \(\epsilon_{0}>0\) and \(\rho\gg N\). Let \(\beta\left(f;t,x\right)\) be a function in \(\Sigma\mathcal{F}^{\mathbb{H}}_{K,K^{\prime},1}\left[\epsilon_{0},N\right]\) and \(\Phi_{B}\left(f,\tau\right)\) be the flow generated by the Hamiltonian operator \(B\left(f,\tau\right)\) defined in (5.8)._
**i (Conjugation of a paradifferential operator)**: _Let \(\mathrm{a}\left(f;t,x,\xi\right)\) be a symbol in \(\Sigma\Gamma^{m}_{K,K^{\prime},q}\left[\epsilon_{0},N\right]\). Then_ \[\Phi_{B}\left(f,1\right)\circ\partial_{x}\circ\mathrm{Op}^{BW}\left[\mathrm{a}\left(f;t,x,\xi\right)\right]\circ\Phi_{B}\left(f,1\right)^{-1}=\partial_{x}\circ\mathrm{Op}^{BW}\left[a_{0}\left(f,1;t,x,\xi\right)+P\left(f;t,x,\xi\right)\right]+R\left(f;t\right)\] (B.1)
_where_ \[a_{0}\left(f,\tau;t,x,\xi\right):=\left(1+\partial_{y}\breve{\beta}\left(f,\tau;t, y\right)\right)\,\mathrm{a}\left(f;t,y,\xi\left(1+\partial_{y}\breve{\beta} \left(f,\tau;t,y\right)\right)\right)\Big{|}_{y=x+\tau\beta\left(f;t,x\right)}\] (B.2) _is a symbol in_ \(\Sigma\Gamma_{K,K^{\prime},q}^{m}\left[e_{0},N\right]\)_,_ \(P\left(f;t,x,\xi\right)\) _is a symbol in_ \(\Sigma\Gamma_{K,K^{\prime},q+1}^{m-2}\left[e_{0},N\right]\) _and_ \(R\left(f;t\right)\) _is a smoothing operator in_ \(\Sigma\breve{\mathcal{R}}_{K,K^{\prime},q+1}^{-\rho+m+1+N}\left[e_{0},N\right]\)_._
**ii (Conjugation of \(\partial_{t}\))**: _There exists a function_ \(V\left(f;t,x\right)\) _in_ \(\Sigma\mathcal{F}_{K,K^{\prime}+1,1}^{\mathfrak{R}}\left[e_{0},N\right]\) _and a smoothing operator_ \(R\left(f;t\right)\) _in_ \(\Sigma\breve{\mathcal{R}}_{K,K^{\prime}+1,1}^{-\rho}\left[e_{0},N\right]\) _such that_
\[\Phi_{B}\left(f,1\right)\circ\left(\partial_{t}\Phi_{B}\left(f,1\right)^{-1} \right)=\partial_{x}\circ\mathrm{Op}^{BW}\left[V\left(f;t,x\right)\right]+R \left(f;t\right).\] (B.3)
**iii (Conjugation of a smoothing operator)**: _If_ \(R\left(f;t\right)\) _is a smoothing operator in_ \(\Sigma\breve{\mathcal{R}}_{K,K^{\prime},q}^{-\rho}\left[e_{0},N\right]\) _then the composed operator_ \(\Phi_{B}\left(f,1\right)\circ R\left(f;t\right)\circ\Phi_{B}\left(f,1\right)^ {-1}\) _is in_ \(\Sigma\breve{\mathcal{R}}_{K,K^{\prime},q}^{-\rho+N}\left[e_{0},N\right]\)_._
We also prove an analogous result when the paradifferential operator which generates the flow has order strictly less than \(1\).
**Proposition B.2** (Lie expansions).: _Let \(q\in\mathbb{N},\,K^{\prime}\leq K,\,N\in\mathbb{N}\) with \(q\leq N\), \(\epsilon_{0}>0\) and \(\rho\gg N\). Given a symbol \(w:=w\left(f;t,x,\xi\right)\) satisfying_
\[w\left(f;t,x,\xi\right)\in\Sigma\Gamma_{K,K^{\prime},1}^{-\mathrm{d}}\left[e_{0},N\right],\,\mathrm{d}>0,\quad\mathrm{Im}\,w\left(f;t,x,\xi\right)\in\Gamma_{K,K^{\prime},1}^{-\max\left\{1,\mathrm{d}\right\}}\left[e_{0},N\right],\] (B.4)
_and (2.20) and denote \(\Phi_{W}\left(f,\tau\right)\) the flow generated by_
\[\partial_{\tau}\Phi_{W}\left(f,\tau\right)=\partial_{x}\circ\mathrm{Op}^{BW} \left[w(f;t,x,\xi)\right]\,\Phi_{W}\left(f,\tau\right),\qquad\Phi_{W}\left(0 \right)=\mathrm{Id}\,.\] (B.5)
**i (Conjugation of a paradifferential operator)**: _Let \(\mathrm{a}:=\mathrm{a}\left(f;t,x,\xi\right)\) be a symbol in \(\Sigma\Gamma_{K,K^{\prime},q}^{m}\left[e_{0},N\right]\). Then_
\[\Phi_{W}\left(f,1\right)^{-1}\circ\partial_{x}\circ\mathrm{Op}^{ BW}\left[\mathrm{a}\left(f;t,x,\xi\right)\right]\circ\Phi_{W}\left(f,1\right)=\] \[\partial_{x}\circ\mathrm{Op}^{BW}\left[\mathrm{a}\right]-\left[ \partial_{x}\circ\mathrm{Op}^{BW}\left[w\right],\,\partial_{x}\circ\mathrm{Op }^{BW}\left[\mathrm{a}\right]\right]+\partial_{x}\circ\mathrm{Op}^{BW}\left[P \left(f;t,x,\xi\right)\right]+R\left(f;t\right)\] (B.6)
_where \(P\left(f;t,x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,K,q+2}^{m-2\mathrm{d}}\left[e_{0},N\right]\), and \(R\left(f;t\right)\) is a smoothing operator in \(\Sigma\breve{\mathcal{R}}_{K,K^{\prime},q+2}^{-\rho}\left[e_{0},N\right]\). If \(a,w\) are real and even in \(\xi\) then \(\left[\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right],\,\partial_{x}\circ \mathrm{Op}^{BW}\left[\mathrm{a}\right]\right]\) is Hamiltonian and \(P\) is real and even in \(\xi\)._
**ii (Conjugation of \(\partial_{t}\))**: _There exists a symbol_ \(T\left(f;t,x,\xi\right)\) _in_ \(\Sigma\Gamma_{K,K^{\prime}+1,1}^{-\mathrm{d}}\left[e_{0},N\right]\) _satisfying (_2.20_), and a smoothing operator_ \(R\left(f;t\right)\) _in_ \(\Sigma\breve{\mathcal{R}}_{K,K^{\prime}+1,2}^{-\rho}\left[e_{0},N\right]\) _such that_
\[-\partial_{t}\Phi_{W}\left(f,1\right)^{-1}\circ\Phi_{W}\left(f,1\right)= \partial_{x}\circ\mathrm{Op}^{BW}\left[T\left(f;t,x,\xi\right)\right]+R\left(f; t\right).\] (B.7)
_If_ \(w\) _is real and even in_ \(\xi\) _then_ \(\partial_{x}\circ\mathrm{Op}^{BW}\left[T\left(f;t,x,\xi\right)\right]\) _is Hamiltonian, i.e._ \(T\) _is real and even in_ \(\xi\)_._
**iii (Conjugation of a smoothing operator)**: _If_ \(R\left(f;t\right)\) _is a smoothing operator in_ \(\Sigma\breve{\mathcal{R}}_{K,K^{\prime},q}^{-\rho}\left[e_{0},N\right]\) _then the composed operator_ \(\Phi_{W}\left(f,1\right)\circ R\left(f;t\right)\circ\Phi_{W}\left(f,1\right)^ {-1}\) _is in_ \(\Sigma\breve{\mathcal{R}}_{K,K^{\prime},q}^{-\rho+N\mathrm{max}\left\{0,\left(1- \mathrm{d}\right)\right\}}\left[e_{0},N\right]\)_._
The rest of this section is devoted to the proof of Propositions B.1 and B.2.
#### Proof of Proposition B.1
The proof of Propositions B.1 is inspired by the Egorov type analysis in [5, Section 3.5]. The difference is that we highlight the Hamiltonian structure in (B.1) and (B.3) of the conjugated operators.
For simplicity we do not track the dependence of \(\beta,b\) and \(\Phi_{B}\) on the variable \(f\), nor on \(t\), and denote \(\beta_{x}\left(x\right):=\partial_{x}\left(\beta\left(f;t,x\right)\right)\), \(b_{x}\left(\tau;t,x\right):=\partial_{x}\left(b\left(f,\tau;t,x\right)\right)\) and \(\Phi_{B}\left(\tau\right):=\Phi_{B}\left(f,\tau\right)\). In the sequel \(\partial_{x}^{-1}\) is the Fourier multiplier with symbol \((\mathrm{i}\xi)^{-1}\) that maps \(H_{0}^{s}\) onto \(H_{0}^{s+1}\) for any \(s\in\mathbb{R}\).
Proof of item 1: conjugation of a paradifferential operator
The conjugated operator
\[\mathcal{P}(\tau):=\Phi_{B}(\tau)\circ\partial_{x}\circ\mathrm{Op}^{BW}\left[ \mathsf{a}\right]\circ\Phi_{B}(\tau)^{-1}\in\mathcal{L}\left(H_{0}^{s};H_{0}^{s -1-m}\right),\quad\forall s\in\mathbb{R},\] (B.8)
satisfies \(\mathcal{P}(0)=\partial_{x}\circ\mathrm{Op}^{BW}\left[\mathsf{a}\right]\), and using that \(\partial_{\tau}\left(\Phi_{B}(\tau)^{-1}\right)=-\Phi_{B}(\tau)^{-1}\circ \partial_{\tau}\Phi_{B}(\tau)\circ\Phi_{B}(\tau)^{-1}\), it solves the Heisenberg equation
\[\partial_{\tau}\mathcal{P}(\tau)=\left[B(\tau),\,\mathcal{P}(\tau)\right], \qquad\mathcal{P}(0)=\partial_{x}\circ\mathrm{Op}^{BW}\left[\mathsf{a}\right].\] (B.9)
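Explicitly, since \(\partial_{\tau}\Phi_{B}(\tau)=B(\tau)\,\Phi_{B}(\tau)\) (the same relation is used again below via Duhamel),

\[\partial_{\tau}\mathcal{P}(\tau)=\big{(}\partial_{\tau}\Phi_{B}(\tau)\big{)}\circ\partial_{x}\circ\mathrm{Op}^{BW}\left[\mathsf{a}\right]\circ\Phi_{B}(\tau)^{-1}-\Phi_{B}(\tau)\circ\partial_{x}\circ\mathrm{Op}^{BW}\left[\mathsf{a}\right]\circ\Phi_{B}(\tau)^{-1}\circ\partial_{\tau}\Phi_{B}(\tau)\circ\Phi_{B}(\tau)^{-1}=B(\tau)\,\mathcal{P}(\tau)-\mathcal{P}(\tau)\,B(\tau)\,.\]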
**Lemma B.3**.: _The operator \(A(\tau):=\partial_{x}^{-1}\circ\mathcal{P}(\tau)\in\mathcal{L}\left(H_{0}^{s };H_{0}^{s-m}\right)\) solves_
\[\left\{\begin{aligned} &\partial_{\tau}\,A(\tau)=\mathsf{i} \left[\mathrm{Op}^{BW}\left[b(\tau;x)\xi\right],\,A(\tau)\right]-\frac{1}{2} \left(\mathrm{Op}^{BW}\left[b_{x}(\tau;x)\right]A(\tau)+A(\tau)\,\mathrm{Op}^{ BW}\left[b_{x}(\tau;x)\right]\right)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+R^{ \prime}(\tau)\,\,A(\tau)-A(\tau)\,\,R(\tau)\end{aligned}\right.\] (B.10)
_where \(R(\tau),\,R^{\prime}(\tau)\) are smoothing operators in \(\Sigma\mathcal{R}_{K,K^{\prime},1}^{-\rho}\), uniformly in \(\left|\tau\right|\leq 1\), preserving the zero-average subspaces._
Proof.: By (5.8) and Proposition 2.21 we have
\[B(\tau)-\mathrm{Op}^{BW}\left[\mathsf{i}\,b(\tau;x)\,\xi+\tfrac{ 1}{2}b_{x}(\tau;x)\right]=R(\tau)\] (B.11) \[\partial_{x}^{-1}\circ B(\tau)\circ\partial_{x}=\mathrm{Op}^{BW }\left[b(\tau;x)\right]\circ\partial_{x}=\mathrm{Op}^{BW}\left[\mathsf{i}b( \tau;x)\,\xi-\tfrac{1}{2}b_{x}(\tau;x)\right]+R^{\prime}(\tau)\] (B.12)
where \(R(\tau),R^{\prime}(\tau)\) are smoothing operators in \(\Sigma\mathcal{R}_{K,K^{\prime},1}^{-\rho}\) preserving the zero-average subspaces. Then, by (B.9), (B.11), (B.12) we get
\[\partial_{\tau}A(\tau) =\partial_{x}^{-1}\circ B(\tau)\circ\partial_{x}\circ A(\tau)-A( \tau)\circ B(\tau)\] \[=\left(\mathrm{Op}^{BW}\left[\mathsf{i}\,b(\tau;x)\,\xi-\tfrac{ 1}{2}b_{x}(\tau;x)\right]\right)A(\tau)-A(\tau)\left(\mathrm{Op}^{BW}\left[ \mathsf{i}\,b(\tau;x)\,\xi+\tfrac{1}{2}b_{x}(\tau;x)\right]\right)+R^{\prime}( \tau)\,A(\tau)-A(\tau)\,R(\tau)\]
proving (B.10).
We now look for an approximate solution of (B.10) of the form
\[A^{(J)}(\tau)=\sum_{j=0}^{J}\mathrm{Op}^{BW}\left[a_{j}(\tau)\right],\qquad a _{0}(\tau)\in\Sigma\Gamma_{K,K^{\prime},q}^{m}\left[\epsilon_{0},N\right], \quad a_{j}(\tau)\in\Sigma\Gamma_{K,K^{\prime},q+1}^{m-2j}\left[\epsilon_{0}, N\right],\;\forall j=1,\ldots,J.\] (B.13)
We use the following asymptotic expansions, derived from Proposition 2.21 and (2.28).
**Lemma B.4**.: _Let \(a\) be a symbol in \(\Sigma\Gamma_{K,K^{\prime},q}^{m}\left[\epsilon_{0},N\right]\). Then the commutator_
\[\left[\mathrm{Op}^{BW}\left[\mathsf{i}\,b(\tau;x)\xi\right],\mathrm{Op}^{BW} \left[a\right]\right]=\mathrm{Op}^{BW}\left[\left\{b(\tau;x)\xi,\,a\right\} \right]+\mathrm{Op}^{BW}\left[r_{-3}(b(\tau),a)\right]+R(\tau)\]
_with symbols_
\[\left\{b(\tau;x)\xi,\,a\right\}\in\Sigma\Gamma_{K,K^{\prime},q+1}^{m}\left[ \epsilon_{0},N\right],\qquad r_{-3}(b(\tau),a)\in\Sigma\Gamma_{K,K^{\prime},q+1 }^{m-2}\left[\epsilon_{0},N\right],\]
_and a smoothing operator \(R(\tau)\) in \(\Sigma\mathcal{R}_{K,K^{\prime},q+1}^{-\rho+m+1}\left[\epsilon_{0},N\right]\), uniformly in \(\tau\). Moreover_
\[\tfrac{1}{2}\mathrm{Op}^{BW}\left[b_{x}(\tau;x)\right]\mathrm{Op}^{BW}\left[ a\right]+\tfrac{1}{2}\mathrm{Op}^{BW}\left[a\right]\mathrm{Op}^{BW}\left[b_{x}( \tau;x)\right]=\mathrm{Op}^{BW}\left[b_{x}(\tau;x)\,a+r_{-2}(b(\tau),a)\right]+R (\tau)\]
_where \(r_{-2}(b(\tau),a)\) is a symbol in \(\Sigma\Gamma_{K,K^{\prime},q+1}^{m-2}\left[\epsilon_{0},N\right]\), and \(R(\tau)\) is a smoothing operator in \(\Sigma\mathcal{R}_{K,K^{\prime},q+1}^{-\rho+m}\left[\epsilon_{0},N\right]\)._
We shall also use the following lemma concerning solutions of a transport equation.
**Lemma B.5**.: _Let \(W\left(f,\tau;x,\xi\right)\) be a symbol in \(\Sigma\Gamma_{K,K^{\prime},q}^{m}\left[\epsilon_{0},N\right]\) uniformly in \(\left|\tau\right|\leq 1\). Then the unique solution of_
\[\left\{\begin{aligned} &\partial_{\tau}Q\left(f,\tau;x,\xi\right)=\left\{b\left(f,\tau;x\right)\xi,\,Q\left(f,\tau;x,\xi\right)\right\}-b_{x}\left(f,\tau;x\right)Q\left(f,\tau;x,\xi\right)+W\left(f,\tau;x,\xi\right)\\ &\left.Q\left(f,\tau;x,\xi\right)\right|_{\tau=0}=Q_{0}\left(f;x,\xi\right)\in\Sigma\Gamma_{K,K^{\prime},q}^{m}\left[\epsilon_{0},N\right]\end{aligned}\right.\] (B.14)
_has the form_
\[Q\left(f,\tau;x,\xi\right) =\left(1+\tilde{\beta}_{y}\left(f,\tau;\ y\right)\right)Q_{0}\left(f;\ y,\ \xi\left(1+\tilde{\beta}_{y}\left(f,\tau;\ y\right)\right)\right)\big{|}_{y=x+\tau\beta\left(f;x\right)}\] (B.15) \[+\int_{0}^{\tau}\frac{1+\tilde{\beta}_{y}\left(f,\tau;\ y\right)}{1+\tilde{\beta}_{y}\left(f,\tau^{\prime};\ y\right)}\,W\left(f,\tau^{\prime};\ y+\tilde{\beta}\left(f,\tau^{\prime};\ y\right),\ \frac{\xi\left(1+\tilde{\beta}_{y}\left(f,\tau;\ y\right)\right)}{1+\tilde{\beta}_{y}\left(f,\tau^{\prime};\ y\right)}\right)\,\mathrm{d}\tau^{\prime}\bigg{|}_{y=x+\tau\beta\left(f;x\right)}\,\]
_which is a symbol in \(\Sigma\Gamma_{K,K^{\prime},q}^{m}\left[\epsilon_{0},N\right]\), uniformly in \(\left|\tau\right|\leq 1\)._
Proof.: The solution \(\left(x(\tau),\xi(\tau)\right)=\phi^{0,\tau}(X,\Xi)\) of the characteristics system
\[\frac{\mathrm{d}}{\mathrm{d}\tau}x\left(\tau\right)=-b\left(\tau;x\left(\tau \right)\right)\,\ \ \ \frac{\mathrm{d}}{\mathrm{d}\tau}\xi\left(\tau\right)=b_{x}\left(\tau;x\left( \tau\right)\right)\ \xi\left(\tau\right)\,\] (B.16)
with initial condition \(\left.\left(x\left(\tau\right),\xi(\tau)\right)\right|_{\tau=0}=\phi^{0,0}(X, \Xi)=\left(X,\Xi\right)\) is (cf. [5, p. 83])
\[\left(x\left(\tau\right),\xi\left(\tau\right)\right)=\phi^{0,\tau}\left(X,\Xi \right)=\left(X+\tilde{\beta}\left(\tau,X\right),\frac{\Xi}{1+\tilde{\beta}_ {y}\left(\tau,X\right)}\right)\.\] (B.17)
By (B.16) and (B.14) we get \(\frac{\mathrm{d}}{\mathrm{d}\tau}\left[\xi(\tau)Q\left(\tau;x\left(\tau\right),\xi(\tau)\right)\right]=\xi(\tau)W\left(\tau;x\left(\tau\right),\xi(\tau)\right)\) and so, by integration,
\[\xi(\tau)\ Q\left(\tau;x\left(\tau\right),\xi(\tau)\right)=\Xi Q\left(0;X,\Xi \right)+\int_{0}^{\tau}\xi(\tau^{\prime})W\left(\tau^{\prime};x(\tau^{\prime }),\xi(\tau^{\prime})\right)\mathrm{d}\tau^{\prime}\,.\] (B.18)
The inverse flow \(\phi^{\tau,0}\left(x,\xi\right)\), i.e. \(\left(x,\xi\right)=\phi^{0,\tau}\left(X,\Xi\right)\) if and only if \(\left(X,\Xi\right)=\phi^{\tau,0}(x,\xi)\) is (cf. [5, p. 83])
\[\left(X,\Xi\right)=\phi^{\tau,0}(x,\xi)=\left(x+\tau\beta(x),\xi\left(1+\tilde {\beta}_{y}\left(\tau;y\right)\right)\big{|}_{y=x+\tau\beta(x)}\right)\,.\] (B.19)
In addition, by (B.17) and (B.19),
\[\left(x(\tau^{\prime}),\xi(\tau^{\prime})\right)=\phi^{\tau,\tau^{\prime}} \left(x,\xi\right)=\phi^{0,\tau^{\prime}}\left(\phi^{\tau,0}\left(x,\xi \right)\right)=\left(y+\tilde{\beta}\left(\tau^{\prime}y\right)\,\ \frac{\xi\left(1+\tilde{\beta}_{y}\left(\tau;y \right)\right)}{1+\tilde{\beta}_{y}\left(\tau^{\prime};y\right)}\right)\bigg{|}_ {y=x+\tau\beta\left(x\right)}\.\] (B.20)
We deduce (B.15) by inserting (B.19) and (B.20) in (B.18). Finally \(Q\left(f,\tau;x,\xi\right)\) is a symbol in \(\Sigma\Gamma_{K,K^{\prime},q}^{m}\left[\epsilon_{0},N\right]\), by (B.15) and Lemmata 2.8 and 2.9.
**Step i): Determination of the principal symbol \(a_{0}\).** From (B.10), (B.13) and Lemma B.4 the principal symbol \(a_{0}\) solves the equation
\[\left\{\begin{aligned} &\partial_{\tau}a_{0}\left(\tau;x,\xi \right)=\left\{b\left(\tau;x\right)\xi,\ a_{0}\left(\tau;x,\xi\right)\right\}-b_{x}\left(\tau;x \right)\ a_{0}\left(\tau;x,\xi\right)\\ & a_{0}\left(0;x,\xi\right)=\mathrm{a}\left(x,\xi\right)\.\end{aligned}\right.\] (B.21)
By Lemma B.5 with \(W=0\) and \(Q_{0}=\mathrm{a}\), the solution of (B.21) is given by (B.2). The operator \(A^{\left(0\right)}:=A^{\left(0\right)}\left(\tau\right):=\mathrm{Op}^{BW} \left[a_{0}\left(\tau\right)\right]\) solves approximately (B.10) in the sense that, by (B.21) and Lemma B.4,
\[\partial_{\tau}A^{\left(0\right)}=\mathrm{i}\left[\mathrm{Op}^{BW}\left[b \left(\tau\right)\xi\right],\ A^{\left(0\right)}\right]-\mathrm{Op}^{BW}\left[ \frac{b_{x}\left(\tau\right)}{2}\right]A^{\left(0\right)}-A^{\left(0\right)} \mathrm{Op}^{BW}\left[\frac{b_{x}\left(\tau\right)}{2}\right]+\mathrm{Op}^{BW }\left[r^{\left(0\right)}\left(\tau\right)\right]+R^{\left(0\right)}\left(\tau\right)\] (B.22)
where \(r^{\left(0\right)}\left(\tau\right):=-r_{-3}(b,a_{0})-r_{-2}(b,a_{0})\) is a symbol in \(\Sigma\Gamma_{K,K^{\prime},q+1}^{m-2}\left[\epsilon_{0},N\right]\) and \(R^{\left(0\right)}\left(\tau\right)\) is a smoothing operator in \(\Sigma\mathcal{R}_{K,K^{\prime},q+1}^{-\rho+m}\left[\epsilon_{0},N\right]\), uniformly in \(\tau\in\left[0,1\right]\).
**Step ii): Determination of the subprincipal symbol \(\sum_{j=1}^{J}a_{j}\).** We define \(a_{1}(\tau;x,\xi)\) as the solution of the transport equation
\[\left\{\begin{aligned} &\partial_{\tau}a_{1}\left(\tau;x,\xi \right)=\left\{b\left(\tau;x\right)\xi,\ a_{1}\left(\tau;x,\xi\right)\right\}-b_{x} \left(\tau;x\right)\ a_{1}\left(\tau;x,\xi\right)-r^{\left(0\right)}\left(\tau ;x,\xi\right)\\ & a_{1}\left(0;x,\xi\right)=0\,.\end{aligned}\right.\] (B.23)
By Lemma B.5 the symbol \(a_{1}(\tau;x,\xi)\) is in \(\Sigma\Gamma_{K,K^{\prime},q+1}^{m-2}\). By Equations (B.22) and (B.23) and Lemma B.4
\[A^{(1)}(\tau):=A^{(0)}(\tau)+\operatorname{Op}^{BW}\left[a_{1}(\tau)\right]\]
is a better approximation of equation (B.10) in the sense that
\[\partial_{\tau}A^{(1)}=\operatorname{i}\left[\operatorname{Op}^{BW}\left[b( \tau)\,\xi\right],A^{(1)}\right]-\operatorname{Op}^{BW}\left[\tfrac{b_{\tau}( \tau)}{2}\right]A^{(1)}-A^{(1)}\operatorname{Op}^{BW}\left[\tfrac{b_{\tau}( \tau)}{2}\right]+\operatorname{Op}^{BW}\left[r^{(1)}(\tau)\right]+R^{(1)}(\tau)\] (B.24)
where \(r^{(1)}:=-r_{-3}(b,a_{1})-r_{-2}(b,a_{1})\) is a symbol in \(\Sigma\Gamma_{K,K^{\prime},q+1}^{m-4}\left[e_{0},N\right]\) and \(R^{(1)}(\tau)\) are smoothing operators in \(\Sigma\hat{\mathcal{R}}_{K,K^{\prime},q+1}^{-p+m}\left[e_{0},N\right]\) uniformly in \(\left|\tau\right|\leq 1\).
Repeating \(J\) times \((J\sim\rho/2)\) the above procedure, until the new paradifferential term may be incorporated into the smoothing remainder, we obtain an operator \(A^{(J)}(\tau):=\sum_{j=0}^{J}\operatorname{Op}^{BW}\left[a_{j}(\tau)\right]\) as in (B.13) solving
\[\begin{cases}\partial_{\tau}A^{(J)}(\tau)=\operatorname{i}\,\left[ \operatorname{Op}^{BW}\left[b(\tau)\,\xi\right],A^{(J)}(\tau)\right]- \operatorname{Op}^{BW}\left[\tfrac{b_{\tau}(\tau)}{2}\right]A^{(J)}(\tau)-A^{( J)}(\tau)\operatorname{Op}^{BW}\left[\tfrac{b_{\tau}(\tau)}{2}\right]+R^{(J)}(\tau)\\ A^{(J)}(0)=\operatorname{Op}^{BW}\left[\operatorname{a}\right]\end{cases}\] (B.25)
where \(R^{(J)}(\tau)\) are smoothing operators in \(\Sigma\hat{\mathcal{R}}_{K,K^{\prime},q+1}^{-p+m}\left[e_{0},N\right]\) uniformly in \(\left|\tau\right|\leq 1\).
**Step iii) : Analysis of the error.** We finally estimate the difference between the conjugated operator \(P(\tau)\) in (B.8) and \(P^{(J)}(\tau):=\partial_{x}\circ A^{(J)}(\tau)\).
**Lemma B.6**.: \(P(\tau)-P^{(J)}(\tau)\) _is a smoothing operator \(R(\tau)\) in \(\Sigma\hat{\mathcal{R}}_{K,K^{\prime},q+1}^{-\rho+m+1+N}\left[e_{0},N\right]\) uniformly in \(\left|\tau\right|\leq 1\)._
Proof.: In view of Eqs. (B.11), (B.12) and (B.25), the operator \(\mathcal{P}^{(J)}(\tau)=\partial_{x}\circ A^{(J)}(\tau)\) solves an approximated Heisenberg equation (cf. (B.9))
\[\partial_{\tau}\mathcal{P}^{(J)}(\tau)=\left[B\,,\,\mathcal{P}^{(J)}(\tau) \right]+R(\tau)\,\qquad R(\tau)\in\Sigma\hat{\mathcal{R}}_{K,K^{\prime},q+1}^{-p+m}.\] (B.26)
Recalling (B.8) we write
\[P^{(J)}(\tau)-P(\tau)=V(\tau)\Phi_{B}(\tau)^{-1}\quad\text{where}\quad V(\tau ):=P^{(J)}(\tau)\,\Phi_{B}(\tau)-\Phi_{B}(\tau)\circ\partial_{x}\circ \operatorname{Op}^{BW}\left[\operatorname{a}\right]\,.\]
By (B.26) we have that \(\partial_{\tau}V(\tau)=B(\tau)\,V(\tau)+R(\tau)\Phi_{B}(\tau)\), \(V(0)=0\), and therefore, by Duhamel and \(\partial_{\tau}\Phi_{B}=B\Phi_{B}\) we deduce \(V(\tau)=\Phi_{B}(\tau)\int_{0}^{\tau}\Phi_{B}(\tau^{\prime})^{-1}R(\tau^{ \prime})\Phi_{B}(\tau^{\prime})\,\mathrm{d}\tau^{\prime}\) and thus
\[P^{(J)}(\tau)-P(\tau)=\int_{0}^{\tau}\Phi_{B}(\tau)\circ\Phi_{B}(\tau^{\prime}) ^{-1}\circ R(\tau^{\prime})\circ\Phi_{B}(\tau^{\prime})\circ\Phi_{B}(\tau)^{-1 }\,\mathrm{d}\tau^{\prime}\,.\]
This is a smoothing operator in \(\Sigma\hat{\mathcal{R}}_{K,K^{\prime},q+1}^{-\rho+m+1+N}\left[e_{0},N\right]\), arguing as in [5, Proof of Thm. 3.27].
Lemma B.6 implies that \(P(\tau)=\partial_{x}\circ A^{(J)}(\tau)+R(\tau)\) concluding the proof of Proposition B.1-i with symbol \(P=\sum_{j=1}^{J}a_{j}(1)\). Item ii follows similarly as in [7, Lemma A.5]. Item iii is given in [5, Remark at page 89].
Proof of Proposition b.2
In view of (B.4) and Lemma 5.3 the flow \(\Phi_{W}(\tau):=\Phi_{W}\left(f,\tau\right)\) generated by (B.5) is well posed and
\[\frac{\mathrm{d}}{\mathrm{d}\tau}\left(\Phi_{W}(\tau)^{-1}\circ\partial_{x} \circ\operatorname{Op}^{BW}\left[\operatorname{a}\right]\circ\Phi_{W}(\tau) \right)=-\Phi_{W}(\tau)^{-1}\left[\partial_{x}\circ\operatorname{Op}^{BW} \left[w\right],\,\partial_{x}\circ\operatorname{Op}^{BW}\left[\operatorname{a} \right]\right]\Phi_{W}(\tau)\] (B.27)
and a Taylor expansion gives
\[\Phi_{W}(1)^{-1}\,\partial_{x}\circ\operatorname{Op}^{BW}\left[ \operatorname{a}\right]\circ\Phi_{W}(1)\\ =\partial_{x}\circ\operatorname{Op}^{BW}\left[\operatorname{a} \right]-\left[\partial_{x}\circ\operatorname{Op}^{BW}\left[w\right],\, \partial_{x}\circ\operatorname{Op}^{BW}\left[\operatorname{a}\right]\right]+ \sum_{\ell=2}^{L}\frac{(-1)^{\ell}}{\ell!}\mathrm{Ad}_{\partial_{x}\circ \operatorname{Op}^{BW}\left[w\right]}^{\ell}\left(\partial_{x}\circ \operatorname{Op}^{BW}\left[\operatorname{a}\right]\right)\\ +\frac{(-1)^{L+1}}{L!}\int_{0}^{1}(1-\tau)^{L}\Phi_{W}(\tau)^{-1} \circ\mathrm{Ad}_{\partial_{x}\circ\operatorname{Op}^{BW}\left[\operatorname{a }\right]}^{L+1}\left(\partial_{x}\circ\operatorname{Op}^{BW}\left[ \operatorname{a}\right]\right)\circ\Phi_{W}(\tau)\,\mathrm{d}\tau\,.\] (B.28)
Since \(w\) belongs to \(\Sigma\Gamma_{K,K^{\prime},1}^{-\mathrm{d}}\) with \(\mathrm{d}>0\) (see (B.4)), by Proposition 2.21 each commutator \(\left[\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right],\cdot\right]\) gains \(\mathrm{d}>0\) units of order and one degree of vanishing in \(f\), and (B.28) is an expansion as in (B.6) in operators with decreasing order and increasing degree of homogeneity, with a symbol \(P\) of order \(m-2\mathrm{d}\). Item iii follows as in [5, Remark at page 89], see also [10], by properties of the flow generated by paradifferential operators. Thus Proposition 2.21 gives that the last term of (B.28) belongs to \(\Sigma\hat{\mathcal{R}}_{K,K^{\prime},q}^{1+m-\mathrm{d}(L+1)+\max(0,1-\mathrm{d})N}\left[\epsilon_{0},N\right]\), hence if \(L+1\geq\frac{\rho+1+m+\max(0,1-\mathrm{d})N}{\mathrm{d}}\) it belongs to \(\Sigma\hat{\mathcal{R}}_{K,K^{\prime},q}^{-\rho}\left[\epsilon_{0},N\right]\). If \(w,\mathrm{a}\) are real and even in \(\xi\), then the operators \(\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right]\) and \(\partial_{x}\circ\mathrm{Op}^{BW}\left[\mathrm{a}\right]\) are Hamiltonian (cf. (2.21)).
\[\mathrm{Ad}_{\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right]}\left(\partial_ {x}\circ\mathrm{Op}^{BW}\left[\mathrm{a}\right]\right)=\partial_{x}\circ S,\quad S:=\mathrm{Op}^{BW}\left[w\right]\circ\partial_{x}\circ\mathrm{Op}^{BW }\left[\mathrm{a}\right]-\mathrm{Op}^{BW}\left[\mathrm{a}\right]\circ \partial_{x}\circ\mathrm{Op}^{BW}\left[w\right],\] (B.29)
where \(S=S^{*}\), \(S=\overline{S}\), is another Hamiltonian operator where, by Proposition 2.21, the operator \(S=\mathrm{Op}^{BW}\left[s\right]\) with a real symbol \(s\) in \(\Sigma\Gamma_{K,K^{\prime},q+1}^{m-\mathrm{d}}\left[\epsilon_{0},N\right]\) even in \(\xi\) (cf. (2.21)), up to a smoothing operator in \(\Sigma\hat{\mathcal{R}}_{K,K^{\prime},q+1}^{-\rho}\left[\epsilon_{0},N\right]\), by renaming \(\rho\). Applying iteratively this result to \(\mathrm{Ad}_{\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right]}^{\ell}\left( \partial_{x}\circ\mathrm{Op}^{BW}\left[\mathrm{a}\right]\right)\) the formula (B.6) follows.
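For completeness, the self-adjointness of \(S\) in (B.29) can be checked directly: since \(\partial_{x}^{*}=-\partial_{x}\) and, \(w\) and \(\mathrm{a}\) being real and even in \(\xi\), the operators \(\mathrm{Op}^{BW}\left[w\right]\) and \(\mathrm{Op}^{BW}\left[\mathrm{a}\right]\) are self-adjoint and preserve real-valued functions, we have

\[\left(\mathrm{Op}^{BW}\left[w\right]\circ\partial_{x}\circ\mathrm{Op}^{BW}\left[\mathrm{a}\right]\right)^{*}=-\,\mathrm{Op}^{BW}\left[\mathrm{a}\right]\circ\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right]\,,\]

and similarly for the second term of \(S\), whence \(S^{*}=S\); the identity \(S=\overline{S}\) follows in the same way.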
Let us prove (B.7). As in (B.27) we have that
\[\frac{\mathrm{d}}{\mathrm{d}\tau}\big{(}\Phi_{W}\left(\tau\right)^{-1}\circ\partial_{t}\circ\Phi_{W}\left(\tau\right)\big{)}=-\Phi_{W}\left(\tau\right)^{-1}\circ\left[\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right],\partial_{t}\right]\circ\Phi_{W}\left(\tau\right)=\Phi_{W}\left(\tau\right)^{-1}\circ\partial_{x}\circ\mathrm{Op}^{BW}\left[w_{t}\right]\circ\Phi_{W}\left(\tau\right)\]
and a Taylor expansion gives
\[\Phi_{W}\left(1\right)^{-1}\circ\partial_{t}\circ\Phi_{W}\left(1\right) =\partial_{t}+\partial_{x}\circ\mathrm{Op}^{BW}\left[w_{t}\right]+\sum_{\ell=2}^{L}\frac{(-1)^{\ell-1}}{\ell!}\mathrm{Ad}_{\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right]}^{\ell-1}\big{(}\partial_{x}\circ\mathrm{Op}^{BW}\left[w_{t}\right]\big{)}\] (B.30) \[+\frac{(-1)^{L}}{L!}\int_{0}^{1}\left(1-\tau\right)^{L}\Phi_{W}\left(\tau\right)^{-1}\circ\mathrm{Ad}_{\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right]}^{L}\left(\partial_{x}\circ\mathrm{Op}^{BW}\left[w_{t}\right]\right)\circ\Phi_{W}\left(\tau\right)\,\mathrm{d}\tau\,.\]
Since \(\Phi_{W}\left(1\right)^{-1}\circ\partial_{t}\circ\Phi_{W}\left(1\right)= \partial_{t}+\Phi_{W}\left(1\right)^{-1}\circ\left(\partial_{t}\Phi_{W}\left( 1\right)\right)=\partial_{t}-\left(\partial_{t}\Phi_{W}\left(1\right)^{-1} \right)\circ\Phi_{W}\left(1\right)\) we deduce by (B.30) that
\[-\partial_{t}\Phi_{W}\left(1\right)^{-1}\circ\Phi_{W}\left(1\right)=\partial_{x}\circ\mathrm{Op}^{BW}\left[w_{t}\right]+\sum_{\ell=2}^{L}\frac{(-1)^{\ell-1}}{\ell!}\mathrm{Ad}_{\partial_{x}\circ\mathrm{Op}^{BW}\left[w\right]}^{\ell-1}\big{(}\partial_{x}\circ\mathrm{Op}^{BW}\left[w_{t}\right]\big{)}+R\]
where, if \(L\gtrsim_{\mathrm{d},N}\rho\), then \(R\) is in \(\Sigma\hat{\mathcal{R}}_{K,K^{\prime}+1,2}^{-\rho}\left[\epsilon_{0},N\right]\) (renaming \(\rho\)). Then (B.7) follows arguing as for (B.6), and if \(w\) is real and even in \(\xi\) we also deduce that \(T\) is real and even in \(\xi\).
|
2305.07109
|
Tricritical Dicke model with and without dissipation
|
Light-matter interacting systems involving multi-level atoms are appealing
platforms for testing equilibrium and dynamical phenomena. Here, we explore a
tricritical Dicke model, where an ensemble of three-level systems interacts
with a single light mode, through two different approaches: a generalized
Holstein-Primakoff map, and a treatment using the Gell-Mann matrices. Both
methods are found to be equivalent in the thermodynamic limit of an infinite
number of atoms. In equilibrium, the system exhibits a rich phase diagram where
both continuous and discrete symmetries can be spontaneously broken. We
characterize all the different types of symmetries according to their scaling
behaviors. Far from the thermodynamic limit, considering just a few tens of
atoms, the system already exhibits features that could help characterize both
second and first-order transitions in a potential experiment. Importantly, we
show that the tricritical behavior is preserved when dissipation is taken into
account, moreover, the system develops a steady-state phase diagram with
various regions of bistability, all of them converging at the tricritical
point. Having multiple stable normal and superradiant phases opens prospective
avenues for engineering interesting steady states by a clever choice of initial
states and/or parameter quenching.
|
Diego Fallas Padilla, Han Pu
|
2023-05-11T19:41:07Z
|
http://arxiv.org/abs/2305.07109v2
|
# A tricritical Dicke model in and out of equilibrium
###### Abstract
Light-matter interacting systems involving multi-level atoms are appealing platforms for testing equilibrium and dynamical phenomena. Here, we explore a tricritical Dicke model, where an ensemble of three-level systems interacts with a single light mode, through two different approaches: a generalized Holstein-Primakoff map, and a treatment using the Gell-Mann matrices. Both methods are found to be equivalent in the thermodynamic limit of an infinite number of atoms. In equilibrium, the system exhibits a rich phase diagram where both continuous and discrete symmetries can be spontaneously broken. We characterize all the different types of symmetries according to their scaling behaviors. Far from the thermodynamic limit, considering just a few tens of atoms, the system already exhibits features that could help characterize both second and first-order transitions in a potential experiment. Importantly, we show that the tricritical behavior is preserved when dissipation is taken into account, moreover, the system develops a steady-state phase diagram with various regions of bistability, all of them converging at the tricritical point. Having multiple stable normal and superradiant phases opens prospective avenues for engineering interesting steady states by a clever choice of initial states and/or parameter quenching.
## I Introduction
When \(N\) two-level atoms confined in a small volume interacting with a single mode of light are all initialized in their excited state, their spontaneous emission processes can interfere constructively leading to an intensity-enhanced pulse of emitted light. This concept, known as Dicke superradiance, was introduced in 1954 [1] and represents a foundation for numerous subsequent studies regarding the coherence between emitters across several platforms [2; 3; 4].
Later, an equilibrium notion of superradiance was introduced with the Dicke model [5; 6]. In the limit of an infinite number of atoms, and above a critical value of the light-matter interaction strength, this model undergoes a second-order phase transition from a normal phase with a vanishing photon population to a superradiant phase with a macroscopic photon population. Near the critical point, interesting features such as squeezing [7; 8] or the onset of chaotic behavior [9] are expected.
Several extensions of the Dicke model have been proposed to unlock new exotic phenomena; of particular interest is the generalization to multi-level atoms [10; 11; 12; 13; 14]. Having more than two atomic levels allows for creative model proposals where the connectivity between levels can be engineered to generate useful properties. In this work, we focus on the study of multicritical points, specifically, tricritical points (TPs), using multi-level Dicke models [15]. A TP signals the intersection of a first- and a second-order phase transition, and with these two types of transitions having very different behaviors, a highly tunable system exhibiting a TP is ideal for exploring universal scaling, hysteresis, and metastability that go beyond the much more extensively explored second-order quantum criticality. Recently, evidence of TPs has been reported in magnetic materials [16; 17; 18]. Such systems, however, lack the parameter tunability available in atomic/optical platforms.
Realizing models in a cavity QED environment typically requires an open system description due to the necessary presence of one or more sources of losses. Open Dicke-like systems are of great interest in their own right. For example, they show interesting steady-state features not present in the closed systems [19; 20; 21; 22; 23] as the inclusion of dissipation channels can modify the geometry of the phase boundaries, alter the order of the transitions, and generate regions of multi-stability where the final state of the system is highly dependent on initial state preparation.
In this work, we introduce a tricritical Dicke model (TDM), describing a single cavity mode interacting with an ensemble of three-level atoms. In this model, both continuous and discrete symmetry breakings can occur. First, we characterize the equilibrium phase diagram and critical scaling in the thermodynamic limit. Second, we explore the system away from the thermodynamic limit with finite number of atoms. Lastly, we describe the non-equilibrium phase transition landscape and examine how the TP manifests in such an open system. The richness of the dissipative phase diagram characterized by different types of phase transitions and regions of bistability opens exciting possibilities for engineering desired steady states.
## II Model
We consider an ensemble of \(N\) three-level atoms interacting with a single mode of light. We denote the \(j\)-th atom level by \(|a\rangle^{(j)}\), with \(a=1,2,3\) corresponding
to spin projection values in the \(z\)-direction 1, 0, and -1, respectively. The TDM is a generalization of the conventional Dicke model in which the phase transition between the superradiant and normal phases can now occur across a second-order line, a first-order line, or a TP. The TDM is described by the Hamiltonian:
\[H= \omega a^{\dagger}a+\Omega(1-\delta)P_{11}-\Omega P_{33} \tag{1}\] \[+\frac{g_{1}}{\sqrt{N}}(a(P_{12}+\gamma P_{23})+a^{\dagger}(P_{2 1}+\gamma P_{32}))\] \[+\frac{g_{2}}{\sqrt{N}}(a^{\dagger}(P_{12}+\gamma P_{23})+a(P_{2 1}+\gamma P_{32}))\,.\]
Here \(P_{ab}\) denotes a collective atomic operator defined as \(P_{ab}=\sum_{j=1}^{N}|a\rangle^{(j)}\langle b|^{(j)}\), \(a\) (\(a^{\dagger}\)) denote bosonic annihilation (creation) operators for the cavity photon mode, \(\omega\) is the photon frequency, \(\Omega\) characterizes the atomic energy splitting, \(g_{1}\) is the light-matter interaction strength for the co-rotating terms and \(g_{2}\) for the counter-rotating ones. Finally, the two dimensionless quantities \(\gamma\) and \(\delta\) represent an imbalance in the light-matter coupling strength and energy splitting between different atomic levels, respectively (see Fig. 1).
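To make Eq. (1) concrete, the sketch below (not part of the original text; parameter values are arbitrary) builds the Hamiltonian by brute force for \(N=2\) atoms and a truncated photon space and diagonalizes it with plain `numpy`; the symmetric-representation treatment of Sec. IV is what actually scales to \(N\sim 10^{2}\).

```python
# Brute-force construction of Eq. (1) for N = 2 three-level atoms (illustrative only).
import numpy as np

N, Nph = 2, 12                                   # atom number and photon cutoff
omega, Omega, g1, g2, gamma, delta = 1.0, 1.0, 0.9, 0.9, 0.8, 0.0

a = np.diag(np.sqrt(np.arange(1, Nph)), k=1)     # photon annihilation operator
ket = lambda i: np.eye(3)[:, [i]]                # atomic levels |1>,|2>,|3> -> i = 0,1,2

def collective(op):
    """Collective atomic operator: sum over atoms of 1 x ... x op x ... x 1."""
    total = np.zeros((3**N, 3**N))
    for j in range(N):
        factors = [np.eye(3)] * N
        factors[j] = op
        term = factors[0]
        for f in factors[1:]:
            term = np.kron(term, f)
        total += term
    return total

P = {(i, j): collective(ket(i) @ ket(j).T) for i in range(3) for j in range(3)}
Iph, Iat = np.eye(Nph), np.eye(3**N)

X = P[(0, 1)] + gamma * P[(1, 2)]                # P_12 + gamma P_23
H = (omega * np.kron(a.T @ a, Iat)
     + Omega * (1 - delta) * np.kron(Iph, P[(0, 0)])
     - Omega * np.kron(Iph, P[(2, 2)])
     + (g1 / np.sqrt(N)) * (np.kron(a, X) + np.kron(a.T, X.T))
     + (g2 / np.sqrt(N)) * (np.kron(a.T, X) + np.kron(a, X.T)))

evals, evecs = np.linalg.eigh(H)
gs = evecs[:, 0]
n_phot = gs @ np.kron(a.T @ a, Iat) @ gs
print(f"ground-state energy {evals[0]:.4f}, photon number {n_phot:.4f}")
```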
If \(g_{1}=g_{2}\) the system reduces to a specific case of the multicritical Dicke model presented in [15]; on the other hand, if \(g_{2}=0\), the system reduces to a previously studied Tavis-Cummings model that also exhibits a tricritical point [24].
For the cavity system, one obstacle is that experimental observation of the superradiant phase transition in its "pure" form is seriously challenged by the no-go theorem stated by Rzążewski _et al._ [25], where the inclusion of the \(A^{2}\) term from the dipole interaction prevents the transition from occurring. One way to circumvent this no-go theorem is to consider a system where the coupling between atomic levels is achieved by cavity-assisted Raman transitions. This could be realized, for example, in an optical cavity QED system through the coupling of different atomic hyperfine magnetic sub-levels with additional lasers [26]. This is the scheme we adopt here.
## III Thermodynamic limit
Let us first explore the thermodynamic limit in which the atom number \(N\longrightarrow\infty\), while the coupling strengths \(g_{1}\) and \(g_{2}\) are finite.
### Generalized Holstein-Primakoff mapping
In the conventional Dicke model, a Holstein-Primakoff mapping [27], where the spin collective operators are mapped into a single bosonic mode, is often used to explore the mean-field properties of the system [9]. This mapping can be intuitively understood as follows: promoting one two-level atom from the ground state to the excited state is equivalent to adding one quantum of excitation in the mapped bosonic mode. In the TDM, since we are dealing with three-level atoms, we need to map the atomic collective operators into two different bosonic modes through a generalized Holstein-Primakoff mapping, as suggested in Ref. [12]. In order to conduct the generalized mapping, we choose state \(|3\rangle\) as our reference state; the mapping is then defined by:
\[P_{j,k}=b_{j}^{\dagger}b_{k},\quad j,k=1,2\,,\] \[P_{j,3}=b_{j}^{\dagger}\,\Theta=(P_{3,j})^{\dagger}\,,\quad j=1,2\,,\] \[P_{3,3}=N-\sum_{j=1,2}b_{j}^{\dagger}b_{j}\,. \tag{2}\]
where \(\Theta\equiv\sqrt{N-b_{1}^{\dagger}b_{1}-b_{2}^{\dagger}b_{2}}\). The Hamiltonian is now given by:
\[H = \omega a^{\dagger}a-N\Omega+(2-\delta)\Omega\,b_{1}^{\dagger}b_{1 }+\Omega\,b_{2}^{\dagger}b_{2} \tag{3}\] \[+ \frac{g_{1}}{\sqrt{N}}(ab_{1}^{\dagger}b_{2}+a^{\dagger}b_{2}^{ \dagger}b_{1})+\frac{g_{1}\gamma}{\sqrt{N}}(ab_{2}^{\dagger}\Theta+a^{\dagger }\Theta b_{2})\] \[+ \frac{g_{2}}{\sqrt{N}}(ab_{2}^{\dagger}b_{1}+a^{\dagger}b_{1}^{ \dagger}b_{2})+\frac{g_{2}\gamma}{\sqrt{N}}(a^{\dagger}b_{2}^{\dagger}\Theta+a \Theta b_{2})\,,\]
The form of Eq. (3) makes evident the meaning of the new bosonic operators \(b_{1}\) and \(b_{2}\): they take us from the reference state to the other two states, and back. Creating an excitation in state \(|1\rangle\) requires an energy \((2-\delta)\Omega\), which is the detuning with respect to the reference state \(|3\rangle\); a similar argument holds for state \(|2\rangle\). Moreover, note that since states \(|3\rangle\) and \(|1\rangle\) are not directly coupled in our Hamiltonian, "cycle" terms such as \(b_{1}^{\dagger}b_{2}\) are needed in this formalism.
### Ground state phase diagram
Now, we displace each bosonic operator by their mean-field values
\[a=\alpha+c,\quad b_{1}=\beta_{1}+d_{1},\quad b_{2}=\beta_{2}+d_{2}\,, \tag{4}\]
where the mean-field values \(\alpha\), \(\beta_{1}\), and \(\beta_{2}\) are taken to be complex numbers, in general. The new bosonic operators
Figure 1: Schematics of the TDM. The three states \(|1\rangle\), \(|2\rangle\), and \(|3\rangle\) are represented by the top, middle, and bottom yellow horizontal bars, respectively. Wavy arrows represent photons of frequency \(\omega\). The light-matter interaction terms are represented by solid arrows, here, \(g=g_{1}\) for co-rotating terms and \(g=g_{2}\) for counter-rotating terms.
\(c\), \(d_{1}\), and \(d_{2}\) represent the variations with respect to the mean-field values. After substituting Eq. (4) into Eq. (3) and expanding in powers of \(N\), the Hamiltonian can be rewritten as (See details in Appendix A)
\[H\approx NH_{0}+\sqrt{N}H_{1}+H_{2}\,, \tag{5}\]
where terms with negative powers of \(N\) are discarded since we are considering the thermodynamic limit \(N\rightarrow\infty\). The first term \(H_{0}\) describes the ground state mean-field energy of the system and is given explicitly by:
\[H_{0} = \omega\alpha\alpha^{*}-\Omega+(2-\delta)\Omega\beta_{1}\beta_{1}^ {*}+\Omega\beta_{2}\beta_{2}^{*} \tag{6}\] \[+g_{1}(\alpha\beta_{1}^{*}\beta_{2}+\text{c.c})+g_{2}(\alpha \beta_{2}^{*}\beta_{1}+\text{c.c})\] \[+g_{1}\gamma\beta(\alpha\beta_{2}^{*}+\text{c.c})+g_{2}\gamma \beta(\alpha\beta_{2}+\text{c.c})\,,\]
where \(\beta\equiv\sqrt{1-|\beta_{1}|^{2}-|\beta_{2}|^{2}}\). Minimization of \(H_{0}\) with respect to the real and imaginary parts of \(\alpha\), \(\beta_{1}\), and \(\beta_{2}\) can be performed to determine the values of these parameters. The normal phase (NP) characterized by \(\alpha=\beta_{1}=\beta_{2}=0\), namely, all atoms in state \(|3\rangle\) and zero photon population, is always a solution to the set of equations \(\partial H_{0}/\partial\mu=0\) with \(\mu=\alpha\), \(\beta_{1}\), \(\beta_{2}\). However, this phase does not always represent the configuration that minimizes the energy; in that case, the equilibrium phase becomes a superradiant phase with nonzero values of the three order parameters \(\alpha\), \(\beta_{1}\), and \(\beta_{2}\).
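A minimal numerical sketch of this minimization (not from the paper; parameter values and tolerances are illustrative) is the following; for \(\delta=0\), \(\gamma=0.8\) and \(\lambda_{1}=\lambda_{2}=\lambda\), the superradiant onset should appear near \(\lambda=1/(2\gamma)=0.625\), cf. Eq. (7).

```python
# Mean-field energy H0 of Eq. (6), minimized over complex alpha, beta_1, beta_2.
import numpy as np
from scipy.optimize import minimize

def H0(x, omega, Omega, g1, g2, gamma, delta):
    ar, ai, b1r, b1i, b2r, b2i = x
    alpha, b1, b2 = ar + 1j*ai, b1r + 1j*b1i, b2r + 1j*b2i
    n = abs(b1)**2 + abs(b2)**2
    if n >= 1.0:                      # soft penalty outside the physical region
        return 1e6 * n
    beta = np.sqrt(1.0 - n)
    return (omega*abs(alpha)**2 - Omega
            + (2 - delta)*Omega*abs(b1)**2 + Omega*abs(b2)**2
            + 2*g1*np.real(alpha*np.conj(b1)*b2)
            + 2*g2*np.real(alpha*np.conj(b2)*b1)
            + 2*g1*gamma*beta*np.real(alpha*np.conj(b2))
            + 2*g2*gamma*beta*np.real(alpha*b2))

omega = Omega = 1.0
gamma, delta = 0.8, 0.0
for lam in np.linspace(0.4, 1.0, 7):
    g = lam * np.sqrt(omega * Omega)  # lambda_i = g_i / sqrt(omega * Omega)
    best = min((minimize(H0, x0, args=(omega, Omega, g, g, gamma, delta),
                         method="Nelder-Mead",
                         options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-10})
                for x0 in (np.zeros(6), 0.3*np.ones(6), -0.3*np.ones(6))),
               key=lambda r: r.fun)
    print(f"lambda = {lam:.3f}  H0 = {best.fun:+.5f}  |alpha|^2 = {best.x[0]**2 + best.x[1]**2:.5f}")
```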
For \(g_{1},\,g_{2}\neq 0\) two different superradiant phases are found. When \(g_{1}\) and \(g_{2}\) have the same sign, we find that all three order parameters are real, and we denote this phase as superradiant phase A (SRA). On the other hand, when \(g_{1}\) and \(g_{2}\) have opposite signs, we find that \(\beta_{1}\) remains real but both \(\alpha\) and \(\beta_{2}\) become purely imaginary, and we denote this phase as superradiant phase B (SRB). A similar behavior of order parameters was also found in previous studies in a model interpolating between the conventional Dicke and Tavis-Cummings models [22; 28].
For these two superradiant phases, \(\alpha\) is described by a single real number, and it is then possible to find the location of the TP and the equation for the second-order line through a single-parameter Landau theory analysis, after performing time-independent perturbation theory following a procedure similar to the one described in Ref. [15] (see Appendix B). The critical line between the normal phase and each of the superradiant phases is determined by two constraints:
\[\text{SRA:}\quad\gamma^{2}=\frac{1}{\lambda_{+}^{2}}\geq\frac{1}{2-\delta}\,, \tag{7}\]
\[\text{SRB:}\quad\gamma^{2}=\frac{1}{\lambda_{-}^{2}}\geq\frac{1}{2-\delta}\,, \tag{8}\]
where \(\lambda_{\pm}\equiv|\lambda_{1}\pm\lambda_{2}|\), and \(\lambda_{i}\equiv g_{i}/\sqrt{\omega\Omega}\) are renormalized dimensionless coupling strengths. The location of the TP is obtained when the equal signs are taken in the above. Clearly, if \(\delta\) is positive we require \(\delta<2\) for all parameters to be kept real. Moreover, the derivation of Eqs. (7) and (8) assumes non-degenerate perturbation theory requiring \(\delta\neq 1\). To reduce the number of parameters and facilitate the visualization of the different phase boundaries, we constrain ourselves to \(\delta=0\). However, we will keep \(\delta\) in all our derivations since in certain experimental setups it might be easier to vary this detuning instead of the parameter \(\gamma\).
In Fig. 2, the phase diagram for \(\delta=0\) and three different values of \(\gamma\) is presented. Note that we have chosen \(\alpha^{2}\) instead of \(|\alpha|^{2}\) as the order parameter in order to differentiate between the SRA and SRB. For \(\delta=0\) the TP is located at \(\gamma=\gamma_{TP}=1/\sqrt{2}\) as deduced from Eqs. (7) and (8). Panel (a) illustrates how for values of \(\gamma>\gamma_{TP}\) the phase transition is of second order with the phase boundary defined by \(\gamma^{2}=1/\lambda_{\pm}^{2}\). In panel (b) the transition between the NP and SRA/SRB is given by a line of TPs. Finally, in panel (c) the transition is found to be of first order as the order parameter changes discontinuously to a nonzero value across the phase transition.
The phase diagram showcases both discrete and continuous symmetry breaking. First, note that the energy
Figure 2: Phase diagram in the \(\lambda_{1}\)-\(\lambda_{2}\) plane for \(\delta=0\) and (a) \(\gamma=0.8\), (b) \(\gamma=\gamma_{TP}=1/\sqrt{2}\), and (c) \(\gamma=0.6\). Here we choose \(\alpha^{2}\) as the order parameter but equivalent phase diagrams can be constructed for \(\beta_{1}\) and \(\beta_{2}\). The solid vertical line represents the Tavis-Cummings line dividing the SRA and SRB phases. The dashed lines in (a) signal the second-order boundary, while the dotted lines in (b) denote the line of tricritical points, both of these lines’ equations are given by Eqs. (7) and (8).
in Eq. (6) is invariant under the transformation \(\alpha\to-\alpha\), \(\beta_{2}\to-\beta_{2}\), \(\beta_{1}\to\beta_{1}\). This \(Z_{2}\) symmetry is spontaneously broken in the SRA/SRB phases. On the other hand, when \(g_{2}=0\) the system is reduced to a tricritical Tavis-Cummings model, in which case \(H_{0}\) is invariant under a more general transformation \(\alpha\to\alpha e^{i\theta}\), \(\beta_{2}\to\beta_{2}e^{i\theta}\), \(\beta_{1}\to\beta_{1}e^{2i\theta}\), with \(\theta\in[0,2\pi)\). This means that there are infinitely many equilibrium configurations with the three order parameters being nonzero for \(\lambda_{+}=\lambda_{-}>\lambda_{c}\), with \(\lambda_{c}\) being the value at which the first-order, second-order, or tricritical phase transition occurs. These solutions spontaneously break the continuous \(U(1)\) symmetry. The special case \(g_{1}=0\) is equivalent to the Tavis-Cummings case described above after a rotation of the atomic spin operators is performed [29].
### Critical behavior
Although \(H_{0}\) is enough to determine the ground state mean-field properties, further terms (\(H_{1}\) and \(H_{2}\)) are needed to study the excitation spectrum. As shown in Appendix A, for any values of the order parameters \(\alpha\), \(\beta_{1}\), and \(\beta_{2}\) that minimize \(H_{0}\), the Hamiltonian \(H_{1}\) vanishes, and the excitation spectrum is determined by \(H_{2}\). The general form of \(H_{2}\) is given by:
\[H_{2}=\sum_{j=1}^{6}\sum_{k=1}^{6}\mathcal{C}_{jk}v_{j}v_{k}\,, \tag{9}\]
where \(v_{j}\) is the \(j\)-th component of the operator vector \(\vec{v}=(c^{\dagger},d_{1}^{\dagger},d_{2}^{\dagger},c,d_{1},d_{2})\), and the matrix components \(\mathcal{C}_{jk}\) are given explicitly in Appendix A. Since the Hamiltonian in Eq. (9) is bilinear in the annihilation and creation operators, it can be diagonalized using a Bogoliubov transformation [30; 31] (see Appendix C) into the form
\[H_{2}=\sum_{j=1}^{3}\varepsilon_{j}a_{j}^{\dagger}a_{j}\,, \tag{10}\]
where we have omitted a constant shift. The annihilation and creation operators \(a_{j}\) and \(a_{j}^{\dagger}\) are linear combinations of all the operators contained in the components of \(\vec{v}\). Ordering \(\varepsilon_{1}<\varepsilon_{2}<\varepsilon_{3}\) for a given set of system parameters, we can identify \(\varepsilon_{1}=\Delta\) as the energy gap between the ground state and the first excited state. In a second-order phase transition, including the TP, we expect \(\Delta\) to vanish exactly at the phase transition; this is illustrated in Fig. 3.
Note that the second-order boundary given by Eq. (7) agrees with the numerical behavior, as signaled by the white dashed line. When the first-order line is crossed, a discontinuous jump in the energy gap is observed. Note that here we chose to illustrate the energy-gap variation upon entering the SRA phase. However, as evidenced by the symmetry of the phase diagrams in Fig. 2, an identical behavior is expected for the SRB phase.
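The gap \(\Delta\) can be extracted numerically from any quadratic bosonic form such as Eq. (9) by a standard symplectic (Bogoliubov) diagonalization. The helper below is a generic sketch, not the paper's code: it assumes the Hamiltonian has been rewritten as \(H=\sum_{ij}A_{ij}a_{i}^{\dagger}a_{j}+\frac{1}{2}\sum_{ij}(B_{ij}a_{i}^{\dagger}a_{j}^{\dagger}+\mathrm{h.c.})\), with the matrices \(A\), \(B\) read off from the coefficients \(\mathcal{C}_{jk}\) of Appendix A, and that the phase is dynamically stable.

```python
import numpy as np

def bogoliubov_spectrum(A, B):
    """Positive excitation energies of H = sum_ij A_ij a_i^dag a_j
    + 1/2 sum_ij (B_ij a_i^dag a_j^dag + h.c.), with A Hermitian, B symmetric,
    assuming the quadratic form is positive definite (stable phase)."""
    A, B = np.atleast_2d(A).astype(complex), np.atleast_2d(B).astype(complex)
    n = A.shape[0]
    M = np.block([[A, B], [B.conj(), A.conj()]])        # matrix of (1/2) Psi^dag M Psi
    Sigma = np.diag([1.0] * n + [-1.0] * n)             # bosonic symplectic metric
    freqs = np.sort(np.linalg.eigvals(Sigma @ M).real)  # eigenvalues come in +/- pairs
    return freqs[n:]

# quick check: H = w a^dag a + (g/2)(a^dag a^dag + a a) has energy sqrt(w^2 - g^2)
print(bogoliubov_spectrum([[1.0]], [[0.6]]))            # -> [0.8]
```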
To differentiate between the different types of phase boundaries we can explore their corresponding critical exponents. For instance, let us consider a point \(p=(\delta,\lambda_{1},\lambda_{2},\gamma)\) located very close to the critical point \(p_{c}=(\delta_{c},\lambda_{1c},\lambda_{2c},\gamma_{c})\), formally, we consider \(p\) to be located in a line perpendicular to the phase boundary at point \(p_{c}\). We expect that the order parameter \(\alpha\) scales as \(\alpha\propto d^{\mu}\), where \(d=\sqrt{(\delta-\delta_{c})^{2}+(\lambda_{1}-\lambda_{1c})^{2}+(\lambda_{2}- \lambda_{2c})^{2}+(\gamma-\gamma_{c})^{2}}\) is the distance from the critical point when we approach it from the superradiant phase. Similarly, we could define the scaling behavior of the excitation gap \(\Delta\propto d^{\nu_{\pm}}\), where \(\nu_{-}\) considers the point \(p\) to be located in the normal phase and \(\nu_{+}\) is the scaling exponent when the boundary is approached from the superradiant phase.
In Fig. 4, the scaling behavior of \(\Delta\) and \(|\alpha|\) is presented. The first thing to note is that the SRA and SRB phases have identical scaling exponents; this means that in an experiment, where the accessible quantity is \(|\alpha|^{2}\), these phases are indistinguishable. Regardless of crossing a second-order boundary or a TP, the excitation gap vanishes with \(\nu_{\pm}=1/2\) for these two phases. For the Tavis-Cummings line, on the other hand, as we reach the critical point (second order or TP) from the NP, the energy gap vanishes with \(\nu_{-}=1\). However, the energy gap remains equal to zero inside the superradiant phase, as shown in Fig. 4(a). This Goldstone mode [32] is characteristic of phases where a continuous \(U(1)\) symmetry is spontaneously broken, leading to these gapless excitations.
In the same way that \(\nu_{\pm}\) can be used to differentiate between the Tavis-Cummings line and the SRA/SRB phase transitions, the exponent \(\mu\) allows differentiating between a second-order line and a TP as shown in Fig. 4(b). After crossing to any superradiant phase the order parameter
Figure 3: Energy gap \(\Delta\) between the ground state and first excited state in the \(\gamma\)-\(\lambda_{+}\) plane for \(\delta=0\) and fixed \(\lambda_{1}=0.3\sqrt{2}\). The white dashed line represents the second-order boundary \(\gamma=1/\lambda_{+}\) which terminates at the TP as represented by the white star.
scales with \(\mu=1/2\) for a second-order phase transition, while the exponent is \(\mu=1/4\) for a TP, indicating that the TP belongs to a different universality class than the other critical points on the second-order line.
## IV Finite \(N\)
Now that all the mean-field features in the thermodynamic limit of the model have been discussed, it is important to study whether the precursors of the phase transitions are still present for a finite number of atoms \(N\). In order to perform exact diagonalization calculations we need to consider a cutoff photon number \(N_{ph}\); this means that the total size of the Hilbert space is \(3^{N}\times N_{ph}\). Clearly, only a few atoms can be considered if the full Hilbert space is used. However, as we will show below, by exploiting symmetry constraints we are able to consider a much larger system with \(N\sim 10^{2}\).
To this end, we can borrow some ideas from the treatment used in the conventional Dicke model; see, for example, Ref. [9]. The conventional Dicke Hamiltonian is given by \(H=\omega a^{\dagger}a+\Omega S_{z}+g/\sqrt{N}(a+a^{\dagger})S_{x}\), where \(S_{j}\) are collective spin operators. It is clear that \([H,S^{2}]=0\), which means that \(S^{2}\) is conserved and that states with different eigenvalue \(S\) are not mixed by the Hamiltonian. Since the mean-field ground state in the NP has all spins pointing downwards, it is of interest to consider the totally symmetric manifold with \(S=N/2\), to which the NP ground state belongs, and to represent the Hamiltonian using only the set of states \(\{|N/2,-N/2\rangle,|N/2,-N/2+1\rangle,...,|N/2,N/2\rangle\}\). These states \(|S=N/2,m\rangle\) are often referred to as Dicke states, and using them reduces the atomic Hilbert space size from \(2^{N}\) to \(N+1\).
The TDM Hamiltonian in Eq. (1) clearly does not commute with \(S^{2}\) as it is nonlinear in the spin operators \(S_{j}\). However, if instead of considering an \(SU(2)\) representation through the conventional spin operators, we choose an \(SU(3)\) representation spanned by the Gell-Mann matrices \(\Lambda_{j}\) (see Appendix D for a list of the Gell-Mann matrices' properties), the TDM Hamiltonian can be represented in terms of a linear combination of Gell-Mann matrices:
\[H = \omega a^{\dagger}a+\frac{\Omega}{2}\left(\sqrt{3}\Lambda_{8}+ \Lambda_{3}\right) \tag{11}\] \[+\frac{g}{\sqrt{N}}(a+a^{\dagger})(\Lambda_{1}+\gamma\Lambda_{6} )\,.\]
Just as in Eq. (1), we have summed over all atoms and written the Hamiltonian in terms of collective operators, namely, \(\Lambda_{j}=\sum_{k=1}^{N}\Lambda_{j}^{(k)}\). One can show that the Hamiltonian commutes with the two Casimir operators of \(SU(3)\):
\[C_{1}=\sum_{j}\Lambda_{j}\Lambda_{j},\quad C_{2}=\sum_{j,k,l}d_{jkl}\Lambda_{j }\Lambda_{k}\Lambda_{l}\,, \tag{12}\]
where \(d_{jkl}=\frac{1}{4}\text{tr}(\{\Lambda_{j},\Lambda_{k}\}\Lambda_{l})\) are totally symmetric coefficients. Note that we use \(\Lambda_{i}\) instead of \(\lambda_{i}\) for the Gell-Mann matrices so that they are not confused with the dimensionless parameters \(\lambda_{1}\) and \(\lambda_{2}\) introduced before. A conventional approach is to use the Cartan-Weyl notation instead of the Gell-Mann matrices, so we define:
\[T_{\pm}=\tfrac{1}{2}(\Lambda_{1}\pm i\Lambda_{2}),\quad T_{z}= \tfrac{1}{2}\Lambda_{3},\quad Y=\tfrac{1}{\sqrt{3}}\Lambda_{8}\,,\] \[U_{\pm}=\tfrac{1}{2}(\Lambda_{6}\pm i\Lambda_{7}),\quad V_{\pm}= \tfrac{1}{2}(\Lambda_{4}\pm i\Lambda_{5})\,. \tag{13}\]
In this notation, it is clear that there are three sets of ladder operators driving the transitions between the three different states, while \(T_{z}\) and \(Y\) are both diagonal operators and are associated with isospin and hypercharge in the context of particle physics [33].
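For concreteness, the operators in Eq. (13) can be built explicitly from the single-particle Gell-Mann matrices listed in Appendix D. The short sketch below (Python, for illustration only) constructs them and verifies one of the \(SU(2)\) commutation relations.

```python
import numpy as np

# Single-particle Gell-Mann matrices (explicit forms as in Appendix D).
L1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
L2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
L3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
L4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
L5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex)
L6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
L7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex)
L8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3)

# Cartan-Weyl combinations of Eq. (13).
Tp, Tm = (L1 + 1j * L2) / 2, (L1 - 1j * L2) / 2
Tz, Y = L3 / 2, L8 / np.sqrt(3)
Up, Um = (L6 + 1j * L7) / 2, (L6 - 1j * L7) / 2
Vp, Vm = (L4 + 1j * L5) / 2, (L4 - 1j * L5) / 2

# Sanity check: T_+, T_-, T_z close an SU(2) subalgebra, [T_+, T_-] = 2 T_z.
assert np.allclose(Tp @ Tm - Tm @ Tp, 2 * Tz)
```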
In terms of the operators defined in Eqs. (13) and up to a constant shift, the TDM Hamiltonian becomes:
\[H = \omega a^{\dagger}a+\Omega\left(\frac{3-\delta}{2}Y+(1-\delta)T_{ z}\right) \tag{14}\] \[+\frac{g_{1}}{\sqrt{N}}(a(T_{+}+\gamma U_{+})+a^{\dagger}(T_{-}+ \gamma U_{-}))\] \[+\frac{g_{2}}{\sqrt{N}}(a^{\dagger}(T_{+}+\gamma U_{+})+a(T_{-}+ \gamma U_{-}))\,.\]
Similar to how different representations of \(SU(2)\) are labeled by the different eigenvalues of \(S^{2}\), different representations of \(SU(3)\) will be classified depending on the eigenvalues of \(C_{1}\) and \(C_{2}\), which we denote as \(c_{1}\) and \(c_{2}\)
Figure 4: Scaling near different types of transitions. (a) The energy gap near the critical point. The dashed line corresponds to the Tavis-Cummings model where the counter-rotating terms are not present. The solid line depicts the behavior when both co- and counter-rotating terms are present. (b) The order parameter \(|\alpha|\) near the critical point. The dashed (solid) line represents the behavior across the TP (a second-order critical point). We consider values of \(|d|\leq 1\times 10^{-4}\), and \(d\) is defined to be negative (positive) if the transition is approached from the normal (superradiant) phase. The scaling exponent for each phase transition is signaled with an arrow.
respectively. A common notation change is to consider the integers \(p\) and \(q\) instead of \(c_{1}\) and \(c_{2}\) as the labels for the representations. In the particle physics context, \(p\) and \(q\) correspond to the number of quarks and antiquarks, respectively [34]. The relation between these two notations is given by [35]:
\[c_{1}=(p^{2}+q^{2}+3p+3q+pq)/3\] \[c_{2}=(p-q)(3+p+2q)(3+q+2p)/18\,. \tag{15}\]
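As a quick numerical check of Eq. (15), one can evaluate the Casimir eigenvalues for the totally symmetric representation \(p=N\), \(q=0\) and recover the large-\(N\) values used later in Eq. (19); a minimal sketch reads:

```python
# Sketch: Casimir eigenvalues of Eq. (15) for the representation (p, q), and
# their large-N limit in the totally symmetric case p = N, q = 0 (cf. Eq. (19)).
def casimirs(p, q):
    c1 = (p**2 + q**2 + 3 * p + 3 * q + p * q) / 3
    c2 = (p - q) * (3 + p + 2 * q) * (3 + q + 2 * p) / 18
    return c1, c2

N = 1000
c1, c2 = casimirs(N, 0)
print(c1 / N**2, c2 / N**3)   # approaches 1/3 and 1/9 for large N
```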
Since \(Y\) and \(T_{z}\) commute with each other, they form a set of commuting operators together with \(C_{1}\) and \(C_{2}\). Moreover, if we define \(T^{2}=T_{x}^{2}+T_{y}^{2}+T_{z}^{2}\), with \(T_{\pm}=T_{x}\pm iT_{y}\), the set \(\{T_{z},T^{2},Y,C_{1},C_{2}\}\) defines a complete set of commuting operators [36]. Consequently, each state in a given representation is labeled by the eigenvalues of these operators, namely, \(|t_{z},t,y,p,q\rangle\).
Since the TDM Hamiltonian commutes with \(C_{1}\) and \(C_{2}\), it does not mix states with different values of \(p\) and \(q\). Similar to the conventional Dicke states, we focus on the totally symmetric representation given by \(q=0\), \(p=N\)[34]. As shown in Ref. [37], in the totally symmetric representation, \(y\) and \(t\) are related by \(t=y/2+p/3\). Then, we can omit the labels \(p\), \(q\), and \(y\), and the states of interest are simply labeled by \(|t,t_{z}\rangle\). These states represent generalized Dicke states for \(SU(3)\), and they have been discussed before, see for example Ref. [38].
Once these generalized Dicke states are chosen as a basis, the only thing missing is to find the matrix elements of each operator in this basis; explicit expressions for each operator can be found in Appendix E. As \(T_{z}\), \(T_{+}\), and \(T_{-}\) define an \(SU(2)\) subalgebra, their matrix elements are very easy to determine. By contrast, \(U_{\pm}\) produce more interesting matrix elements, as they change the values of both \(t\) and \(t_{z}\) simultaneously.
The dimension of the totally symmetric representation is \((N+1)(N+2)/2\), which means that we have decreased the atomic Hilbert space size from being exponential in \(N\) to quadratic in \(N\). Furthermore, the parity operator \(\Pi=\exp(i\pi\left(a^{\dagger}a+T_{z}+3Y/2\right))\) commutes with the Hamiltonian in Eq. (14). This means that all states \(|n,t,t_{z}\rangle\), where \(n\) is the number of photons, can be divided into states with positive or negative parity. Consequently, the size of the Hilbert space needed for exact diagonalization is divided by half.
In Fig. 5, the behavior of the photon population \(\langle a^{\dagger}a\rangle\) for a finite number of atoms \(N\) is compared with the results from the thermodynamic limit \(N\to\infty\). Note that we cannot compare \(\langle a\rangle\) since the spontaneous symmetry breaking only occurs in the thermodynamic limit, namely, for finite \(N\) it is always the case that \(\langle a\rangle=0\).
We note from the figure that as \(N\) increases the behavior of the photon population converges rapidly to the expected behavior in the thermodynamic limit. Moreover, both the smooth behavior of the second order phase transition and the sharp discontinuous behavior of the first order line can already be captured with \(N=50\). Then, in an experimental realization where thousands of atoms could be trapped, we expect that the phase transition could be easily characterized by the behavior of \(\langle a^{\dagger}a\rangle/N\).
The convergence of the results as \(N\) increases signals that the description using the generalized Holstein-Primakoff map and the one using the Gell-Mann matrices are equivalent to each other in the appropriate limit \(N\to\infty\).
## V Open system steady states
A potential experimental realization of the Hamiltonian in Eq. (14) can be achieved using the hyperfine states of an atom through cavity-assisted Raman transitions; this was proposed for realizing spin-1 light-interacting Hamiltonians in Ref. [39], later realized in Ref. [40], and could be extended to higher-spin systems as proposed in Ref. [15]. In practical situations involving cavities, dissipative processes are unavoidable. This raises the question of whether all the types of critical boundaries that we find in equilibrium would survive once incoherent losses are taken into account. In particular, how would the TP manifest itself in an open system?
We focus only on the leaking of photons out of the cavity with rate \(\kappa\). In the absence of counter-rotating terms (Tavis-Cummings line), even an infinitesimal value of \(\kappa\) would suppress the dissipative phase transition [22]. Hence we focus exclusively on the dissipative phase transition into the SRA. For simplicity, we consider \(g_{1}=g_{2}=g\) and \(\delta=0\). This means that \(\lambda_{1}=\lambda_{2}\) and the light
Figure 5: Photon population as a function of \(\lambda_{+}\) for different atom number \(N\) near a phase transition into the SRA phase. In (a) the transition is of second order with \(\gamma=0.8\), while in (b) the transition is of first order with \(\gamma=0.6\). In both cases, we consider \(\lambda_{1}=\lambda_{2}=\lambda_{+}/2\) and \(\delta=0\). The dashed line shows the mean field behavior in the thermodynamic limit. We consider a photon cutoff of \(N_{ph}=100\).
matter interaction is reduced to a single parameter; to keep a consistent notation, we choose that parameter to be \(\lambda_{+}=2\lambda_{1}=2\lambda_{2}\).
### Master equation
Since the complete expressions for \(C_{1}\) and \(C_{2}\) in Eq. (12) will be used as constraints, in this case it is simpler to consider the Hamiltonian in terms of the Gell-Mann matrices as in Eq. (11). The open-system dynamics are described by the Lindblad equation in the Heisenberg picture:
\[\frac{d}{dt}\mathcal{A}=i[H,\mathcal{A}]+\kappa(2a^{\dagger}\mathcal{A}a-\{a^ {\dagger}a,\mathcal{A}\})\,, \tag{16}\]
where \(\mathcal{A}\) represents any operator of interest. We can obtain a system of coupled differential equations by computing Eq. (16) for all \(\Lambda_{i}\)'s and \(a\):
\[\frac{d}{dt}\langle a\rangle = -i(\omega-i\kappa)\langle a\rangle-ig(\langle\Lambda_{1}\rangle+ \gamma\langle\Lambda_{6}\rangle)\] \[\frac{d}{dt}\langle\Lambda_{1}\rangle = -\Omega\langle\Lambda_{2}\rangle+g\gamma(\langle a\rangle+\langle a ^{\dagger}\rangle)\langle\Lambda_{5}\rangle\] \[\frac{d}{dt}\langle\Lambda_{2}\rangle = \Omega\langle\Lambda_{1}\rangle-2g(\langle a\rangle+\langle a^{ \dagger}\rangle)\langle\Lambda_{3}\rangle\] \[-g\gamma(\langle a\rangle+\langle a^{\dagger}\rangle)\langle \Lambda_{4}\rangle\] \[\frac{d}{dt}\langle\Lambda_{3}\rangle = 2g(\langle a\rangle+\langle a^{\dagger}\rangle)\langle\Lambda_{ 2}\rangle-g\gamma(\langle a\rangle+\langle a^{\dagger}\rangle)\langle\Lambda_ {7}\rangle\] \[\frac{d}{dt}\langle\Lambda_{4}\rangle = -2\Omega\langle\Lambda_{5}\rangle-g(\langle a\rangle+\langle a^{ \dagger}\rangle)\langle\Lambda_{7}\rangle\] \[+g\gamma(\langle a\rangle+\langle a^{\dagger}\rangle)\langle \Lambda_{2}\rangle\] \[\frac{d}{dt}\langle\Lambda_{5}\rangle = 2\Omega\langle\Lambda_{4}\rangle+g(\langle a\rangle+\langle a^{ \dagger}\rangle)\langle\Lambda_{6}\rangle\] \[-g\gamma(\langle a\rangle+\langle a^{\dagger}\rangle)\langle \Lambda_{1}\rangle\] \[\frac{d}{dt}\langle\Lambda_{6}\rangle = -\Omega\langle\Lambda_{7}\rangle-g(\langle a\rangle+\langle a^{ \dagger}\rangle)\langle\Lambda_{5}\rangle\] \[\frac{d}{dt}\langle\Lambda_{7}\rangle = \Omega\langle\Lambda_{6}\rangle+g(\langle a\rangle+\langle a^{ \dagger}\rangle)\langle\Lambda_{4}\rangle\] \[+g\gamma(\langle a\rangle+\langle a^{\dagger}\rangle)\langle \Lambda_{3}\rangle\] \[-\sqrt{3}g\gamma(\langle a\rangle+\langle a^{\dagger}\rangle) \langle\Lambda_{8}\rangle\] \[\frac{d}{dt}\langle\Lambda_{8}\rangle = \sqrt{3}g\gamma(\langle a\rangle+\langle a^{\dagger}\rangle) \langle\Lambda_{7}\rangle\,, \tag{17}\]
where we have taken the expectation value on both sides of each equation. Note that we have taken the mean field approximation where expectation values of the form \(\langle a\Lambda_{i}\rangle\) are approximated by \(\langle a\rangle\langle\Lambda_{i}\rangle\). This approximation has proven to be very effective in open Dicke-like systems when working in the thermodynamic limit \(N\rightarrow\infty\)[20]. Consequently, for all the following results we will always consider the system in the thermodynamic limit. We have also rescaled the expectation values as \(\langle\Lambda_{i}\rangle/N\rightarrow\langle\Lambda_{i}\rangle\) and \(\langle a\rangle/\sqrt{N}\rightarrow\langle a\rangle\).
Since we will focus only on the steady-state properties of the system, we set all equations in Eq. (17) equal to zero. Two important results follow from the steady-state equations. First, the steady state expectation value of all the antisymmetric \(\Lambda_{i}\) operators vanishes, namely, \(\langle\Lambda_{2}\rangle=\langle\Lambda_{5}\rangle=\langle\Lambda_{7}\rangle=0\); and second, although there are still seven real variables to determine (note that \(\langle a\rangle\) counts as two variables as it is generally complex), only five independent equations remain. After algebraic manipulation of Eqs. (17), it can be shown that the two additional constraints are given by:
\[A=\sum_{j}\langle\Lambda_{j}\rangle\langle\Lambda_{j}\rangle,\quad B=\sum_{j,k,l}d_{jkl}\langle\Lambda_{j}\rangle\langle\Lambda_{k}\rangle\langle\Lambda_{l} \rangle\,, \tag{18}\]
where \(A\) and \(B\) are time independent, i.e., \(dA/dt=0=dB/dt\). Hence \(A\) and \(B\) are two constants determined by the initial conditions. It is clear that these two additional constraints are a manifestation of the Casimir invariants in Eq. (12). This is a similar situation to what happens in the conventional open Dicke model where the extra constraint needed arises from the conservation of the total spin length (\(SU(2)\) Casimir invariant). In the thermodynamic limit, and in the totally symmetric representation where \(p=N\), the eigenvalues \(c_{1}\) and \(c_{2}\) are given by:
\[\frac{c_{1}}{N^{2}}\approx\frac{1}{3},\quad\frac{c_{2}}{N^{3}}\approx\frac{1}{ 9}\,. \tag{19}\]
Since rescaling \(\langle\Lambda_{j}\rangle\to D\langle\Lambda_{j}\rangle\), with \(D\) a time-independent constant, does not change Eq. (18), we can define \(A=D^{2}c_{1}/N^{2}\) and \(B=D^{3}c_{2}/N^{3}\). Here we choose \(D=2\) such that \(A=4/3\) and \(B=8/9\). Now that we have a complete set of algebraic equations we can solve for all possible steady states which can be broadly divided into four categories (three normal phases and one superradiant phase) as shown in Table 1.
Although we find four categories of steady states, this does not mean that all of them are stable attractors. In order to study the stability of each steady state, we can simulate the dynamics of the system of differential equations in Eq. (17) starting from slightly perturbed states and check whether the dynamics lead the system back to the same steady state. This is illustrated in Fig. 6, where we initialize the system in a slightly perturbed state with respect to different normal phases. For the parameters in Fig. 6(a), the NP3 phase is stable and the system rapidly returns to this state after it is slightly perturbed. By contrast, in (b) the perturbation causes the system to
\begin{table}
\begin{tabular}{|c|c|} \hline Phase & Expectation values \\ \hline Normal phase 1 (NP1) & \(\langle a\rangle=0\), \(\langle P_{11}\rangle=1\) \\ & \(\langle\Lambda_{1}\rangle=\langle\Lambda_{4}\rangle=\langle\Lambda_{6}\rangle=0\) \\ \hline Normal phase 2 (NP2) & \(\langle a\rangle=0\), \(\langle P_{22}\rangle=1\) \\ & \(\langle\Lambda_{1}\rangle=\langle\Lambda_{4}\rangle=\langle\Lambda_{6}\rangle=0\) \\ \hline \hline Normal phase 3 (NP3) & \(\langle a\rangle=0\), \(\langle P_{33}\rangle=1\) \\ & \(\langle\Lambda_{1}\rangle=\langle\Lambda_{4}\rangle=\langle\Lambda_{6}\rangle=0\) \\ \hline Superradiant phase (SR) & \(\langle a\rangle\neq 0\), \(\langle\Lambda_{1}\rangle\),\(\langle\Lambda_{4}\rangle\),\(\langle\Lambda_{6}\rangle\neq 0\) \\ \hline \end{tabular}
\end{table}
Table 1: Different steady state phases.
evolve away from the unstable NP3 phase into a stable SR phase.
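This perturb-and-evolve test is straightforward to reproduce numerically. The sketch below integrates an equivalent form of the mean-field equations (17), in which the rescaled atomic expectation values are obtained from a single 3x3 matrix \(\rho\) evolving under the mean-field Hamiltonian of Eq. (11) with the field treated as the c-number \(\langle a\rangle\); all parameter values are illustrative choices, not those used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gell-Mann matrices entering Eq. (11) (cf. Appendix D).
L1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
L3 = np.diag([1.0, -1.0, 0.0]).astype(complex)
L6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
L8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3)

omega, Omega, kappa, gamma = 1.0, 1.0, 0.1, 0.8    # illustrative parameters
lam_plus = 1.4
g = lam_plus * np.sqrt(omega * Omega) / 2          # lambda_+ = 2g/sqrt(omega*Omega) for g1 = g2 = g
coupling = L1 + gamma * L6

def rhs(t, y):
    a = y[0] + 1j * y[1]
    rho = (y[2:11] + 1j * y[11:20]).reshape(3, 3)
    # field: d<a>/dt = -i(omega - i kappa)<a> - i g (<L1> + gamma <L6>)
    da = -1j * (omega - 1j * kappa) * a - 1j * g * np.trace(rho @ coupling).real
    # atoms: single-atom mean-field Hamiltonian with the field as a c-number
    h_at = 0.5 * Omega * (np.sqrt(3) * L8 + L3) + g * 2 * a.real * coupling
    drho = -1j * (h_at @ rho - rho @ h_at)
    return np.concatenate(([da.real, da.imag], drho.real.ravel(), drho.imag.ravel()))

# initial state: slightly perturbed NP3 phase (all atoms in |3>), as in Fig. 6
rho0 = np.zeros((3, 3), dtype=complex)
rho0[2, 2] = 1.0
a0 = 0.1 + 0.01j
y0 = np.concatenate(([a0.real, a0.imag], rho0.real.ravel(), rho0.imag.ravel()))
sol = solve_ivp(rhs, (0, 2000), y0, rtol=1e-8, atol=1e-10)
a_ss = sol.y[0, -1] + 1j * sol.y[1, -1]
print("late-time |<a>|:", abs(a_ss))   # ~0 if NP3 is stable, finite if it has become unstable
```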
### Dissipative phase diagram
In Fig. 7, a phase diagram with all the stable steady states is presented for \(\kappa/\omega=0.1\). The first key thing to notice is that while the NP1 phase is always unstable, both NP2 and NP3 have regions where they are stable. Specifically, we note that for \(\gamma>1\), the NP2 phase is always unstable regardless of the value of \(\lambda_{+}\). This behavior is easy to understand using the schematics in Fig. 1. If \(\gamma>1\), the coupling between states \(|3\rangle\) and \(|2\rangle\) is always stronger than the coupling with state \(|1\rangle\). Additionally, state \(|1\rangle\) has the highest energy of all. It follows that in the normal phase the system behaves like an effective two-level system and, as in the conventional spin-\(1/2\) Dicke model, the only stable normal phase is the one where all spins populate the lowest-energy state, in this case state \(|3\rangle\).
For \(\gamma<1\), on the other hand, all the richness of having three-level atoms can be exploited and we see a series of different stability regions. Of particular interest are the regions of bistability, where two different phases are stable and the final fate of the system depends entirely on the initial conditions. These bistable regions can contain two normal phases or one normal phase and one superradiant phase. The dependence on initial conditions in a bistable region is illustrated in Fig. 6(b) and (c) where, for the same set of parameters, different initial states lead the system to the superradiant phase in (b) and to the NP2 phase in (c).
The connection between having a TP in the closed system and the emergence of these bistable regions is very interesting. In particular, we note that the TP in the open system (signaled by a red star in Fig. 7) is the only point where all three bistability regions converge. Using the generalized Holstein-Primakoff mapping and by considering the small fluctuations above the steady state, the stability boundaries of the NP2 and NP3 phases can be found analytically (see Appendix F for more details) as:
\[\lambda_{+}=\frac{\sqrt{1+\kappa^{2}/\omega^{2}}}{\gamma}\to NP3\,,\] \[\lambda_{+}=\frac{\sqrt{1+\kappa^{2}/\omega^{2}}}{\sqrt{1-\gamma ^{2}}}\to NP2\,. \tag{20}\]
These two boundaries are represented by the solid and dashed lines, respectively, in Fig. 7. The TP is located in the intersection of these two boundaries, namely, (\(\lambda_{+TP}=\sqrt{2+2\kappa^{2}/\omega^{2}}\), \(\gamma_{TP}=1/\sqrt{2}\)). Similar to what happens in the conventional open Dicke model, increasing the cavity decay rate \(\kappa\) requires a higher light-matter interaction strength to reach the critical point (higher \(\lambda_{+}\)). Nonetheless, since the leaking of photons affects both atomic types of transitions in the same manner, the critical value of \(\gamma\) remains unchanged with respect to the closed system value.
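These closed-form boundaries are straightforward to evaluate; a minimal sketch, assuming \(\omega=1\) so that \(\kappa\) is measured in units of \(\omega\), is:

```python
import numpy as np

# Sketch: evaluate the stability boundaries of Eq. (20) and the open-system TP.
kappa_over_omega = 0.1
f = np.sqrt(1 + kappa_over_omega**2)

gamma = np.linspace(0.3, 0.99, 200)
lam_NP3 = f / gamma                    # NP3 becomes unstable above this line
lam_NP2 = f / np.sqrt(1 - gamma**2)    # NP2 becomes unstable above this line

gamma_TP = 1 / np.sqrt(2)
lam_TP = np.sqrt(2) * f                # intersection of the two boundaries
print(f"TP at (lambda_+, gamma) = ({lam_TP:.4f}, {gamma_TP:.4f})")
print("NP3 boundary at gamma = 0.8:", f / 0.8)
```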
Figure 6: Steady state stability. In (a) the system parameters are fixed to \(\gamma=0.8\) and \(\lambda_{+}=0.6\sqrt{2}\), while in (b) and (c) the parameters are set to \(\gamma=0.8\) and \(\lambda_{+}=\sqrt{2}\). In all panels the state is initialized in a normal phase but with a slightly perturbed initial photon population \(\langle a\rangle=0.1+0.01i\). In (a) and (b) the initial state is very close to the NP3 phase while in (c) the initial state is very close to the NP2 phase. In all panels \(\kappa/\omega=0.1\).
Figure 7: Phase diagram of the stable steady states in the \(\lambda_{+}\)-\(\gamma\) plane for \(\kappa/\omega=0.1\). The white solid line represents the stability boundary of the NP3 phase, while white dashed and dotted lines bound the stability region of the NP2 phase. The TP, signaled by a red star, is located at the intersection of two of the stability boundaries.
In order to characterize the tricritical behavior in the open system and compare it to that of the closed system where we only have the NP3 and SR phases, we consider horizontal cuts (fixed \(\gamma\)) of the phase diagram in Fig. 7 and vary \(\lambda_{+}\). For each value of \(\lambda_{+}\) we start the dynamics in a state slightly perturbed from the NP3 phase and then let the system evolve until it reaches the steady state. The steady-state values of \(|\langle a\rangle|\) are shown in Fig. 8.
We note that as we sweep \(\lambda_{+}\) for \(\gamma\geq\gamma_{TP}\), the change of the NP3 phase going from stable to unstable is signaled by a smooth change in \(\langle a\rangle\) from zero to non-zero values. For \(\gamma\leq\gamma_{TP}\), on the other hand, the change in the expectation value of \(\langle a\rangle\) is discontinuous, as we would expect for a first-order transition. This change from continuous to discontinuous behavior confirms the nature of the tricritical point. Note that since all points in Fig. 8 are obtained from an initial state very close to the NP3 phase, the critical points should follow Eq. (20), as illustrated by the gray vertical dashed lines; this again shows the equivalence between the generalized Holstein-Primakoff map and the use of the Gell-Mann matrices. In a similar fashion, by choosing different initial states we could take a look at all the other available steady states.
## VI Conclusion
We have presented a thorough study of the tricritical Dicke model in both the closed and the open setups. In equilibrium, the tunability of the system allows studying not only the change of the phase transition order from second to first but also the spontaneous symmetry breaking of both discrete and continuous symmetries. The different phase transitions were classified according to their scaling exponents. Moreover, signals of these transitions were shown to be observable far from the thermodynamic limit (with less than a hundred atoms).
In the presence of cavity losses, the system develops a series of regions of bistability with all of them converging at the tricritical point. Additionally, the NP2 phase becomes stable in a large region of the parameter space for \(\gamma\leq\gamma_{TP}\). The richness of the non-equilibrium phase diagram allows for the potential preparation of desired steady states through clever choices of initial states and/or parameter quenching.
Both in the closed and open setup there is an agreement between using the generalized Holstein-Primakoff map and the description using the Gell-Mann matrices in the appropriate limit \(N\to\infty\). Nonetheless, more than just being equivalent, the two approaches are complementary as different levels of information about the system can be accessed through each one of them.
Although this system could be realized using Raman transitions as mentioned above, an interesting future research direction could be to use other platforms where an equivalent system might be realized to explore these interesting steady states. There are already various platforms where a Dicke-like Hamiltonian can be simulated [41; 42; 43], and modifications in those setups might lead to Hamiltonians of the form of Eq. (1). For example, in a spin-orbit coupled BEC with spin-1 atoms, tricritical points have been reported [44]. In that case, the motional degrees of freedom of the atoms are the analog of the light mode in our setup [45]. Then, if a loss process equivalent to the photons leaking from the cavity can be engineered in such a platform, various interesting magnetic steady states could be explored.
###### Acknowledgements.
We acknowledge support from the US NSF PHY-2207283 and the Welch Foundation (Grant No. C-1669).
Figure 8: Steady state value of \(|\langle a\rangle|\) as a function of \(\lambda_{+}\) for different values of \(\gamma\). Each value of \(\langle a\rangle\) is found by numerically integrating the set of differential equations in Eq. (17) for a very long time \(t\omega=5000\). For all data points the initial state is slightly perturbed from the NP3 phase with \(\langle a\rangle=0.1+0.01i\). The vertical dashed gray lines represent the value of \(\lambda_{+}\) where we expect the NP3 phase to become unstable for each \(\gamma\) according to Eq. (20). We set \(\kappa/\omega=0.1\) for all points.
## Appendix A Generalized Holstein-Primakoff mapping
First, we start by expanding \(\Theta\) presented in Eq. (3):
\[\Theta\approx\sqrt{N}\beta-\frac{1}{2\beta}(\beta_{1}d_{1}^{\dagger}+\beta_{1}^{ *}d_{1}+\beta_{2}d_{2}^{\dagger}+\beta_{2}^{*}d_{2})-\frac{1}{2\beta\sqrt{N}} \left[(d_{1}^{\dagger}d_{1}+d_{2}^{\dagger}d_{2})+\frac{(\beta_{1}d_{1}^{ \dagger}+\beta_{1}^{*}d_{1}+\beta_{2}d_{2}^{\dagger}+\beta_{2}^{*}d_{2})^{2} }{4\beta^{2}}\right]\,, \tag{11}\]
where we have kept only powers of \(N\) that allow us to recast the Hamiltonian in the form of Eq. (5). \(H_{0}\) is explicitly given in Eq. (6) and the derivatives of \(H_{0}\) with respect to the order parameters are given by equations
\[\frac{\partial H_{0}}{\partial\alpha} = \omega\alpha^{*}+g_{1}\beta_{1}^{*}\beta_{2}+g_{2}\beta_{2}^{*} \beta_{1}+g_{1}\gamma\beta\beta_{2}^{*}+g_{2}\gamma\beta\beta_{2}\] \[\frac{\partial H_{0}}{\partial\beta_{1}} = (2-\delta)\Omega\beta_{1}^{*}+g_{1}\alpha^{*}\beta_{2}^{*}+g_{2} \alpha\beta_{2}^{*}-\frac{g_{1}\gamma\beta_{1}^{*}}{2\beta}(\alpha\beta_{2}^{* }+\text{c.c})-\frac{g_{2}\gamma\beta_{1}^{*}}{2\beta}(\alpha\beta_{2}+\text{c.c})\] \[\frac{\partial H_{0}}{\partial\beta_{2}} = \Omega\beta_{2}^{*}+g_{1}\alpha\beta_{1}^{*}+g_{2}\alpha^{*} \beta_{1}^{*}+g_{1}\gamma\beta\alpha^{*}+g_{2}\gamma\beta\alpha-\frac{g_{1} \gamma\beta_{2}^{*}}{2\beta}(\alpha\beta_{2}^{*}+\text{c.c})-\frac{g_{2}\gamma \beta_{2}^{*}}{2\beta}(\alpha\beta_{2}+\text{c.c})\,. \tag{12}\]
Additionally, \(\frac{\partial H_{0}}{\partial\alpha^{*}}=\left(\frac{\partial H_{0}}{ \partial\alpha}\right)^{*}\), \(\frac{\partial H_{0}}{\partial\beta_{1}^{*}}=\left(\frac{\partial H_{0}}{ \partial\beta_{1}}\right)^{*}\), and \(\frac{\partial H_{0}}{\partial\beta_{2}^{*}}=\left(\frac{\partial H_{0}}{ \partial\beta_{2}}\right)^{*}\). Clearly, minimization of \(H_{0}\) requires that simultaneously all the six derivatives presented above are equal to zero. Expanding the terms proportional to \(\sqrt{N}\) we find that:
\[H_{1}=\left(\frac{\partial H_{0}}{\partial\alpha}\right)c+\left(\frac{\partial H_{0}}{\partial\alpha^{*}}\right)c^{\dagger}+\left(\frac{\partial H_{0}}{\partial\beta_{1}}\right)d_{1}+\left(\frac{\partial H_{0}}{\partial\beta_{1}^{*}}\right)d_{1}^{\dagger}+\left(\frac{\partial H_{0}}{\partial\beta_{2}}\right)d_{2}+\left(\frac{\partial H_{0}}{\partial\beta_{2}^{*}}\right)d_{2}^{\dagger}\,. \tag{13}\]
This means that as long as we always consider the values of the order parameters that minimize the energy then \(H_{1}=0\).
\(H_{2}\) can be rewritten in the general form given in Eq. (9), with \(\vec{v}=(c^{\dagger},d_{1}^{\dagger},d_{2}^{\dagger},c,d_{1},d_{2})\); the explicit values of the \(\mathcal{C}_{jk}\) are given by:
\[\mathcal{C}_{11}=\mathcal{C}_{44}=0,\quad\mathcal{C}_{14}=\mathcal{C}_{41}= \omega/2\]
\[\mathcal{C}_{22}=\mathcal{C}_{55}^{*}=-\frac{\gamma\beta_{1}^{2}}{8\beta^{3}}( g_{1}(\alpha\beta_{2}^{*}+\text{c.c})+g_{2}(\alpha\beta_{2}+\text{c.c}))\]
\[\mathcal{C}_{25}=\mathcal{C}_{52}^{*}=\frac{1}{2}\left((2-\delta)\Omega-\frac{g _{17}}{2\beta}\left(1+\frac{|\beta_{1}|^{2}}{2\beta^{2}}\right)(\alpha\beta_{2 }^{*}+\text{c.c})\right)-\frac{g_{27}}{4\beta}\left(1+\frac{|\beta_{1}|^{2}}{2 \beta^{2}}\right)(\alpha\beta_{2}+\text{c.c})\]
\[\mathcal{C}_{33}=\mathcal{C}_{66}^{*}=-\frac{\gamma\beta_{2}^{2}}{8\beta^{3}}( g_{1}(\alpha\beta_{2}^{*}+\text{c.c})+g_{2}(\alpha\beta_{2}+\text{c.c}))-\frac{ \gamma}{2\beta}(g_{1}\alpha\beta_{2}+g_{2}\alpha^{*}\beta_{2})\]
\[\mathcal{C}_{36}=\mathcal{C}_{63}^{*}=\frac{1}{2}\left(\Omega-\frac{g_{17}}{2 \beta}\left(2+\frac{|\beta_{1}|^{2}}{2\beta^{2}}\right)(\alpha\beta_{2}^{*}+ \text{c.c})\right)-\frac{g_{27}}{4\beta}\left(2+\frac{|\beta_{2}|^{2}}{2\beta^ {2}}\right)(\alpha\beta_{2}+\text{c.c})\]
\[\mathcal{C}_{12}=\mathcal{C}_{21}=\mathcal{C}_{45}^{*}=\mathcal{C}_{54}^{*}= \frac{1}{2}g_{2}\beta_{2}-\frac{\gamma}{4\beta}(g_{1}\beta_{2}\beta_{1}+g_{2} \beta_{2}^{*}\beta_{1})\]
\[\mathcal{C}_{15}=\mathcal{C}_{51}=\mathcal{C}_{42}^{*}=\mathcal{C}_{24}^{*}= \frac{1}{2}g_{1}\beta_{2}^{*}-\frac{\gamma}{4\beta}(g_{1}\beta_{2}\beta_{1}^{ *}+g_{2}\beta_{2}^{*}\beta_{1}^{*})\]
\[\mathcal{C}_{13}=\mathcal{C}_{31}=\mathcal{C}_{46}^{*}=\mathcal{C}_{64}^{*}= \frac{1}{2}\left(g_{1}\beta_{1}-\frac{g_{1}\gamma\beta_{2}^{2}}{2\beta}\right)+ \frac{1}{2}\left(g_{2}\gamma\beta-\frac{g_{27}|\beta_{2}|^{2}}{2\beta}\right)\]
\[\mathcal{C}_{16}=\mathcal{C}_{61}=\mathcal{C}_{43}^{*}=\mathcal{C}_{34}^{*}= \frac{1}{2}\left(g_{1}\gamma\beta-\frac{g_{17}|\beta_{2}|^{2}}{2\beta}\right)+ \frac{1}{2}\left(g_{2}\beta_{1}^{*}-\frac{g_{27}(\beta_{2}^{*})^{2}}{2\beta}\right)\]
\[\mathcal{C}_{23}=\mathcal{C}_{32}=\mathcal{C}_{56}^{*}=-\frac{g_{1}\gamma\alpha \beta_{1}}{4\beta}-\frac{g_{27}\alpha^{*}\beta_{1}}{4\beta}-\frac{g_{1} \gamma\beta_{2}\beta_{1}}{8\beta^{3}}(\alpha\beta_{2}^{*}+\text{c.c})-\frac{g_{ 27}\gamma\beta_{2}\beta_{2}}{8\beta^{3}}(\alpha\beta_{2}+\text{c.c})\]
\[\mathcal{C}_{26}=\mathcal{C}_{62}=\mathcal{C}_{35}^{*}=\mathcal{C}_{53}^{*}= \frac{1}{2}(g_{1}\alpha+g_{2}\alpha^{*})-\frac{g_{1}\gamma\alpha^{*}\beta_{1}}{4 \beta}-\frac{g_{27}\alpha\beta_{1}}{4\beta}-\frac{g_{1}\gamma\beta_{1}^{*}\beta_{ 2}^{*}}{8\beta^{3}}(\alpha\beta_{2}^{*}+\text{c.c})-\frac{g_{27}\gamma\beta_{ 2}\beta_{2}^{*}}{8\beta^{3}}(\alpha\beta_{2}+\text{c.c})\,. \tag{14}\]
## Appendix B Perturbation theory
In the SRA and SRB phases, the mean-field value of \(\langle a\rangle=\alpha\) is purely real and purely imaginary, respectively. Here we explicitly compute the critical boundaries for the SRA phase, but an identical procedure follows for the SRB phase.
First, replacing \(a\) and \(a^{\dagger}\) in Eq. (1) by their expectation value \(\alpha\), we obtain the mean-field Hamiltonian as
\[H_{\text{MF}}/N\Omega = \frac{1}{\lambda_{+}^{2}}\alpha^{2}+(1-\delta)P_{11}-\Omega P_{33} \tag{15}\] \[+\alpha\left(P_{12}+\gamma P_{23}+P_{21}+\gamma P_{32}\right),\]
where we have rescaled \(\frac{(g_{1}+g_{2})\alpha}{\Omega\sqrt{N}}\to\alpha\). Near the critical line (either a second-order phase transition or a TP), we
expect \(\alpha\) to be very small. We can then apply time-independent perturbation theory, treating the first line of Eq. (14) as the unperturbed Hamiltonian and the second line as the perturbation.
If we keep the perturbation expansion up to the sixth order, the mean-field energy will have the form
\[E_{\rm MF}/N\Omega=p_{0}+p_{1}\alpha^{2}+p_{2}\alpha^{4}+p_{3}\alpha^{6}\,, \tag{15}\]
where the \(p_{i}\) coefficients are explicitly given as
\[p_{0}=-1,\quad p_{1}=\tfrac{1}{\lambda_{+}^{2}}-\gamma^{2},\quad p _{2}=\gamma^{2}\left(\gamma^{2}-\tfrac{1}{2-\delta}\right)\,,\] \[p_{3}=\gamma^{2}\left(-2\gamma^{4}+\tfrac{3\gamma^{2}}{2-\delta }+\tfrac{\gamma^{2}}{(2-\delta)^{2}}-\tfrac{1}{(2-\delta)^{2}}\right)\,. \tag{16}\]
Using the standard Landau theory analysis [46], if \(p_{2}>0\), the line \(p_{1}=0\) represents a second-order boundary which leads to
\[\gamma^{2}=\frac{1}{\lambda_{+}^{2}}>\frac{1}{2-\delta}\]
and the TP is determined by the conditions \(p_{1}=p_{2}=0\) and \(p_{3}>0\), i.e.,
\[\gamma^{2}=\frac{1}{\lambda_{+}^{2}}=\frac{1}{2-\delta}\]
These are the results reported in Eq. (7) in the main text.
An alternative and straightforward method to find these two conditions was described in [15] for tridiagonal Hamiltonians like our TDM Hamiltonian. In order to use that result, we rewrite the Hamiltonian in the consistent notation
\[H_{\rm MF}/N\Omega=\left(\frac{1}{\lambda_{+}^{2}}\alpha^{2}+1\right)\mathbb{ I}+\alpha d+h\,, \tag{17}\]
here \(\mathbb{I}\) is the 3 by 3 identity matrix, and the \(d\) and \(h\) matrices are defined as
\[d=\begin{pmatrix}0&1&0\\ 1&0&\gamma\\ 0&\gamma&0\end{pmatrix},\quad h=\begin{pmatrix}(2-\delta)&0&0\\ 0&1&0\\ 0&0&0\end{pmatrix}\,. \tag{18}\]
In this notation, the critical conditions are given by
\[|d_{k,k-1}|^{2}=\frac{1}{\lambda_{+}^{2}}h_{k,k},\quad\text{for}\ k=2,3\,. \tag{19}\]
This yields the two critical equations \(\gamma^{2}=1/\lambda_{+}^{2}\) and \(\lambda_{+}^{2}=2-\delta\) as presented in Eq. (7). These two constraints are equivalent to \(p_{1}=0\) and \(p_{2}=0\), respectively. A similar procedure but using \(\langle a\rangle=i\alpha\), leads to Eq. (8) for the SRB phase.
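The same boundaries can also be checked by brute force: minimize the lowest eigenvalue of the compact 3x3 mean-field Hamiltonian above over a real \(\alpha\) and observe where the minimizing \(\alpha\) departs from zero. The following sketch uses illustrative parameters.

```python
import numpy as np

# Sketch: numerical Landau-type analysis of the mean-field Hamiltonian in the
# compact 3x3 form above, with d and h as defined there.
delta, gamma = 0.0, 0.8
d = np.array([[0, 1, 0], [1, 0, gamma], [0, gamma, 0]], dtype=float)
h = np.diag([2 - delta, 1.0, 0.0])

def ground_energy(alpha, lam_plus):
    M = (alpha**2 / lam_plus**2 + 1) * np.eye(3) + alpha * d + h
    return np.linalg.eigvalsh(M)[0]

alphas = np.linspace(0, 1.5, 601)
for lam in (1.1, 1.3, 1.5):
    e = [ground_energy(a, lam) for a in alphas]
    print(f"lambda_+ = {lam:.2f}: minimizing alpha = {alphas[int(np.argmin(e))]:.3f}")
# For these parameters the second-order boundary sits at lambda_+ = 1/gamma = 1.25,
# so the minimizing alpha is zero below it and nonzero above it.
```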
## Appendix C Bogoliubov transformation
Since the Hamiltonian in Eq. (9) is bilinear in the annihilation and creation operators we can diagonalize it by doing a Bogoliubov transformation.
First, we can rewrite \(H_{2}\) as:
\[H_{2}=\vec{v}\mathcal{M}\vec{v}^{\dagger}\,, \tag{20}\]
where, in our current notation, \(\mathcal{M}\) is given by:
\[\mathcal{M}=\begin{pmatrix}\mathcal{C}_{14}&\mathcal{C}_{15}&\mathcal{C}_{16}& \mathcal{C}_{11}&\mathcal{C}_{12}&\mathcal{C}_{13}\\ \mathcal{C}_{24}&\mathcal{C}_{25}&\mathcal{C}_{26}&\mathcal{C}_{21}&\mathcal{ C}_{22}&\mathcal{C}_{23}\\ \mathcal{C}_{34}&\mathcal{C}_{35}&\mathcal{C}_{36}&\mathcal{C}_{31}&\mathcal{ C}_{32}&\mathcal{C}_{33}\\ \mathcal{C}_{44}&\mathcal{C}_{45}&\mathcal{C}_{46}&\mathcal{C}_{41}&\mathcal{C}_{ 42}&\mathcal{C}_{43}\\ \mathcal{C}_{54}&\mathcal{C}_{55}&\mathcal{C}_{56}&\mathcal{C}_{51}&\mathcal{ C}_{52}&\mathcal{C}_{53}\\ \mathcal{C}_{64}&\mathcal{C}_{65}&\mathcal{C}_{66}&\mathcal{C}_{61}&\mathcal{C}_{6 2}&\mathcal{C}_{63}\end{pmatrix}. \tag{21}\]
Now, let us consider a Bogoliubov transformation \(T\) such that \(\vec{v}^{\dagger}=T\vec{u}^{\dagger}\), where \(\vec{u}=(a_{1}^{\dagger},a_{2}^{\dagger},a_{3}^{\dagger},a_{1},a_{2},a_{3})\) are a new set of annihilation and creation operators. Our objective is to find the transformation \(T\) such that \(H_{2}\) is diagonalized as in Eq. (10).
Since we require the operators in \(\vec{u}\) to follow canonical bosonic commutation relations, namely, \([a_{j},a_{k}^{\dagger}]=\delta_{jk}\), \([a_{j},a_{k}]=0\), and \([a_{j}^{\dagger},a_{k}^{\dagger}]=0\), it follows that \(T\) is constrained by
\[T^{\dagger}\Gamma T=\Gamma\,, \tag{22}\]
where \(\Gamma\) is a diagonal matrix with diagonal given by \((1,1,1,-1,-1,-1)\). Since we are looking for \(T\) such that \(T^{\dagger}\mathcal{M}T\) is diagonal with two-fold degenerate eigenvalues \(\varepsilon_{1}\), \(\varepsilon_{2}\), and \(\varepsilon_{3}\), and since \(\Gamma^{2}=\mathbb{I}\) and \(T^{\dagger}\Gamma=\Gamma T^{-1}\), it follows that \(T^{\dagger}\mathcal{M}T=T^{\dagger}\Gamma^{2}\mathcal{M}T=\Gamma T^{-1}\Gamma\mathcal{M}T\), which means that \(T^{-1}\Gamma\mathcal{M}T\) is a diagonal matrix with eigenvalues \(\varepsilon_{1}\), \(\varepsilon_{2}\), \(\varepsilon_{3}\), \(-\varepsilon_{1}\), \(-\varepsilon_{2}\), and \(-\varepsilon_{3}\). Then, by simply diagonalizing the matrix \(\Gamma\mathcal{M}\) we can find both the transformation matrix \(T\) and the corresponding eigenvalues.
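In practice this diagonalization is a one-line numerical operation. The sketch below illustrates it for a single squeezed mode, where the exact Bogoliubov frequency is known; the 6x6 case proceeds identically once \(\mathcal{M}\) is assembled from the coefficients of Appendix A. Note that with the symmetrized convention used here for \(\mathcal{M}\), each physical frequency is twice the corresponding positive eigenvalue of \(\Gamma\mathcal{M}\).

```python
import numpy as np

# Toy check of the symplectic diagonalization: H = w a^dag a + (l/2)(a^dag a^dag + a a)
# has the exact excitation energy sqrt(w^2 - l^2).
w, l = 1.0, 0.6
M = np.array([[w / 2, l / 2],      # convention H = v M v^dag with v = (a^dag, a)
              [l / 2, w / 2]])
Gamma = np.diag([1.0, -1.0])
eigvals = np.real(np.linalg.eigvals(Gamma @ M))   # come in a pair +/- eps
eps = 2 * np.max(eigvals)                         # physical frequency in this convention
print(eps, np.sqrt(w**2 - l**2))                  # both equal 0.8
```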
## Appendix D Gell-Mann matrices
The Gell-Mann matrices are a group of eight 3 by 3 matrices that generate the \(SU(3)\) algebra. They are explicitly defined as [33]:
\[\Lambda_{1}^{(k)} =\begin{pmatrix}0&1&0\\ 1&0&0\\ 0&0&0\end{pmatrix},\quad\Lambda_{2}^{(k)}=\begin{pmatrix}0&-i&0\\ i&0&0\\ 0&0&0\end{pmatrix}\] \[\Lambda_{3}^{(k)} =\begin{pmatrix}1&0&0\\ 0&-1&0\\ 0&0&0\end{pmatrix},\quad\Lambda_{4}^{(k)}=\begin{pmatrix}0&0&1\\ 0&0&0\\ 1&0&0\end{pmatrix}\] \[\Lambda_{5}^{(k)} =\begin{pmatrix}0&0&-i\\ 0&0&0\\ i&0&0\end{pmatrix},\quad\Lambda_{6}^{(k)}=\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix}\] \[\Lambda_{7}^{(k)} =\begin{pmatrix}0&0&0\\ 0&0&-i\\ 0&i&0\end{pmatrix},\quad\Lambda_{8}^{(k)}=\tfrac{1}{\sqrt{3}}\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-2\end{pmatrix}\,. \tag{23}\]
where the superscript \(k\) indicates that these are single-particle operators associated with the \(k\)th atom.
The commutation and anticommutation relations of the Gell-Mann matrices are given, respectively, by:
\[[\Lambda_{j}^{(n)},\Lambda_{k}^{(n)}]=2i\sum_{l}f_{jkl}\Lambda_{l}^{ (n)}\,, \tag{30}\] \[\{\Lambda_{j}^{(n)},\Lambda_{k}^{(n)}\}=\frac{4}{3}\delta_{jk} \mathbb{I}+2\sum_{l}d_{jkl}\Lambda_{l}^{(n)}\,. \tag{31}\]
Here \(f_{jkl}\) are totally antisymmetric structure constants and most of them vanish; for a list of the nonzero values of \(f_{jkl}\), see Ref. [34]. The \(d_{jkl}\) are totally symmetric constants defined explicitly by \(d_{jkl}=\frac{1}{4}\text{tr}(\{\Lambda_{j},\Lambda_{k}\}\Lambda_{l})\). These \(d_{jkl}\) constants are also used to define one of the Casimir operators, see Eq. (12). By construction, the collective operators \(\Lambda_{j}=\sum_{k=1}^{N}\Lambda_{j}^{(k)}\) used in the main text follow the same commutation/anticommutation relations given above.
## Appendix E Matrix Elements SU(3) Dicke states
Here we list how each operator acts on the generalized Dicke states \(|t,t_{z}\rangle\). Both \(T_{z}\) and \(Y\) are diagonal in this basis
\[T_{z}|t,t_{z}\rangle=t_{z}|t,t_{z}\rangle,\quad Y|t,t_{z}\rangle=\left(2t- \frac{2N}{3}\right)|t,t_{z}\rangle\,, \tag{32}\]
where \(N\) is the number of atoms. In the second relation, we have used the fact that \(y\), the eigenvalue of \(Y\), is not independent of the eigenvalue \(t\) in a totally symmetric representation.
Since \(T_{\pm}\) and \(T_{z}\) define an \(SU(2)\) subalgebra, the matrix elements of \(T_{\pm}\) are defined as
\[T_{\pm}|t,t_{z}\rangle=\sqrt{t(t+1)-t_{z}(t_{z}\pm 1)}|t,t_{z}\pm 1\rangle\,. \tag{33}\]
Finally, for a totally symmetric representation \(D(N,0)\), the matrix elements of the ladder operators \(U_{\pm}\) are given by [47]
\[U_{+}|t,t_{z}\rangle=\sqrt{(t-t_{z}+1)(N-2t)}|t+1/2,t_{z}-1/2 \rangle\,,\] \[U_{-}|t,t_{z}\rangle=\sqrt{(t-t_{z})(N-2t+1)}|t-1/2,t_{z}+1/2 \rangle\,.\]
With all these matrix elements being defined, we can construct a matrix representation of Eq. (14) and perform exact diagonalization.
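A minimal Python sketch of this construction is given below. It enumerates the generalized Dicke states \(|t,t_{z}\rangle\), fills the matrix elements listed above, and assembles the Hamiltonian of Eq. (14) with a photon cutoff; the parity block-diagonalization mentioned in the main text is omitted for brevity, and the parameter values are illustrative.

```python
import numpy as np
from scipy.sparse import identity, kron, lil_matrix
from scipy.sparse.linalg import eigsh

N, N_ph = 20, 60                                  # atom number and photon cutoff

# atomic basis |t, t_z> with t = 0, 1/2, ..., N/2 and t_z = -t, ..., t
basis = [(tt / 2, zz / 2) for tt in range(N + 1) for zz in range(-tt, tt + 1, 2)]
index = {state: i for i, state in enumerate(basis)}
dim_at = len(basis)                               # = (N+1)(N+2)/2

Tz = lil_matrix((dim_at, dim_at))
Y = lil_matrix((dim_at, dim_at))
Tp = lil_matrix((dim_at, dim_at))
Up = lil_matrix((dim_at, dim_at))
for (t, tz), i in index.items():
    Tz[i, i] = tz
    Y[i, i] = 2 * t - 2 * N / 3
    if (t, tz + 1) in index:                      # T_+ raises t_z
        Tp[index[(t, tz + 1)], i] = np.sqrt(t * (t + 1) - tz * (tz + 1))
    if (t + 0.5, tz - 0.5) in index:              # U_+ raises t, lowers t_z
        Up[index[(t + 0.5, tz - 0.5)], i] = np.sqrt((t - tz + 1) * (N - 2 * t))
Tm, Um = Tp.T, Up.T                               # all matrix elements are real

a = lil_matrix((N_ph, N_ph))                      # photon annihilation with cutoff
for n in range(1, N_ph):
    a[n - 1, n] = np.sqrt(n)
ad = a.T

# assemble Eq. (14); illustrative parameters
omega = Omega = 1.0
delta, gamma, g1, g2 = 0.0, 0.8, 0.5, 0.5
A, Ad = kron(a, identity(dim_at)), kron(ad, identity(dim_at))
TpU = kron(identity(N_ph), Tp + gamma * Up)
TmU = kron(identity(N_ph), Tm + gamma * Um)
H = (omega * Ad @ A
     + Omega * kron(identity(N_ph), (3 - delta) / 2 * Y + (1 - delta) * Tz)
     + g1 / np.sqrt(N) * (A @ TpU + Ad @ TmU)
     + g2 / np.sqrt(N) * (Ad @ TpU + A @ TmU))

evals = eigsh(H.tocsc(), k=4, which='SA', return_eigenvectors=False)
print(np.sort(evals))                             # lowest few eigenvalues
```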
## Appendix F Stability of normal phases
In order to find the stability boundaries of the normal phases we can use the \(H_{2}\) Hamiltonian in Eq. (9). For the NP3, \(H_{2}\) is simply given by
\[H_{2}=\omega c^{\dagger}c+2\Omega(1-\delta)d_{1}^{\dagger}d_{1} +\Omega d_{2}^{\dagger}d_{2}\] \[+g_{2}\gamma(c^{\dagger}d_{2}^{\dagger}+cd_{2})+g_{1}\gamma(c^{ \dagger}d_{2}+cd_{2}^{\dagger})\,. \tag{34}\]
Now, we can compute the Lindblad Eq. (16) for \(c\), \(d_{1}\), and \(d_{2}\) using \(H_{2}\) as a Hamiltonian. Note that since we are in a normal phase, here \(c=a\). The six resulting equations can be written in matrix form as
\[\frac{d\vec{x}}{dt}=\begin{pmatrix}-\kappa&\omega&0&0&0&0\\ -\omega&-\kappa&0&0&-2g\gamma&0\\ 0&0&0&2\Omega&0&0\\ 0&0&-2\Omega&0&0&0\\ 0&0&0&0&0&\Omega\\ -2g\gamma&0&0&0&-\Omega&0\end{pmatrix}\vec{x}\,, \tag{35}\]
where we have set \(g_{1}=g_{2}=g\) and \(\delta=0\) in order to be consistent with the main text discussion. Here \(\vec{x}^{T}=(\text{Re}(\langle c\rangle),\text{Im}(\langle c\rangle),\text{Re }(\langle d_{1}\rangle),\text{Im}(\langle d_{1}\rangle),\text{Re}(\langle d _{2}\rangle),\text{Im}(\langle d_{2}\rangle))\). The eigenvalues of the matrix above determine whether the NP3 phase represents a stable steady state or not. If the real part of all eigenvalues is negative the NP3 phase is stable; on the other hand, if at least one of the eigenvalues has a positive real part, the phase is unstable.
Since we are interested in the boundary where the phase becomes unstable, it is important to determine when the matrix develops a zero eigenvalue. Then, we set the determinant of the matrix to zero, leading to:
\[-4\Omega^{3}(4g^{2}\gamma^{2}\omega-\kappa^{2}\Omega-\omega^{2}\Omega)=0\,. \tag{36}\]
After some algebra, and since \(\lambda_{+}=\lambda_{1}+\lambda_{2}=2g/\sqrt{\omega\Omega}\) in this case, we obtain the boundary equation
\[\lambda_{+}^{2}=\frac{1+\frac{\kappa^{2}}{\omega^{2}}}{\gamma^{2}}\,, \tag{37}\]
which is given as the first equation in Eq. (20). As explained in Ref. [12], if we want to consider the case of the NP2 phase, we need to repeat the generalized Holstein-Primakoff mapping using \(|2\rangle\) as the reference state. Once the mapping is done, one can take the \(H_{2}\) Hamiltonian found for the NP2 phase and compute the Lindblad equations. Performing a similar stability analysis, the stability boundary of the NP2 phase is found to be
\[\lambda_{+}^{2}=\frac{1+\frac{\kappa^{2}}{\omega^{2}}}{1-\gamma^{2}}\,. \tag{38}\]
which is given as the second equation in Eq. (20).
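These stability criteria are also easy to verify numerically: build the 6x6 matrix above for given parameters and check the sign of the largest real part of its eigenvalues (the decoupled \(d_{1}\) block only contributes purely imaginary, marginal eigenvalues). A minimal sketch with illustrative parameters:

```python
import numpy as np

def np3_unstable(lam_plus, gamma, kappa, omega=1.0, Omega=1.0):
    """Sketch: NP3 is unstable if any eigenvalue of the linearized 6x6 matrix
    above has a positive real part (written here for g1 = g2 = g)."""
    g = lam_plus * np.sqrt(omega * Omega) / 2      # lambda_+ = 2g/sqrt(omega*Omega)
    c = 2 * g * gamma
    M = np.array([[-kappa,  omega,          0,         0,      0,      0],
                  [-omega, -kappa,          0,         0,     -c,      0],
                  [     0,      0,          0, 2 * Omega,      0,      0],
                  [     0,      0, -2 * Omega,         0,      0,      0],
                  [     0,      0,          0,         0,      0,  Omega],
                  [    -c,      0,          0,         0, -Omega,      0]])
    return np.max(np.real(np.linalg.eigvals(M))) > 1e-9

kappa, gamma = 0.1, 0.8
boundary = np.sqrt(1 + kappa**2) / gamma           # first line of Eq. (20) with omega = 1
print(np3_unstable(boundary - 0.05, gamma, kappa)) # False: still stable
print(np3_unstable(boundary + 0.05, gamma, kappa)) # True: unstable beyond the boundary
```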
|
2305.10475
|
Jet Diffusion versus JetGPT -- Modern Networks for the LHC
|
We introduce two diffusion models and an autoregressive transformer for LHC
physics simulations. Bayesian versions allow us to control the networks and
capture training uncertainties. After illustrating their different density
estimation methods for simple toy models, we discuss their advantages for Z
plus jets event generation. While diffusion networks excel through their
precision, the transformer scales best with the phase space dimensionality.
Given the different training and evaluation speed, we expect LHC physics to
benefit from dedicated use cases for normalizing flows, diffusion models, and
autoregressive transformers.
|
Anja Butter, Nathan Huetsch, Sofia Palacios Schweitzer, Tilman Plehn, Peter Sorrenson, Jonas Spinner
|
2023-05-17T18:00:00Z
|
http://arxiv.org/abs/2305.10475v2
|
# Jet Diffusion versus JetGPT -- Modern Networks for the LHC
###### Abstract
We introduce two diffusion models and an autoregressive transformer for LHC physics simulations. Bayesian versions allow us to control the networks and capture training uncertainties. After illustrating their different density estimation methods for simple toy models, we discuss their advantages for Z plus jets event generation. While diffusion networks excel through their precision, the transformer scales best with the phase space dimensionality. Given the different training and evaluation speed, we expect LHC physics to benefit from dedicated use cases for normalizing flows, diffusion models, and autoregressive transformers.
###### Contents
* 1 Introduction
* 2 Novel generative networks
* 2.1 Denoising Diffusion Probabilistic Model
* 2.2 Conditional Flow Matching
* 2.3 Autoregressive Transformer
* 3 Toy models and Bayesian networks
* 4 LHC events
* 5 Outlook
## 1 Introduction
The future of LHC physics lies in a systematic and comprehensive understanding of all aspects of the recorded data in terms of fundamental theory. This can be achieved through simulation-based analyses, applying and adapting modern data science methods. As obvious from the name, this method relies on a fast and precise simulation chain, starting with the hard scattering evaluated for a given Lagrangian, to fast detector simulations. Because LHC physics is driven by fundamental questions, these simulations have to be based on first principles, mere modeling would not allow us to extract relevant or interesting information from the data. Moreover, for theory predictions to not become a limiting factor to the entire LHC program, this simulation chain has to be (i) precise, (ii) fast, and (iii) flexible.
Modern machine learning (ML) has shown great potential to transform LHC analyses and simulations [1, 2]. The leading ML-tools for LHC simulations are, obviously, generative networks, which combine unsupervised density estimation over phase space with a fast sampling into the learned density. The list of tasks where (generative) neural networks can improve LHC simulations is long [1]. It starts with phase space integration [3, 4] and phase space sampling [5, 6, 7, 8, 9] of ML-encoded fast transition amplitudes [10, 11]. More advanced tasks include event subtraction [12], event unweighting [13, 14], or super-resolution enhancement [15, 16]. Prototypical applications which allow for a systematic evaluation of the network performance are NN-event generators [17, 18, 19, 20, 21, 22], NN-parton showers [23, 24, 25, 26, 27, 28], or detector simulations [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45]. Even when trained on first-principle simulations, such fast generators are easy to handle, efficient to ship, and powerful in amplifying statistical limitations of the training samples [46, 47].
Classical generative network architectures include variational autoencoders (VAEs) and generative adversarial networks (GANs). Both of them can generate high-dimensional data, assuming that the intrinsic dimensionality of the problem is much smaller than the apparent dimensionality of its representation. However, both have not been shown to fulfill the precision requirements of the LHC. Precise density estimation points to bijective generative networks, for instance normalizing flows [48, 49, 50, 51, 52] and their invertible network (INN) variant [53, 54, 55], which are limited to lower-dimensional sampling but sufficient at least for the hard process at the LHC.
LHC studies are consistently showing promising results for normalizing flows*, including transformative tasks, like probabilistic unfolding [56, 57, 58, 59, 60], inference from high-dimensional data [61], or the matrix element method [62]. One reason why INNs have established a new level of stability, control and uncertainty estimation, is the combination with Bayesian neural network (BNN) concepts [63, 64, 65, 66, 67, 68, 69], discriminator-training and reweighting [70, 71], and conditional training on augmented data. In the spirit of explainable AI, Bayesian generative networks allow us to understand how networks learn a given phase space distribution, in the case of INNs very similar to a traditional fit [69]. They systematically improve the precision of the underlying density estimation and track the effects from statistical and systematic limitations of the training data [22]. In this study we will first compare the successful INNs with new diffusion networks [72, 73, 74, 75, 76].
Footnote *: Note that in these applications autoregressive flows do _not_ outperform advanced coupling layers.
An aspect of neural networks which is often overlooked is that in precision LHC simulations the intrinsic dimension of the physics problem and the apparent dimension of its phase space are similar; for this dimensionality we need to encode all correlations [77, 78]. This implies that the network size, its training effort, and its performance tend to scale poorly with the number of particles and suffer from the curse of dimensionality. This is the motivation to also include an autoregressive [79, 80] transformer [81] in our study of modern generative
networks.
In this paper we will introduce two different diffusion models for particle physics applications in Secs. 2.1 and 2.2. We then introduce a new autoregressive, eventually pre-trained, transformer architecture (JetGPT) with an improved dimensionality scaling in Sec. 2.3. For all three networks we develop new Bayesian versions, to control their learning patterns and the uncertainty in the density estimation step. In Sec. 3 we illustrate all three models for two toy models, a two-dimensional linear ramp and a Gaussian ring. Finally, in Sec. 4 we use all three networks to generate \(Z+\)jets events for the LHC, the same reference process as used for uncertainty-aware INNs in Ref. [22]. This standard application allows us to quantify the advantages and disadvantages of the three new architectures and compare them to the INN benchmark.
## 2 Novel generative networks
At the LHC, generative networks are used for many simulation and analysis tasks, typically to describe virtual or real particles over their correlated phase space. The number of particles ranges from few to around 50, described by their energy and three momentum directions, sometimes simplified through on-shell conditions. Typical generative models for the LHC then map simple latent distributions to a phase space distribution encoded in the training data,
\[r\sim p_{\text{latent}}(r)\quad\longleftrightarrow\quad x\sim p_{\text{ model}}(x|\theta)\approx p_{\text{data}}(x)\,. \tag{1}\]
The last step represents the network training, for instance in terms of a variational approximation of \(p_{\text{data}}(x)\). The latent distribution is typically a standard multi-dimensional Gaussian,
\[p_{\text{latent}}(r)=\mathcal{N}(r;0,1)\,. \tag{2}\]
We focus on the case where the dimensionalities of the latent space \(r\) and the phase space \(x\) are identical, and there is no lower-dimensional latent representation. For these kinds of dimensionalities, bijective network architectures are promising candidates to encode precision correlations. For strictly symmetric bijective networks like INNs the forward and backward directions are inverse to each other, and the network training and evaluation is symmetric. However, this strict symmetry is not necessary to generate LHC events or configurations.
The success of normalizing flows or INNs for this task motivates a study of so-called diffusion or score-based models as an alternative. We will introduce two different models in Sec. 2.1 and 2.2, one with a discrete and one with a continuous time evolution. The main question concerning such diffusion models in LHC physics is whether their precision matches that of the INNs, how we can benefit from their superb expressivity, and whether those benefits outweigh the slower evaluation.
A major challenge for all network applications in LHC physics is the required precision in all correlations, and the corresponding power-law scaling with the number of phase space dimensions. This scaling property leads us to introduce and test an autoregressive transformer in Sec. 2.3. Again, the question is how precise and how expressive this alternative approach is, and whether the benefits justify the extra effort in setup and training.
Because fundamental physics applications require full control and a conservative and reliable uncertainty estimation of neural networks, we will develop Bayesian versions of all three generative models. This allows us to control the uncertainty in the density estimation and to derive an intuition how the different networks learn the phase space distribution of the data.
### Denoising Diffusion Probabilistic Model
#### Architecture
Denoising Diffusion Probabilistic Models (DDPM) [73] transform a model density by gradually adding Gaussian noise. This setup guarantees that the network links a non-trivial physics distribution to a Gaussian noise distribution, as illustrated in Eq.(1). The task of the reverse, generative process is to denoise this diffused data. The structure of diffusion models treats the transformation in Eq.(1) as a time-dependent process with \(t=0\dots T\),
\[p_{\text{model}}(x_{0}|\theta)\quad\stackrel{{\text{forward} \rightarrow}}{{\leftarrow\text{backward}}}\quad p_{\text{latent}}(x_{T}). \tag{3}\]
The DDPM discretizes the time series in Eq.(3) in the forward direction and encodes it into a neural network for the backward direction. We start with the forward process, which turns the physical distribution into noise. The corresponding joint distribution is factorized into discrete steps,
\[p(x_{1},...,x_{T}|x_{0}) =\prod_{t=1}^{T}p(x_{t}|x_{t-1})\] \[\text{with}\qquad p(x_{t}|x_{t-1}) =\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}). \tag{4}\]
Each conditional step \(p(x_{t}|x_{t-1})\) adds noise with variance \(\beta_{t}\) around the mean \(\sqrt{1-\beta_{t}}x_{t-1}\). The combination of \(x_{t}\) as a variable and the mean proportional to \(x_{t-1}\) implies that the successive steps can be combined as Gaussian convolutions and give the closed form
\[p(x_{t}|x_{0}) =p(x_{t}|x_{t-1})\int\prod_{i=1}^{t-1}dx_{i-1}\ p(x_{i}|x_{i-1})\] \[=\mathcal{N}(x_{t};\sqrt{1-\tilde{\beta}_{t}}x_{0},\tilde{\beta} _{t})\qquad\text{with}\qquad 1-\tilde{\beta}_{t}=\prod_{i=1}^{t}(1-\beta_{i}). \tag{5}\]
The scaling of the mean with \(\sqrt{1-\beta_{t}}\) prevents the usual addition of the variances and instead stabilizes the evolution of the Gaussian over the time series. The variance can be adapted through a schedule, where \(\tilde{\beta}_{t}\to 1\) for \(t\to T\) should be guaranteed. As suggested in Ref. [73] we choose a linear increase \(\beta_{t}=2\cdot 10^{-2}(t-1)/T+10^{-4}/T\).
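A minimal numerical sketch of this forward process (Python, with toy data in place of LHC events) makes the schedule and the closed form of Eq.(5) explicit:

```python
import numpy as np

T = 1000
t_grid = np.arange(1, T + 1)
beta = 2e-2 * (t_grid - 1) / T + 1e-4 / T        # linear schedule quoted above
signal = np.cumprod(1 - beta)                     # 1 - tilde(beta)_t = prod_i (1 - beta_i)
noise_var = 1 - signal                            # tilde(beta)_t

def diffuse(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ p(x_t | x_0) from the closed form of Eq.(5)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(signal[t - 1]) * x0 + np.sqrt(noise_var[t - 1]) * eps

x0 = np.random.default_rng(1).standard_normal((128, 3))   # toy stand-in for events
print(np.sqrt(signal[[0, T // 2 - 1, T - 1]]))   # x0 coefficient decays from ~1 to ~0
print(diffuse(x0, t=T).std())                    # late times are essentially pure noise
```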
As a first step towards reversing the forward diffusion, we apply Bayes' theorem on each slice defined in Eq.(4),
\[p(x_{t-1}|x_{t})=\frac{p(x_{t}|x_{t-1})p(x_{t-1})}{p(x_{t})}. \tag{6}\]
However, a closed-form expression for \(p(x_{t})\) only exists if conditioned on \(x_{0}\), as given in Eq.(5). Using \(p(x_{t}|x_{t-1},x_{0})=p(x_{t}|x_{t-1})\) we can instead compute the conditioned forward posterior as a Gaussian
\[p(x_{t-1}|x_{t},x_{0})=\frac{p(x_{t}|x_{t-1})p(x_{t-1}|x_{0})}{p(x_{t}|x_{0})}=\mathcal{N}(x_{t-1};\hat{\mu}(x_{t},x_{0}),\hat{\beta}_{t})\] \[\text{with}\quad\hat{\mu}(x_{t},x_{0})=\frac{\sqrt{1-\tilde{\beta}_{t-1}}\,\beta_{t}}{\tilde{\beta}_{t}}x_{0}+\frac{\sqrt{1-\beta_{t}}\,\tilde{\beta}_{t-1}}{\tilde{\beta}_{t}}x_{t}\quad\text{and}\quad\hat{\beta}_{t}=\frac{\tilde{\beta}_{t-1}}{\tilde{\beta}_{t}}\beta_{t}. \tag{7}\]
The actual reverse process starts with Gaussian noise and gradually transforms it into the phase-space distribution through the same discrete steps as Eq.(4), without knowing \(x_{0}\) a
priori. The corresponding generative network needs to approximate Eq.(6) for each step. We start by defining our modeled phase-space distribution
\[p_{\text{model}}(x_{0}|\theta)=\int dx_{1}...dx_{T}\;p(x_{0},...,x_{T}|\theta)\,, \tag{8}\]
and assume that the joint probability is again given by a chain of independent Gaussians,
\[p(x_{0},...,x_{T}|\theta) =p_{\text{latent}}(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t})\] \[\text{with}\quad p_{\theta}(x_{t-1}|x_{t}) =\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\sigma_{\theta}^{2}(x _{t},t))\,. \tag{9}\]
Here, \(\mu_{\theta}\) and \(\sigma_{\theta}\) are learnable parameters describing the individual conditional probability slices \(x_{t}\to x_{t-1}\). It turns out that in practice we can fix \(\sigma_{\theta}^{2}(x_{t},t)\to\sigma_{t}^{2}\)[73]. We will see that the advantage of the discrete diffusion model is that we can compare a Gaussian posterior, Eq.(7), with a reverse, learned Gaussian in Eq.(9) for each step.
#### Loss function
Ideally, we want to train our model by maximizing the posterior \(p_{\text{model}}(\theta|x_{0})\); however, this is not tractable. Using Bayes' theorem and dropping regularization and normalization terms this is equivalent to minimizing the corresponding negative log likelihood in Eqs.(8) and (9),
\[\left\langle-\log p_{\text{model}}(x_{0}|\theta)\right\rangle_{p_{\text{data}}}\] \[= -\int dx_{0}\;p_{\text{data}}(x_{0})\;\log\!\left(\int dx_{1}...dx_{T}\;p_{\text{latent}}(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t})\right)\] \[= -\int dx_{0}\;p_{\text{data}}(x_{0})\;\log\!\left(\int dx_{1}...dx_{T}\;p_{\text{latent}}(x_{T})p(x_{1},...,x_{T}|x_{0})\prod_{t=1}^{T}\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1})}\right)\] \[= -\int dx_{0}\;p_{\text{data}}(x_{0})\;\log\left\langle p_{\text{latent}}(x_{T})\prod_{t=1}^{T}\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1})}\right\rangle_{p(x_{1},...,x_{T}|x_{0})} \tag{10}\]
In the first step, we insert a one into our loss function by dividing Eq.(4) by itself. Using Jensen's inequality \(f(\left\langle x\right\rangle)\leq\left\langle f(x)\right\rangle\) for convex functions we find
\[\left\langle-\log p_{\text{model}}(x_{0}|\theta)\right\rangle_{p_ {\text{data}}} \leq -\int dx_{0}\;p_{\text{data}}(x_{0})\left\langle\log\!\left(p_{ \text{latent}}(x_{T})\prod_{t=1}^{T}\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}| x_{t-1})}\right)\right\rangle_{p(x_{1},...,x_{T})|x_{0})} \tag{11}\] \[= -\left\langle\log\!\left(p_{\text{latent}}(x_{T})\prod_{t=1}^{T} \frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1})}\right)\right\rangle_{p(x_{0},...,x_{T})}\] \[= \left\langle-\log p_{\text{latent}}(x_{T})-\sum_{t=2}^{T}\log \frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1})}-\log\frac{p_{\theta}(x_{0}| x_{1})}{p(x_{1}|x_{0})}\right\rangle_{p(x_{0},...,x_{T})}\] \[\equiv\mathcal{L}_{\text{DDPM}}\,.\]
As suggested above, we would like to compare each intermediate learned latent distribution \(p_{\theta}(x_{t-1}|x_{t})\) to the real posterior distribution \(p(x_{t-1}|x_{t},x_{0})\) of the forward process. To reverse
the ordering of the forward slice we use Bayes' theorem,
\[\mathcal{L}_{\text{DDPPM}}=\left\langle-\log p_{\text{latent}}(x_{T} )-\sum_{t=2}^{T}\log\frac{p_{\theta}(x_{t-1}|x_{t})p(x_{t-1}|x_{0})}{p(x_{t-1}| x_{t},x_{0})p(x_{t}|x_{0})}-\log\frac{p_{\theta}(x_{0}|x_{1})}{p(x_{1}|x_{0})} \right\rangle_{p(x_{0},\ldots,x_{T})}\] \[=\left\langle-\log p_{\text{latent}}(x_{T})-\sum_{t=2}^{T}\log \frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t-1}|x_{t},x_{0})}-\log\frac{p(x_{1}|x_{0 })}{p(x_{T}|x_{0})}-\log\frac{p_{\theta}(x_{0}|x_{1})}{p(x_{1}|x_{0})}\right\rangle _{p(x_{0},\ldots,x_{T})}\] \[=\left\langle-\log\frac{p_{\text{latent}}(x_{T})}{p(x_{T}|x_{0} )}-\sum_{t=2}^{T}\log\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t-1}|x_{t},x_{0})}- \log p_{\theta}(x_{0}|x_{1})\right\rangle_{p(x_{0},\ldots,x_{T})}\] \[=\sum_{t=2}^{T}\left\langle\text{KL}[p(x_{t-1}|x_{t},x_{0}),p_{ \theta}(x_{t-1}|x_{t})]\right\rangle_{p(x_{0},x_{t})}+\left\langle-\log p_{ \theta}(x_{0}|x_{1})\right\rangle_{p(x_{0},\ldots,x_{T})}+\text{const}\] \[\approx\sum_{t=2}^{T}\left\langle\text{KL}[p(x_{t-1}|x_{t},x_{0} ),p_{\theta}(x_{t-1}|x_{t})]\right\rangle_{p(x_{0},x_{t})} \tag{12}\]
Now, the KL-divergence compares the forward Gaussian step of Eq.(7) with the reverse, learned Gaussian in Eq.(9). The second sampled term will always be negligible compared to the first \(T-1\) terms. The KL-divergence between two Gaussians, with means \(\mu_{\theta}(x_{t},t)\) and \(\hat{\mu}(x_{t},x_{0})\) and variances \(\sigma_{t}^{2}\) and \(\hat{\beta}_{t}\), has the compact form
\[\mathcal{L}_{\text{DDPPM}}=\sum_{t=2}^{T}\left\langle\frac{1}{2\sigma_{t}^{2}} \left|\hat{\mu}-\mu_{\theta}\right|^{2}\right\rangle_{p(x_{0},x_{t})}+\text{ const.} \tag{13}\]
This implies that \(\mu_{\theta}\) approximates \(\hat{\mu}\). The sampling follows \(p(x_{0},x_{t})=p(x_{t}|x_{0})\,p_{\text{data}}(x_{0})\). We numerically evaluate this loss using the reparametrization trick on Eq.(5)
\[x_{t}(x_{0},\epsilon) =\sqrt{1-\bar{\beta}_{t}}x_{0}+\sqrt{\bar{\beta}_{t}}\epsilon \qquad\text{with}\qquad\epsilon\sim\mathcal{N}(0,1)\] \[\Leftrightarrow x_{0}(x_{t},\epsilon) =\frac{1}{\sqrt{1-\bar{\beta}_{t}}}\left(x_{t}-\sqrt{\bar{\beta}_{t}} \epsilon\right)\,. \tag{14}\]
These expressions provide, for example, a closed form for \(\hat{\mu}(x_{t},x_{0})\), but in terms of \(x_{t}\) and \(\epsilon\),
\[\hat{\mu}(x_{t},\epsilon)=\frac{1}{\sqrt{1-\beta_{t}}}\left(x_{t}(x_{0},\epsilon)-\frac{\beta_{t}}{\sqrt{\bar{\beta}_{t}}}\epsilon\right)\,. \tag{15}\]
For the reverse process we choose the same parametrization, but with a learned \(\epsilon_{\theta}(x_{t},t)\),
\[\mu_{\theta}(x_{t},t)\equiv\hat{\mu}(x_{t},\epsilon_{\theta})=\frac{1}{\sqrt{1-\beta_{t}}}\left(x_{t}-\frac{\beta_{t}}{\sqrt{\bar{\beta}_{t}}}\epsilon_{\theta}(x_{t},t)\right)\,. \tag{16}\]
Inserting both expressions into Eq.(13) gives us
\[\mathcal{L}_{\text{DDPM}}=\sum_{t=2}^{T}\left\langle\frac{1}{2\sigma_{t}^{2}}\frac{\beta_{t}^{2}}{(1-\beta_{t})\bar{\beta}_{t}}\left|\epsilon-\epsilon_{\theta}\left(\sqrt{1-\bar{\beta}_{t}}x_{0}+\sqrt{\bar{\beta}_{t}}\epsilon,t\right)\right|^{2}\right\rangle_{x_{0}\sim p_{\text{data}},\epsilon\sim\mathcal{N}(0,1)}\,. \tag{17}\]
The sum over \(t\) can be evaluated numerically as a sampling. We choose our model variance \(\sigma_{t}^{2}\equiv\hat{\beta}_{t}\) to follow the true variance. The prefactor in this form of the loss is often neglected in the training, but since we need a likelihood loss for the Bayesian setup and observe no drop in performance, we keep it.
The DDPM model belongs to the broad class of score-based models, and Eq.(13) can also be reformulated for the model to predict the score \(s(x_{t},t)=\nabla_{x_{t}}\log p(x_{t})\) of our latent space at time \(t\). It can be shown that \(s_{\theta}(x_{t},t)=-\epsilon_{\theta}(x_{t},t)/\sigma_{t}\)[82].
#### Training and sampling
The training algorithm for the DDPM is illustrated in Fig. 1. For a given phase-space point \(x_{0}\sim p_{\text{data}}(x_{0})\) drawn from the true phase space distribution, we draw a time step \(t\sim\mathcal{U}(1,T)\) from a uniform distribution as well as Gaussian noise \(\epsilon\sim\mathcal{N}(0,1)\) at each iteration. Using Eq.(14) we then calculate the diffused data \(x_{t}\) after \(t\) time steps, which is fed to the DDPM network together with the condition \(t\). The network encodes \(\epsilon_{\theta}\), and we compare this network prediction with the true Gaussian noise \(\epsilon\), multiplied by a \(t\)-dependent constant as given in the likelihood loss of Eq.(17). Note that we want to ensure that the network sees as many different time steps \(t\) for as many different phase-space points \(x_{0}\) as necessary to learn the step-wise reversed diffusion process, which is why we use a relatively simple residual dense network architecture trained over many epochs.
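To make this loop concrete, here is a minimal PyTorch sketch of one DDPM training iteration with the weighted noise loss of Eq.(17). The network `eps_model(x_t, t)`, the linear beta schedule, and the index conventions are our own illustrative assumptions rather than the actual setup of Tab. 1.

```python
import torch

T = 1000
beta = torch.linspace(1e-4, 2e-2, T)              # assumed linear noise schedule
alpha_bar = torch.cumprod(1.0 - beta, dim=0)      # prod_s (1 - beta_s)
beta_bar = 1.0 - alpha_bar                        # bar(beta)_t as used in Eq.(14)

def ddpm_train_step(eps_model, optimizer, x0):
    """One training iteration for a batch x0 ~ p_data, following Fig. 1 and Eq.(17)."""
    B = x0.shape[0]
    t = torch.randint(1, T, (B,))                              # uniform time step (0-indexed schedule)
    eps = torch.randn_like(x0)                                 # Gaussian noise eps ~ N(0, 1)
    ab = alpha_bar[t].unsqueeze(-1)
    bb = beta_bar[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + bb.sqrt() * eps                     # diffused data, Eq.(14)
    eps_pred = eps_model(x_t, t)                               # network prediction eps_theta(x_t, t)
    sigma2 = beta[t].unsqueeze(-1) * beta_bar[t - 1].unsqueeze(-1) / bb   # sigma_t^2 = hat(beta)_t
    w = beta[t].unsqueeze(-1)**2 / (2 * sigma2 * (1 - beta[t].unsqueeze(-1)) * bb)
    loss = (w * (eps - eps_pred)**2).mean()                    # t-dependent prefactor of Eq.(17)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```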
The sampling algorithm for the DDPM is shown in Fig. 2. We start by feeding the network the final timestep \(T\) and \(x_{T}\sim p_{\text{latent}}(x_{T})\) drawn from the Gaussian latent distribution. With the predicted \(\epsilon_{\theta}\) and drawn Gaussian noise \(z_{T-1}\sim\mathcal{N}(0,1)\) we can then calculate \(x_{T-1}\), a slightly less diffused version of \(x_{T}\). This procedure is repeated until we reach phase space and compute \(x_{0}\), where no additional Gaussian noise is added. Note that during sampling the model needs to predict \(\epsilon_{\theta}\) a total of \(T\) times, making the sampling process slower than for classic generative networks like VAEs, GANs, or INNs.
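A matching sketch of the sampling loop of Fig. 2, reusing the schedule and the hypothetical `eps_model` from the training sketch above, with the same simplified index conventions; no noise is added in the final step.

```python
@torch.no_grad()
def ddpm_sample(eps_model, shape):
    """Generate samples by running the learned reverse process from the latent space to phase space."""
    x = torch.randn(shape)                                          # x_T ~ p_latent = N(0, 1)
    for t in reversed(range(1, T)):
        t_vec = torch.full((shape[0],), t, dtype=torch.long)
        eps_pred = eps_model(x, t_vec)
        # posterior mean mu_theta(x_t, t) of Eq.(16)
        mean = (x - beta[t] / beta_bar[t].sqrt() * eps_pred) / (1.0 - beta[t]).sqrt()
        if t > 1:
            sigma = (beta[t] * beta_bar[t - 1] / beta_bar[t]).sqrt()   # sigma_t = hat(beta)_t^(1/2)
            x = mean + sigma * torch.randn_like(x)                     # add noise z_{t-1}
        else:
            x = mean                                                   # last step: no added noise
    return x
```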
Figure 1: DDPM training algorithm, following Ref. [73], with the loss derived in Eq.(17).
Figure 2: DDPM sampling algorithm, following Ref. [73].
#### Likelihood extraction
To calculate the model likelihood we can use Eq.(8) or its sampled estimate,
\[p_{\text{model}}(x_{0}|\theta)=\left\langle p_{\theta}(x_{0}|x_{1})\right\rangle_{ p(x_{1},\ldots,x_{T}|\theta)}, \tag{18}\]
but this is very inefficient. The problem is that \(p_{\theta}(x_{0}|x_{1})\) is a narrow distribution, essentially zero for almost all sampled \(x_{1}\). We can improve the efficiency by importance sampling and use instead
\[p_{\text{model}}(x_{0}|\theta) =\left\langle\frac{p(x_{1},\ldots,x_{T}|\theta)}{p(x_{1},\ldots, x_{T}|x_{0})}p_{\theta}(x_{0}|x_{1})\right\rangle_{p(x_{1},\ldots,x_{T}|x_{0})}\] \[=\left\langle\frac{p(x_{0},\ldots,x_{T}|\theta)}{p(x_{1},\ldots, x_{T}|x_{0})}\right\rangle_{p(x_{1},\ldots,x_{T}|x_{0})}. \tag{19}\]
This samples a diffusion process starting from \(x_{0}\) and evolving into the latent space, so it represents a likely forward and backward path. This means the integrand is no longer essentially zero most of the time.
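A sketch of the importance-sampled likelihood of Eq.(19) for a single event, again reusing the schedule and the hypothetical `eps_model` from above; one forward chain is simulated per importance sample, and the small index offsets between the sketch and Eq.(19) are glossed over.

```python
from torch.distributions import Normal

@torch.no_grad()
def ddpm_log_likelihood(eps_model, x0, n_chains=64):
    """Estimate log p_model(x0|theta) by importance sampling over forward chains, Eq.(19)."""
    x_prev = x0.expand(n_chains, -1).clone()                      # replicate the event x0
    log_w = torch.zeros(n_chains)
    for t in range(1, T):
        fwd = Normal((1.0 - beta[t]).sqrt() * x_prev, beta[t].sqrt())
        x_t = fwd.sample()                                        # forward step p(x_t | x_{t-1})
        log_w -= fwd.log_prob(x_t).sum(-1)                        # denominator p(x_1,...,x_T | x_0)
        t_vec = torch.full((n_chains,), t, dtype=torch.long)
        eps_pred = eps_model(x_t, t_vec)
        mean = (x_t - beta[t] / beta_bar[t].sqrt() * eps_pred) / (1.0 - beta[t]).sqrt()
        sigma = (beta[t] * beta_bar[t - 1] / beta_bar[t]).sqrt()
        log_w += Normal(mean, sigma).log_prob(x_prev).sum(-1)     # reverse factor p_theta(x_{t-1} | x_t)
        x_prev = x_t
    log_w += Normal(0.0, 1.0).log_prob(x_prev).sum(-1)            # latent factor p_latent(x_T)
    return torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(n_chains)))
```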
#### Bayesian DDPM
The key step in the training of generative networks is the density estimation over phase space, from which the network then samples. Like any neural network task, this density estimation comes with uncertainties, for instance from a limited amount of training data, a lack of model flexibility, or even training data which we know cannot be trusted. This means that the density estimation step of the generative network should assign an uncertainty to the extracted phase space density, ideally in the form of a second map over the target phase space. This problem has been tackled for bijective normalizing flows through a Bayesian network extension [69], which can be combined with other measures, like conditional training on augmented data [22].
The idea behind Bayesian networks is to train network weights as distributions and evaluate the network by sampling over these distributions. This will provide a central value and an uncertainty for the numerically defined network output [63, 64, 65].2 Because general MCMC-methods are expensive for large networks, we use variational inference [83] to learn Gaussian approximations for each weight distribution. Because of the non-linear nature of the network this does not mean that the network output has to come with a Gaussian uncertainty [68].
Footnote 2: We cannot emphasize often enough that Bayesian networks for uncertainty quantification have nothing to do with Bayesian inference.
We repeat the main steps in deriving the Bayesian loss for any neural network approximating, for instance, a density map \(\rho(x)\approx\rho_{\theta}(x)\) following Ref. [84]. The expectation value is defined as
\[\langle\rho\rangle(x)\equiv\langle\rho\,\rangle=\int d\rho\;\rho\;p(\rho) \qquad\text{with}\qquad p(\rho)=\int d\theta\;p(\rho|\theta)\,p(\theta|x_{ \text{train}})\,, \tag{20}\]
where we omit the \(x\)-dependence. We use the variational approximation to approximate
\[p(\rho)=\int d\theta\;p(\rho|\theta)\,p(\theta|x_{\text{train}})\approx\int d \theta\;p(\rho|\theta)\,q(\theta)\,, \tag{21}\]
where \(q(\theta)\) is also a function of \(x\). The variational approximation step requires us to minimize
\[\mathcal{L}_{\text{BNN}}=\text{KL}[q(\theta),p(\theta|x_{\text{train}})] =\left\langle\log\frac{q(\theta)}{p(\theta|x_{\text{train}})} \right\rangle_{q}\] \[=\int d\theta\;q(\theta)\,\log\frac{q(\theta)}{p(\theta|x_{\text{ train}})}\] \[=-\int d\theta\;q(\theta)\,\log p(x_{\text{train}}|\theta)+\text{ KL}[q(\theta),p(\theta)]+\text{const}\,, \tag{22}\]
where we use Bayes' theorem to transform the intractable \(p(\theta|x_{\text{train}})\), introducing the prior \(p(\theta)\) for the network weights. This so-called ELBO loss combines a likelihood loss with a regularization term, their relative size fixed by Bayes' theorem.
It turns out that for sufficiently deep networks we can choose \(q(\theta)\) as uncorrelated Gaussians per network weight [65], such that the training parameters are a set of means and standard deviations for each network weight. Compared to the deterministic network, its Bayesian version is twice the size, but automatically regularized, keeping the additional numerical effort minimal. While \(p(\theta)\), also chosen as a Gaussian, is formally defined as a prior, we emphasize that in our case the step from the prior to the posterior has nothing to do with Bayesian inference. The Gaussian width of \(p(\theta)\) can be treated as a network hyperparameter and varied to improve the numerical performance. We typically find that the result is stable under varying the width by several orders of magnitude, and width one works well.
The derivation of Eq.(22) can easily be extended to the density estimation step of a generative network, in the same way as for the Bayesian INN [69]. The Bayesian DDPM loss follows from the deterministic likelihood loss in Eqs.(11) and (17) by adding a sampling over \(\theta\sim q(\theta)\) and the regularization term,
\[\mathcal{L}_{\text{B-DDPM}}=\left\langle\mathcal{L}_{\text{DDPM}}\right\rangle _{\theta\sim q}+\text{KL}[q(\theta),p(\theta)]\,. \tag{23}\]
Switching a deterministic network into its Bayesian version involves two steps: (i) swap the deterministic layers for the corresponding Bayesian layers, and (ii) add the regularization term to the loss. For the latter, one complication arises: we estimate the complete loss from a dataset of \(N\) events in \(M\) batches, which means the likelihood term is summed and then normalized over the \(M\) batches, while the regularization term comes with the global prefactor \(1/N\).
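As an illustration of these two steps, the following is a minimal sketch of a variational Bayesian linear layer with factorized Gaussian weight distributions and a standard normal prior; the interface and initialization are our own simplifications, not the layers we actually use.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer with Gaussian weight posteriors q(theta) and prior p(theta) = N(0, 1)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w_mu = nn.Parameter(torch.randn(n_out, n_in) / math.sqrt(n_in))
        self.w_logsig2 = nn.Parameter(torch.full((n_out, n_in), -9.0))   # small initial width
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        # draw one weight configuration theta ~ q(theta) via the reparametrization trick
        w = self.w_mu + (0.5 * self.w_logsig2).exp() * torch.randn_like(self.w_mu)
        return F.linear(x, w, self.bias)

    def kl(self):
        # KL[q(theta), p(theta)] for factorized Gaussians and a standard normal prior
        sig2 = self.w_logsig2.exp()
        return 0.5 * (sig2 + self.w_mu**2 - 1.0 - self.w_logsig2).sum()

# per-batch loss of Eq.(23) for a training set of N events split into M batches:
#   loss = likelihood_loss_on_batch + sum(layer.kl() for layer in bayesian_layers) / N
```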
| hyperparameter | toy models | LHC events |
| --- | --- | --- |
| Timesteps | 1000 | 1000 |
| Time Embedding Dimension | – | 64 |
| # Blocks | 1 | 2 |
| Layers per Block | 8 | 5 |
| Intermediate Dimensions | 40 | 64 |
| # Model Parameters | 20k | 75k |
| LR Scheduling | one-cycle | one-cycle |
| Starter LR | \(10^{-4}\) | \(10^{-4}\) |
| Maximum LR | \(10^{-3}\) | \(10^{-3}\) |
| Epochs | 1000 | 1000, 3000, 10000 |
| Batch Size | 8192 | 8192, 8192, 4096 |
| # Training Events | 600k | 3.2M, 850k, 190k |
| # Generated Events | 1M | 1M, 1M, 1M |

Table 1: Training setup and hyperparameters for the Bayesian DDPM generator.
To evaluate the Bayesian network we need to again sample over the network weight distribution. This way we guarantee that the uncertainty of the network output can have any functional form. The number of samplings for the network evaluations can be chosen according to the problem. We choose 30 for all problems discussed in this work. To compare the Bayesian network output with a deterministic network output we can either go into the limit \(q(\theta)\to\delta(\theta-\theta_{0})\) or only evaluate the means of the network weight distributions.
Our network is implemented in PyTorch and uses Adam as optimizer. All hyperparameters are given in Tab. 1. As already mentioned we use a simple residual network which consists of multiple fully connected dense layers with SiLU activation functions. Within our setup a significant increase in performance is achieved when initializing the weights of the last layer in each block to zero.
### Conditional Flow Matching
#### Architecture
As an alternative, we study Conditional Flow Matching (CFM) [74, 75, 76]. Like the DDPM, it uses a time evolution to transform phase space samples into noise, so the reverse direction can generate samples as outlined in Eq.(3). Instead of a discrete chain of conditional probabilities, the time evolution of samples in the CFM framework follows a continuous ordinary differential equation (ODE)
\[\frac{dx(t)}{dt}=v(x(t),t)\qquad\text{with}\qquad x(t=0)=x_{0}\, \tag{24}\]
where \(v(x(t),t)\) is called the velocity field of the process. This velocity field can be linked to a probability density \(p(x,t)\) with the continuity equation
\[\frac{\partial p(x,t)}{\partial t}+\nabla_{x}\left[p(x,t)v(x,t)\right]=0. \tag{25}\]
These two equations are equivalent in the sense that for a given probability density path \(p(x,t)\) any velocity field \(v(x,t)\) describing the sample-wise evolution of Eq.(24) will be a solution of Eq.(25), and vice versa. Our generative model employs \(p(x,t)\) to transform a phase space distribution into a Gaussian latent distribution
\[p(x,t)\to\begin{cases}p_{\text{data}}(x)&t\to 0\\ p_{\text{latent}}(x)=\mathcal{N}(x;0,1)&t\to 1\.\end{cases} \tag{26}\]
The associated velocity field will allow us to generate samples by integrating the ODE of Eq.(24) from \(t=1\) to \(t=0\).
As for the DDPM, we start with a diffusion direction. We define the time evolution from a phase space point \(x_{0}\) to the standard Gaussian as
\[x(t|x_{0})=(1-t)x_{0}+t\epsilon\to\begin{cases}x_{0}&t\to 0\\ \epsilon\sim\mathcal{N}(0,1)&t\to 1\,\end{cases} \tag{27}\]
following a simple linear trajectory [76], after not finding better results with other choices. For given \(x_{0}\) we can generate \(x(t|x_{0})\) by sampling
\[p(x,t|x_{0})=\mathcal{N}(x;(1-t)x_{0},t). \tag{28}\]
This conditional time evolution is similar to the DDPM case in Eq.(5), and it gives us the complete probability path
\[p(x,t)=\int dx_{0}\,p(x,t|x_{0})\,p_{\text{data}}(x_{0})\;. \tag{29}\]
It fulfills the boundary conditions in Eq.(26),
\[p(x,0) =\int dx_{0}\,p(x,0|x_{0})\,p_{\text{data}}(x_{0})=\int dx_{0}\; \delta(x-x_{0})\,p_{\text{data}}(x_{0})=p_{\text{data}}(x)\] \[p(x,1) =\int dx_{0}\,p(x,1|x_{0})\,p_{\text{data}}(x_{0})=\mathcal{N}(x ;0,1)\int dx_{0}\,p_{\text{data}}(x_{0})=\mathcal{N}(x;0,1)\;. \tag{30}\]
From this probability density path we need to extract the velocity field. We start with the conditional velocity, associated with \(p(x,t|x_{0})\), and combine Eq.(24) and (27) to
\[\nu(x(t|x_{0}),t|x_{0})=\frac{d}{dt}\left[(1-t)x_{0}+t\epsilon\right]=-x_{0}+ \epsilon\;. \tag{31}\]
The linear trajectory leads to a time-constant velocity, which solves the continuity equation for \(p(x,t|x_{0})\) by construction. We exploit this fact to find the unconditional \(\nu(x,t)\)
\[\frac{\partial p(x,t)}{\partial t} =\int dx_{0}\;\frac{\partial p(x,t|x_{0})}{\partial t}\;p_{\text {data}}(x_{0})\] \[=-\int dx_{0}\;\nabla_{x}\left[\nu(x,t|x_{0})p(x,t|x_{0})\right]\, p_{\text{data}}(x_{0})\] \[=-\nabla_{x}\left[p(x,t)\int dx_{0}\;\frac{\nu(x,t|x_{0})p(x,t|x_ {0})p_{\text{data}}(x_{0})}{p(x,t)}\right]\] \[=-\nabla_{x}\left[p(x,t)\nu(x,t)\right]\;, \tag{32}\]
by defining
\[\nu(x,t)=\int dx_{0}\;\frac{\nu(x,t|x_{0})p(x,t|x_{0})p_{\text{ data}}(x_{0})}{p(x,t)}\;. \tag{33}\]
While the conditional velocity in Eq.(31) describes a trajectory between a normal distributed and a phase space sample \(x_{0}\) that is specified in advance, the aggregated velocity in Eq.(33) can evolve samples from \(p_{\text{data}}\) to \(p_{\text{latent}}\) and vice versa.
Like the DDPM model, the CFM model can be linked to score-based diffusion models; Ref. [74] derives a general relation between the velocity field and the score of a diffusion process, which for our linear trajectory reduces to \(s(x,t)=-\frac{1}{t}(x+(1-t)\nu(x,t))\).
#### Loss function
Encoding the velocity field in Eq.(33) is a simple regression task, \(\nu(x,t)\approx\nu_{\theta}(x,t)\). The straightforward choice for the loss is the mean squared error,
\[\mathcal{L}_{\text{FM}} =\left\langle[\nu_{\theta}(x,t)-\nu(x,t)]^{2}\right\rangle_{t,x \sim p(x,t)}\] \[=\left\langle\nu_{\theta}(x,t)^{2}\right\rangle_{t,x\sim p(x,t)}- \left\langle 2\nu_{\theta}(x,t)\nu(x,t)\right\rangle_{t,x\sim p(x,t)}+\text{ const}\;, \tag{34}\]
where the time is sampled uniformly over \(t\in[0,1]\). While we would want to sample \(x\) from the probability path given in Eq.(29) and learn the velocity field given in Eq.(33), neither of
those is tractable. However, it would be easy to sample from the conditional path in Eq.(28) and calculate the conditional velocity in Eq.(31). We rewrite the above loss in terms of the conditional quantities, so the first term becomes
\[\left\langle v_{\theta}(x,t)^{2}\right\rangle_{t,x\sim p(x,t)} =\left\langle\int dx\,v_{\theta}(x,t)^{2}\int dx_{0}\,p(x,t|x_{0})p_{\text{data}}(x_{0})\right\rangle_{t}\] \[=\left\langle v_{\theta}(x,t)^{2}\right\rangle_{t,x_{0}\sim p_{\text{data}},x\sim p(x,t|x_{0})}\] \[=\left\langle v_{\theta}(x(t|x_{0}),t)^{2}\right\rangle_{t,x_{0}\sim p_{\text{data}},\epsilon} \tag{35}\]
Using Eq.(33) we can rewrite the second loss term as
\[-2\left\langle v_{\theta}(x,t)v(x,t)\right\rangle_{t,x\sim p(x,t)} =-2\left\langle\int dx\;p(x,t)v_{\theta}(x,t)\,\frac{\int dx_{0}\,v(x,t|x_{0})p(x,t|x_{0})p_{\text{data}}(x_{0})}{p(x,t)}\right\rangle_{t}\] \[=-2\left\langle\int dx\,dx_{0}\;v_{\theta}(x,t)\,v(x,t|x_{0})\,p(x,t|x_{0})\,p_{\text{data}}(x_{0})\right\rangle_{t}\] \[=-2\left\langle v_{\theta}(x,t)\,v(x,t|x_{0})\right\rangle_{t,x_{0}\sim p_{\text{data}},x\sim p(x,t|x_{0})}\] \[=-2\left\langle v_{\theta}(x(t|x_{0}),t)\,v(x(t|x_{0}),t|x_{0})\right\rangle_{t,x_{0}\sim p_{\text{data}},\epsilon}\,. \tag{36}\]
The (conditional) Flow Matching loss of Eq.(34) then becomes
\[\mathcal{L}_{\text{CFM}} =\left\langle\left[v_{\theta}(x(t|x_{0}),t)-v(x(t|x_{0}),t|x_{0})\right]^{2}\right\rangle_{t,x_{0}\sim p_{\text{data}},\epsilon}\] \[=\left\langle\left[v_{\theta}((1-t)x_{0}+t\epsilon,t)-(\epsilon-x_{0})\right]^{2}\right\rangle_{t,x_{0}\sim p_{\text{data}},\epsilon}\,. \tag{37}\]
#### Training and Sampling
The CFM training is illustrated in Fig. 3. At each iteration we sample a data point \(x_{0}\sim p_{\text{data}}(x_{0})\) and a normal distributed \(\epsilon\sim\mathcal{N}(0,1)\) as starting and end points of a trajectory, as well as a time \(t\sim\mathcal{U}([0,1])\). We then compute \(x(t|x_{0})\) following Eq.(27) and the associated conditional velocity \(v(x(t|x_{0}),t|x_{0})\) following Eq.(31). The point \(x(t|x_{0})\) and the time \(t\) are passed to a neural network which encodes the conditional velocity field \(v_{\theta}(x,t)\approx v(x,t|x_{0})\). One property of the training algorithm is that the same network input, a time \(t\) and a position \(x(t|x_{0})\), can be produced by many different trajectories, each with a different conditional velocity. While the network training is based on a wide range of possible trajectories, the CFM
Figure 3: CFM training algorithm, with the loss derived in Eq.(37).
loss in Eq.(37) ensures that sampling over many trajectories returns a well-defined velocity field.
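A minimal PyTorch sketch of one CFM training iteration with the loss of Eq.(37); `v_model(x, t)` stands for a hypothetical velocity network and is not the exact architecture of Tab. 2.

```python
import torch

def cfm_train_step(v_model, optimizer, x0):
    """One training iteration for a batch x0 ~ p_data, following Fig. 3 and Eq.(37)."""
    B = x0.shape[0]
    t = torch.rand(B, 1)                          # t ~ U([0, 1])
    eps = torch.randn_like(x0)                    # trajectory end point eps ~ N(0, 1)
    x_t = (1.0 - t) * x0 + t * eps                # linear trajectory, Eq.(27)
    v_true = eps - x0                             # conditional velocity, Eq.(31)
    loss = ((v_model(x_t, t) - v_true)**2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```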
Once the CFM model is trained, the generation of new samples is straightforward. We start by drawing a sample from the latent distribution \(x_{1}\sim p_{\text{latent}}=\mathcal{N}(0,1)\) and calculate its time evolution by numerically solving the ODE backwards in time from \(t=1\) to \(t=0\)
\[\frac{d}{dt}x(t) =v_{\theta}(x(t),t)\qquad\text{with}\quad x_{1}=x(t=1)\] \[\Rightarrow\qquad x_{0} =x_{1}-\int_{0}^{1}v_{\theta}(x,t)dt\equiv G_{\theta}(x_{1})\,, \tag{38}\]
We use the scipy.solve_ivp function with default settings for this. Under mild regularity assumptions this solution defines a bijective transformation between the latent space sample and the phase space sample \(G_{\theta}(x_{1})\), similar to an INN.
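A sketch of the generation step of Eq.(38) with scipy's ODE solver; `v_model` is again the hypothetical trained velocity network, and the solver settings are left at their defaults as described above.

```python
import numpy as np
import torch
from scipy.integrate import solve_ivp

@torch.no_grad()
def cfm_sample(v_model, n_events, dim):
    """Integrate the learned ODE backwards in time from t=1 to t=0, Eq.(38)."""
    x1 = np.random.randn(n_events * dim)                        # latent samples x_1 ~ N(0, 1), flattened

    def velocity(t, x_flat):
        x = torch.as_tensor(x_flat.reshape(n_events, dim), dtype=torch.float32)
        t_vec = torch.full((n_events, 1), float(t))
        return v_model(x, t_vec).numpy().reshape(-1)

    sol = solve_ivp(velocity, t_span=(1.0, 0.0), y0=x1)
    return sol.y[:, -1].reshape(n_events, dim)                  # generated events x_0
```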
#### Likelihood extraction
The CFM model also allows us to calculate phase space likelihoods. Making use of the continuity equation we can write
\[\frac{dp(x,t)}{dt} =\frac{\partial p(x,t)}{\partial t}+\nabla_{x}p(x,t)\,v(x,t)\] \[=\frac{\partial p(x,t)}{\partial t}+\nabla_{x}\left[p(x,t)v(x,t) \right]-p(x,t)\nabla_{x}v(x,t)\] \[=-p(x,t)\nabla_{x}v(x,t)\,. \tag{39}\]
Its solution is
\[\frac{p(x_{1},1)}{p(x_{0},0)}\equiv\frac{p_{\text{latent}}(G_{\theta}^{-1}(x _{0}))}{p_{\text{model}}(x_{0}|\theta)}=\exp\left(-\int_{0}^{1}dt\nabla_{x}v( x(t),t)\right)\,, \tag{40}\]
and we can write in the usual INN notation [84]
\[p_{\text{model}}(x_{0}|\theta) =p_{\text{latent}}(G_{\theta}^{-1}(x_{0}))\left|\det\!\frac{ \partial G_{\theta}^{-1}(x_{0})}{\partial x_{0}}\right|\] \[\Rightarrow\qquad\left|\det\!\frac{\partial G_{\theta}^{-1}(x_{0 })}{\partial x_{0}}\right| =\exp\!\left(\int_{0}^{1}dt\nabla_{x}v_{\theta}(x(t),t)\right)\,. \tag{41}\]
Calculating the Jacobian requires integrating the divergence of the learned velocity field. This divergence can be calculated using automatic differentiation at roughly the cost of \(n\) network calls, where \(n\) is the data dimensionality.
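A sketch of this likelihood evaluation for a single event: the divergence of the hypothetical `v_model` is computed with automatic differentiation and integrated along the trajectory with a simple Euler scheme, corresponding to Eqs.(39)-(41).

```python
import torch
from torch.distributions import Normal

def cfm_log_likelihood(v_model, x0, n_steps=100):
    """Estimate log p_model(x0|theta) by integrating the divergence of v along the trajectory."""
    x = x0.clone().unsqueeze(0)                          # shape (1, dim)
    dim = x.shape[1]
    dt = 1.0 / n_steps
    log_det = torch.tensor(0.0)
    for i in range(n_steps):                             # forward in time, t: 0 -> 1
        t = torch.full((1, 1), i * dt)
        x = x.detach().requires_grad_(True)
        v = v_model(x, t)
        div = sum(torch.autograd.grad(v[0, k], x, retain_graph=True)[0][0, k]
                  for k in range(dim))                   # divergence, one gradient per dimension
        log_det = log_det + div.detach() * dt            # integral of div v in Eq.(40)
        x = x + v * dt                                   # Euler step of Eq.(24)
    log_p_latent = Normal(0.0, 1.0).log_prob(x.detach()).sum()   # evaluate p_latent at x(t=1)
    return log_p_latent + log_det                        # log p_model(x0|theta), Eq.(41)
```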
#### Bayesian CFM
Finally, we also turn the CFM into a Bayesian generative model, to account for the uncertainties in the underlying density estimation [69]. From the Bayesian DDPM we know that this can be achieved by promoting the network weights from deterministic values to, for instance, Gaussian distributions and using a variational approximation for the training [63, 64, 65]. For the Bayesian INN or the Bayesian DDPM the loss is a sum of the likelihood loss and a KL-divergence regularization, Eq.(23). Unfortunately, the CFM loss in Eq.(37) is not a likelihood
loss. To construct a Bayesian CFM loss we therefore combine it with Bayesian network layers and a free KL-regularization,
\[\mathcal{L}_{\text{B-CFM}}=\left\langle\mathcal{L}_{\text{CFM}}\right\rangle_{ \theta\sim q(\theta)}+c\,\text{KL}[q(\theta),p(\theta)]. \tag{42}\]
While for a likelihood loss the factor \(c\) is fixed by Bayes' theorem, in the CFM case it is a free hyperparameter. We find that the network predictions and their associated uncertainties are very stable when varying it over several orders of magnitude.
Our network is implemented in PyTorch and uses Adam as optimizer. All hyperparameters are given in Tab. 2. We employ a simple network consisting of fully connected dense layers with SiLU activation functions. Given limited resources, simple and fast networks trained for a large number of iterations produce the best results. For the LHC events we use two blocks of dense layers connected by a residual connection. In our setup dropout layers lead to significantly worse results, while normalization layers have no visible impact on the results. We find that the training of CFM models can be very noisy; using a large batch size helps to stabilize it.
In general, training diffusion models requires a relatively large number of epochs, as indicated in Tabs. 1 and 2. A key result of our study is to use a cosine-annealing learning rate scheduler for the CFMs and one-cycle scheduling for the DDPM, as well as significantly downsizing the models compared to INNs, to allow for more training epochs. For the entire hyperparameter setup, our B-DDPM implementation turns out to be slightly more sensitive than the B-CFM.
### Autoregressive Transformer
#### Architecture
A distinct shortcoming of traditional generative models like GANs, INNs, and diffusion models is that they learn the correlations in all phase space directions simultaneously. This leads to a power-law scaling for instance of the training effort for a constant precision in the learned correlations [77]. The autoregressive transformer (AT) [85] instead interprets the phase space vector \(x=(x_{1},...x_{n})\) as a sequence of elements \(x_{i}\) and factorizes the joint \(n\)-dimensional probability into \(n\) probabilities with a subset of conditions,
\[p_{\text{model}}(x|\theta)=\prod_{i=1}^{n}p(x_{i}|x_{1},...,x_{i-1})\approx p_{ \text{data}}(x)\,, \tag{43}\]
| hyperparameter | toy models | LHC events |
| --- | --- | --- |
| Embedding Dimension | – | 32 |
| # Blocks | 1 | 2 |
| Layers per Block | 8 | 5 |
| Intermediate Dimensions | 40 | 128, 64, 64 |
| # Model Parameters | 20k | 265k, 85k, 85k |
| LR Scheduling | cosine annealing | cosine annealing |
| Starter LR | \(10^{-2}\) | \(10^{-3}\) |
| Epochs | 1000 | 1000, 5000, 10000 |
| Batch Size | 8192 | 16384 |
| # Training Events | 600k | 3.2M, 850k, 190k |
| # Generated Events | 1M | 1M, 1M, 1M |

Table 2: Training setup and hyperparameters for the Bayesian CFM generator.
as illustrated in Fig. 4. This autoregressive approach improves the scaling with the phase space dimensionality in two ways. First, each distribution \(p(x_{i}|x_{1},...x_{i-1})\) is easier to learn than a distribution conditional on the full phase space vector \(x\). Second, we can use our physics knowledge to group challenging phase space directions early in the sequence \(x_{1},...,x_{n}\).
The network learns the conditional probabilities over phase space using a representation
\[p(x_{i}|\omega^{(i-1)})=p(x_{i}|x_{1},...x_{i-1})\;, \tag{44}\]
where the parameters \(\omega^{(i-1)}\) encode the conditional dependence on \(x_{1},...x_{i-1}\). A naive choice are binned probabilities \(w_{j}^{(i-1)}\) per phase space direction,
\[p(x_{i}|\omega^{(i-1)})=\sum_{\text{bins }j}w_{j}^{(i-1)}\mathbb{1}^{(j)}(x_{i})\;, \tag{45}\]
where \(\mathbb{1}^{(j)}(x)\) is one for \(x\) inside the bin \(j\) and zero outside. A more flexible and better-scaling approach is a Gaussian mixture,
\[p(x_{i}|\omega^{(i-1)})=\sum_{\text{Gaussian }j}w_{j}^{(i-1)}\mathcal{N}(x_{i};\mu_{j}^{(i-1)}, \sigma_{j}^{(i-1)})\;. \tag{46}\]
It generalizes the fixed bins to a set of learnable means and widths.
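A sketch of the Gaussian mixture likelihood of Eq.(46): for each phase space component the network outputs \(3m\) numbers, interpreted as mixture weights, means, and log-widths; the packing of these numbers into `omega` is our own convention.

```python
import torch
import torch.nn.functional as F

def gmm_log_prob(x_i, omega, n_gauss):
    """Log-likelihood of one phase space component x_i under the Gaussian mixture of Eq.(46).
    omega has shape (batch, 3 * n_gauss) and encodes the condition on x_1, ..., x_{i-1}."""
    w_logits, mu, log_sig = omega.split(n_gauss, dim=-1)
    log_w = F.log_softmax(w_logits, dim=-1)                    # normalized mixture weights w_j
    comp = torch.distributions.Normal(mu, log_sig.exp())
    log_p = comp.log_prob(x_i.unsqueeze(-1))                   # per-Gaussian log N(x_i; mu_j, sigma_j)
    return torch.logsumexp(log_w + log_p, dim=-1)              # log sum_j w_j N(x_i; mu_j, sigma_j)

# the transformer loss of Eq.(53) then sums -gmm_log_prob over the components i
```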
Our architecture closely follows the Generative Pretrained Transformer (GPT) models [85], illustrated in Fig. 5. The network takes a sequence of \(x_{i}\) as input and evaluates them all in parallel. We use a linear layer to map each value \(x_{i}\) into a \(d\)-dimensional latent space, denoted as \(x_{i\alpha}\). The network consists of a series of TransformerDecoder blocks, combining a self-attention layer with a standard feed-forward network. Finally, a linear layer maps the latent space onto the representation \(\omega^{(i-1)}\) of the conditions.
Figure 4: Autoregressive approach to density estimation. The attention matrix \(A_{ij}\) defined in Eq.(50) encodes information between components \(x_{i}\). We introduce an auxiliary condition \(x_{0}=0\) for the first phase space component \(x_{1}\).
Figure 5: Architecture of the autoregressive transformer. All phase space components \(x_{i}\) are evaluated in parallel, see Fig. 4.
Equations (45) and (46) do not provide an actual structure correlating phase space regions and phase space directions. This means the transformer needs to construct an appropriate basis and correlation pattern by transforming the input \(x\) into an \(x^{\prime}\), with the same dimension as the input vector and leading to the \(\omega\) representation. Its goal is to construct a matrix \(A_{ij}\) that quantifies the relation or similarity of two embedded phase space components \(x_{i\alpha}\) and \(x_{j\alpha}\). We construct the single-headed self-attention [86] of an input \(x\) in three steps.
1. Using the conventions of the first layer, we want to measure the relation between \(x_{i}\) and a given \(x_{j}\), embedded in the \(d\)-dimensional latent space. Replacing the naive scalar product \(x_{i\alpha}x_{j\alpha}\), we introduce learnable latent-space transformations \(W^{Q,K}\) to the elements \[q_{i\alpha}=W^{Q}_{\alpha\beta}x_{i\beta}\qquad\text{and}\qquad k_{j\alpha}=W^{K}_{\alpha\beta}x_{j\beta}\,\] (47) and use the directed scalar product \[A_{ij}\thicksim q_{i\alpha}k_{j\alpha}\] (48) to encode the relation of \(x_{j}\) with \(x_{i}\) through \(k_{j}\) and \(q_{i}\). While the scalar product is symmetric, the attention matrix does not have to be, \(A_{ij}\neq A_{ji}\). These global transformations allow the transformer to choose a useful basis for the scalar product in latent space.
2. The first problem with \(A_{ij}\) given in Eq.(48) is that it grows with the latent space dimensionality, so it turns out to be useful to replace it by \(A_{ij}\to A_{ij}/\sqrt{d}\). More importantly, we want all entries \(j\) referring to a given \(i\) to be normalized, \[A_{ij}\in[0,1]\qquad\text{and}\qquad\sum_{j}A_{ij}=1\.\] (49) This leads us to the definition \[A_{ij}=\text{Softmax}_{j}\frac{q_{ia}k_{ja}}{\sqrt{d}}\qquad\text{with} \qquad\text{Softmax}_{j}(x_{j})=\frac{e^{x_{j}}}{\sum_{k}e^{x_{k}}}\.\] (50) Similar to the adjacency matrix of a graph, this attention matrix quantifies how closely two phase space components are related. Our autoregressive setup sketched in Fig. 4 requires us to set \[A_{ij}=0\qquad\text{for}\quad j>i\.\] (51)
3. Now that the network has constructed a basis to evaluate the relation between two input elements \(x_{i}\) and \(x_{j}\), we use it to update the actual representation of the input information. We combine the attention matrix \(A_{ij}\) with the input data, but again transformed in latent space through a learnable matrix \(W^{V}\), \[v_{j\alpha}=W^{V}_{\alpha\beta}x_{j\beta}\quad\Rightarrow\quad x^{\prime}_{i\alpha} =A_{ij}v_{j\alpha}\] \[=\text{Softmax}_{j}\left(\frac{W^{Q}_{\delta\gamma}x_{i\gamma}W^{K}_{\delta\sigma}x_{j\sigma}}{\sqrt{d}}\right)W^{V}_{\alpha\beta}x_{j\beta}\;.\] (52) In this form we see that the self-attention vector \(x^{\prime}\) just follows from a general basis transformation with the usual scalar product, but with an additional learned transformation for every input vector.
The self-attention can be stacked with other structures like a feed-forward network, to iteratively construct an optimal latent space representation. This can either be identified with the final output \(\omega^{(i)}\) or linked to this output through a simple linear layer. To guarantee a stable training of this complex structure, we evaluate the self-attention as an ensemble, defining a multi-headed self-attention. In addition, we include residual connections, layer normalization, and dropout just like the GPT model. Because the sum over \(j\) in Eq.(52) leads to permutation equivariance in the phase space components, we break it by providing explicit positional information through a linear layer that takes the one-hot encoded phase space position \(i\) as input. This positional embedding is then added to the latent representation \(x_{i\alpha}\).
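A compact sketch of the masked single-head self-attention of Eqs.(47)-(52), including the autoregressive mask of Eq.(51); the multi-head ensemble, residual connections, layer normalization, and positional embedding of the full architecture are omitted here.

```python
import math
import torch
import torch.nn as nn

class MaskedSelfAttention(nn.Module):
    """Single-head self-attention with the autoregressive mask A_ij = 0 for j > i."""
    def __init__(self, d):
        super().__init__()
        self.d = d
        self.W_q = nn.Linear(d, d, bias=False)    # W^Q of Eq.(47)
        self.W_k = nn.Linear(d, d, bias=False)    # W^K of Eq.(47)
        self.W_v = nn.Linear(d, d, bias=False)    # W^V of Eq.(52)

    def forward(self, x):                         # x: (batch, n, d) embedded components
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d)     # q_i . k_j / sqrt(d), Eq.(48)
        n = x.shape[1]
        causal = torch.tril(torch.ones(n, n, dtype=torch.bool))
        scores = scores.masked_fill(~causal, float("-inf"))      # enforce A_ij = 0 for j > i, Eq.(51)
        A = scores.softmax(dim=-1)                               # attention matrix, Eq.(50)
        return A @ v                                             # x'_i = A_ij v_j, Eq.(52)
```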
#### Training and sampling
The training of the autoregressive transformer is illustrated in Fig. 6. We start with a universal \(x_{0}=0\) in \(p(x_{1}|\omega^{(0)})\) for all events. The transformer encodes all parameters \(\omega\) needed for \(p(x_{i}|\omega^{(i-1)})\) in parallel. The chain of conditional likelihoods for the realized values \(x_{i}\) gives the full likelihood \(p_{\text{model}}(x|\theta)\), which in turn can be used for the loss function
\[\mathcal{L}_{\text{AT}} =\left\langle-\log p_{\text{model}}(x|\theta)\right\rangle_{x\sim p _{\text{data}}}\] \[=\sum_{i=1}^{n}\left\langle-\log p(x_{i}|\omega^{(i-1)})\right\rangle _{x\sim p_{\text{data}}}. \tag{53}\]
The successive transformer sampling is illustrated in Fig. 7. For each component, \(\omega^{(i-1)}\) encodes the dependence on the previous components \(x_{1},...,x_{i-1}\), and correspondingly we sample from \(p(x_{i}|\omega^{(i-1)})\). The parameters \(\omega^{(0)},...\omega^{(i-2)}\) from the sampling of previous components are re-generated in each step, but not used further. This way the event generation is less efficient than the likelihood evaluation during training, because it cannot be parallelized.
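A sketch of this sampling loop; `transformer(x_seq)` is a hypothetical stand-in that returns the mixture parameters for every position in parallel, and `sample_gmm` draws a value from the mixture of Eq.(46).

```python
import torch

@torch.no_grad()
def at_sample(transformer, sample_gmm, n_events, n_components):
    """Generate events component by component, conditioning on all previously sampled components."""
    x = torch.zeros(n_events, 1)                        # auxiliary start value x_0 = 0
    for i in range(n_components):
        omega = transformer(x)[:, -1, :]                # parameters conditioned on the sequence so far
        x_next = sample_gmm(omega)                      # draw x_{i+1} ~ p(x_{i+1} | omega^{(i)})
        x = torch.cat([x, x_next.unsqueeze(-1)], dim=1)
    return x[:, 1:]                                     # drop the auxiliary x_0
```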
Figure 6: Training algorithm for the autoregressive transformer.
Figure 7: Sampling algorithm for the autoregressive transformer.
#### Bayesian version
As for any generative network, we bayesianize the transformer by drawing its weights from a set of Gaussians \(q(\theta)\) as defined in Eq.(21). In practice, we replace the deterministic layers of the transformer by Bayesian layers and add the KL-regularization from Eq.(22) to the likelihood loss of the transformer, Eq.(53),
\[\mathcal{L}_{\text{B-AT}}=\left\langle\mathcal{L}_{\text{AT}}\right\rangle_{ \theta\sim q(\theta)}+\text{KL}[q(\theta),p(\theta)]. \tag{54}\]
For large generative networks, we encounter the problem that too many Bayesian weights destabilize the network training. While a deterministic network can switch off unused weights by just setting them to zero, a Bayesian network can only set the mean to zero, in which case the Gaussian width will approach the prior \(p(\theta)\). This way, excess weights can contribute noise to the training of large networks. This problem can be solved by adjusting the hyperparameter describing the network prior or by only bayesianizing a fraction of the network weights. In both cases it is crucial to confirm that the uncertainty estimate from the network is on a stable plateau. For the transformer we find that the best setup is to bayesianize only the last layer.
To implement the autoregressive transformer we use PyTorch with the Radam optimizer. All hyperparameters are given in Tab. 3. We propose to couple the number of parameters \(m\) in the parametrization vector \(\omega^{(i-1)}\) to the latent space dimensionality \(d\), because the latent space dimensionality naturally sets the order of magnitude of parameters that the model can predict confidently.
## 3 Toy models and Bayesian networks
Before we can turn to the LHC phase space as an application of our novel generative models, we study their behavior for two simple toy models, directly comparable to Bayesian INNs [69]. These toy models serve two purposes: first, we learn about the strengths and the challenges of the different network architectures when the density estimation task is simple and the focus lies on precision. Second, the interplay between the estimation of the density and its uncertainty over phase space allows us to understand how the different networks encode the density.
| hyperparameter | toy models | LHC events |
| --- | --- | --- |
| # Gaussians \(m\) | 21 | 43 |
| # bins \(m\) | 64 | – |
| # TransformerDecoder \(N\) | 4 | 4 |
| # Self-attention Heads | 4 | 4 |
| Latent Space Size \(d\) | 64 | 128 |
| # Model Parameters | 220k | 900k |
| LR Scheduling | one-cycle | one-cycle |
| Starter LR | \(3\times 10^{-4}\) | \(10^{-4}\) |
| Maximum LR | \(3\times 10^{-3}\) | \(10^{-3}\) |
| Epochs | 200 | 2000 |
| Batch Size | 1024 | 1024 |
| Radam \(\epsilon\) | \(10^{-8}\) | \(10^{-4}\) |
| # Training Events | 600k | 2.4M, 670k, 190k |
| # Generated Events | 600k | 1M, 1M, 1M |

Table 3: Training setup and hyperparameters for the Bayesian autoregressive transformer.
We remind ourselves that an INN just works like a high-dimensional fit to the correlated 2-dimensional densities [69].
### Denoising Diffusion Probabilistic Model
Our first toy example is a normalized ramp, linear in one direction and flat in the second,
\[p_{\text{ramp}}(x_{1},x_{2})=2x_{2}. \tag{55}\]
The network input and output are unweighted events. The hyperparameters of each model are given in Tabs. 1, 2, and 3. A training dataset of 600k events guarantees that for our setup and binning the statistical uncertainty on the phase space density is around the per-cent level. To show one-dimensional Bayesian network distributions we sample the \(x_{i}\)-direction and the \(\theta\)-space in parallel [69, 22]. This way the uncertainty in one dimension is independent of the existence and size of other dimensions.
Starting with the DDPM we show the non-trivial one-dimensional distributions in Fig. 8. In the left panel we see that the network learns the underlying phase space density well, but not quite at the desired per-cent precision. The uncertainty from the B-DDPM captures remaining
Figure 8: Ramp distribution from the DDPM. We show the learned density and its B-DDPM uncertainty (left) as well as the absolute and relative uncertainties with a range given by 10 independent trainings (right). We use \(\delta=|\text{Model}-\text{Truth}|/\text{Truth}\).
Figure 9: Gaussian ring distribution from the DDPM. We show the learned density and its B-DDPM uncertainty (left) as well as the absolute and relative uncertainties with a range given by 10 independent trainings (right).
deviations, if anything, conservatively. In the right panel we see that the absolute uncertainty has a minimum around \(x_{2}=0.7\), similar to the behavior of the Bayesian INN and confirmed by independent trainings. We can understand this pattern by looking at a constrained fit of the normalized density
\[p(x_{2})=ax_{2}+b=a\left(x_{2}-\frac{1}{2}\right)+1\qquad\text{with}\qquad x_{2 }\in[0,1]\,. \tag{56}\]
A fit of \(a\) then leads to an uncertainty in the density of
\[\sigma\equiv\Delta p\approx\left|x_{2}-\frac{1}{2}\right|\,\Delta a\, \tag{57}\]
just using simple error propagation. The minimum in the center of the phase space plane can be interpreted as the optimal use of correlations in all directions to determine the local density.
For the DDPM the minimum is not quite at \(x_{2}=0.5\), and the uncertainty as a function of \(x_{2}\) is relatively flat over the entire range. Because of the statistically limited training sample, the network output comes with a relatively large uncertainty towards \(x_{2}=0\). For larger \(x_{2}\)-values, the gain in precision and uncertainty is moderate. For \(x_{2}>0.75\) the absolute and relative uncertainties increase, reflecting the challenge to learn the edge at \(x_{2}=1\). These results are qualitatively similar, but quantitatively different from the INN case, which benefits more from the increase in training data and correlations for \(x_{2}=0.1\)... \(0.5\).
The second toy example is a Gaussian ring, or a Gaussian sphere in two dimensions,
\[p_{\text{ring}}(x_{1},x_{2})=\mathcal{N}(\sqrt{x_{1}^{2}+x_{2}^{2}};1,0.1). \tag{58}\]
Figure 10: Ramp (upper) and Gaussian ring (lower) distributions from the CFM. We show the learned density and its B-CFM uncertainty (left) as well as the absolute and relative uncertainties with a range given by 10 independent trainings (right).
The DDPM results are shown in Fig. 9. The precision on the density is significantly worse than for the ramp, clearly missing the per-cent mark. The agreement between the training data and the learned density is not quite symmetric, reflecting the fact that we train and evaluate the network in Cartesian coordinates but show the result in \(R\). Especially for large radii, the network significantly overestimates the tail, a failure mode which is covered by the predictive uncertainty only for \(R\lesssim 1.3\). In the right panels of Fig. 9 the main feature is a distinct minimum in the uncertainty around the mean of the Gaussian. As for the ramp, this can be understood from error propagation in a constrained fit. If we assume that the network first determines a family of functions describing the radial dependence, in terms of a mean and a width, the contribution from the mean vanishes at \(R=1\) [69]. Alternatively, we can understand the high confidence of the network through the availability of many radial and angular correlations in this phase space region.
### Conditional Flow Matching
To confirm that the diffusion architecture is behind the DDPM features, we repeat our study with the CFM model in Fig. 10. The main difference to the DDPM is that the agreement between the learned and the training densities is now at the per-cent level, for the ramp and for the Gaussian ring. This shows that diffusion models are indeed able to learn a phase space density with the same precision and stability as normalizing flows or INNs. As before, the
Figure 11: Ramp (upper) and Gaussian ring (lower) distribution from the autoregressive transformer with a binned likelihood. We show the learned density and its Bayesian network uncertainty (left) as well as the absolute and relative uncertainties with a range given by 10 independent trainings, compared to the statistical uncertainty of the training data in blue (right).
predictive uncertainty from the B-CFM model is conservative for the entire phase space of the ramp, but it fails in the exponentially suppressed tail of the Gaussian ring for \(R\gtrsim 1.3\). We emphasize that as a function of \(R\) this problem is clearly visible when we increase \(R\) to the point where \(\sigma(R)=\mathcal{O}(p(R))\).
Looking at the pattern of the predicted uncertainty \(\sigma\) in \(x_{2}\) and in \(R\), we see a similar behavior as for the INN and for the DDPM. As for the DDPM, the minimum in the middle of the ramp is flatter than for the INN, and its position has moved to \(x_{2}\approx 0.3\). For the radial distribution of the Gaussian ring there is the usual minimum on the peak.
Summarizing our findings for the two diffusion models, they behave similarly, but not identically, to the INN. For all of them, the relation between the density and its uncertainty shows patterns of a constrained fit, suggesting that during the training the networks first determine a class of suitable models and then adjust the main features of these models, like the slope of a ramp or the position and width of a Gaussian ring.
### Autoregressive Transformer
Finally, we target the two-dimensional ramp, Eq.(55), and the Gaussian ring, Eq.(58) with the transformer. In Fig. 11 we start with a simple representation of the phase space density using 64 bins. In this naive setup the densities of the ramp and the Gaussian ring are described accurately, within our per-cent target range. The largest deviations appear in the tails of the
Figure 12: Ramp (upper) and Gaussian ring (lower) distribution from the autoregressive transformer with a Gaussian mixture likelihood. We show the learned density and its Bayesian network uncertainty (left) as well as the absolute and relative uncertainties with a range given by 10 independent trainings, compared to the statistical uncertainty of the training data in blue (right).
Gaussian ring, but remain almost within the statistical limitations of the training data.
Unlike for the INN and the diffusion models, the uncertainty in the right panels of Fig. 11 does not show any real features for the ramp or the Gaussian ring. This shows that the transformer does not use a fit-like density estimation and does not benefit from the increased correlations in the center of phase space. Both of these aspects can be understood from the model setup. First, the autoregressive structure never allows the transformer to see the full phase space density and encode global (symmetry) patterns; second, the main motivation of the transformer is to improve the power-law scaling with the dimensionality of all possible correlations and only focus on the most relevant correlations at the expense of the full phase space coverage.
In Fig. 12 we show the same results for a mixture of 21 Gaussians. For this small number of dimensions the advantage over the binned distribution is not obvious. The main problem appears at the upper end of the ramp, where there exists enough training data to determine a well-suited model, but the poorly-suited GMM just fails to reproduce the flat growth towards the sharp upper edge and introduces a significant artifact, just covered by the uncertainty. For the Gaussian ring the GMM-based transformer is also less precise than the binned version, consistent with the lower resolution in the 2-dimensional model.
The uncertainty predicted by the Bayesian transformer is typically smaller than for diffusion models. We therefore add the statistical uncertainty of the training data to the right panels of Figs. 11 and 12, providing a lower bound on the uncertainty. In both cases, the uncertainty of the Bayesian transformer conservatively tracks the statistical uncertainty of the training data.
Finally, in Fig. 13 we illustrate the unique way in which the GMM-based transformer reconstructs the density for the Gaussian ring successively. In the left panel, we show \(p_{\text{model}}(x_{1})\) after the first autoregressive step, constructed out of 21 learned Gaussians. The peaks at \(\pm 1\) arise from the marginalization along the longest line of sight. The marginalization also distorts the form of the Gaussians, which are distributed along the ring. The density after the second autoregressive step, \(p_{\text{model}}(x_{2}|x_{1})\), is conditioned on the first component. In the second panel we show \(p_{\text{model}}(x_{2}|x_{1}=0)\) with sharp peaks at \(\pm 1\) because the event has to be at the edge of the ring. The Gaussians building the left and right peak are distributed roughly equally. On the other hand, \(p_{\text{model}}(x_{2}|x_{1}=1)\) has a broad plateau in the center, again from the \(x_{1}\)-condition.
Figure 13: Conditional likelihoods for the Gaussian ring. We show the full Gaussian mixture as well as the 21 individual Gaussians, compared to the truth distribution.
## 4 LHC events
Most generative network tasks at the LHC are related to learning and sampling phase space densities, for instance event generation at the parton or reconstruction level, the description of detector effects at the reconstruction level, the computation of event-wise likelihoods in the matrix element method, or the inversion and unfolding of reconstructed events. This is why we benchmark our new networks on a sufficiently challenging set of LHC events. Following Ref. [22] we choose the production of leptonically decaying \(Z\)-bosons, associated with a variable number of QCD jets,
\[pp\to Z_{\mu\mu}+\{1,2,3\}\text{ jets}. \tag{59}\]
The network has to learn the sharp \(Z\)-peak as well as correlated phase space boundaries and features in the jet-jet correlations. We generate the training dataset of 5.4M events (4.0M + 1.1M + 300k) using Sherpa2.2.10 [87] at 13 TeV, including ISR and parton shower with CKKW merging [88], hadronization, but no pile-up. The jets are defined by Fastjet3.3.4 [89] using the anti-\(k_{T}\) algorithm [90] and applying the basic cuts
\[p_{T,j}>20\text{ GeV }\qquad\text{ and }\qquad\Delta R_{jj}>0.4. \tag{60}\]
The jets and muons are each ordered in transverse momentum. Our phase space dimensionality is three per muon and four per jet, i.e. 10, 14, and 18 dimensions. Momentum conservation is not guaranteed, because some final-state particles might escape for instance the jet algorithm. However, the physically relevant phase space dimensionality is reduced to 9, 13, and 17 by removing the global azimuthal angle.
Our data representation includes a minimal preprocessing. Each particle is represented by
\[\{\,p_{T},\eta,\phi,m\,\}. \tag{61}\]
Given Eq.(60), we provide the form \(\log(p_{T}-p_{T,\text{min}})\), leading to an approximately Gaussian shape. All azimuthal angles are given relative to the leading muon, and the transformation into \(\text{artanh}(\Delta\phi/\pi)\) again leads to an approximate Gaussian. The jet mass is encoded as \(\log m\). Finally, we centralize and normalize each phase space variable as \((q_{i}-\tilde{q}_{i})/\sigma(q_{i})\) and apply a whitening/PCA transformation separately for each jet multiplicity for the two diffusion models.
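As an illustration of this preprocessing, here is a minimal sketch for a single particle in the representation of Eq.(61); `pt_min`, the reference azimuth of the leading muon, and the per-feature central values and standard deviations are inputs, \(\eta\) is passed through unchanged, and the whitening/PCA step is omitted.

```python
import numpy as np

def preprocess_particle(pt, eta, phi, m, pt_min, phi_ref):
    """Map (pT, eta, phi, m) to approximately Gaussian features."""
    f_pt = np.log(pt - pt_min)                               # log(pT - pT,min)
    dphi = (phi - phi_ref + np.pi) % (2 * np.pi) - np.pi     # azimuth relative to the leading muon
    f_phi = np.arctanh(dphi / np.pi)                         # artanh(delta phi / pi)
    f_m = np.log(m)                                          # log of the mass
    return np.array([f_pt, eta, f_phi, f_m])

def standardize(q, q_center, q_std):
    """Centralize and normalize each phase space variable."""
    return (q - q_center) / q_std
```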
### Denoising Diffusion Probabilistic Model
The additional challenge for \(Z+\)jets event generation is the variable number of jets, which we tackle with a conditional evaluation [22], illustrated in Fig. 14. The training is independent for the three jet multiplicities. We start by giving the information for the \(Z+1\)-jet sub-process, 12 phase space dimensions, to a first network. It is supplemented with the one-hot encoded jet count. The second network then receives the 4-momentum of the second jet as an input, and the \(Z+1\)-jet information additionally to the jet count as a condition. Analogously, the third network learns the third jet kinematics conditioned on the \(Z+2\)-jet information. For democratic jets this conditioning would be perfect, but since we order the jets in \(p_{T}\) it has to and does account for the fact that for higher jet multiplicities the interplay between partonic energy and jet combinatorics leads to differences in the spectra of the leading jets at a given multiplicity.
As discussed in Sec. 2.1 time is a crucial condition for the DDPM network, and we embed it into the full conditioning of the LHC setup as a high-dimensional latent vector linked by a linear layer. We also add a second block to our network architecture, where the conditions are fed to each block individually. The amount of training data is different for the different jet
multiplicities and corresponding networks. As shown in Tab. 1, the first network uses the full 3.2M events, the second 850k events with at least two jets, and the third network 190k events with three jets. This hierarchy is motivated by the way the chain of conditional networks adds information and also by the increasing cost of producing the corresponding training samples. We could balance the data during training, but for the B-DDPM model this leads to a slight performance drop. We compensate for the lack of training data by increasing the number of epochs successively from 1000 to 10000.
Going from toy models to LHC events, we increase the number of blocks to two, which improves the performance. The reason is that we attach the condition to the input at the beginning of each block, so the second block reinforces the condition. Going to even more blocks will slightly improve the performance, but at the expense of the training time.
In Fig. 15 we show a set of kinematic distributions for different jet multiplicities, including the jet-inclusive scalar sum of the up to three \(p_{T,j}\). These distributions will be the same for all three networks in this paper and can be compared directly to the Bayesian INN results in Fig. 11 of Ref. [22], serving as a precision baseline. Starting with the almost featureless \(p_{T}\)-distributions in the left panels, we see that for all three distributions the deviation from the truth, given by high-statistics training data, is similar for the actual training data and for the DDPM-generated events. The network really extracts all available information from the training data combined with its fit-like implicit bias. For sufficient training statistics, the precision on the phase space density as a function of \(p_{T}\) is below the per-cent level, easily on par with the INN baseline. For a given jet multiplicity this precision drops with increasing \(p_{T}\) and correspondingly decreasing training data, an effect that is correctly and conservatively modeled by the uncertainty estimate of the B-DDPM. Combining all \(n\)-jet samples into one observable is no problem for the network and does not lead to any artifacts.
In the right panels of Fig. 15 we show the most challenging phase space correlations. We start with the \(Z\)-peak, which governs most of the events, but requires the network to learn a very specific phase space direction very precisely. Here, the agreement between the true density and the DDPM result drops to around 10% without any additional phase space mapping, similar to the best available INN. The deviation is not covered by the Bayesian network uncertainty, because it arises from a systematic failure of the network in the phase space resolution, induced by the network architecture. However, this effect is less dramatic than it initially looks when we notice that the ratio of densities just describes the width of the mass peak being broadened by around 10%. If needed, it can be easily corrected by an event reweighting of the \(Z\)-kinematics. Alternatively, we can change the phase space parametrization to include intermediate particles, but most likely at the expense of other observables.
Figure 14: Conditional Sampling Architecture.
Figure 15: Bayesian DDPM densities and uncertainties for \(Z+1\) jet (upper), \(Z+2\) jets (center), and \(Z+3\) jets (lower) from combined \(Z+\) jets generation. The uncertainty on the training data is given by bin-wise Poisson statistics. The network architecture is given in Tab. 1. For a comparison with the INN we refer to Fig. 11 of Ref. [22].
Next, we study the leading challenge of ML-event generators, the jet-jet correlations and specifically the collinear enhancement right at the hard jet-separation cut of \(\Delta R_{jj}>0.4\). Three aspects make this correlation hard to learn: (i) this phase space region is a sub-leading feature next to the bulk of the distribution around \(\Delta R_{jj}\sim\pi\); (ii) it includes a sharp phase space boundary, which density estimators will naturally wash out; and (iii), the collinear enhancement needs to be described correctly, even though it appears right at the phase space boundary. Finally, for this correlation the conditional setup and the Bayesian extension are definitely not helpful.
What helps for this correlation is the so-called magic transformation introduced in Ref. [22]. It scales the \(\Delta R_{jj}\)-direction in phase space such that the density in this phase space direction becomes a monotonous function. While from a classic Monte Carlo perspective the benefits of this transformation are counter-intuitive, from a fit-like perspective the magic transformation can simplify the class of functions which the network then adapts to the data, as shown for the toy models in the previous section. This argument is confirmed by the fact that for our diffusion networks this transformation is helpful, just like for the INN, but for the transformer it is not needed. Both for the 2-jet and the 3-jet sample we see that with the magic transformation the DDPM learns the \(\Delta R_{jj}\) features, but at the same 10% level as the INN and hence missing our 1% target. The Bayesian uncertainty estimate increases in this phase space region as well, but it is not as conservative as for instance in the \(p_{T}\)-tails.
The challenge of current diffusion networks, also the DDPM, is the evaluation speed. For each additional jet we need to call our network 1000 times, so sampling 3-jet events takes three times as long as sampling 1-jet events. However, none of the networks presented in this study are tuned for generation speed, the only requirement for a limited hyperparameter scan is the precision baseline given by the INN.
### Conditional Flow Matching
For the CFM diffusion network we follow the same conditional setup as for the DDPM and the INN to account for the variable number of jets. The network is described in Tab. 2, unlike for the DDPM the three networks do not have the same size, but the first network with its 9 phase space dimensions is larger. Also the number of epochs increases from 1000 to 10000 going to the 3-jet network. For the CFM we combine the embedding of the time and the conditioning on the lower jet multiplicities. We find the best results when encoding time, the kinematic condition, and the actual network input separately into same-sized latent vectors with independent linear layers. Then all three are concatenated and given to the network.
The kinematic distributions generated by the CFM are shown in Fig. 16. Again, the transverse momentum spectra are learned with high precision, with decreasing performance in the tails, tracked correctly by the Bayesian network uncertainty. The correlation describing the \(Z\)-peak is now modeled as well as the bulk of the single-particle distributions, a significant improvement over the INN baseline [22]. For the most challenging \(\Delta R_{jj}\) distributions the CFM uses the same magic transformation as the DDPM and achieves comparable precision. This means that while there might be a slight precision benefit to our CFM implementation, with its continuous ODE description of the time evolution instead of discrete steps, our level of network optimization does not allow us to attribute this difference to luck rather than network architecture. Similarly, in the current implementation the CFM generation is about an order of magnitude faster than the DDPM generation, but this can mostly be attributed to the linear trajectory and the extremely efficient ODE solver.
Figure 16: Bayesian CFM densities and uncertainties for \(Z+1\) jet (upper), \(Z+2\) jets (center), and \(Z+3\) jets (lower) from combined \(Z+\) jets generation. The uncertainty on the training data is given by bin-wise Poisson statistics. The network architecture is given in Tab. 2. For a comparison with the INN we refer to Fig. 11 of Ref. [22].
Figure 17: Bayesian autoregressive transformer densities and uncertainties for \(Z+1\) jet (upper), \(Z+2\) jets (center), and \(Z+3\) jets (lower) from combined \(Z+\) jets generation. The uncertainty on the training data is given by bin-wise Poisson statistics. The network architecture is given in Tab. 3. For a comparison with the INN we refer to Fig. 11 of Ref. [22].
### Autoregressive Transformer
For the third network, a generative transformer, we already know from Sec. 3 that it learns and encodes the phase space density differently from normalizing flows or diffusion networks. A key structural difference for generating LHC events is that the transformer can generate events with different jet multiplicities using the same network. The one-hot-encoded jet multiplicity is provided as an additional condition for the training. The autoregressive structure can work with sequences of different length, provided there is an appropriate way of mapping the sequences onto each other. For the LHC events we enhance the sensitivity to the angular jet-jet correlations through the ordering
\[\left(\,\left(\phi,\eta\right)_{j_{1,2,3}},\,\left(p_{T},\eta\right)_{\mu_{1}},\,\left(p_{T},\phi,\eta\right)_{\mu_{2}},\,\left(p_{T},m\right)_{j_{1,2,3}} \,\right)\,. \tag{62}\]
While the Bayesian transformer does learn the angular correlations even when they appear at the end of the sequence, this ordering provides a significant boost to the network's precision. For the transformer training, we want the features of the 3-jet events to be well represented in the set of vectors defined in Eq.(62). To train on equal numbers of events with one, two, and three jets, we sample 1-jet and 2-jet events randomly at the beginning of each epoch. The loss is first evaluated separately for each jet multiplicity, and then averaged for the training update.
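A minimal sketch of this training update, with the per-multiplicity losses averaged before the backward pass, could look as follows; the batch handling and the loss function are placeholders, not our actual training code.

```python
import torch

def training_step(model, optimizer, batches_by_njets, loss_fn):
    """One update: evaluate the loss separately for each jet multiplicity,
    then average the per-multiplicity losses for the gradient step."""
    losses = [loss_fn(model, batch, n_jets) for n_jets, batch in batches_by_njets.items()]
    loss = torch.stack(losses).mean()   # average over e.g. {1, 2, 3}-jet batches
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```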
In Fig. 17 we show the standard set of kinematic observables for the autoregressive transformer based on a Gaussian mixture model, with the architecture given in Tab. 3. Just like the two diffusion models, and the INN, it learns the different \(p_{T}\)-distributions with a precision close to the statistics of the training data. Sampling a variable number of jets with the multiplicity as a condition leads to no additional complication.
Looking at the correlations in the right panels, the \(Z\)-mass now comes with an increased width and a shift. This is, in part, an effect of the ordering of the input variables, where the lepton information comes after the angular information on the jets. The benefit of this ordering can be seen in the \(\Delta R_{jj}\) distributions, which are reproduced at the per-cent precision without any additional effort. This is true for \(\Delta R_{j_{1}j_{2}}\) and \(\Delta R_{j_{1}j_{3}}\), reflecting the democratic ordering and training dataset. The sharp phase space boundary at \(\Delta R_{jj}=0.4\) can be trivially enforced during event generation.
## 5 Outlook
Generative neural networks are revolutionizing many aspects of our lives, and LHC physics is no exception. Driven by large datasets and precise first-principle simulations, LHC physics offers a wealth of opportunities for modern machine learning, in particular generative networks [2]. Here, classic network architectures have largely been surpassed by normalizing flows, especially its INN variant, but cutting-edge new approaches are extremely promising. Diffusion networks should provide an even better balance between expressivity and precision in the density estimation. Autoregressive transformers should improve the scaling of network size and training effort with the phase space dimensionality.
In this paper we have provided the first comprehensive study of strengths and weaknesses of these new architectures for an established LHC task. We have chosen two fundamentally different approaches to diffusion networks, where the DDPM learns the time evolution in terms of discrete steps, while the CFM encodes the continuous time evolution in a differential equation. The autoregressive JetGPT transformer follows the standard GPT architecture, where for our relatively simple setup we get away without actual pretraining.
For each architecture we have first implemented a Bayesian network version, which allows us to understand the different ways they approach the density estimation. While the diffusion
networks first identify classes of functions and then adapt them to the correlations in phase space, much like the INN [69], the transformer learns patterns patch-wise and dimension by dimension.
Next, we have applied all three networks to the generation of \(Z+\)jets events, with a focus on the conditional setup for variable jet multiplicities and the precision in the density estimation [22]. The most challenging phase space correlations are the narrow \(Z\)-peak and the angular jet-jet separation combined with a collinear enhancement.
Our two diffusion models are, conceptually, not very far from the INNs. We have found that they face the same difficulties, especially in describing the collinear jet-jet correlation. Just like for the INN, the so-called magic transformation [22] solved this problem. Both diffusion networks provided an excellent balance between expressivity and precision, at least on a par with advanced INNs. This included the density estimation as well as the uncertainty map over phase space. The main advantage of the CFM over the DDPM was a significantly faster sampling for our current implementation, at the level of the INN or the transformer. In contrast, the DDPM model is based on a proper likelihood loss, with all its conceptual and practical advantages, for instance when Bayesianizing it. Both networks required long training, but fewer network parameters than the INN. We emphasize that ML research on diffusion models is far from done, so all differences between the two models found in this paper should be considered with a grain of salt.
Finally, we have adapted the fundamentally different GPT architecture to LHC events. Its autoregressive setup provided a different balance between learning correlations and scaling with the phase space dimension, and it has never been confronted with the precision requirements of the LHC. The variable number of particles in the final state was implemented naturally and without an additional global conditioning. Our transformer is based on a Gaussian mixture model for the phase space coverage, and we have used the freedom of ordering phase space dimensions in the conditioning chain to emphasize the most challenging correlations. This has allowed the transformer to learn the jet-jet correlations better than the INN or the diffusion models, but at the expense of the description of the \(Z\)-peak. The generation time of the transformer is comparable with that of the fast INN.
Altogether, we have found that new generative network architectures have the potential to outperform even advanced normalizing flows and INNs. However, diffusion models and autoregressive transformers come with their own distinct sets of advantages and challenges. Given the results of our study, we expect significant progress in generative network applications for the LHC, whenever the LHC requirements in precision, expressivity, and speed can be matched by one of the new architectures.
## Acknowledgements
We would like to thank Theo Heimel, Michel Luchmann, Luigi Favaro, Ramon Winterhalder, and Claudius Krause for many useful discussions. AB, NH, and SP are funded by the BMBF Junior Group _Generative Precision Networks for Particle Physics_ (DLR 01IS22079). AB, TP and JS would like to thank the Baden-Wurttemberg-Stiftung for financing through the program _Internationale Spitzenforschung_, project _Uncertainties - Teaching AI its Limits_ (BWST_IF2020-010). This research is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257: Particle Physics Phenomenology after the Higgs Discovery and through Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster).
|
2304.07220
|
Tensorial time derivatives on moving surfaces: General concepts and a
specific application for surface Landau-de Gennes models
|
Observer-invariance is regarded as a minimum requirement for an appropriate
definition of time derivatives. We systematically discuss such time derivatives
for surface tensor field and provide explicit formulations for material,
upper-convected, lower-convected and Jaumann/corotational time derivatives
which all lead to different physical implications. We compare these results
with the corresponding time derivatives for tangential tensor fields. As
specific surface 2-tensor fields we consider surface Q-tensor fields and
conforming surface Q-tensor fields and apply the results in surface Landau-de
Gennes models for surface liquid crystals.
|
Ingo Nitschke, Axel Voigt
|
2023-04-14T16:02:41Z
|
http://arxiv.org/abs/2304.07220v1
|
Tensorial time derivatives on moving surfaces: General concepts and a specific application for surface Landau-de Gennes models
###### Abstract
Observer-invariance is regarded as a minimum requirement for an appropriate definition of time derivatives. We systematically discuss such time derivatives for surface tensor fields and provide explicit formulations for material, upper-convected, lower-convected and Jaumann/corotational time derivatives which all lead to different physical implications. We compare these results with the corresponding time derivatives for tangential tensor fields. As specific surface 2-tensor fields we consider surface Q-tensor fields and conforming surface Q-tensor fields and apply the results in surface Landau-de Gennes models for surface liquid crystals.
keywords: tensor fields, moving surface, embedded surface, observer-invariance, time derivative Msc: [2020] 53A45, 53A05, 37C10, 70G45 +
Footnote †: journal: Journal of Geometry and Physics
## 1 Introduction
Observer-invariant time derivatives for tensor-fields on moving surfaces \(\mathcal{S}\subset\mathbb{R}^{3}\) are important ingredients for various applications, such as fluid deformable surfaces and surface liquid crystals, see, e.g., [1; 2; 3; 4; 5]. They determine specific rates of change independently of their observation and specify transport mechanisms reflecting a certain inertia in the considered quantity induced by material motions. For tangential tensor-fields, defined in the tangent bundle of \(\mathcal{S}\), denoted by \(\mathrm{T}^{n}\mathcal{S}\), time derivatives for arbitrary observers are discussed in detail in [6]. Unlike for scalar fields, where the time derivative is uniquely defined, severe differences in the evolution of the tangential tensor-field, e.g. a surface director field or a surface Q-tensor field in surface liquid crystal models [7], have been identified. The implications of these differences, e.g. in morphogenesis [8; 9; 10], which can be modelled using fluid deformable surfaces and surface liquid crystals, are not yet explored. The requirement of the surface tensor-fields to be tangential might be too strong for such applications. They require surface tensor-fields with an additional normal component, see, e.g., [11] for a director field on a flexible membrane and [12; 13; 14; 15] for surface Q-tensor-fields, but on stationary surfaces. These surface tensor-fields are defined in \(\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}}\) and need slightly different time derivatives, which respect the embedding space \(\mathbb{R}^{3}\) as well as the surface \(\mathcal{S}\). We will systematically discuss these time derivatives, their properties and relations and apply them to a surface Landau-de Gennes model.
Unlike in [6], we do not take a spacetime manifold as the basis for observer-invariant time derivatives. The advantage is improved readability due to less abstract concepts. The disadvantage is that we no longer get observer-invariance for free as a result of a covariance principle w. r. t. the choice of spacetime coordinates. The main issue in developing observer-invariant time derivatives is that the time \(t\) is not a coordinate of \(\mathcal{S}\), but rather a parameter to describe time-dependencies w. r. t. an observer and the relation between time and space. We need to demonstrate that the time derivatives of tensor-fields on moving surfaces are invariant within the observer class depicting the moving surface. In part we circumvent this issue by stipulating time derivatives for a material observer (Lagrange perspective) and transforming these representations to an arbitrary observer. We only consider instantaneous tensor-fields.
We introduce notation in subsection 1.1 and provide a short tabular summary in subsection 1.2. The actual derivation of time derivatives is constituted in section 2, which is organized in the following way. Subsection 2.1 describes the general approach to obtain time derivatives. Basically, we use a differential quotient w. r. t. the time parameter \(t\) s. t. a time derivative yields a certain rate of the considered tensor-fields. Such an approach is only sufficient for fixed choices of convenient pullbacks, which are capable of evaluating a "future" tensor-field on the current surface. We illustrate this approach for scalar fields in \(\mathrm{T}^{0}\mathcal{S}\), where such a pullback seems to be uniquely given. The situation changes for \(n\)-tensor fields with \(n\geq 1\), where different pullbacks lead to different time derivatives. In subsection 2.2 we derive the material time derivative, in subsection 2.3 the upper-convected time derivative, in subsection 2.4 the lower-convected time derivative and in subsection 2.5 the Jaumann/corotational time derivative. The individual derivatives are build on each other. In all of these subsections we also consider vector- as well as 2-tensor-fields separately for the sake of readability. Additionally, we show at the end of each subsection that all of these time derivatives are thin-film limits of usual flat \(\mathbb{R}^{3}\) time derivatives. With surface Landau-de Gennes models for surface liquid crystals in mind [14], we treat in subsection 2.6 Q-tensor fields in \(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), which are symmetric and trace-free, as a special case of surface 2-tensor fields. Here we consider only the material and Jaumann/corotational time derivative. Moreover, we discuss surface conforming Q-tensor fields in \(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), where the eigenvector spaces are aligned to the surface. This gives the opportunity to modify the material derivative to a simpler representation. Using these tools we formulate surface Landau-de Gennes models on evolving surfaces which lead for the material time derivative to the same formulation as postulated in [14].
### Notation
We mainly adopt notations from [7]. Nevertheless, we give a condensed summary in this section including some notational extensions. A moving surface \(\mathcal{S}\) is sufficiently described by parameterizations
\[\mathbf{X}:\quad\mathcal{T}\times\mathcal{U}\to\mathbb{R}^{3}:\quad(t,y^{1},y^{2} )\mapsto\mathbf{X}(t,y^{1},y^{2})\in\mathcal{S}|_{t}\,, \tag{1}\]
where \(\mathcal{U}\subset\mathbb{R}^{2}\) is the chart codomain and \(\mathcal{T}=[t_{0},t_{1}]\subset\mathbb{R}\) the time domain. For simplicity we assume that \(\mathbf{X}(t,\mathcal{U})=\mathcal{S}|_{t}\) can be achieved by a single time-depending parameterization \(\mathbf{X}\) for all \(t\in\mathcal{T}\). The results can be extended to the more general case considering subsets providing an open covering of \(\mathcal{S}\). We omit the time parameter \(t\) in the notation if it is clear that the considered term can be evaluated temporally locally. A parameterization is not uniquely given for a moving surface. For instance comprises \(\mathbf{X}\) information about the observer. Due to this, we subscribe quantities with \(\mathfrak{m}\) if we consider the material observer and \(\mathfrak{o}\) if we consider an arbitrary observer, see [6] for more details about observer. One could refer the material observer to the Lagrangian perspective/specification. Since we also consider motion in normal direction of the surface and the observer has to follow the material in this direction, a pure Eulerian perspective does not exist on moving surfaces generally. Note that we assume that \(\mathbf{X}\) provides a sufficiently smooth embedding of \(\mathcal{S}\) into \(\mathbb{R}^{3}\).
We write \(\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}}\) as a shorthand for the space of sufficiently smooth \(n\)-tensor field on \(\mathcal{S}\subset\mathbb{R}^{3}\), i. e. for \(\mathbf{R}\in\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}}\) and \((y^{1},y^{2})\in\mathcal{U}\) the quantity \(\mathbf{R}(y^{1},y^{2})\in\mathrm{T}^{n}_{\mathbf{X}(y^{1},y^{2})}\mathbb{R}^{3} \cong(\mathbb{R}^{3})^{n}\) is a usual \(\mathbb{R}^{3}\)-\(n\)-tensor defined at \(\mathbf{X}(y^{1},y^{2})\in\mathcal{S}\).
\begin{table}
\begin{tabular}{|l l|} \hline \(\mathrm{T}^{0}\mathcal{S}=\mathrm{T}^{0}\mathbb{R}^{3}|_{\mathcal{S}}\) & scalar fields \\ \hline \hline \(\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) & vector fields \\ \hline \(\mathrm{T}\mathcal{S}<\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) & tangential vector fields \\ \hline \hline \(\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) & 2-tensor fields \\ \hline \(\mathrm{T}^{2}\mathcal{S}<\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) & tangential 2-tensor fields \\ \hline \(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}<\mathrm{T}^{2}\mathbb{R}^{3}|_{ \mathcal{S}}\) & Q-tensor fields (trace-free, symmetric) \\ \hline \(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}<\mathrm{Q}^ {2}\mathbb{R}^{3}|_{\mathcal{S}}\) & surface conforming Q-tensor fields (only normal and tangential eigenvectors) \\ \hline \(\mathrm{Q}^{2}\mathcal{S}<\{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}| _{\mathcal{S}},\mathrm{T}^{2}\mathcal{S}\}\) & tangential Q-tensor fields (trace-free, symmetric) \\ \hline \end{tabular}
\end{table}
Table 1: Most used tensor field spaces and their local subtensor space relations. All Q-tensor related spaces are defined in section 2.6
This means that we handle tensor bundles and fields (section of bundles) synonymously due to the assumed smooth structure. We also does not distinguish between co- and contravariant tensor fields, and everything between, in index-free notations, since they are isomorph by the musical isomorphisms \((b,\sharp)\) for a given metric and all operators used in this paper respect that. The space of tangential \(n\)-tensor fields \(\mathrm{T}^{n}\mathcal{S}\) is a subtensor field of \(\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}}\), i. e. it holds the subtensor relation \(\mathrm{T}^{n}_{X(y^{1},y^{2})}\mathcal{S}<\mathrm{T}^{n}_{X(y^{1},y^{2})} \mathbb{R}^{3}\) for all \((y^{1},y^{2})\in\mathcal{U}\). The space \(\mathrm{T}^{n}\mathcal{S}\) contains only the fields from \(\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}}\) that can be represented by a tangential frame. We summarize the most used subtensor fields of \(\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}}\) for \(n\in 0,1,2\) in this paper in table 1. Some of them are defined in their associated section, where they are used. For \(n=1\) we omit the index, e. g. it is \(\mathrm{T}\mathcal{S}=\mathrm{T}^{1}\mathcal{S}\). Every subtensor field relation brings its uniquely defined orthogonal projection \(\Pi_{(\cdot)}\) along, which is labeled by its image, i. e. the subtensor field space. The orthogonal projection \(\Pi_{\mathrm{T}^{n}\mathcal{S}}:\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}} \to\mathrm{T}^{n}\mathcal{S}\) projects \(n\)-tensor fields into tangential \(n\)-tensor fields for instance. We use the global Cartesian as well as local tangential frames and thus, for a better readability, also different index notations (Ricci calculus) in accordance with their frame. We apply capital Latin letters \(A,B,C,\ldots\) w. r. t. the Cartesian frame \(\{e_{A}\}\), e. g. we could use \(R^{AB}\boldsymbol{e}_{A}\otimes\boldsymbol{e}_{B}\) to describe a 2-tensor field \(\boldsymbol{R}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\). Small Latin letters \(i,j,k,\ldots\) are used w. r. t. the tangential frame \(\{\partial_{i}\boldsymbol{X}\}\) derived from parameterization (1). For instance we could write \(r^{ij}\partial_{i}\boldsymbol{X}\otimes\partial_{j}\boldsymbol{X}\) for a tangential 2-tensor field \(\boldsymbol{r}\in\mathrm{T}^{2}\mathcal{S}\).
We only use two kinds of spatial derivatives. One is the covariant derivative \(\nabla:\mathrm{T}^{n}\mathcal{S}\to\mathrm{T}^{n+1}\mathcal{S}\) defined by the Christoffel symbols \(\Gamma_{ijk}=\frac{1}{2}(\partial_{i}g_{jk}+\partial_{j}g_{ik}-\partial_{k}g_ {ij})\) in a usual way, where \(g_{ij}\) is the covariant proxy of the metric tensor. In index notations, we represent \(\nabla\) with a stroke "\(|\)". For instance, we write \([\nabla\boldsymbol{r}]^{ij}_{\phantom{ij}k}=r^{ij}_{\phantom{ij}k}=\partial_{k }r^{ij}+\Gamma^{i}_{kl}r^{ij}+\Gamma^{j}_{kl}r^{ji}\) for \(\boldsymbol{r}\in\mathrm{T}^{2}\mathcal{S}\). The other one is the surface derivative \(\nabla_{\mathrm{C}}:\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}}\to\mathrm{T}^{ n}\mathbb{R}^{3}|_{\mathcal{S}}\otimes\mathrm{T}\mathcal{S}<\mathrm{T}^{n+1} \mathbb{R}^{3}|_{\mathcal{S}}\) defined as the covariant derivative on the Cartesian proxy components, which are scalar fields in \(\mathrm{T}^{0}\mathcal{S}\). As an example, it is \(\nabla_{\mathrm{C}}\,\boldsymbol{R}=\boldsymbol{e}_{A}\otimes\boldsymbol{e}_{B} \otimes\nabla R^{AB}\) valid for \(\boldsymbol{R}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\). For readers from other communities, it holds \(\nabla_{\mathrm{C}}\,\boldsymbol{R}=(\boldsymbol{\widetilde{\nabla}}\, \boldsymbol{\widetilde{R}})|_{\mathcal{S}}\boldsymbol{Id}_{\mathcal{S}}\), where \(\boldsymbol{\widetilde{R}}\in\mathrm{T}^{n}\mathbb{R}^{3}\) is an arbitrary smooth extension s. t. \(\boldsymbol{\widetilde{R}}|_{\mathcal{S}}=\boldsymbol{R}\in\mathrm{T}\mathbb{R} ^{3}|_{\mathcal{S}}\) is valid, \(\boldsymbol{\widetilde{\nabla}}:\mathrm{T}^{n}\mathbb{R}^{3}\to\mathrm{T}^{n+1 }\mathbb{R}^{3}\) the usual \(\mathbb{R}^{3}\)-gradient and \(\boldsymbol{Id}_{\mathcal{S}}\in\mathrm{T}^{2}\mathcal{S}\) the surface identity, resp. tangential projection, tensor field, e. g. given by \([\boldsymbol{Id}_{\mathcal{S}}]^{AB}=\delta^{AB}-\nu^{A}\nu^{B}\) or \([\boldsymbol{Id}_{\mathcal{S}}]^{ij}=g^{ij}\). Both derivatives are also related outside the Cartesian frame and we give these relations in the appropriated locations in this paper where they are needed. Further definitions for covariant differential operators
\begin{table}
\begin{tabular}{|l l|} \hline \(X_{m}\), \(X_{o}\) & material and observer parameterization \\ \hline \(g_{\mathrm{m}ij}=\left<\partial_{i}X_{m},\partial_{j}X_{m}\right>_{\mathrm{T}\mathcal{S}}\), & material and observer metric tensor proxy field \\ \(g_{\mathrm{o}ij}=\left<\partial_{i}X_{o},\partial_{j}X_{o}\right>_{\mathrm{T}\mathcal{S}}\) & \\ \hline \(g_{\mathrm{m}}^{ij}\), \(g_{\mathrm{o}}^{ij}\) & matrix inverse of proxy fields \(g_{\mathrm{m}ij}\) and \(g_{\mathrm{o}ij}\) \\ \hline \(\Gamma^{k}_{\mathrm{m}ij}\), \(\Gamma^{k}_{\mathrm{o}ij}\) & Christoffel symbols of 2nd kind w. r. t. \(g_{\mathrm{m}ij}\) and \(g_{\mathrm{o}ij}\) \\ \hline \(\boldsymbol{\nu}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) & normal field \\ \hline \(\boldsymbol{I\!I}=-\nabla_{\mathrm{C}}\,\boldsymbol{\nu}\), \(\mathcal{H}=\mathrm{Tr}\,\boldsymbol{I\!I}\) & shape operator and mean curvature \\ \hline \end{tabular}
\end{table}
Table 2: Frequently used quantities w. r. t. the material and the chosen observer.
like div (divergence), rot (curl), \(\Delta\) (Bochner-Laplace), etc., are derived from \(\nabla\) in the usual way. In the example section 2.6.3 we use the surface Laplace operator \(\Delta_{\mathrm{C}}:=(\mathrm{Tr}\,\nabla_{\mathrm{C}}^{2}):\mathrm{T}^{2} \mathbb{R}^{3}|_{S}\to\mathrm{T}^{2}\mathbb{R}^{3}|_{S}\) on 2-tensor fields, which stated a kinda connection Laplace operator a priori w. r. t. surface derivative \(\nabla_{\mathrm{C}}\). For more details see lemmas 24 and 25, where we show that \(\Delta_{\mathrm{C}}\) is the Laplace-Beltrami operator on the Cartesian Proxy components as well as a Bochner-like Laplace operator w. r. t. \(\nabla_{\mathrm{C}}\). Additionally, corollary 26 gives a relation to covariant differential operators outside the Cartesian frame. Inner products \(\langle\cdot,\cdot\rangle_{(\cdot)}\) are written with angle brackets and labeled by its associated space. For instance \(\langle\mathbf{r}_{1},\mathbf{r}_{2}\rangle_{\mathrm{T}\mathcal{S}}=g_{\mathrm{x}}g_{ \mathrm{y}}\dot{r}_{1}^{ij}r_{2}^{kl}\) is the local inner product, or \(\langle\mathbf{r}_{1},\mathbf{r}_{2}\rangle_{\mathrm{L}^{2}(\mathrm{T}\mathcal{S})}= \int_{\mathcal{S}}\langle\mathbf{r}_{1},\mathbf{r}_{2}\rangle_{\mathrm{T}\mathcal{S}} \,\mathrm{d}\mathcal{S}\) is the global inner product of \(\mathbf{r}_{1},\mathbf{r}_{2}\in\mathrm{T}^{2}\mathcal{S}\). Note that inner products on tensor fields are backwards compatible with their subtensor fields, e. g. it holds \(\langle\mathbf{r}_{1},\mathbf{r}_{2}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{S}}=\langle \mathbf{r}_{1},\mathbf{r}_{2}\rangle_{\mathrm{T}\mathcal{S}}\) for \(\mathbf{r}_{1},\mathbf{r}_{2}\in\mathrm{T}^{2}\mathcal{S}\). Norms are given and written according to their inner products, e. g. it is valid for \(\mathbf{r}\in\mathrm{T}\mathcal{S}\). We save writing an extra operation symbol, like a dot, for simple tensor-tensor multiplications \(\mathrm{T}^{\mathrm{T}}\mathbb{R}^{3}|_{S}\times\mathrm{T}^{m}\mathbb{R}^{3}|_ {S}\to\mathrm{T}^{n+m-2}\mathbb{R}^{3}|_{S}\), e. g. it is \(\mathbf{R}_{1}\mathbf{R}_{2}=R_{1}^{AB}R_{2B}\mathbf{e}_{A}\in\mathrm{T}\mathbb{R}^{3}|_{ \mathcal{S}}\) valid for \(\mathbf{R}_{1}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) and \(\mathbf{R}_{2}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\). However, we sometimes use the double-dot symbol ":" for the double-contraction product \(\mathrm{T}^{\mathrm{T}}\mathbb{R}^{3}|_{S}\times\mathrm{T}^{m}\mathbb{R}^{3}|_ {S}\to\mathrm{T}^{n+m+2}\mathbb{R}^{3}|_{\mathcal{S}}\), e. g. it holds \(\mathbf{R}_{1}\!:\!\mathbf{R}_{2}=\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}^{2} \mathbb{R}^{3}|_{S}}\in\mathrm{T}^{0}\mathcal{S}\) for \(\mathbf{R}_{1},\mathbf{R}_{2}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{S}\). As in [7], we use arguments in square brackets to denote functional dependencies, e. g. the scalar field \(f[\mathbf{X}_{m},\mathbf{V}_{m}]=\|\mathbf{X}_{m}\|_{\mathbb{R}^{3}}^{2}+\|\mathbf{V}_{m}\|_{ \mathbb{R}^{3}|_{S}}^{2}\in\mathrm{T}^{0}\mathcal{S}\) depends on the material surface parameterization \(\mathbf{X}_{m}\) as a proxy for the surface, w. r. t. the material observer, as well as on its velocity \(\mathbf{V}_{m}=\partial_{t}\mathbf{X}_{m}\). Note that functional dependencies do not have to be mutual independent as we see in the former example. In table 2 frequently used quantities, also related to the chosen observer, are summarized. 
We would also like to point out that Appendix A contains a collection of lemmas, corollaries and their justifications that may be helpful for understanding the quantities in table 2. For more details on observer-related notations, see [6]. Note that for the tangential material derivative, which is defined below, we use a dot over the field symbol. A bar between a term and the dot parenthesizes the term under the dot. For instance, we write \(\dot{f}=\dot{\overline{f_{1}f_{2}}}\) for \(f=f_{1}f_{2}\) in the context of scalar fields.
### Summary
In this section we provide a summary of the results in section 2 and relate them to the observer-invariant time derivatives derived in [6]. Tables 3 and 5 give an overview of tangential time derivatives on tangential vector fields in \(\mathrm{T}\mathcal{S}\) and 2-tensor fields in \(\mathrm{T}^{2}\mathcal{S}\), which are given in [6]. We formulate these time derivatives w. r. t. an observer parameterization as well as in relation to the material time derivative. Note that these time derivatives are special cases of the time derivatives for instantaneous tensor fields in [6]. We use the name prefix "Jaumann" synonymously with the prefix "corotational". Time derivatives on vector fields in \(\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) and 2-tensor fields in \(\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), which are derived in section 2, are summarized in tables 4 and 6. We formulate these time derivatives in an orthogonal tangential-normal-decomposition for tangential-normal-decomposed tensor fields, where we are able to use the corresponding tangential time-derivatives. This representation could be useful from an analytical perspective. In contrast, we also give a relation to the material derivative, which could be helpful for numerical implementations, since it is possible to apply a Cartesian frame of the embedding space for the material derivative, i. e. \(\mathrm{D}_{t}^{m}\mathbf{R}=\dot{R}^{A}\mathbf{e}_{A}\) for vector fields \(\mathbf{R}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) and \(\mathrm{D}_{t}^{m}\mathbf{R}=\dot{R}^{AB}\mathbf{e}_{A}\otimes\mathbf{e}_{B}\) for 2-tensor fields \(\mathbf{R}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), see (8) for general \(n\)-tensor fields. A very useful property of the material and Jaumann derivative is their inner product compatibility, i. e. the inner product of tensor fields obeys the product rule, see corollaries 2, 11 for vector fields and 4, 13 for 2-tensor fields. Likewise, they yield a compatible product rule with the tensor-vector product \(\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\times\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\to\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), see corollaries 5 and 14. Neither of the convected derivatives exhibits these behaviors. Note that the material derivative is not an extension of the tangential material derivative, contrary to all other time derivatives we present in this paper. However, the pure tangential part of the material derivative yields such an extension, which in turn describes an extension of [6, Proposition 4] for non-tangential tensor fields.
In context of 2-tensor fields we consider \(\mathrm{Q}\)-tensor fields \(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}<\mathrm{T}^{2}\mathbb{R}^{3}|_{ \mathcal{S}}\) as a subbundle, where our attention is mainly directed to the material and Jaumann derivative. Since \(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) is closed w. r. t. both derivatives, we could use the more general representations for \(\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) in table 6. These time derivatives apply in the surface Landau-de Gennes model (36). A more aligned formulation of the material and Jaumann derivative, w. r. t. the orthogonal decomposition (29) for \(\mathrm{Q}\)-tensor fields, can be found in 30 and 31. We also consider surface conforming \(\mathrm{Q}\)-tensor fields \(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}<\mathrm{Q}^{2 }\mathbb{R}^{3}|_{\mathcal{S}}\), which are not closed by the material derivative but by the Jaumann derivative. Hence we present an adjusted material derivative \(\mathrm{D}_{t}^{C_{S}m}:=\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R} ^{3}|_{\mathcal{S}}}\circ\mathrm{D}_{t}^{m}\) under the aid of the unique orthogonal projection \(\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}: \mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\to\mathrm{C}_{\mathcal{S}}\mathrm{Q} ^{2}\mathbb{R}^{3}|_{\mathcal{S}}\). This surface conforming material derivative and the Jaumann derivative are used in the surface conforming Landau-de Gennes model (37). An orthogonal decomposition of both time derivatives on
surface conforming Q-tensor fields can be found in table 7 and apply in the equivalent formulation (40) of the surface conforming Landau-de Gennes model.
## 2 Derivations
### General Approach and Scalar Fields
Formally, we could define an arbitrary time-derivative on \(\mathbf{R}\in\mathbb{T}^{n}\mathbb{R}^{3}|_{S}\) by
\[(\mathrm{D}_{t}\,\mathbf{R})[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2}):=\lim_{\tau\to 0}\frac{1}{\tau}\left((\Phi_{t,\tau}^{*}\mathbf{R}[\mathbf{X}_{m}]|_{t+\tau})(t,y_{m}^{1},y_{m}^{2})-\mathbf{R}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\right) \tag{2}\]
where \(\Phi_{t,\tau}^{*}:\mathbb{T}^{n}\mathbb{R}^{3}|_{S,t+\tau}\to\mathbb{T}^{n} \mathbb{R}^{3}|_{S,t}\) is a convenient pullback by the map
\[\Phi_{t,\tau}:\mathcal{S}|_{t}\to\mathcal{S}|_{t+\tau}:\quad X_{m}(t,y_{m}^{1 },y_{m}^{2})\mapsto\mathbf{X}_{m}(t+\tau,y_{m}^{1},y_{m}^{2})\,.\]
Even if the time derivative is described by a material observer, w. r. t. its parameterization \(\mathbf{X}_{m}\), we are able to evaluate (2) by an arbitrary observer, w. r. t. parameterization \(\mathbf{X}_{0}\) with the aid of relation
\[\mathbf{R}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})=\mathbf{R}[\mathbf{X}_{0}](t,(\mathbf{X}_{0}|_{t }^{-1}\circ\mathbf{X}_{m})(t,y_{m}^{1},y_{m}^{2}))\in\mathbb{T}^{n}_{\mathbf{X}_{m}(t, y_{m}^{1},y_{m}^{2})}\mathbb{R}^{3}|_{\mathcal{S}}\,, \tag{3}\]
respectively, the inverse relation
\[\mathbf{R}[\mathbf{X}_{0}](t,y_{0}^{1},y_{0}^{2})=\mathbf{R}[\mathbf{X}_{m}](t,(\mathbf{X}_{m}|_{t }^{-1}\circ\mathbf{X}_{0})(t,y_{0}^{1},y_{0}^{2}))\in\mathbb{T}^{n}_{\mathbf{X}_{m}(t, y_{0}^{1},y_{0}^{2})}\mathbb{R}^{3}|_{\mathcal{S}}\,.\]
The general procedure is to assume a pullback, deduce the associated time derivative w. r. t. the material observer and transform it w. r. t. an arbitrary observer to establish observer-invariance.
For scalar fields \(f\in\mathbb{T}^{0}\mathbb{R}^{3}|_{\mathcal{S}}=\mathbb{T}^{0}\mathcal{S}\), i. e. \(n=0\), the only noteworthy pullback is simply given by
\[(\Phi_{t,\tau}^{*_{0}}f[\mathbf{X}_{m}]_{t+\tau})(t,y_{m}^{1},y_{m}^{2})=f[\mathbf{X} _{m}](t+\tau,y_{m}^{1},y_{m}^{2})\in\mathbb{T}^{0}_{\mathbf{X}_{m}(t,y_{m}^{1},y_ {m}^{2})}\mathcal{S}\,. \tag{4}\]
Hence, with \(\dot{f}:=\mathrm{D}_{t}\,|_{\Phi_{t,\tau}^{*}=\Phi_{t,\tau}^{*_{0}}}f\), (2) becomes
\[\dot{f}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})=\partial_{t}f[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\in\mathrm{T}^{0}_{\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})}\mathcal{S}\]
for a material observer. Getting the time derivative \(\dot{f}[\mathbf{X}_{\rm o}]\) for an arbitrary observer given by parameterization \(\mathbf{X}_{\rm o}\) is more difficult. The time derivative (2) as well as the pullback (4) have to be evaluated w. r. t. the relation (3). As we can see in appendix B.1, applying a Taylor expansion to this pullback at \(\tau=0\) leads to
\[\dot{f}[\mathbf{X}_{\rm o}](t,y_{\rm o}^{1},y_{\rm o}^{2})=\partial_{t}f[\mathbf{X}_{\rm o}](t,y_{\rm o}^{1},y_{\rm o}^{2})+(\nabla_{\mathbf{u}}f)[\mathbf{X}_{\rm o}](t,y_{\rm o}^{1},y_{\rm o}^{2})\in\mathrm{T}^{0}_{\mathbf{X}_{\rm o}(t,y_{\rm o}^{1},y_{\rm o}^{2})}\mathcal{S}\,,\]
\[\text{where}\quad\mathbf{u}=\mathbf{u}[\mathbf{X}_{\rm o},\mathbf{X}_{\rm m}](t,y_{\rm o}^{1},y_{\rm o}^{2}):=\mathbf{V}_{\rm m}[\mathbf{X}_{\rm m}](t,(\mathbf{X}_{\rm m}|_{t}^{-1}\circ\mathbf{X}_{\rm o})(t,y_{\rm o}^{1},y_{\rm o}^{2}))-\mathbf{V}_{\rm o}[\mathbf{X}_{\rm o}](t,y_{\rm o}^{1},y_{\rm o}^{2})\]
is the relative velocity, \(\mathbf{V}_{\rm o}[\mathbf{X}_{\rm o}](t,y_{\rm o}^{1},y_{\rm o}^{2}):=\partial_{t}\mathbf{X}_{\rm o}(t,y_{\rm o}^{1},y_{\rm o}^{2})\) the observer velocity and \(\mathbf{V}_{\rm m}[\mathbf{X}_{\rm m}](t,y_{\rm m}^{1},y_{\rm m}^{2}):=\partial_{t}\mathbf{X}_{\rm m}(t,y_{\rm m}^{1},y_{\rm m}^{2})\) the material velocity.
\[\dot{f}=\partial_{t}f+\nabla_{\mathbf{u}}f\in\mathrm{T}^{0}\mathcal{S} \tag{5}\]
for short, which is the common form in context of non(-Einstein)-relativistic settings [16], and ALE (Arbitrary Lagrangian-Eulerian) methods on non-stationary surfaces [17; 18]. A material perspective, i. e. \(\mathbf{u}=0\), applies to Lagrangian particle methods [19; 20] for instance. If \(f\) is extended in a volume around \(\mathcal{S}\), there are also alternative formulation of \(\dot{f}\), see [21] for instance.
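As a short consistency check, and to connect with the bar-dot notation from subsection 1.1, the scalar time derivative (5) inherits the Leibniz rule directly from \(\partial_{t}\) and \(\nabla_{\mathbf{u}}\), e. g. for \(f=f_{1}f_{2}\):
\[\dot{f}=\dot{\overline{f_{1}f_{2}}}=\partial_{t}(f_{1}f_{2})+\nabla_{\mathbf{u}}(f_{1}f_{2})=\left(\partial_{t}f_{1}+\nabla_{\mathbf{u}}f_{1}\right)f_{2}+f_{1}\left(\partial_{t}f_{2}+\nabla_{\mathbf{u}}f_{2}\right)=\dot{f}_{1}f_{2}+f_{1}\dot{f}_{2}\in\mathrm{T}^{0}\mathcal{S}\,.\]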
Since we consider \(\mathbb{R}^{3}\) quantities, even though restricted to the surface, we show at the end of each of the following subsections that all considered time derivatives are consistent to their counterpart in a volume, i. e. the thin film limit of a time derivative in a bulk equals its time derivatives on the surface. We use the thin film parameterization \(\chi[\mathbf{X}]\), defined by
\[\chi[\mathbf{X}](t,y^{1},y^{2},\xi):=\mathbf{X}(t,y^{1},y^{2})+\xi\nu[\mathbf{X}](t,y^{1},y ^{2}) \tag{6}\]
with \(\xi\in[-h,h]\), to describe the thin film \(\mathcal{S}_{h}\) around \(\mathcal{S}\), see [13] for more details. Therefore, \(\chi[\mathbf{X}_{\text{m}}]\) is the material and \(\chi[\mathbf{X}_{\text{o}}]\) an arbitrary observer thin film parameterization. According to this, \(\widetilde{\mathbf{V}}_{\text{m}}:=\partial_{t}\chi[\mathbf{X}_{\text{m}}]\) is the material and \(\widetilde{\mathbf{V}}_{\text{o}}:=\partial_{t}\chi[\mathbf{X}_{\text{o}}]\) the observer thin film velocity. We obtain the relative thin film velocity
\[\widetilde{\mathbf{V}}_{\text{m}}-\widetilde{\mathbf{V}}_{\text{o}}=\mathbf{u}-\xi\mathbf{H} \mathbf{u}\]
as a consequence by (A.13). For extended scalar fields \(\widehat{f}\in\mathrm{T}^{0}\mathcal{S}_{h}\), which are sufficing \(\widehat{f}|_{\xi=0}=f\in\mathrm{T}^{0}\mathcal{S}\), we use the Taylor expansion
\[\widehat{f}=f+\xi(\partial_{\xi}\widehat{f})|_{\xi=0}+\mathcal{O}(\xi^{2})\]
at \(\xi=0\). Note that the normal coordinate \(\xi\) and the time parameter \(t\) are mutually independent, i. e. \(\partial_{\xi}\) and \(\partial_{t}\) are commuting on scalar fields. Eventually, this yields
\[\dot{\widehat{f}}=\partial_{t}\widehat{f}+\widetilde{\nabla}_{\widetilde{\mathbf{V}}_{\text{m}}-\widetilde{\mathbf{V}}_{\text{o}}}\widehat{f}=\dot{f}+\mathcal{O}(\xi)\,, \tag{7}\]
i. e. it holds \(\dot{\widehat{f}}\to\dot{f}\) for \(h\to 0\).
### Material Derivative
In order to obtain the material time derivative we can simply use the Cartesian frame \(\{\mathbf{e}_{A}\}\), which is Eulerian and constant in space. Although an additional frame that is not given by the chart of the parameterization seems to complicate the situation at first glance, the material pullback becomes quite simple: it applies the scalar pullback (4) to each Cartesian component. This yields the definition
\[(\Phi_{t,\tau}^{*_{m}}\mathbf{R}[\mathbf{X}_{\text{m}}]|_{t+\tau})(t,y_{\text{m}}^{1},y_{\text{m}}^{2}):=R^{A_{1}\cdots A_{n}}[\mathbf{X}_{\text{m}}](t+\tau,y_{\text{m}}^{1},y_{\text{m}}^{2})\bigotimes_{\alpha=1}^{n}\mathbf{e}_{A_{\alpha}}\in\mathrm{T}^{n}_{\mathbf{X}_{\text{m}}(t,y_{\text{m}}^{1},y_{\text{m}}^{2})}\mathbb{R}^{3}|_{\mathcal{S}}\,.\]
Therefore the material derivative is given by \(\mathrm{D}_{t}^{m}:=\mathrm{D}_{t}\,|_{\Phi_{t,\tau}^{*}=\Phi_{t,\tau}^{*_{m}}}\), i. e.
\[(\mathrm{D}_{t}^{m}\,\mathbf{R})[\mathbf{X}_{\text{m}}](t,y_{\text{m}}^{1},y_{\text{m}}^{2})=\partial_{t}R^{A_{1}\cdots A_{n}}[\mathbf{X}_{\text{m}}](t,y_{\text{m}}^{1},y_{\text{m}}^{2})\bigotimes_{\alpha=1}^{n}\mathbf{e}_{A_{\alpha}}\in\mathrm{T}^{n}_{\mathbf{X}_{\text{m}}(t,y_{\text{m}}^{1},y_{\text{m}}^{2})}\mathbb{R}^{3}|_{\mathcal{S}}\,,\]
for the material observer. Since the frame is constant, we only have to consider the scalar Cartesian proxy fields \(R^{A_{1}\cdots A_{n}}[\mathbf{X}_{\text{m}}]\in\mathrm{T}^{0}\mathcal{S}\). For an arbitrary observer, (5) yields
\[\mathrm{D}_{t}^{m}\,\mathbf{R}=\dot{R}^{A_{1}\cdots A_{n}}\bigotimes_{\alpha=1}^{n}\mathbf{e}_{A_{\alpha}}\,. \tag{8}\]
One first observation of (8) is that this time derivative equals the material time-derivative in a volume up to the restriction to the surface, i. e. it does not depend on behaviors of the surface at all. This is not to be expected by other time-derivatives. Moreover, (8), contrary to (2) in general, is now represented in context of an arbitrary observer chart, i. e. all we have to do is calculating (8) also in terms of an arbitrary extended surface observer frame \(\{\partial_{1}\mathbf{X}_{\text{o}},\partial_{2}\mathbf{X}_{\text{o}},\nu\}\). Note that the Cartesian frame yields
\[\mathbf{e}_{A}=\delta_{AB}\left(g_{\text{o}}^{ij}\partial_{j}X_{\text{o}}^{B}\,\partial_{i}\mathbf{X}_{\text{o}}+\nu^{B}\mathbf{\nu}\right)\,,\]
at all local events \((t,y_{\text{o}}^{1},y_{\text{o}}^{2})\). In the following subsections we transform the frame and the associated proxy fields to the extended observer frame especially for vector and 2-tensor fields.
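For completeness, this frame representation follows from splitting \(\mathbf{e}_{A}\) into its tangential and normal part, using only \(\boldsymbol{Id}_{\mathcal{S}}=g_{\text{o}}^{ij}\partial_{i}\mathbf{X}_{\text{o}}\otimes\partial_{j}\mathbf{X}_{\text{o}}\) from subsection 1.1:
\[\mathbf{e}_{A}=\boldsymbol{Id}_{\mathcal{S}}\mathbf{e}_{A}+\langle\mathbf{e}_{A},\mathbf{\nu}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}\mathbf{\nu}=g_{\text{o}}^{ij}\langle\mathbf{e}_{A},\partial_{j}\mathbf{X}_{\text{o}}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}\partial_{i}\mathbf{X}_{\text{o}}+\nu_{A}\mathbf{\nu}=\delta_{AB}\left(g_{\text{o}}^{ij}\partial_{j}X_{\text{o}}^{B}\,\partial_{i}\mathbf{X}_{\text{o}}+\nu^{B}\mathbf{\nu}\right)\,.\]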
For extended tensor fields \(\widetilde{\mathbf{R}}\in\mathrm{T}^{n}\mathcal{S}_{h}\), which satisfy \(\widetilde{\mathbf{R}}|_{\xi=0}=\mathbf{R}\in\mathrm{T}^{n}\mathbb{R}^{3}|_{\mathcal{S}}\), we conclude from (8) and (7) that
\[\dot{\widetilde{\mathbf{R}}}=\dot{\widetilde{R}}^{A_{1}\cdots A_{n}}\bigotimes_{\alpha=1}^{n}\mathbf{e}_{A_{\alpha}}\to\mathrm{D}_{t}^{m}\,\mathbf{R} \tag{9}\]
is valid for \(h\to 0\).
#### 2.2.1 Vector Fields
To represent the material derivative \(\mathrm{D}_{t}^{m}\,\mathbf{R}=\dot{R}^{A}\mathbf{e}_{A}\) (8) on vector fields \(\mathbf{R}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), we use the orthogonal decomposition
\[\mathbf{R}=\mathbf{r}+\phi\mathbf{\nu}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\,, \tag{10}\]
where \(\mathbf{r}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) are given by \(\mathbf{R}\) uniquely. The tangential covariant observer proxy of \(\mathrm{D}_{t}^{m}\,\mathbf{R}\) yields
\[\langle\mathrm{D}_{t}^{m}\,\mathbf{R},\partial_{k}\mathbf{X}_{\mathrm{o}}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=\delta_{AB}\dot{R}^{A}\partial_{k}X_{\mathrm{o}}^{B}=\dot{\overline{\langle\mathbf{R},\partial_{k}\mathbf{X}_{\mathrm{o}}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}}}-R_{B}\dot{\overline{\partial_{k}X_{\mathrm{o}}^{B}}}=\partial_{t}r_{k}+u^{i}\partial_{i}r_{k}-(r_{B}+\phi\nu_{B})\left(\partial_{k}V_{\mathrm{o}}^{B}+u^{i}\partial_{i}\partial_{k}X_{\mathrm{o}}^{B}\right)\]
by time derivative (5) on scalar fields and decomposition (10). With (A.11), (A.8) and (A.1) we obtain
\[\langle\mathrm{D}_{t}^{m}\,\mathbf{R},\partial_{k}\mathbf{X}_{\theta} \rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}} =g_{kl}\dot{\partial}_{t}r^{j}+u^{i}\left(\partial_{i}r_{k}-\Gamma _{\alpha k}^{j}r_{j}\right)+G_{ij}[\mathbf{V}_{\theta}]r^{j}-\phi\left(u^{i}\dot{ H}_{ik}+b_{k}[V_{\theta}]\right)\] \[=g_{kl}\dot{\partial}_{t}r^{j}+u^{i}r_{kl}+G_{ij}[\mathbf{V}_{\theta} ]r^{j}-\phi b_{k}[\mathbf{V}_{m}]=[\dot{\mathbf{r}}-\phi\mathbf{b}[\mathbf{V}_{m}]]_{k}\ \,,\]
where \(\dot{\mathbf{r}}\in\mathrm{T}\mathcal{S}\) is the material derivative of the tangential vector field \(\mathbf{r}\) given in table 3. For the normal part of \(\mathrm{D}_{t}^{m}\,\mathbf{R}\) we use the time derivative (5) on scalar fields again and the rate of the normal field given in (A.14). This yields
\[\langle\mathrm{D}_{t}^{m}\,\mathbf{R},\mathbf{\nu}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=\delta_{AB}\dot{R}^{A}\nu^{B}=\dot{\overline{\langle\mathbf{R},\mathbf{\nu}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}}}-R_{B}\dot{\nu}^{B}=\dot{\phi}+\langle\mathbf{r},\mathbf{b}[\mathbf{V}_{m}]\rangle_{\mathrm{T}\mathcal{S}}\,.\]
**Corollary 1**.: _For all \(\mathbf{R}=\mathbf{r}+\phi\mathbf{\nu}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), \(\mathbf{r}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{m}\,\mathbf{R}=\dot{\mathbf{r}}-\phi\mathbf{b}[\mathbf{V}_{m}]+\left(\dot{\phi }+\langle\mathbf{r},\mathbf{b}[\mathbf{V}_{m}]\rangle_{\mathrm{T}\mathcal{S}}\right)\mathbf{ \nu}\,. \tag{11}\]
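A quick plausibility check of (11): for \(\mathbf{R}=\mathbf{\nu}\), i. e. \(\mathbf{r}=0\) and \(\phi=1\), the corollary reduces to
\[\mathrm{D}_{t}^{m}\,\mathbf{\nu}=-\mathbf{b}[\mathbf{V}_{m}]\,,\]
which is consistent with the rate of the normal field (A.14) used in the derivation above.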
Note that \(\mathrm{D}_{t}^{m}\,\mathbf{V}_{m}\) equals the material acceleration in an observer-invariant representation, see [6; 22]. To show inner product compatibility of the material derivative, we use that the proxy \(\delta_{AB}\) of the Cartesian metric tensor is in the kernel of the scalar time derivative (5), i. e. it holds \(\dot{\delta}_{AB}=\partial_{t}\delta_{AB}+u^{k}\partial_{k}\delta_{AB}=0\). Hence, we obtain \(\dot{\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}}}=\delta_{AB}(\dot{R}_{1}^{A}R_{2}^{B}+R_{1}^{A}\dot{R}_{2}^{B})\) for all \(\mathbf{R}_{1},\mathbf{R}_{2}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), which gives the following corollary.
**Corollary 2**.: _The material derivative on vector fields is compatible with the inner product, i. e. for all \(\mathbf{R}_{1}=\mathbf{r}_{1}+\phi_{1}\mathbf{\nu},\mathbf{R}_{2}=\mathbf{r}_{2}+\phi_{2}\mathbf{\nu}\in \mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) holds_
\[\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}\mathbb{R }^{3}|_{\mathcal{S}}}} =\langle\mathrm{D}_{t}^{m}\,\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T} \mathbb{R}^{3}|_{\mathcal{S}}}+\langle\mathbf{R}_{1},\mathrm{D}_{t}^{m}\,\mathbf{R}_{2} \rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}} \tag{12}\] \[=\langle\mathbf{\dot{r}}_{1},\mathbf{r}_{2}\rangle_{\mathrm{T}\mathcal{S} }+\langle\mathbf{r}_{1},\mathbf{\dot{r}}_{2}\rangle_{\mathrm{T}\mathcal{S}}+\dot{\phi}_{ 1}\phi_{2}+\phi_{1}\dot{\phi}_{2}\,.\]
#### 2.2.2 2-Tensor Fields
To represent the material derivative (8) on 2-tensor fields \(\mathbf{R}\), i. e. \(\mathrm{D}_{t}^{m}\,\mathbf{R}=\dot{R}^{AB}\mathbf{e}_{A}\otimes\mathbf{e}_{B}\), we use the orthogonal decomposition
\[\mathbf{R}=\mathbf{r}+\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathbf{\eta}_{R}+\phi \mathbf{\nu}\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\,, \tag{13}\]
where \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\), \(\mathbf{\eta}_{L},\mathbf{\eta}_{R}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) are given by \(\mathbf{R}\) uniquely. The tangential covariant observer proxy of \(\mathrm{D}_{t}^{m}\,\mathbf{R}\) yields
\[\left\langle\mathrm{D}_{t}^{m}\,\mathbf{R},\partial_{m}\mathbf{X}_{\mathsf{ o}}\otimes\partial_{n}\mathbf{X}_{\mathsf{e}}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{ \parallel}|_{\mathcal{S}}}\] \[\quad=\dot{\mathcal{R}}^{AB}\delta_{AC}\delta_{BD}\partial_{m}X_{ \mathsf{e}}^{C}\partial_{n}X_{\mathsf{e}}^{D}\] \[\quad=\overline{\langle\mathbf{R},\partial_{m}\mathbf{X}_{\mathsf{o}} \otimes\partial_{n}\mathbf{X}_{\mathsf{o}}\rangle_{\mathrm{T}^{2}\mathbb{R}^{ \parallel}|_{\mathcal{S}}}}-R_{CD}\left(\dot{\partial_{m}X_{\mathsf{o}}^{C} \partial_{n}X_{\mathsf{o}}^{D}+\partial_{m}X_{\mathsf{o}}^{C}\dot{\partial_{n }X_{\mathsf{o}}^{D}}\right)\] \[\quad=\partial_{t}r_{mn}+u^{k}\partial_{k}r_{mn}-R_{CD}\left( \partial_{m}V_{\mathsf{e}}^{C}\partial_{n}X_{\mathsf{o}}^{D}+\partial_{m}X_{ \mathsf{e}}^{C}\partial_{n}V_{\mathsf{o}}^{D}+u^{k}\partial_{k}\partial_{m}X_{ \mathsf{o}}^{C}\partial_{n}X_{\mathsf{o}}^{D}+u^{k}\partial_{m}X_{\mathsf{e}}^ {C}\partial_{k}\partial_{n}X_{\mathsf{o}}^{D}\right)\]
by time derivative (5) on scalar fields and decomposition (13), which reads \(R_{CD}=r_{CD}+\eta_{LC}\nu_{D}+\nu_{C}\eta_{RD}+\phi\nu_{C}\nu_{D}\) in the Cartesian proxy notation. With (A.12), (A.8) and (A.1) we obtain
\[\left\langle\mathrm{D}_{t}^{m}\,\mathbf{R},\partial_{m}\mathbf{X}_{\mathsf{ o}}\otimes\partial_{n}\mathbf{X}_{\mathsf{o}}\right\rangle_{\mathrm{T}^{2} \mathbb{R}^{\parallel}|_{\mathcal{S}}}\] \[\quad=g_{ami}g_{mj}\partial_{t}r^{jj}+u^{k}\partial_{k}r_{mn}+r_{ in}G_{m}^{\phantom{m}i}[\mathbf{V}_{\mathsf{o}}]+r_{m}G_{n}^{\phantom{m}i}[\mathbf{V}_{ \mathsf{o}}]-\eta_{Rn}b_{m}[\mathbf{V}_{\mathsf{o}}]-\eta_{Ln}b_{n}[\mathbf{V}_{ \mathsf{o}}]\] \[\qquad\qquad-u^{k}\left(r_{i}^{\prime}\Gamma_{cmki}+r_{m}^{ \phantom{m}i}\Gamma_{cmki}+\eta_{Rn}H_{mk}+\eta_{Lm}H_{mk}\right)\] \[\quad=g_{ami}g_{mj}\partial_{t}r^{jj}+u^{k}r_{m|k}+r_{in}G_{m}^{ \phantom{m}i}[\mathbf{V}_{\mathsf{o}}]+r_{m}G_{n}^{\phantom{m}i}[\mathbf{V}_{\mathsf{o }}]-\eta_{Rn}b_{m}[\mathbf{V}_{\mathsf{m}}]-\eta_{Ln}b_{n}[\mathbf{V}_{\mathsf{m}}]\] \[\quad=\left[\dot{\mathbf{r}}-\mathbf{\eta}_{L}\otimes\mathbf{b}[\mathbf{V}_{ \mathsf{m}}]-\mathbf{b}[\mathbf{V}_{\mathsf{m}}]\otimes\mathbf{\eta}_{R}\right]_{mn}\,\]
where \(\dot{\mathbf{r}}\in\mathrm{T}^{2}\mathcal{S}\) is the material derivative of the tangential 2-tensor field \(\mathbf{r}\) given in table 5. In the same manner we calculate the covariant observer proxy of the tangential-normal part. Hence, with (5), (13), (A.11), (A.8), (A.14) and (A.1), we get
\[\left\langle\mathrm{D}_{t}^{m}\,\mathbf{R},\partial_{m}\mathbf{X}_{\mathsf{ o}}\otimes\mathbf{\nu}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{ \parallel}|_{\mathcal{S}}}\] \[\quad=\dot{\mathcal{R}}^{AB}\delta_{AC}\delta_{BD}\partial_{m}X_{ \mathsf{o}}^{C}\nu^{D}=\overline{\langle\mathbf{R},\partial_{m}\dot{\mathbf{X}}_{ \mathsf{o}}\otimes\mathbf{\nu}\rangle_{\mathrm{T}^{2}\mathbb{R}^{\parallel}|_{ \mathcal{S}}}}-R_{CD}\left(\dot{\partial_{m}X_{\mathsf{e}}^{C}\nu^{D}+\partial _{m}X_{\mathsf{e}}^{C}\dot{\nu}^{D}\right)\] \[\quad=g_{ami}\partial_{t}\eta_{Lm}^{i}+u^{k}\partial_{k}\eta_{Lm} -k_{CD}\left(\partial_{m}V_{\mathsf{e}}^{C}\nu^{D}+u^{k}\partial_{k}\partial_{ m}X_{\mathsf{e}}^{C}\nu^{D}-\partial_{m}X_{\mathsf{e}}^{C}b^{D}[\mathbf{V}_{ \mathsf{m}}]\right)\] \[\quad=g_{ami}\partial_{t}\eta_{L}^{i}+u^{k}\partial_{k}\eta_{Lm} -u^{k}\eta_{L}^{i}\eta_{L\alpha mi}+G_{m}[\mathbf{V}_{\mathsf{o}}]\eta_{L}^{i}- \phi b_{m}[\mathbf{V}_{\mathsf{o}}]-\phi u^{k}H_{mk}+r_{m}b^{i}[\mathbf{V}_{\mathsf{m}}]\] \[\quad=g_{ami}\partial_{t}\eta_{L}^{i}+u^{k}\eta_{Lm|k}+G_{m}[\mathbf{ V}_{\mathsf{o}}]\eta_{L}^{i}-\phi b_{m}[\mathbf{V}_{\mathsf{m}}]+r_{m}b^{i}[\mathbf{V}_{ \mathsf{m}}]=\left[\dot{\mathbf{\eta}}_{L}+\mathbf{r}\mathbf{b}[\mathbf{V}_{\mathsf{m}}]-\phi b [\mathbf{V}_{\mathsf{m}}]\right]_{m}\,\]
where \(\dot{\mathbf{\eta}}_{L}\in\mathrm{T}\mathcal{S}\) is the material derivative of the tangential vector field \(\mathbf{\eta}_{L}\) given in table 5. Since the material derivative is compatible with transposition, i. e. it is \(\mathrm{D}_{t}^{m}\,\mathbf{R}^{T}:=\mathrm{D}_{t}^{m}(\mathbf{R}^{T})=(\mathrm{D}_{t}^{m }\,\mathbf{R})^{T}\) valid, we get the normal-tangential part by
\[\left\langle\mathrm{D}_{t}^{m}\,\mathbf{R},\mathbf{\nu}\otimes\partial_{n}\mathbf{X}_{ \mathsf{o}}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{\parallel}|_{\mathcal{S}}} =\left\langle\mathrm{D}_{t}^{m}\,\mathbf{R}^{T},\partial_{n}\mathbf{X}_{ \mathsf{o}}\otimes\mathbf{\nu}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{\parallel}|_{ \mathcal{S}}}=\left[\dot{\mathbf{\eta}}_{R}+\mathbf{b}[\mathbf{V}_{\mathsf{m}}]\mathbf{r}-\phi b [\mathbf{V}_{\mathsf{m}}]\right]_{n}\]
as a consequence. The pure normal part of the material derivative yields
\[\left\langle\mathrm{D}_{t}^{m}\,\mathbf{R},\mathbf{\nu}\otimes\mathbf{\nu} \right\rangle_{\mathrm{T}^{2}\mathbb{R}^{\parallel}|_{\mathcal{S}}} =\dot{\mathcal{R}}^{AB}\delta_{AC}\delta_{BD}\nu^{C}\nu^{D}=\dot{ \overline{\langle\mathbf{R},\mathbf{\nu}\otimes\mathbf{\nu}\rangle_{\mathrm{T}^{2} \mathbb{R}^{\parallel}|_{\mathcal{S}}}}}-R_{CD}\left(\dot{\nu}^{C}\nu^{D}+\nu^{C} \dot{\nu}^{D}\right)\] \[=\dot{\phi}+\left\langle\mathbf{\eta}_{L}+\mathbf{\eta}_{R},\mathbf{b}[\mathbf{V}_{ \mathsf{m}}]\right\rangle_{\mathrm{T}\mathcal{S}}\.\]
by (5) and (A.14).
**Corollary 3**.: _For all \(\mathbf{R}=\mathbf{r}+\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathbf{\eta}_{R}+\phi\mathbf{ \nu}\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{\parallel}|_{\mathcal{S}}\), \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\), \(\mathbf{\eta}_{L},\mathbf{\eta}_{R}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{m}\,\mathbf{R}=\dot{\mathbf{r}}-\mathbf{\eta}_{L}\otimes\mathbf{b}[\mathbf{V}_{m}]-\mathbf{b}[\mathbf{V}_{m}]\otimes\mathbf{\eta}_{R}+\left(\dot{\phi}+\langle\mathbf{\eta}_{L}+\mathbf{\eta}_{R},\mathbf{b}[\mathbf{V}_{m}]\rangle_{\mathrm{T}\mathcal{S}}\right)\mathbf{\nu}\otimes\mathbf{\nu}+\left(\dot{\mathbf{\eta}}_{L}+\mathbf{r}\mathbf{b}[\mathbf{V}_{m}]-\phi\mathbf{b}[\mathbf{V}_{m}]\right)\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\left(\dot{\mathbf{\eta}}_{R}+\mathbf{b}[\mathbf{V}_{m}]\mathbf{r}-\phi\mathbf{b}[\mathbf{V}_{m}]\right)\,. \tag{14}\]
**Corollary 4**.: _The material derivative on 2-tensor fields is compatible with the inner product, i. e. for all \(\mathbf{R}_{\alpha}=\mathbf{r}_{\alpha}+\mathbf{\eta}_{aL}\otimes\mathbf{v}+\mathbf{\nu}\otimes\mathbf{ \eta}_{aR}+\phi_{\alpha}\mathbf{\nu}\otimes\mathbf{v}\in\mathrm{T}^{2}\mathbb{R}^{3}|_ {\mathcal{S}}\), with \(\alpha=1,2\), holds_
\[\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}^{2} \mathbb{R}^{3}|_{\mathcal{S}}}} =\langle\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}_{1},\mathbf{R}_{2} \rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}+\langle\mathbf{R}_{1}, \mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}_{2}\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}| _{\mathcal{S}}} \tag{15}\] \[=\langle\dot{\mathbf{r}}_{1},\mathbf{r}_{2}\rangle_{\mathrm{T}^{2} \mathcal{S}}+\langle\mathbf{r}_{1},\dot{\mathbf{r}}_{2}\rangle_{\mathrm{T}^{2} \mathcal{S}}+\dot{\phi}_{1}\phi_{2}+\phi_{1}\dot{\phi}_{2}\] \[\quad+\langle\dot{\mathbf{\eta}}_{1L},\mathbf{\eta}_{2L}\rangle_{ \mathrm{T}\mathcal{S}}+\langle\mathbf{\eta}_{1L},\dot{\mathbf{\eta}}_{2L}\rangle_{ \mathrm{T}\mathcal{S}}+\langle\dot{\mathbf{\eta}}_{1R},\mathbf{\eta}_{2R}\rangle_{ \mathrm{T}\mathcal{S}}+\langle\mathbf{\eta}_{1R},\dot{\mathbf{\eta}}_{2R}\rangle_{ \mathrm{T}\mathcal{S}}\,.\]
Since \(\mathrm{D}_{t}^{\mathrm{m}}(\mathbf{R}\mathbf{P})=\dot{\overline{R^{AB}P_{B}}}\mathbf{e}_{A}=(\dot{R}^{AB}P_{B}+R^{A}_{\phantom{A}B}\dot{P}^{B})\mathbf{e}_{A}\) holds for all \(\mathbf{P}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), we obtain the following corollary.
**Corollary 5**.: _The material derivative is compatible with the 2-tensor-vector product, i. e. for all \(\mathbf{R}=\mathbf{r}+\mathbf{\eta}_{L}\otimes\mathbf{v}+\mathbf{\nu}\otimes\mathbf{\eta}_{R}+\phi\mathbf{ \nu}\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) and \(\mathbf{P}=\mathbf{p}+\psi\mathbf{\nu}\in\mathrm{T}\mathbb{S}\), \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\), \(\mathbf{\eta}_{L},\mathbf{\eta}_{R},\mathbf{p}\in\mathrm{T}\mathcal{S}\) and \(\phi,\psi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{\mathrm{m}}(\mathbf{R}\mathbf{P}) =(\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R})\mathbf{P}+\mathbf{R}(\mathrm{D}_{t }^{\mathrm{m}}\,\mathbf{P}) \tag{16}\] \[=\dot{\mathbf{r}}\mathbf{p}+\mathbf{r}\dot{\mathbf{p}}+\dot{\phi}\mathbf{\eta}_{L}+ \psi\dot{\mathbf{\eta}}_{L}-(\phi\psi+\langle\mathbf{\eta}_{R},\mathbf{p}\rangle_{ \mathrm{T}\mathcal{S}})\mathbf{b}[\mathbf{V}_{\mathrm{m}}]\] \[\quad+\left(\dot{\phi}\psi+\phi\dot{\psi}+\langle\dot{\mathbf{\eta} }_{R},\mathbf{p}\rangle_{\mathrm{T}\mathcal{S}}+\langle\mathbf{\eta}_{R},\dot{\mathbf{p}} \rangle_{\mathrm{T}\mathcal{S}}+\langle\mathbf{r}\mathbf{p}+\psi\mathbf{\eta}_{L},\mathbf{b}[ \mathbf{V}_{\mathrm{m}}]\rangle_{\mathrm{T}\mathcal{S}}\right)\mathbf{\nu}\,.\]
### Upper-Convected Derivative
In order to obtain the upper-convected derivative, we choose a pullback for the time derivative (2), which adhere to the contravariant material proxy instead of the Cartesian proxy as it is stipulated for the material derivative. We give the exact definition for the vector and tensor field case in its associated subsections. In contrast to [6], we use the short naming "upper-convected" for "upper-upper-convected" or "fully-upper-convected", since we do not treat any mixed-convected derivative in this paper.
#### 2.3.1 Vector Fields
We consider the upper-convected pullback \(\Phi_{t,\tau}^{*_{\sharp}}:\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S},t+\tau}\to\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S},t}\) given by
\[(\Phi_{t,\tau}^{*_{\sharp}}\mathbf{R}[\mathbf{X}_{\mathrm{m}}]|_{t+\tau})(t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2}):=r^{i}[\mathbf{X}_{\mathrm{m}}](t+\tau,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\partial_{i}\mathbf{X}_{\mathrm{m}}(t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})+\phi[\mathbf{X}_{\mathrm{m}}](t+\tau,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\mathbf{\nu}[\mathbf{X}_{\mathrm{m}}](t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\]
for decompositions (10) of vector fields \(\mathbf{R}\). With this we define the upper-convected derivative by \(\mathrm{D}_{t}^{\sharp}:=\mathrm{D}_{t}\,|_{\Phi_{t,\tau}^{*}=\Phi_{t,\tau}^{*_{\sharp}}}\), i. e. the time derivative (2) yields
\[\mathrm{D}_{t}^{\sharp}\,\mathbf{R}[\mathbf{X}_{\mathrm{m}}]=(\partial_{t}r^{\prime}[ \mathbf{X}_{\mathrm{m}}])\partial_{i}\mathbf{X}_{\mathrm{m}}+(\partial_{t}\phi[\mathbf{X}_{ \mathrm{m}}])\mathbf{\nu}[\mathbf{X}_{\mathrm{m}}] \tag{17}\]
w. r. t. the material observer locally at material events \((t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\). Instead of transforming the frame to an arbitrary observer frame, we simply relate this to the material derivative (11), where we already know the observer-invariant description. Term by term, we obtain
\[(\partial_{t}r^{\prime}[\mathbf{X}_{\mathrm{m}}])\partial_{i}\mathbf{X}_{ \mathrm{m}} =\partial_{t}(r^{A}[\mathbf{X}_{\mathrm{m}}]\mathbf{e}_{A})-r^{\prime}[\bm {X}_{\mathrm{m}}]\partial_{i}\mathbf{V}_{\mathrm{m}}\] \[=\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{r}[\mathbf{X}_{\mathrm{m}}]-\mathbf{G}[ \mathbf{X}_{\mathrm{m}},\mathbf{V}_{\mathrm{m}}]\mathbf{r}[\mathbf{X}_{\mathrm{m}}]-\langle \mathbf{r}[\mathbf{X}_{\mathrm{m}}],\mathbf{b}[\mathbf{X}_{\mathrm{m}},\mathbf{V}_{\mathrm{m}}] \rangle_{\mathrm{T}\mathcal{S}}\,\mathbf{\nu}[\mathbf{X}_{\mathrm{m}}]\] \[(\partial_{t}\phi[\mathbf{X}_{\mathrm{m}}])\mathbf{\nu}[\mathbf{X}_{\mathrm{m}}] =\partial_{t}(\phi[\mathbf{X}_{\mathrm{m}}]\mathbf{\nu}[\mathbf{X}_{\mathrm{m}}] )-\phi[\mathbf{X}_{\mathrm{m}}]\partial_{t}\mathbf{\nu}[\mathbf{X}_{\mathrm{m}}]=D_{t}^{ \mathrm{m}}(\phi\mathbf{\nu})[\mathbf{X}_{\mathrm{m}}]+\phi[\mathbf{X}_{\mathrm{m}}]\mathbf{b}[ \mathbf{X}_{\mathrm{m}},\mathbf{V}_{\mathrm{m}}]\]
with (A.8) and (A.13). The first summands add up to the material derivative \(\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}[\mathbf{X}_{\mathrm{m}}]\), which has an observer-invariant representation. The remaining summands are instantaneous and hence are observer-invariant a priori. Therefore we can express the upper-convected derivative w. r. t. an arbitrary observer. This justifies omitting the parameterization argument \(\mathbf{X}_{\mathrm{o}}\). For instance, we write \(\mathrm{D}_{t}^{\sharp}\,\mathbf{R}=\mathrm{D}_{t}^{\sharp}\,\mathbf{R}[\mathbf{X}_{\mathrm{o}}]\) for short. Moreover, we can relate \(\dot{\mathbf{r}}\) to the tangential upper-convected derivative by \(\mathfrak{L}^{\sharp}\mathbf{r}=\dot{\mathbf{r}}-\mathbf{G}[\mathbf{V}_{\mathrm{m}}]\mathbf{r}\in\mathrm{T}\mathcal{S}\) given in table 3. We summarize this in the following corollary with the aid of the tensor field
\[\mathbf{\mathcal{G}}[\mathbf{V}_{\mathrm{m}}] :=\mathbf{G}[\mathbf{V}_{\mathrm{m}}]+\mathbf{\nu}\otimes\mathbf{b}[\mathbf{V}_{\mathrm{m}}]-\mathbf{b}[\mathbf{V}_{\mathrm{m}}]\otimes\mathbf{\nu}=\nabla\mathbf{v}_{\mathrm{m}}-v_{\perp}\mathbf{I\!I}+\mathbf{\nu}\otimes(\nabla v_{\perp}+\mathbf{I\!I}\mathbf{v}_{\mathrm{m}})-(\nabla v_{\perp}+\mathbf{I\!I}\mathbf{v}_{\mathrm{m}})\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\,. \tag{18}\]
**Corollary 6**.: _For all \(\mathbf{R}=\mathbf{r}+\phi\mathbf{\nu}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), \(\mathbf{r}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{\sharp}\,\mathbf{R}=\mathrm{D}_{t}^{m}\,\mathbf{R}-\mathbf{\mathcal{G}}[\mathbf{V}_{m}]\mathbf{R}=\mathfrak{L}^{\sharp}\mathbf{r}+\dot{\phi}\mathbf{\nu}\,. \tag{19}\]
Note that in contrast to the material or Jaumann derivative, the upper-convected derivative is not compatible with the inner product in general. Substituting (19) into (12) yields
\[\dot{\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}}}=\left\langle\mathrm{D}_{t}^{\sharp}\,\mathbf{R}_{1},\mathbf{R}_{2}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}+\left\langle\mathbf{R}_{1},\mathrm{D}_{t}^{\sharp}\,\mathbf{R}_{2}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}+\left\langle\mathbf{G}[\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{V}_{m}],\mathbf{R}_{1}\otimes\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\]
for all \(\mathbf{R}_{1},\mathbf{R}_{2}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), where \(\mathbf{G}[\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{V}_{m}]\) is vanishing if and only if the material carries out a rigid body motion.
For extended vector fields \(\widehat{\mathbf{R}}\in\mathrm{T}\mathcal{S}_{h}\), which satisfy \(\widehat{\mathbf{R}}|_{\xi=0}=\mathbf{R}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), we conclude from (9) and (A.15) that for the upper-convected \(\mathbb{R}^{3}\)-time derivative
\[\dot{\widehat{\mathbf{R}}}-(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}})\widehat{\mathbf{R}}\to\mathrm{D}_{t}^{\sharp}\,\mathbf{R}\]
is valid for \(h\to 0\).
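The pullback definition (17) lends itself directly to a numerical evaluation. The following minimal Python sketch implements it by finite differences; the inflating-sphere parameterization, the sample vector field and all step sizes are illustrative assumptions and not taken from the paper.

```python
# Sketch of the upper-convected derivative via its defining pullback (17):
# the contravariant proxy components of R are finite-differenced in time at a
# fixed material coordinate and re-attached to the frame at time t.
# Assumptions: X_m is a uniformly inflating sphere of radius 1 + t, and R is
# an arbitrary smooth extrinsic vector field evaluated on the surface.
import numpy as np

def X_m(t, y1, y2):
    r = 1.0 + t  # inflating sphere, a stand-in for a general material motion
    return r * np.array([np.sin(y1) * np.cos(y2),
                         np.sin(y1) * np.sin(y2),
                         np.cos(y1)])

def frame(t, y1, y2, h=1e-6):
    # tangent vectors d_i X_m and outward unit normal via central differences
    d1 = (X_m(t, y1 + h, y2) - X_m(t, y1 - h, y2)) / (2 * h)
    d2 = (X_m(t, y1, y2 + h) - X_m(t, y1, y2 - h)) / (2 * h)
    nu = np.cross(d1, d2)
    return d1, d2, nu / np.linalg.norm(nu)

def proxies(t, y1, y2, R):
    # contravariant tangential proxy r^i = g^{ij} <R, d_j X_m> and normal part phi
    d1, d2, nu = frame(t, y1, y2)
    g = np.array([[d1 @ d1, d1 @ d2], [d2 @ d1, d2 @ d2]])
    r_cov = np.array([R(t, y1, y2) @ d1, R(t, y1, y2) @ d2])
    return np.linalg.solve(g, r_cov), R(t, y1, y2) @ nu

def upper_convected(t, y1, y2, R, tau=1e-6):
    # D_t^sharp R = (d_t r^i) d_i X_m + (d_t phi) nu, cf. (17)
    r0, phi0 = proxies(t, y1, y2, R)
    r1, phi1 = proxies(t + tau, y1, y2, R)
    d1, d2, nu = frame(t, y1, y2)
    return ((r1 - r0) / tau) @ np.array([d1, d2]) + ((phi1 - phi0) / tau) * nu

# example field: a fixed Cartesian vector restricted to the moving surface
R = lambda t, y1, y2: np.array([0.3, -0.1, 0.2])
print(upper_convected(0.0, 0.7, 1.1, R))
```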
#### 2.3.2 2-Tensor Fields
We consider the upper-convected pullback \(\Phi_{t,\tau}^{*\sharp}:\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S},t+\tau}\to\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S},t}\) given by
\[(\Phi_{t,\tau}^{*\sharp}\mathbf{R}[\mathbf{X}_{m}]|_{t+\tau})(t,y_{m}^{1},y_{m}^{2}):=r^{ij}[\mathbf{X}_{m}](t+\tau,y_{m}^{1},y_{m}^{2})\,\partial_{i}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})\otimes\partial_{j}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})\]
\[\qquad\qquad+\eta_{L}^{i}[\mathbf{X}_{m}](t+\tau,y_{m}^{1},y_{m}^{2})\,\partial_{i}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})\otimes\mathbf{\nu}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\]
\[\qquad\qquad+\eta_{R}^{j}[\mathbf{X}_{m}](t+\tau,y_{m}^{1},y_{m}^{2})\,\mathbf{\nu}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\otimes\partial_{j}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})\]
\[\qquad\qquad+\phi[\mathbf{X}_{m}](t+\tau,y_{m}^{1},y_{m}^{2})\,\mathbf{\nu}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\otimes\mathbf{\nu}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\]
for decompositions (13) of 2-tensor fields \(\mathbf{R}\). With this we define the upper-convected derivative by \(\mathrm{D}_{t}^{\sharp}:=\mathrm{D}_{t}\,|_{\Phi_{t,\tau}^{*}=\Phi_{t,\tau}^{*\sharp}}\), i. e. the time derivative (2) yields
\[\mathrm{D}_{t}^{\sharp}\,\mathbf{R}[\mathbf{X}_{m}]=(\partial_{t}r^{ij}[\mathbf{X}_{m}])\partial_{i}\mathbf{X}_{m}\otimes\partial_{j}\mathbf{X}_{m}+(\partial_{t}\phi[\mathbf{X}_{m}])\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}]+(\partial_{t}\eta_{L}^{i}[\mathbf{X}_{m}])\partial_{i}\mathbf{X}_{m}\otimes\mathbf{\nu}[\mathbf{X}_{m}]+(\partial_{t}\eta_{R}^{j}[\mathbf{X}_{m}])\mathbf{\nu}[\mathbf{X}_{m}]\otimes\partial_{j}\mathbf{X}_{m} \tag{20}\]
w. r. t. the material observer locally at material events \((t,y_{m}^{1},y_{m}^{2})\). Similarly to the procedure for vector fields, we relate this to the material derivative. Term by term, we obtain
\[(\partial_{t}r^{ij}[\mathbf{X}_{m}])\partial_{i}\mathbf{X}_{m}\otimes\partial_{j}\mathbf{X}_{m} =\partial_{t}\left(r^{AB}[\mathbf{X}_{m}]\mathbf{e}_{A}\otimes\mathbf{e}_{B}\right)-r^{ij}[\mathbf{X}_{m}]\left(G^{k}_{\ i}[\mathbf{X}_{m},\mathbf{V}_{m}]\partial_{k}\mathbf{X}_{m}+b_{i}[\mathbf{X}_{m},\mathbf{V}_{m}]\mathbf{\nu}[\mathbf{X}_{m}]\right)\otimes\partial_{j}\mathbf{X}_{m}-r^{ij}[\mathbf{X}_{m}]\partial_{i}\mathbf{X}_{m}\otimes\left(G^{k}_{\ j}[\mathbf{X}_{m},\mathbf{V}_{m}]\partial_{k}\mathbf{X}_{m}+b_{j}[\mathbf{X}_{m},\mathbf{V}_{m}]\mathbf{\nu}[\mathbf{X}_{m}]\right)\]
\[\qquad\qquad=\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{r}[\mathbf{X}_{m}]-\mathbf{G}[\mathbf{X}_{m},\mathbf{V}_{m}]\mathbf{r}[\mathbf{X}_{m}]-\mathbf{r}[\mathbf{X}_{m}]\mathbf{G}^{T}[\mathbf{X}_{m},\mathbf{V}_{m}]-\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{r}^{T}[\mathbf{X}_{m}]\mathbf{b}[\mathbf{X}_{m},\mathbf{V}_{m}]-\mathbf{r}[\mathbf{X}_{m}]\mathbf{b}[\mathbf{X}_{m},\mathbf{V}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}]\]
\[(\partial_{t}\eta_{L}^{i}[\mathbf{X}_{m}])\partial_{i}\mathbf{X}_{m}\otimes\mathbf{\nu}[\mathbf{X}_{m}] =(\mathrm{D}_{t}^{\sharp}\,\mathbf{\eta}_{L}[\mathbf{X}_{m}])\otimes\mathbf{\nu}[\mathbf{X}_{m}] =\mathrm{D}_{t}^{\mathrm{m}}(\mathbf{\eta}_{L}\otimes\mathbf{\nu})[\mathbf{X}_{m}]+\mathbf{\eta}_{L}[\mathbf{X}_{m}]\otimes\mathbf{b}[\mathbf{X}_{m},\mathbf{V}_{m}]-\mathbf{G}[\mathbf{X}_{m},\mathbf{V}_{m}]\mathbf{\eta}_{L}[\mathbf{X}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}]-\left\langle\mathbf{\eta}_{L}[\mathbf{X}_{m}],\mathbf{b}[\mathbf{X}_{m},\mathbf{V}_{m}]\right\rangle_{\mathrm{T}\mathcal{S}}\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}]\]
\[(\partial_{t}\eta_{R}^{j}[\mathbf{X}_{m}])\mathbf{\nu}[\mathbf{X}_{m}]\otimes\partial_{j}\mathbf{X}_{m} =\mathbf{\nu}[\mathbf{X}_{m}]\otimes(\mathrm{D}_{t}^{\sharp}\,\mathbf{\eta}_{R}[\mathbf{X}_{m}]) =\mathrm{D}_{t}^{\mathrm{m}}(\mathbf{\nu}\otimes\mathbf{\eta}_{R})[\mathbf{X}_{m}]+\mathbf{b}[\mathbf{X}_{m},\mathbf{V}_{m}]\otimes\mathbf{\eta}_{R}[\mathbf{X}_{m}]-\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{G}[\mathbf{X}_{m},\mathbf{V}_{m}]\mathbf{\eta}_{R}[\mathbf{X}_{m}]-\left\langle\mathbf{\eta}_{R}[\mathbf{X}_{m}],\mathbf{b}[\mathbf{X}_{m},\mathbf{V}_{m}]\right\rangle_{\mathrm{T}\mathcal{S}}\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}]\]
\[(\partial_{t}\phi[\mathbf{X}_{m}])\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}] =\dot{\phi}[\mathbf{X}_{m}]\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}] =\mathrm{D}_{t}^{\mathrm{m}}(\phi\mathbf{\nu}\otimes\mathbf{\nu})[\mathbf{X}_{m}]+\phi[\mathbf{X}_{m}]\left(\mathbf{b}[\mathbf{X}_{m},\mathbf{V}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}]+\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{b}[\mathbf{X}_{m},\mathbf{V}_{m}]\right)\]
due to (A.8), (17), (19) and (14). The first summands add up to the material derivative \(\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}[\mathbf{X}_{\mathrm{m}}]\), which has an observer-invariant representation. The remaining summands are instantaneous and hence observer-invariant a priori. Therefore we can express the upper-convected derivative w. r. t. an arbitrary observer. This justifies omitting the observer arguments in square brackets.
**Corollary 7**.: _For all \(\mathbf{R}=\mathbf{r}+\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathbf{\eta}_{R}+\phi\mathbf{ \nu}\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{S}\), \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\), \(\mathbf{\eta}_{L},\mathbf{\eta}_{R}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{\sharp}\,\mathbf{R}=\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}-\mathbf{\mathcal{G}}[\mathbf{V}_{\mathrm{m}}]\mathbf{R}-\mathbf{R}\mathbf{\mathcal{G}}^{T}[\mathbf{V}_{\mathrm{m}}]=\mathfrak{L}^{\sharp\sharp}\mathbf{r}+\mathfrak{L}^{\sharp}\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathfrak{L}^{\sharp}\mathbf{\eta}_{R}+\dot{\phi}\mathbf{\nu}\otimes\mathbf{\nu}\,. \tag{21}\]
Tangential upper-convected derivatives \(\mathfrak{L}^{\sharp\sharp}\) and \(\mathfrak{L}^{\sharp}\) on tangential tensor fields are given in tables 5 and 3. Note that in contrast to the material or Jaumann derivative, the upper-convected derivative is compatible neither with the inner nor with the tensor product in general. Substituting (21) into (15), resp. (21) and (19) into (16), yields
\[\dot{\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}}}=\left\langle\mathrm{D}_{t}^{\sharp}\,\mathbf{R}_{1},\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}+\left\langle\mathbf{R}_{1},\mathrm{D}_{t}^{\sharp}\,\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}+\left\langle\mathbf{G}[\mathbf{V}_{\mathrm{m}}]+\mathbf{G}^{T}[\mathbf{V}_{\mathrm{m}}],\mathbf{R}_{1}\mathbf{R}_{2}^{T}+\mathbf{R}_{1}^{T}\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\]
\[\mathrm{D}_{t}^{\sharp}(\mathbf{R}\mathbf{P})=(\mathrm{D}_{t}^{\sharp}\,\mathbf{R})\mathbf{P}+\mathbf{R}(\mathrm{D}_{t}^{\sharp}\,\mathbf{P})+\mathbf{R}\left(\mathbf{G}[\mathbf{V}_{\mathrm{m}}]+\mathbf{G}^{T}[\mathbf{V}_{\mathrm{m}}]\right)\mathbf{P}\]
for all \(\mathbf{R}_{1},\mathbf{R}_{2},\mathbf{R}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{S}\) and \(\mathbf{P}\in\mathrm{T}\mathbb{R}^{3}|_{S}\), where \(\mathbf{G}[\mathbf{V}_{\mathrm{m}}]+\mathbf{G}^{T}[\mathbf{V}_{\mathrm{m}}]\) is vanishing if and only if the material carries out a rigid body motion.
For extended 2-tensor fields \(\widehat{\mathbf{R}}\in\mathrm{T}^{2}\mathcal{S}_{h}\), which satisfy \(\widehat{\mathbf{R}}|_{\xi=0}=\mathbf{R}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), we conclude from (9) and (A.15) that for the upper-convected \(\mathbb{R}^{3}\)-time derivative
\[\dot{\widehat{\mathbf{R}}}-(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}})\widehat{\mathbf{R}}-\widehat{\mathbf{R}}(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}})^{T}\to\mathrm{D}_{t}^{\sharp}\,\mathbf{R}\]
is valid for \(h\to 0\).
### Lower-Convected Derivative
In order to obtain the lower-convected derivative, we choose a pullback for the time derivative (2) which adheres to the covariant material proxy instead of the contravariant material proxy, as is stipulated for the upper-convected derivative. We give the exact definition for the vector and tensor field case in the associated subsections. In contrast to [6], we use the short name "lower-convected" for "lower-lower-convected" or "fully-lower-convected", since we do not treat any mixed-convected derivative in this paper.
#### 2.4.1 Vector Fields
We consider the lower-convected pullback \(\Phi_{t,\tau}^{*\flat}:\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S},t+\tau}\to\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S},t}\) given by
\[(\Phi_{t,\tau}^{*\flat}\mathbf{R}[\mathbf{X}_{\mathrm{m}}]|_{t+\tau})(t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2}):=r_{i}[\mathbf{X}_{\mathrm{m}}](t+\tau,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\,\partial^{i}\mathbf{X}_{\mathrm{m}}(t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})+\phi[\mathbf{X}_{\mathrm{m}}](t+\tau,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\,\mathbf{\nu}[\mathbf{X}_{\mathrm{m}}](t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\]
for decompositions (10) of vector fields \(\mathbf{R}\), where \(\partial^{i}\mathbf{X}_{\mathrm{m}}:=g_{\mathrm{m}}^{ij}\partial_{j}\mathbf{X}_{\mathrm{m}}\) at all \((t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\). With this we define the lower-convected derivative by \(\mathrm{D}_{t}^{\flat}:=\mathrm{D}_{t}\,|_{\Phi_{t,\tau}^{*}=\Phi_{t,\tau}^{*\flat}}\), i. e. the time derivative (2) yields
\[\mathrm{D}_{t}^{\flat}\,\mathbf{R}[\mathbf{X}_{\mathrm{m}}]=g_{\mathrm{m}}^{ij}( \partial_{t}r_{j}[\mathbf{X}_{\mathrm{m}}])\partial_{i}\mathbf{X}_{\mathrm{m}}+( \partial_{t}\phi[\mathbf{X}_{\mathrm{m}}])\mathbf{\nu}[\mathbf{X}_{\mathrm{m}}]\]
w. r. t. the material observer locally at material events \((t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\). With (A.11), this is relatable to the upper-convected derivative (17) by
\[\mathrm{D}_{t}^{\flat}\,\mathbf{R}[\mathbf{X}_{\mathrm{m}}]=\mathrm{D}_{t}^{\sharp}\, \mathbf{R}[\mathbf{X}_{\mathrm{m}}]+(\mathbf{G}[\mathbf{X}_{\mathrm{m}},\mathbf{V}_{\mathrm{m}}]+\mathbf{G} ^{T}[\mathbf{X}_{\mathrm{m}},\mathbf{V}_{\mathrm{m}}])\mathbf{r}[\mathbf{X}_{\mathrm{m}}]\,.\]
This expression can be represented by an observer-invariant formulation. Therefore, we omit the observer argument in square brackets. Moreover, since all normal parts of \(\mathbf{\mathcal{G}}[\mathbf{V}_{\mathrm{m}}]\) (18) are antisymmetric, i. e. \(\mathbf{\mathcal{G}}[\mathbf{V}_{\mathrm{m}}]+\mathbf{\mathcal{G}}^{T}[\mathbf{V}_{\mathrm{m}}]=\mathbf{G}[\mathbf{V}_{\mathrm{m}}]+\mathbf{G}^{T}[\mathbf{V}_{\mathrm{m}}]\) holds, we conclude the following corollary.
**Corollary 8**.: _For all \(\mathbf{R}=\mathbf{r}+\phi\mathbf{\nu}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), \(\mathbf{r}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{\flat}\,\mathbf{R}=\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}+\mathbf{\mathcal{G}}^{T}[\mathbf{V}_{\mathrm{m}}]\mathbf{R}=\mathfrak{L}^{\flat}\mathbf{r}+\dot{\phi}\mathbf{\nu}\,. \tag{22}\]
The tangential lower-convected derivative \(\mathfrak{L}^{\flat}\) on tangential vector fields is given in table 3. Note that in contrast to the material or Jaumann derivative, the lower-convected derivative is not compatible with the inner product in general. Substituting (22) into (12) yields
\[\dot{\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}}}=\left\langle\mathrm{D}_{t}^{\flat}\,\mathbf{R}_{1},\mathbf{R}_{2}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}+\left\langle\mathbf{R}_{1},\mathrm{D}_{t}^{\flat}\,\mathbf{R}_{2}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}-\left\langle\mathbf{G}[\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{V}_{m}],\mathbf{R}_{1}\otimes\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\]
for all \(\mathbf{R}_{1},\mathbf{R}_{2}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), where \(\mathbf{G}[\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{V}_{m}]\) is vanishing if and only if the material carries out a rigid body motion.
For extended vector fields \(\widehat{\mathbf{R}}\in\mathrm{T}\mathcal{S}_{h}\), which satisfy \(\widehat{\mathbf{R}}|_{\xi=0}=\mathbf{R}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), we conclude from (9) and (A.15) that for the lower-convected \(\mathbb{R}^{3}\)-time derivative
\[\dot{\widehat{\mathbf{R}}}+(\widehat{\nabla}\,\widehat{\mathbf{V}}_{m})^{T}\widehat{\mathbf{R}}\to\mathrm{D}_{t}^{\flat}\,\mathbf{R}\]
is valid for \(h\to 0\).
#### 2.4.2 2-Tensor Fields
We consider the lower-convected pullback \(\Phi_{t,\tau}^{*\flat}:\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S},t+\tau}\to\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S},t}\) given by
\[(\Phi_{t,\tau}^{*\flat}\mathbf{R}[\mathbf{X}_{m}]|_{t+\tau})(t,y_{m}^{1},y_{m}^{2})=r_{ij}[\mathbf{X}_{m}](t+\tau,y_{m}^{1},y_{m}^{2})\,\partial^{i}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})\otimes\partial^{j}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})\]
\[\qquad\qquad+\eta_{Li}[\mathbf{X}_{m}](t+\tau,y_{m}^{1},y_{m}^{2})\,\partial^{i}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})\otimes\mathbf{\nu}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\]
\[\qquad\qquad+\eta_{Rj}[\mathbf{X}_{m}](t+\tau,y_{m}^{1},y_{m}^{2})\,\mathbf{\nu}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\otimes\partial^{j}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})\]
\[\qquad\qquad+\phi[\mathbf{X}_{m}](t+\tau,y_{m}^{1},y_{m}^{2})\,\mathbf{\nu}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\otimes\mathbf{\nu}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})\]
for decompositions (13) of 2-tensor fields \(\mathbf{R}\). With this we define the lower-convected derivative by \(\mathrm{D}_{t}^{\flat}:=\mathrm{D}_{t}\,|_{\Phi_{t,\tau}^{*}=\Phi_{t,\tau}^{*\flat}}\), i. e. the time derivative (2) yields
\[\mathrm{D}_{t}^{\flat}\,\mathbf{R}[\mathbf{X}_{m}]=(\partial_{t}r_{ij}[\mathbf{X}_{m}])g_{m}^{ik}g_{m}^{jl}\partial_{k}\mathbf{X}_{m}\otimes\partial_{l}\mathbf{X}_{m}+(\partial_{t}\phi[\mathbf{X}_{m}])\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}]+(\partial_{t}\eta_{Li}[\mathbf{X}_{m}])g_{m}^{ik}\partial_{k}\mathbf{X}_{m}\otimes\mathbf{\nu}[\mathbf{X}_{m}]+(\partial_{t}\eta_{Rj}[\mathbf{X}_{m}])g_{m}^{jl}\mathbf{\nu}[\mathbf{X}_{m}]\otimes\partial_{l}\mathbf{X}_{m}\]
w. r. t. the material observer locally at material events \((t,y_{m}^{1},y_{m}^{2})\). With (A.12) and (A.11), this is relatable to the upper-convected derivative (20) by
\[\mathrm{D}_{t}^{\flat}\,\mathbf{R}[\mathbf{X}_{m}] =\mathrm{D}_{t}^{\sharp}\,\mathbf{R}[\mathbf{X}_{m}]+2\mathbf{S}[\mathbf{X}_{m},\mathbf{V}_{m}]\mathbf{r}[\mathbf{X}_{m}]+2\mathbf{r}[\mathbf{X}_{m}]\mathbf{S}[\mathbf{X}_{m},\mathbf{V}_{m}]+2\mathbf{S}[\mathbf{X}_{m},\mathbf{V}_{m}]\mathbf{\eta}_{L}[\mathbf{X}_{m}]\otimes\mathbf{\nu}[\mathbf{X}_{m}]+2\mathbf{\nu}[\mathbf{X}_{m}]\otimes\mathbf{S}[\mathbf{X}_{m},\mathbf{V}_{m}]\mathbf{\eta}_{R}[\mathbf{X}_{m}]\,,\]
where \(2\mathbf{S}[\mathbf{X}_{m},\mathbf{V}_{m}]:=\mathbf{G}[\mathbf{X}_{m},\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{X} _{m},\mathbf{V}_{m}]=\mathbf{G}[\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{V}_{m}]\). This expression can be represented by an observer-invariant formulation. Therefore, we omit the observer argument in square brackets and conclude the following corollary.
**Corollary 9**.: _For all \(\mathbf{R}=\mathbf{r}+\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathbf{\eta}_{R}+\phi \mathbf{\nu}\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{\natural}|_{S}\), \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\), \(\mathbf{\eta}_{L},\mathbf{\eta}_{R}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{\flat}\,\mathbf{R}=\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}+\mathbf{\mathcal{G}}^{T}[\mathbf{V}_{m}]\mathbf{R}+\mathbf{R}\mathbf{\mathcal{G}}[\mathbf{V}_{m}]=\mathfrak{L}^{\flat\flat}\mathbf{r}+\mathfrak{L}^{\flat}\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathfrak{L}^{\flat}\mathbf{\eta}_{R}+\dot{\phi}\mathbf{\nu}\otimes\mathbf{\nu}\,. \tag{23}\]
Tangential lower-convected derivatives \(\mathfrak{L}^{\flat\flat}\) and \(\mathfrak{L}^{\flat}\) on tangential tensor fields are given in tables 5 and 3. Note that in contrast to the material or Jaumann derivative, the lower-convected derivative is compatible neither with the inner nor with the tensor product in general. Substituting (23) into (15), resp. (23) and (22) into (16), yields
\[\dot{\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}}}=\left\langle\mathrm{D}_{t}^{\flat}\,\mathbf{R}_{1},\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}+\left\langle\mathbf{R}_{1},\mathrm{D}_{t}^{\flat}\,\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}-\left\langle\mathbf{G}[\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{V}_{m}],\mathbf{R}_{1}\mathbf{R}_{2}^{T}+\mathbf{R}_{1}^{T}\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\]
\[\mathrm{D}_{t}^{\flat}(\mathbf{R}\mathbf{P})=(\mathrm{D}_{t}^{\flat}\,\mathbf{R})\mathbf{P}+\mathbf{R}(\mathrm{D}_{t}^{\flat}\,\mathbf{P})-\mathbf{R}\left(\mathbf{G}[\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{V}_{m}]\right)\mathbf{P}\]
for all \(\mathbf{R}_{1},\mathbf{R}_{2},\mathbf{R}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{S}\) and \(\mathbf{P}\in\mathrm{T}\mathbb{R}^{3}|_{S}\), where \(\mathbf{G}[\mathbf{V}_{m}]+\mathbf{G}^{T}[\mathbf{V}_{m}]\) is vanishing if and only if the material carries out a rigid body motion.
For extended 2-tensor fields \(\widehat{\mathbf{R}}\in\mathrm{T}^{2}\mathcal{S}_{h}\), which satisfy \(\widehat{\mathbf{R}}|_{\xi=0}=\mathbf{R}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), we conclude from (9) and (A.15) that for the lower-convected \(\mathbb{R}^{3}\)-time derivative
\[\dot{\widehat{\mathbf{R}}}+(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}})^{T}\widehat{\mathbf{R}}+\widehat{\mathbf{R}}(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}})\to\mathrm{D}_{t}^{\flat}\,\mathbf{R}\]
is valid for \(h\to 0\).
### Jaumann Derivative
For the sake of simplicity, we define the Jaumann derivative by
\[\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}:=\frac{1}{2}\left(\mathrm{D}_{t}^{\sharp}\,\mathbf{R}+\mathrm{D}_{t}^{\flat}\,\mathbf{R}\right) \tag{24}\]
for all vector or 2-tensor fields \(\mathbf{R}\) on \(\mathcal{S}\) instead of stipulating a pullback \(\Phi_{t,\tau}^{*\mathcal{J}}\), see the discussion in section 2.5.3 for more details. Therefore, the Jaumann derivative is observer-invariant a priori, since the upper- and lower-convected derivatives are observer-invariant. Note that the Jaumann derivative is also often called the corotational derivative.
#### 2.5.1 Vector Fields
By defining the skew-symmetric tensor field
\[\mathcal{A}[\mathbf{V}_{m}]:=\frac{\mathbf{\mathcal{G}}[\mathbf{V}_{m}]-\mathbf{\mathcal{G}}^{T}[\mathbf{V}_{m}]}{2}=\frac{\nabla\mathbf{v}_{m}-(\nabla\mathbf{v}_{m})^{T}}{2}+\mathbf{\nu}\otimes(\nabla v_{\perp}+\mathbf{I\!I}\mathbf{v}_{m})-(\nabla v_{\perp}+\mathbf{I\!I}\mathbf{v}_{m})\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\,, \tag{25}\]
we deduce the following corollary by (24), (19) and (22).
**Corollary 10**.: _For all \(\mathbf{R}=\mathbf{r}+\phi\mathbf{\nu}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), \(\mathbf{r}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}=\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}-\mathcal{A}[\mathbf{V}_{m}]\mathbf{R}=\mathfrak{J}\mathbf{r}+\dot{\phi}\mathbf{\nu}\,. \tag{26}\]
The tangential Jaumann derivative \(\mathfrak{J}\) on tangential vector fields is given in table 3. Since \(\mathcal{A}[\mathbf{V}_{m}]\) is skew-symmetric, substituting (26) into (12) yields the following corollary.
**Corollary 11**.: _The Jaumann derivative on vector fields is compatible with the inner product, i. e. for all \(\mathbf{R}_{1}=\mathbf{r}_{1}+\phi_{1}\mathbf{\nu},\mathbf{R}_{2}=\mathbf{r}_{2}+\phi_{2}\mathbf{\nu} \in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) holds_
\[\dot{\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}}}=\left\langle\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}_{1},\mathbf{R}_{2}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}+\left\langle\mathbf{R}_{1},\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}_{2}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=\left\langle\mathfrak{J}\mathbf{r}_{1},\mathbf{r}_{2}\right\rangle_{\mathrm{T}\mathcal{S}}+\left\langle\mathbf{r}_{1},\mathfrak{J}\mathbf{r}_{2}\right\rangle_{\mathrm{T}\mathcal{S}}+\dot{\phi}_{1}\phi_{2}+\phi_{1}\dot{\phi}_{2}\,.\]
For extended vector fields \(\widehat{\mathbf{R}}\in\mathrm{T}\mathcal{S}_{h}\), which satisfy \(\widehat{\mathbf{R}}|_{\xi=0}=\mathbf{R}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), we conclude from (9) and (A.15) that for the Jaumann \(\mathbb{R}^{3}\)-time derivative
\[\dot{\widehat{\mathbf{R}}}-\frac{1}{2}\left(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}}-(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}})^{T}\right)\widehat{\mathbf{R}}\to\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}\]
is valid for \(h\to 0\).
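The algebraic structure of (25) is easy to check point-wise. The following Python sketch assembles \(\mathbf{\mathcal{G}}[\mathbf{V}]\) and \(\mathcal{A}[\mathbf{V}]\) from hypothetical tangential data at one surface point; the frame, \(\mathbf{G}\) and \(\mathbf{b}\) values are illustrative assumptions and not taken from the paper.

```python
# Point-wise sketch of (18) and (25): G_cal = G + nu (x) b - b (x) nu is built
# from hypothetical tangential data G, b and the unit normal nu, and two facts
# used in the text are verified numerically: A[V] is skew-symmetric, and the
# normal parts of G_cal cancel in the symmetric part, i.e.
# G_cal + G_cal^T = G + G^T.
import numpy as np

rng = np.random.default_rng(0)
nu = np.array([0.0, 0.0, 1.0])                                  # unit normal at the point
t1, t2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])   # tangent frame

# hypothetical tangential velocity-gradient data: G maps TS -> TS, b in TS
G = sum(rng.normal() * np.outer(a, c) for a in (t1, t2) for c in (t1, t2))
b = rng.normal() * t1 + rng.normal() * t2

G_cal = G + np.outer(nu, b) - np.outer(b, nu)                   # eq. (18)
A = 0.5 * (G_cal - G_cal.T)                                     # eq. (25)

assert np.allclose(A, -A.T)                                     # A[V] is skew-symmetric
assert np.allclose(G_cal + G_cal.T, G + G.T)                    # normal parts are antisymmetric
print(A)
```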
#### 2.5.2 2-Tensor Fields
Using the tensor field \(\mathcal{A}[\mathbf{V}_{m}]\) (25), we deduce the following corollary by (24), (21) and (23).
**Corollary 12**.: _For all \(\mathbf{R}=\mathbf{r}+\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathbf{\eta}_{R}+\phi \mathbf{\nu}\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\), \(\mathbf{\eta}_{L},\mathbf{\eta}_{R}\in\mathrm{T}\mathcal{S}\) and \(\phi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}=\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{R}- \mathcal{A}[\mathbf{V}_{m}]\mathbf{R}+\mathcal{A}[\mathbf{V}_{m}]=\mathfrak{Im}\mathbf{r}+ \mathfrak{Im}\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathfrak{Im}\mathbf{ \eta}_{R}+\dot{\phi}\mathbf{\nu}\otimes\mathbf{\nu}\,. \tag{27}\]
The tangential Jaumann derivative \(\mathfrak{J}\) is given in table 5 for tangential 2-tensor fields and in table 3 for tangential vector fields. Since \(\mathcal{A}[\mathbf{V}_{m}]\) is skew-symmetric, substituting (27) into (15) yields the following corollary.
**Corollary 13**.: _The Jaumann derivative on 2-tensor fields is compatible with the inner product, i. e. for all \(\mathbf{R}_{\alpha}=\mathbf{r}_{\alpha}+\mathbf{\eta}_{\alpha L}\otimes\mathbf{\nu}+\mathbf{\nu} \otimes\mathbf{\eta}_{\alpha R}+\phi_{\alpha}\mathbf{\nu}\otimes\mathbf{\nu}\in\mathrm{T}^ {2}\mathbb{R}^{3}|_{\mathcal{S}}\), with \(\alpha=1,2\), holds_
\[\dot{\overline{\langle\mathbf{R}_{1},\mathbf{R}_{2}\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}}}=\left\langle\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}_{1},\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}+\left\langle\mathbf{R}_{1},\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}_{2}\right\rangle_{\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}=\left\langle\mathfrak{J}\mathbf{r}_{1},\mathbf{r}_{2}\right\rangle_{\mathrm{T}\mathcal{S}}+\left\langle\mathbf{r}_{1},\mathfrak{J}\mathbf{r}_{2}\right\rangle_{\mathrm{T}\mathcal{S}}+\dot{\phi}_{1}\phi_{2}+\phi_{1}\dot{\phi}_{2}+\left\langle\mathfrak{J}\mathbf{\eta}_{1L},\mathbf{\eta}_{2L}\right\rangle_{\mathrm{T}\mathcal{S}}+\left\langle\mathbf{\eta}_{1L},\mathfrak{J}\mathbf{\eta}_{2L}\right\rangle_{\mathrm{T}\mathcal{S}}+\left\langle\mathfrak{J}\mathbf{\eta}_{1R},\mathbf{\eta}_{2R}\right\rangle_{\mathrm{T}\mathcal{S}}+\left\langle\mathbf{\eta}_{1R},\mathfrak{J}\mathbf{\eta}_{2R}\right\rangle_{\mathrm{T}\mathcal{S}}\,.\]
Substituting (27) and (26) into (16) results in the following corollary.
**Corollary 14**.: _The Jaumann derivative is compatible with the 2-tensor-vector product, i. e. for all \(\mathbf{R}=\mathbf{r}+\mathbf{\eta}_{L}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathbf{\eta}_{R}+\phi\mathbf{\nu}\otimes\mathbf{\nu}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) and \(\mathbf{P}=\mathbf{p}+\psi\mathbf{\nu}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\), \(\mathbf{\eta}_{L},\mathbf{\eta}_{R},\mathbf{p}\in\mathrm{T}\mathcal{S}\) and \(\phi,\psi\in\mathrm{T}^{0}\mathcal{S}\) holds_
\[\mathrm{D}_{t}^{\mathcal{J}}(\mathbf{R}\mathbf{P})=(\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R})\mathbf{P}+\mathbf{R}(\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{P})=(\mathfrak{J}\mathbf{r})\mathbf{p}+\mathbf{r}(\mathfrak{J}\mathbf{p})+\psi\mathfrak{J}\mathbf{\eta}_{L}+\dot{\psi}\mathbf{\eta}_{L}+\left(\dot{\phi}\psi+\phi\dot{\psi}+\left\langle\mathfrak{J}\mathbf{\eta}_{R},\mathbf{p}\right\rangle_{\mathrm{T}\mathcal{S}}+\left\langle\mathbf{\eta}_{R},\mathfrak{J}\mathbf{p}\right\rangle_{\mathrm{T}\mathcal{S}}\right)\mathbf{\nu}\,. \tag{28}\]
For extended 2-tensor fields \(\widehat{\mathbf{R}}\in\mathrm{T}^{2}\mathcal{S}_{h}\), which satisfy \(\widehat{\mathbf{R}}|_{\xi=0}=\mathbf{R}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), we conclude from (9) and (A.15) that for the Jaumann \(\mathbb{R}^{3}\)-time derivative
\[\dot{\widehat{\mathbf{R}}}-\frac{1}{2}\left(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}}-(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}})^{T}\right)\widehat{\mathbf{R}}+\frac{1}{2}\widehat{\mathbf{R}}\left(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}}-(\widehat{\nabla}\,\widehat{\mathbf{V}}_{\mathrm{m}})^{T}\right)\to\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{R}\]
is valid for \(h\to 0\).
#### 2.5.3 Discussion of Approach (24)
The choice of a pullback in (2) is sufficient to determine the associated time derivative, but it is not necessary, i. e. we can find other pullbacks which define the same time derivative. The full Taylor expansion
\[(\Phi_{t,\tau}^{*}\mathbf{R}[\mathbf{X}_{\mathrm{m}}]|_{t+\tau})(t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})=\sum_{\alpha=0}^{\infty}\frac{\tau^{\alpha}}{\alpha!}( \mathrm{D}_{t}^{(\alpha)}\,\mathbf{R})[\mathbf{X}_{\mathrm{m}}](t,y_{\mathrm{m}}^{1},y_{\mathrm{m}}^{2})\]
at \(\tau=0\) might be the easiest way to see this, where \(\mathrm{D}_{t}^{(\alpha)}\) is a time derivative of \(\alpha\)th order. The reason we mention this is that we stipulate the identity (24) to define the Jaumann derivative in relation to the upper- and lower-convected derivative. Indeed, averaging the associated pullbacks to \(\Phi_{t,\tau}^{*\mathcal{J}}:=\frac{1}{2}\left(\Phi_{t,\tau}^{*\sharp}+\Phi_{t,\tau}^{*\flat}\right)\) would be sufficient to obtain the Jaumann derivative (24) also by \(\mathrm{D}_{t}^{\mathcal{J}}=\mathrm{D}_{t}\,|_{\Phi_{t,\tau}^{*}=\Phi_{t,\tau}^{*\mathcal{J}}}\). However, although this procedure would be justified, it is not very intuitive. More tangible would be a pullback structurally given by \(\Phi_{t,\tau}^{*\mathcal{J}}\mathbf{R}[\mathbf{X}_{\mathrm{m}}]|_{t+\tau}:=\mathbf{\Omega}_{t,\tau}^{T}[\mathbf{X}_{\mathrm{m}}]\mathbf{R}[\mathbf{X}_{\mathrm{m}}]|_{t+\tau}\) for vector fields, resp. \(\Phi_{t,\tau}^{*\mathcal{J}}\mathbf{R}[\mathbf{X}_{\mathrm{m}}]|_{t+\tau}:=\mathbf{\Omega}_{t,\tau}^{T}[\mathbf{X}_{\mathrm{m}}]\mathbf{R}[\mathbf{X}_{\mathrm{m}}]|_{t+\tau}\mathbf{\Omega}_{t,\tau}[\mathbf{X}_{\mathrm{m}}]\) for 2-tensor fields, where \(\mathbf{\Omega}_{t,\tau}[\mathbf{X}_{\mathrm{m}}]\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) is the rotation tensor, which rotates every local tangential plane and normal according to the material deformation \(\mathcal{S}|_{t}\to\mathcal{S}|_{t+\tau}\). It is only ensured that this pullback equals the former pullback to first order w. r. t. the Taylor expansion above. For the sake of simplicity we decided to approach the Jaumann derivative by (24) rather than determining \(\mathbf{\Omega}_{t,\tau}^{-1}\) and its necessary derivatives.
### Q-Tensor Fields
#### 2.6.1 General Q-Tensor Fields
Beside the orthogonal decomposition \(\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}=\mathrm{T}^{2}\mathcal{S}\oplus(\mathrm{T}\mathcal{S}\otimes\mathbf{\nu})\oplus(\mathbf{\nu}\otimes\mathrm{T}\mathcal{S})\oplus(\mathrm{T}^{0}\mathcal{S}\,\mathbf{\nu}\otimes\mathbf{\nu})\) realized by (13), there is another useful orthogonal decomposition for 2-tensor fields, namely \(\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}=(\mathrm{T}^{0}\mathcal{S}\,\mathbf{Id})\oplus\mathrm{A}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\oplus\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), where \(\mathbf{Id}\) is the Euclidean identity tensor field, e. g. implemented by \(\mathbf{Id}=\delta^{AB}\mathbf{e}_{A}\otimes\mathbf{e}_{B}\) w. r. t. a Cartesian frame, \(\mathrm{A}^{2}\mathbb{R}^{3}|_{\mathcal{S}}:=\{\mathbf{A}\in\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}:\mathbf{A}=-\mathbf{A}^{T}\}\) is the space of skew-symmetric tensor fields and
\[\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}:=\left\{\mathbf{Q}\in\mathrm{T}^{2} \mathbb{R}^{3}|_{\mathcal{S}}:\mathbf{Q}=\mathbf{Q}^{T}\ \text{and}\ \ \mathrm{Tr}\,\mathbf{Q}=0\right\}\]
is the space of symmetric and trace-free tensor fields, also called Q-tensor fields. In this section we examine the latter in context of time derivatives in more detail.
To describe \(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), which establishes a 5-dimensional vector bundle on \(\mathcal{S}\), by tangential quantities, we introduce the orthogonal decomposition
\[\mathbf{Q}=\mathbf{Q}\left[\mathbf{q},\mathbf{\eta},\beta\right]:=\mathbf{q}+\mathbf{\eta}\otimes\mathbf{ \nu}+\mathbf{\nu}\otimes\mathbf{\eta}+\beta\left(\mathbf{\nu}\otimes\mathbf{\nu}-\frac{1}{2} \mathbf{Id}_{\mathcal{S}}\right)\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\,, \tag{29}\]
where \(\mathbf{q}\in\mathrm{Q}^{2}\mathcal{S}:=\{\mathbf{q}\in\mathrm{T}^{2}\mathcal{S}:\mathbf{q}=\mathbf{q}^{T}\text{ and }\mathrm{Tr}\,\mathbf{q}=0\}\) is a tangential Q-tensor field, and \(\mathbf{\eta}\in\mathrm{T}\mathcal{S}\) and \(\beta\in\mathrm{T}^{0}\mathcal{S}\) are determined uniquely for all \(\mathbf{Q}\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\). \(\mathbf{Id}_{\mathcal{S}}\) is the tangential identity tensor field, e. g. implemented by
\(\mathbf{Id}_{\mathcal{S}}=g^{ij}\partial_{i}\mathbf{X}\otimes\partial_{j}\mathbf{X}\) w. r. t. the local tangential frame or \(\mathbf{Id}_{\mathcal{S}}=(\delta^{AB}-\nu^{A}\nu^{B})\mathbf{e}_{A}\otimes\mathbf{e}_{B}\) w. r. t. a Cartesian frame. This decomposition is consistent with decomposition (13) for \(\mathbf{R}=\mathbf{Q}\), \(\mathbf{r}=\mathbf{q}-\frac{\beta}{2}\mathbf{Id}_{\mathcal{S}}\), \(\mathbf{\eta}_{L}=\mathbf{\eta}_{R}=\mathbf{\eta}\) and \(\phi=\beta\). Therefore, (14), (27), \(\dot{\mathbf{q}}\in\mathrm{Q}^{2}\mathcal{S}\), \(\mathfrak{J}\mathbf{q}\in\mathrm{Q}^{2}\mathcal{S}\) and \(\dot{\mathbf{Id}}_{\mathcal{S}}=\mathfrak{J}\mathbf{Id}_{\mathcal{S}}=0\) yield
\[\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{Q} =\mathbf{Q}\left[\dot{\mathbf{q}}-2\,\Pi_{\mathrm{Q}^{2}\mathcal{S}}(\mathbf{\eta}\otimes\mathbf{b}[\mathbf{V}_{m}]),\,\dot{\mathbf{\eta}}+\mathbf{q}\mathbf{b}[\mathbf{V}_{m}]-\frac{3\beta}{2}\mathbf{b}[\mathbf{V}_{m}],\,\dot{\beta}+2\left\langle\mathbf{\eta},\mathbf{b}[\mathbf{V}_{m}]\right\rangle_{\mathrm{T}\mathcal{S}}\right]\,, \tag{30}\]
\[\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{Q} =\mathbf{Q}\left[\mathfrak{J}\mathbf{q},\,\mathfrak{J}\mathbf{\eta},\,\dot{\beta}\right]\,, \tag{31}\]
where \(\Pi_{\mathrm{Q}^{2}\mathcal{S}}:\mathrm{T}^{2}\mathcal{S}\rightarrow\mathrm{Q}^{2}\mathcal{S}\) is the orthogonal projection given by \(\Pi_{\mathrm{Q}^{2}\mathcal{S}}\mathbf{r}=\frac{1}{2}(\mathbf{r}+\mathbf{r}^{T}-(\mathrm{Tr}\,\mathbf{r})\mathbf{Id}_{\mathcal{S}})\) for all \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\). As a consequence, the space of Q-tensor fields is closed under the material derivative as well as the Jaumann derivative. Unfortunately, the upper- and lower-convected derivative do not share this behavior. Symmetric tensor fields are closed under them, but trace-free tensor fields are not, since (21) and (23) yield
\[\mathrm{Tr}\,\mathrm{D}_{t}^{\sharp}\mathbf{Q}=-\,\mathrm{Tr}\,\mathrm{D}_{t}^{ \flat}\mathbf{Q}=-2\left\langle\mathbf{G}[V_{m}],\mathbf{Q}\right\rangle_{\mathrm{T}^{2} \mathbb{R}^{3}|_{\mathcal{S}}}=\beta\,\mathrm{Tr}\,\mathbf{G}[V_{m}]-2\left\langle \mathbf{G}[V_{m}],\mathbf{q}\right\rangle_{\mathrm{T}^{2}\mathcal{S}}\,,\]
which is only vanishing for rigid body motions in general.
Note that for an eigenvector field \(\mathbf{P}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) with eigenvalue field \(\lambda\in\mathrm{T}^{0}\mathcal{S}\), i. e. \(\mathbf{Q}\mathbf{P}=\lambda\mathbf{P}\), it holds that
\[(\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{Q})\mathbf{P} =\dot{\lambda}\mathbf{P}-(\mathbf{Q}-\lambda\mathbf{Id})\,\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{P}\,,\qquad(\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{Q})\mathbf{P} =\dot{\lambda}\mathbf{P}-(\mathbf{Q}-\lambda\mathbf{Id})\,\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{P}\]
by (16) and (28). Since \(\mathbf{Q}\) is a real-valued symmetric 2-tensor field, the union of all eigenspaces spans \(\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\). As a consequence we obtain the following corollary.
**Corollary 15**.: _For all \(\mathbf{Q}\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), \(\mathbf{P}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) and \(\lambda\in\mathrm{T}^{0}\mathcal{S}\), s. t. \(\mathbf{Q}\mathbf{P}=\lambda\mathbf{P}\) and \(\dot{\lambda}=0\) is valid, holds_
\[\mathrm{D}_{t}^{m}\,\mathbf{P} =0 \Longrightarrow \mathrm{D}_{t}^{m}\,\mathbf{Q} =0\,,\] \[\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{P} =0 \Longrightarrow \mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{Q} =0\,.\]
The converses are not true without further assumptions. The main reason is that the eigenvector fields of a Q-tensor field do not have to be differentiable, neither spatially nor temporally. If the Q-tensor field comprises \(\pm\frac{1}{2}\)-defects, eigenvector fields even have to be discontinuous to represent such defects. Certainly, it is feasible to show the converses by modifying the time derivatives w. r. t. sign-sensitivity, for instance. However, for the sake of simplicity, we leave this issue as an open question in this paper. Note that corollary 15 would not hold in the same way for the upper- and lower-convected derivative.
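Decomposition (29) is purely point-wise linear algebra and can be illustrated directly. The following Python sketch assembles a Q-tensor from hypothetical data \((\mathbf{q},\mathbf{\eta},\beta)\) at a point with normal \(\mathbf{\nu}=\mathbf{e}_{3}\) and recovers the pieces again; all numerical values are illustrative assumptions and not taken from the paper.

```python
# Sketch of decomposition (29) at one point: Q is assembled from hypothetical
# tangential data (q, eta, beta) and the pieces are recovered again by
# projecting with the tangential identity Id_S and the normal nu.
import numpy as np

nu = np.array([0.0, 0.0, 1.0])
Id_S = np.eye(3) - np.outer(nu, nu)                # tangential identity tensor

# hypothetical tangential Q-tensor q, tangential vector eta, scalar beta
q = np.array([[0.4, 0.1, 0.0], [0.1, -0.4, 0.0], [0.0, 0.0, 0.0]])
eta = np.array([0.2, -0.3, 0.0])
beta = 0.7

Q = q + np.outer(eta, nu) + np.outer(nu, eta) + beta * (np.outer(nu, nu) - 0.5 * Id_S)

assert np.allclose(Q, Q.T) and np.isclose(np.trace(Q), 0.0)   # Q is a Q-tensor
assert np.isclose(nu @ Q @ nu, beta)                           # beta = <nu, Q nu>
assert np.allclose(Id_S @ (Q @ nu), eta)                       # eta = Pi_TS(Q nu)
assert np.allclose(Id_S @ Q @ Id_S + 0.5 * beta * Id_S, q)     # tangential part gives q back
print(Q)
```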
#### 2.6.2 Surface Conforming \(Q\)-Tensor Fields
One useful subset of the space of \(\mathrm{Q}\)-tensor fields is the space of surface conforming \(\mathrm{Q}\)-tensor fields \(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}:=\mathbf{Q}[ \mathrm{Q}^{2}\mathcal{S},0,\mathrm{T}^{0}\mathcal{S}]\), which is a subtensor field of the \(\mathrm{Q}\)-tensor field space, i. e. \(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}<\mathrm{Q}^{ 2}\mathbb{R}^{3}|_{\mathcal{S}}<\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), see [14; 5; 15]. The associated orthogonal projection \(\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}: \mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\rightarrow\mathrm{C}_{\mathcal{S}} \mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) is given by
\[\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\,\mathbf{Q} :=\mathbf{Q}-\Pi_{\mathrm{T}\mathcal{S}}(\mathbf{Q}\mathbf{\nu})\otimes\mathbf{\nu}-\mathbf{\nu} \otimes\Pi_{\mathrm{T}\mathcal{S}}(\mathbf{Q}\mathbf{\nu}) \tag{32}\]
for all \(\mathbf{Q}\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\). Since decomposition (29) yields \(\mathbf{Q}\mathbf{\nu}=\mathbf{\eta}+\beta\mathbf{\nu}\), we could summarize the situation in the following corollary.
**Corollary 16**.: _A \(Q\)-tensor field \(\mathbf{Q}=\mathbf{q}+\mathbf{\eta}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathbf{\eta}+\beta\left(\mathbf{ \nu}\otimes\mathbf{\nu}-\frac{1}{2}\mathbf{Id}_{\mathcal{S}}\right)\in\mathrm{Q}^{2} \mathbb{R}^{3}|_{\mathcal{S}}\) with \(\mathbf{q}\in\mathrm{Q}^{2}\mathcal{S}\), \(\mathbf{\eta}\in\mathrm{T}\mathcal{S}\) and \(\beta=\mathrm{T}^{0}\mathcal{S}\), is surface conforming, if and only if one of the following equivalent statements is true:_
1. \(\mathbf{Q}=\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\, \mathbf{Q}\in\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\)_._
2. \(\Pi_{\mathrm{T}\mathcal{S}}(\mathbf{Q}\mathbf{\nu})=\mathbf{\eta}=0\)_,_
3. \(\mathbf{\nu}\) _is an eigenvector field of_ \(\mathbf{Q}\) _and_ \(\beta\) _is its associated eigenvalue._
In contrast to the Jaumann derivative (31), the space of surface conforming Q-tensor fields is not closed under the material derivative (30), since \(\Pi_{\mathrm{T}\mathcal{S}}((\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{Q})\mathbf{\nu})=\mathbf{q}\mathbf{b}[\mathbf{V}_{m}]-\frac{3\beta}{2}\mathbf{b}[\mathbf{V}_{m}]\) does not vanish in general for \(\mathbf{Q}\in\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\). To obtain such a closure we use the orthogonal projection (32) and call the resulting time derivative \(\mathrm{D}_{t}^{\mathrm{C}_{\mathcal{S}}\mathrm{m}}:=\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\circ\mathrm{D}_{t}^{\mathrm{m}}\mid_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\) the surface conforming material derivative. Taken all together, this yields
\[\mathrm{D}_{t}^{\mathrm{C}_{\mathcal{S}}\mathrm{m}}\,\mathbf{Q} =\mathbf{Q}[\dot{\mathbf{q}},0,\dot{\beta}] =\dot{\mathbf{q}}+\dot{\beta}\left(\mathbf{\nu}\otimes\mathbf{\nu}-\frac{1}{2}\mathbf{Id}_{\mathcal{S}}\right) \in\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\,, \tag{33}\]
\[\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{Q} =\mathbf{Q}[\mathfrak{J}\mathbf{q},0,\dot{\beta}] =\mathfrak{J}\mathbf{q}+\dot{\beta}\left(\mathbf{\nu}\otimes\mathbf{\nu}-\frac{1}{2}\mathbf{Id}_{\mathcal{S}}\right) \in\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}} \tag{34}\]
for all surface conforming Q-tensor fields \(\mathbf{Q}=\mathbf{q}+\beta(\mathbf{v}\otimes\mathbf{v}-\frac{1}{2}\mathbf{Id}_{S})\in\mathrm{C}_ {S}\mathrm{Q}^{2}\mathbb{R}^{3}|_{S}\), where \(\mathbf{q}\in\mathrm{Q}^{2}\mathcal{S}\) and \(\beta\in\mathrm{T}^{0}\mathcal{S}\).
One simple special case of conforming Q-tensor fields are the tangential Q-tensor fields in \(\mathrm{Q}^{2}\mathcal{S}=\mathbf{Q}[\mathrm{Q}^{2}\mathcal{S},0,0]<\mathrm{C}_{S} \mathrm{Q}^{2}\mathbb{R}^{3}|_{S}\). Here, \(\mathrm{Q}^{2}\mathcal{S}\) is closed by the surface conforming material derivative (33), resp. Jaumann derivative (34), which coincides with the tangential material derivative, resp. tangential Jaumann derivative, given in [6].
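The conforming projection (32) and the eigenvector characterization of corollary 16 can also be checked point-wise. The following Python sketch reuses the hypothetical frame with normal \(\mathbf{\nu}=\mathbf{e}_{3}\) from the sketch above; the sample tensor is an illustrative assumption.

```python
# Sketch of the conforming projection (32): the tangential part of Q nu is
# removed, after which nu is an eigenvector of the projected tensor, cf.
# corollary 16.
import numpy as np

nu = np.array([0.0, 0.0, 1.0])
Id_S = np.eye(3) - np.outer(nu, nu)

def project_conforming(Q):
    # Pi_{C_S Q^2}: subtract the mixed tangential-normal part of Q
    eta = Id_S @ (Q @ nu)
    return Q - np.outer(eta, nu) - np.outer(nu, eta)

# a generic symmetric, trace-free Q-tensor with nonzero eta-part
Q = np.array([[0.4, 0.1, 0.2], [0.1, -0.4, -0.3], [0.2, -0.3, 0.0]])
Qc = project_conforming(Q)

assert np.allclose(Id_S @ (Qc @ nu), 0.0)          # eta vanishes
assert np.allclose(Qc @ nu, (nu @ Qc @ nu) * nu)   # nu is an eigenvector of Qc
print(Qc)
```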
#### 2.6.3 Surface Landau-de Gennes models
As already demonstrated in [6; 7] for tangential tensor fields, the dynamics of these models differ. Sensitizing the reader to this difference when applying the models, e. g. in the context of morphogenesis [8; 9; 10], is the main motivation for this research.
As a simple, but not trivial, example we consider the one-constant Landau-de Gennes free energy
\[\mathbb{U}[\mathbf{Q}]:=\frac{L}{2}\left\|\nabla_{\mathrm{C}}\mathbf{Q}\right\|_{ \mathrm{L}^{2}(\mathrm{T}^{3}\mathbb{R}^{3}|_{S})}^{2}+\int_{\mathcal{S}}a \operatorname{Tr}\mathbf{Q}^{2}+\frac{2b}{3}\operatorname{Tr}\mathbf{Q}^{3}+c \operatorname{Tr}\mathbf{Q}^{4}\operatorname{d}\mathcal{S} \tag{35}\]
for Q-tensor fields \(\mathbf{Q}\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{S}\), elastic parameter \(L>0\) and thermotropic coefficients \(a,b,c\in\mathbb{R}\). Moreover, we assume that the surface \(\mathcal{S}\) is boundaryless, i. e. \(\partial\mathcal{S}=\emptyset\), and the motion of the surface is prescribed by the material velocity \(\mathbf{V}_{m}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\). The associated \(\mathrm{L}^{2}\)-gradient flow with dynamics driven by \(\mathrm{D}_{t}\mathbf{Q}\in\{\mathrm{D}_{t}^{m}\mathbf{Q},\mathrm{D}_{t}^{\mathcal{J }}\mathbf{Q}\}\) is
\[\mathrm{D}_{t}\mathbf{Q}=-\nabla_{\mathrm{L}^{2}(\mathrm{Q}^{2}\mathbb{R}^{3}|_{S} )}\mathbb{U}=L\,\Delta_{\mathrm{C}}\mathbf{Q}-2\left(a\mathbf{Q}+b\left(\mathbf{Q}^{2}- \frac{\operatorname{Tr}(\mathbf{Q}^{2})}{3}\mathbf{Id}\right)+c\operatorname{Tr}(\mathbf{Q }^{2})\mathbf{Q}\right)\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{S}\,, \tag{36}\]
where the \(\mathrm{L}^{2}\)-gradient \(\nabla_{\mathrm{L}^{2}(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}\mathbb{U}\) is given by variation of the energy in arbitrary directions \(\mathbf{R}\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), i. e. \(\left\langle\nabla_{\mathrm{L}^{2}(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}\mathbb{U},\mathbf{R}\right\rangle_{\mathrm{L}^{2}(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}:=\left\langle\frac{\delta\mathbb{U}}{\delta\mathbf{Q}},\mathbf{R}\right\rangle_{\mathrm{L}^{2}(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}\) with the aid of lemma 25, which justifies the surface Laplace operator. Note that \(\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) is closed under \(\mathrm{D}_{t}=\mathrm{D}_{t}^{\mathrm{m}}\) as well as \(\mathrm{D}_{t}=\mathrm{D}_{t}^{\mathcal{J}}\) in \(\mathrm{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), see (30) and (31), i. e. one could safely use \(\mathrm{D}_{t}^{\mathrm{m}}\) and \(\mathrm{D}_{t}^{\mathcal{J}}\) given in table 6. For a pure tangential motion of the surface, i. e. \(v_{\perp}=0\), and a therefore valid Eulerian observer, i. e. \(\mathbf{V}=0\), equation (36) equals the Q-tensor equation of the surface Beris-Edwards model in [15] for the Jaumann derivative. Note that \(\Delta_{\mathrm{C}}\mathbf{Q}\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) holds already, therefore we do not need to apply an extra projection into the space of Q-tensor fields. This can immediately be deduced from lemma 24.
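The closedness of the flow within the space of Q-tensors can be illustrated for the algebraic (thermotropic) part of the right-hand side of (36). The following Python sketch evaluates it for a single Q-tensor value; the parameters \(a\), \(b\), \(c\) and the sample tensor are placeholders, not values from the paper.

```python
# Sketch of the thermotropic part of the right-hand side of (36): the result
# is checked to be symmetric and trace-free, so this part of the flow does not
# leave the space of Q-tensor fields.
import numpy as np

def thermotropic_rhs(Q, a=-1.0, b=-2.0, c=1.0):
    Q2 = Q @ Q
    trQ2 = np.trace(Q2)
    return -2.0 * (a * Q + b * (Q2 - trQ2 / 3.0 * np.eye(3)) + c * trQ2 * Q)

Q = np.array([[0.4, 0.1, 0.2], [0.1, -0.4, -0.3], [0.2, -0.3, 0.0]])
F = thermotropic_rhs(Q)
assert np.allclose(F, F.T) and np.isclose(np.trace(F), 0.0)
print(F)
```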
The situation changes if we want to consider the Landau-de Gennes energy (35) w. r. t. surface conforming Q-tensor fields \(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\). The associated gradient flow can be obtained either by variation and weak testing in the right space, i. e. using \(\left\langle\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\mathrm{D}_{t}\,\mathbf{Q},\mathbf{R}\right\rangle_{\mathrm{L}^{2}(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}=-\left\langle\frac{\delta\mathbb{U}}{\delta\mathbf{Q}},\mathbf{R}\right\rangle_{\mathrm{L}^{2}(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}\) for all \(\mathbf{R}\in\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), or by the Lagrange multiplier technique. Both approaches lead to the same result, as we see below. Since \(\mathbf{Q}\mathbf{\nu}=\beta\mathbf{\nu}\) holds for all \(\mathbf{Q}\in\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) according to corollary 16, we infer \(\mathbf{Q}^{2}\mathbf{\nu}=\beta^{2}\mathbf{\nu}\) and from that in turn \(\Pi_{\mathrm{T}\mathcal{S}}\left((\mathbf{Q}^{2}-\frac{\operatorname{Tr}(\mathbf{Q}^{2})}{3}\mathbf{Id})\mathbf{\nu}\right)=0\). In other words, if \(\mathbf{Q}\) is conforming, then so is the Q-tensor part of \(\mathbf{Q}^{2}\).
\[\mathrm{D}_{t}^{\mathrm{C}_{S}}\mathbf{Q}=-\nabla_{\mathrm{L}^{2}(\mathrm{C}_{S} \mathrm{Q}^{2}\mathbb{R}^{3}|_{S})}\mathbb{U}=L\,\Delta_{\mathrm{C}}^{ \mathrm{C}_{S}}\mathbf{Q}-2\left(a\mathbf{Q}+b\left(\mathbf{Q}^{2}-\frac{\operatorname{Tr} (\mathbf{Q}^{2})}{3}\mathbf{Id}\right)+c\operatorname{Tr}(\mathbf{Q}^{2})\mathbf{Q}\right) \in\mathrm{C}_{S}\mathrm{Q}^{2}\mathbb{R}^{3}|_{S} \tag{37}\]
for \(\mathbf{Q}\in\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), where \(\mathrm{D}_{t}^{\mathrm{C}_{\mathcal{S}}}\,\mathbf{Q}\) is one of the time derivatives in \(\{\mathrm{D}_{t}^{\mathrm{C}_{\mathcal{S}}\mathrm{m}}\,\mathbf{Q},\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{Q}\}\), \(\Delta_{\mathrm{C}}^{\mathrm{C}_{\mathcal{S}}}:=\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\circ\Delta_{\mathrm{C}}\mid_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\) is the surface conforming Laplace operator and the conforming \(\mathrm{L}^{2}\)-gradient is obtained by testing with conforming directions only, i. e. \(\left\langle\nabla_{\mathrm{L}^{2}(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}\mathbb{U},\mathbf{R}\right\rangle_{\mathrm{L}^{2}(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}:=\left\langle\frac{\delta\mathbb{U}}{\delta\mathbf{Q}},\mathbf{R}\right\rangle_{\mathrm{L}^{2}(\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}\) for all \(\mathbf{R}\in\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\). Alternatively, applying the Lagrange multiplier technique, i. e. adding the constraint term \(\int_{\mathcal{S}}\left\langle\mathbf{\lambda},\Pi_{\mathrm{T}\mathcal{S}}(\mathbf{Q}\mathbf{\nu})\right\rangle_{\mathrm{T}\mathcal{S}}\,\mathrm{d}\mathcal{S}\),
where \(\mathbf{\lambda}\in\mathrm{T}\mathcal{S}\) is the Lagrange multiplier, to the Landau-de Gennes energy (35) yields
\[\mathrm{D}_{t}\,\mathbf{Q} =L\,\Delta_{\mathrm{C}}\,\mathbf{Q}-2\left(a\mathbf{Q}+b\left(\mathbf{Q}^{2}- \frac{\mathrm{Tr}(\mathbf{Q}^{2})}{3}\mathbf{Id}\right)+c\,\mathrm{Tr}(\mathbf{Q}^{2})\mathbf{Q }\right)-\frac{1}{2}\left(\mathbf{\lambda}\otimes\mathbf{\nu}+\mathbf{\nu}\otimes\mathbf{ \lambda}\right)\in\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\,, \tag{38}\] \[0 =\Pi_{\mathrm{T}\mathcal{S}}(\mathbf{Q}\mathbf{\nu})\in\mathrm{T}\mathcal{ S}\,, \tag{39}\]
where \(\mathrm{D}_{t}\,\mathbf{Q}\in\{\mathrm{D}_{t}^{\mathrm{m}}\,\mathbf{Q},\mathrm{D}_{t}^{\mathcal{J}}\,\mathbf{Q}\}\). Substituting (39) into (38) and applying \(\Pi_{\mathrm{C}_{\mathcal{S}}\mathrm{Q}^{2}\mathbb{R}^{3}|_{\mathcal{S}}}\) on both sides of (38) also results in the surface conforming \(\mathrm{L}^{2}\)-gradient flow (37). In contrast to the \(\mathrm{L}^{2}\)-gradient flow (36), the effort to rephrase the conforming flow (37) according to decomposition (29) is significantly less. For tangential time derivatives \(\mathrm{d}_{t}\,\mathbf{q}\in\{\dot{\mathbf{q}},\mathfrak{J}\mathbf{q}\}\) given in table 5, we obtain the system of tangential Q-tensor and scalar equations
\[\mathrm{d}_{t}\,\mathbf{q} =L\left(\Delta\mathbf{q}-\mathrm{Tr}(\mathbf{I\!I}^{2})\mathbf{q}+3\beta\,\Pi_{\mathrm{Q}^{2}\mathcal{S}}(\mathbf{I\!I}^{2})\right)-\left(2a-2b\beta+3c\beta^{2}+2c\,\mathrm{Tr}\,\mathbf{q}^{2}\right)\mathbf{q}\in\mathrm{Q}^{2}\mathcal{S}\,, \tag{40}\]
\[\dot{\beta} =L\left(\Delta\beta+\left\langle\mathbf{I\!I}^{2},2\mathbf{q}-3\beta\mathbf{Id}_{\mathcal{S}}\right\rangle_{\mathrm{T}^{2}\mathcal{S}}\right)-\left(2a+b\beta+3c\beta^{2}+2c\,\mathrm{Tr}\,\mathbf{q}^{2}\right)\beta+\frac{2}{3}b\,\mathrm{Tr}\,\mathbf{q}^{2}\in\mathrm{T}^{0}\mathcal{S}\,,\]
which are equivalent to (37), see B.2 for a detailed derivation. We could substitute \(\mathbf{I\!I}^{2}=\mathcal{H}\mathbf{I\!I}-\mathcal{K}\mathbf{Id}_{\mathcal{S}}\) for the third fundamental form, where \(\mathcal{K}:=\det[I\!I_{j}^{i}]\) is the Gaussian curvature. For the material derivative this yields the surface Landau-de Gennes model in [14] up to the uniaxiality constraint used there. It also gives the same tangential Q-tensor equation as in [5] for a constant \(\beta\). Implications of the choice of the time derivative \(\mathrm{d}_{t}\,\mathbf{q}\in\{\dot{\mathbf{q}},\mathfrak{J}\mathbf{q}\}\) in these models need to be explored numerically.
## Appendix A Identities
**Lemma 17**.: _For a parameterization \(\mathbf{X}\) holds_
\[\partial_{i}\partial_{j}\mathbf{X} =\Gamma_{ij}^{k}\partial_{k}\mathbf{X}+I\!I_{ij}\mathbf{\nu}\,, \tag{A.1}\]
\[\partial_{i}\mathbf{\nu} =-I\!I_{i}^{j}\partial_{j}\mathbf{X}\,, \tag{A.2}\]
resp.
\[\left\langle\partial_{i}\partial_{j}\mathbf{X},\partial_{k}\mathbf{X}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}} =\Gamma_{ijk}\,, \tag{A.3}\]
\[\left\langle\partial_{i}\partial_{j}\mathbf{X},\mathbf{\nu}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}} =-\left\langle\partial_{i}\mathbf{\nu},\partial_{j}\mathbf{X}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=I\!I_{ij}\,, \tag{A.4}\]
\[\left\langle\partial_{i}\mathbf{\nu},\mathbf{\nu}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}} =0\,. \tag{A.5}\]
Proof.: Equation (A.3) is an alternative definition of the Christoffel symbols if the metric tensor is given by \(g_{ij}=\left\langle\partial_{i}\mathbf{X},\partial_{j}\mathbf{X}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}\). Equations (A.4) are equivalent definitions of the second fundamental form, since (A.5) is true, which in turn holds by \(\|\mathbf{\nu}\|_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=1\). Identities (A.1) and (A.2) summarize (A.3), (A.4) and (A.5).
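The Gauss relation (A.1) and the Weingarten relation (A.2) can also be verified symbolically for a concrete parameterization. The following Python/SymPy sketch does this on the unit sphere; the parameterization is an illustrative assumption and, on the unit sphere, the outward unit normal coincides with the position vector.

```python
# Symbolic check of (A.1) and (A.2) on the unit sphere, parameterized by
# y1 (polar) and y2 (azimuthal angle).
import sympy as sp

y1, y2 = sp.symbols('y1 y2', real=True)
X = sp.Matrix([sp.sin(y1) * sp.cos(y2), sp.sin(y1) * sp.sin(y2), sp.cos(y1)])
y = (y1, y2)

dX = [X.diff(u) for u in y]            # tangent vectors d_i X
nu = X                                 # outward unit normal of the unit sphere

g = sp.Matrix(2, 2, lambda i, j: dX[i].dot(dX[j]))             # metric tensor
g_inv = g.inv()
II = sp.Matrix(2, 2, lambda i, j: X.diff(y[i], y[j]).dot(nu))  # second fundamental form

# Christoffel symbols of second kind, Gamma^k_ij = g^{kl} <d_i d_j X, d_l X>, cf. (A.3)
Gamma = [[[sum(g_inv[k, l] * X.diff(y[i], y[j]).dot(dX[l]) for l in range(2))
           for k in range(2)] for j in range(2)] for i in range(2)]

for i in range(2):
    for j in range(2):
        gauss = X.diff(y[i], y[j]) - Gamma[i][j][0] * dX[0] - Gamma[i][j][1] * dX[1] - II[i, j] * nu
        assert sp.simplify(gauss) == sp.zeros(3, 1)            # (A.1)
    w0 = II[i, 0] * g_inv[0, 0] + II[i, 1] * g_inv[1, 0]       # II_i^1
    w1 = II[i, 0] * g_inv[0, 1] + II[i, 1] * g_inv[1, 1]       # II_i^2
    weingarten = nu.diff(y[i]) + w0 * dX[0] + w1 * dX[1]
    assert sp.simplify(weingarten) == sp.zeros(3, 1)           # (A.2)
print("Gauss and Weingarten relations verified on the unit sphere")
```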
**Lemma 18**.: _For the metric tensor holds_
\[\partial_{l}g_{ij} =\Gamma_{lij}+\Gamma_{lji}\,, \tag{A.6}\]
\[\partial_{l}g^{ij} =-\left(g^{kj}\Gamma_{lk}^{i}+g^{ki}\Gamma_{lk}^{j}\right)\,. \tag{A.7}\]
Proof.: Both identities are consequences of the metric compatibility, i. e. \(0=g_{ij|l}=\partial_{l}g_{ij}-\Gamma_{li}^{k}g_{kj}-\Gamma_{lj}^{k}g_{ik}\) and \(0=g^{ij}{}_{|l}=\partial_{l}g^{ij}+\Gamma_{lk}^{i}g^{kj}+\Gamma_{lk}^{j}g^{ik}\). Alternatively, (A.6) is a consequence of (A.3), and (A.7) of (A.6) by evaluating \(\partial_{l}g^{ij}=\partial_{l}(g^{ik}g^{jm}g_{km})\) with the aid of the product rule.
**Lemma 19**.: _For a time-depending parameterization \(\mathbf{X}\) with velocity \(\mathbf{V}=\mathbf{v}+v_{\perp}\mathbf{\nu}=\partial_{t}\mathbf{X}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) holds_
\[\partial_{i}\mathbf{V}=[\nabla_{\mathrm{C}}\,\mathbf{V}]^{A}_{\ i}\mathbf{e}_{A}=G^{j}_{\ i}[\mathbf{V}]\partial_{j}\mathbf{X}+b_{i}[\mathbf{V}]\mathbf{\nu}\,, \tag{A.8}\]
_where_
\[\mathbf{G}[\mathbf{V}] :=\Pi_{\mathrm{T}^{2}\mathcal{S}}(\nabla_{\mathrm{C}}\,\mathbf{V})=\nabla\mathbf{v}-v_{\perp}\mathbf{I\!I}\in\mathrm{T}^{2}\mathcal{S}\,, \tag{A.9}\]
\[\mathbf{b}[\mathbf{V}] :=\mathbf{\nu}\,\nabla_{\mathrm{C}}\,\mathbf{V}=\nabla v_{\perp}+\mathbf{I\!I}\mathbf{v}\in\mathrm{T}\mathcal{S}\,. \tag{A.10}\]
Proof.: See [7].
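For a concrete motion, (A.8) together with (A.9) and (A.10) can be checked numerically. The following Python sketch does this for the inflating-sphere parameterization used earlier, where the tangential velocity vanishes and \(v_{\perp}=1\); the parameterization, sample point and step sizes are illustrative assumptions.

```python
# Numerical sketch of (A.8)-(A.10) for an inflating sphere X(t) = (1+t) * S(y):
# there v = 0 and v_perp = 1, so G[V] = -v_perp II and b[V] = 0, and (A.8)
# reduces to d_i V = -v_perp II_i^j d_j X. Derivatives use central differences.
import numpy as np

def X(t, y1, y2):
    return (1.0 + t) * np.array([np.sin(y1) * np.cos(y2),
                                 np.sin(y1) * np.sin(y2),
                                 np.cos(y1)])

def d(f, args, k, h=1e-5):
    a_p, a_m = list(args), list(args)
    a_p[k] += h; a_m[k] -= h
    return (f(*a_p) - f(*a_m)) / (2 * h)

t, y1, y2 = 0.3, 0.8, 1.2
dX = [d(X, (t, y1, y2), k) for k in (1, 2)]            # tangent vectors
nu = np.cross(dX[0], dX[1]); nu /= np.linalg.norm(nu)  # outward unit normal
g = np.array([[dX[i] @ dX[j] for j in range(2)] for i in range(2)])
II = np.array([[d(lambda *a: d(X, a, j + 1), (t, y1, y2), i + 1) @ nu
                for j in range(2)] for i in range(2)])
II_mixed = II @ np.linalg.inv(g)                        # II_i^j
V = lambda *a: d(X, a, 0)                               # material velocity d_t X
v_perp = V(t, y1, y2) @ nu

for i in range(2):
    lhs = d(V, (t, y1, y2), i + 1)                      # d_i V
    rhs = -v_perp * sum(II_mixed[i, j] * dX[j] for j in range(2))
    assert np.allclose(lhs, rhs, atol=1e-4)
print("(A.8) holds numerically for the inflating sphere")
```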
**Lemma 20**.: _For time-dependent covariant proxy components \(r_{ij}\in\mathrm{T}^{0}\mathcal{S}\), \(\eta_{i}\in\mathrm{T}^{0}\mathcal{S}\), contravariant proxy components \(r^{ij}\in\mathrm{T}^{0}\mathcal{S}\) and \(\eta^{i}\in\mathrm{T}^{0}\mathcal{S}\) of a tangential 2-tensor field \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\) and a tangential vector field \(\mathbf{\eta}\in\mathrm{T}\mathcal{S}\) holds_
\[\partial_{t}\eta_{i} =g_{ik}\partial_{t}\eta^{k}+\left[(\mathbf{G}[\mathbf{V}]+\mathbf{G}^{T}[\mathbf{V} ])\mathbf{\eta}\right]_{i}\,,\] (A.11) \[\partial_{t}r_{ij} =g_{ik}g_{jl}\partial_{t}r^{kl}+\left[\mathbf{r}(\mathbf{G}[\mathbf{V}]+\mathbf{G} ^{T}[\mathbf{V}])+(\mathbf{G}[\mathbf{V}]+\mathbf{G}^{T}[\mathbf{V}])\mathbf{r}\right]_{ij}\,,\] (A.12)
_with \(\mathbf{G}[\mathbf{V}]\in\mathrm{T}^{2}\mathcal{S}\) given in (A.9)._
Proof.: Follows by \(\partial_{t}g_{ij}=G_{ij}[\mathbf{V}]+G_{ji}[\mathbf{V}]\) (see [6]), \(r_{ij}=g_{ik}g_{jl}r^{kl}\), \(\eta_{i}=g_{ik}\eta^{k}\) and the product rule.
**Lemma 21**.: _For a time-depending parameterization \(\mathbf{X}\) with velocity \(\mathbf{V}=\mathbf{v}+v_{\perp}\mathbf{\nu}=\partial_{t}\mathbf{X}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) holds_
\[\partial_{t}\mathbf{\nu} =\partial_{t}(\nu^{A}\mathbf{e}_{A})=-\mathbf{b}[\mathbf{V}]\in\mathrm{T}\mathcal{S}\,,\qquad\text{resp. }\partial_{t}\nu^{A}=-b^{i}[\mathbf{V}]\partial_{i}X^{A}\,, \tag{A.13}\]
_where \(\mathbf{b}[\mathbf{V}]\) is given in (A.10)._
Proof.: Follows from
\[\langle\partial_{t}\mathbf{\nu},\mathbf{\nu}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=\partial_{t}\left\langle\mathbf{\nu},\mathbf{\nu}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}-\left\langle\mathbf{\nu},\partial_{t}\mathbf{\nu}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=-\left\langle\partial_{t}\mathbf{\nu},\mathbf{\nu}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=0\]
and
\[\langle\partial_{t}\mathbf{\nu},\partial_{i}\mathbf{X}\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=\partial_{t}\left\langle\mathbf{\nu},\partial_{i}\mathbf{X}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}-\left\langle\mathbf{\nu},\partial_{i}\mathbf{V}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=-b_{i}[\mathbf{V}]\,.\]
**Corollary 22**.: _For a time-depending parameterization \(\mathbf{X}\) with velocity \(\mathbf{V}=\mathbf{v}+v_{\perp}\mathbf{\nu}=\partial_{t}\mathbf{X}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\) and a tangential vector field \(\mathbf{u}\in\mathrm{T}\mathcal{S}\) holds_
\[\partial_{t}\mathbf{\nu}+u^{k}\partial_{k}\mathbf{\nu} =\partial_{t}(\nu^{A}\mathbf{e}_{A})+u^{k}\partial_{k}\nu^{A}\mathbf{e}_{A}=-\mathbf{b}[\mathbf{V}+\mathbf{u}]\in\mathrm{T}\mathcal{S}\,,\qquad\text{resp. }\partial_{t}\nu^{A}+u^{k}\partial_{k}\nu^{A}=-b^{i}[\mathbf{V}+\mathbf{u}]\partial_{i}X^{A}\,, \tag{A.14}\]
_where \(\mathbf{b}[\mathbf{V}+\mathbf{u}]=\nabla v_{\perp}+\mathbf{I\!I}(\mathbf{v}+\mathbf{u})\) is consistent with definition (A.10)._
Proof.: Follows from (A.13) and (A.2).
**Lemma 23**.: _For a time-depending parameterization \(\mathbf{X}\), with velocity \(\mathbf{V}=\partial_{t}\mathbf{X}\in\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}\), thin film parameterization \(\mathbf{\chi}[\mathbf{X}]\) and thin film velocity \(\widehat{\mathbf{V}}=\partial_{t}\mathbf{\chi}[\mathbf{X}]\in\mathrm{T}\mathcal{S}_{h}= \mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}_{h}}\) holds_
\[\widehat{\nabla}\,\widehat{\mathbf{V}}\to\mathbf{\mathcal{G}}[\mathbf{V}]=\mathbf{G}[\mathbf{V}]+\mathbf{\nu}\otimes\mathbf{b}[\mathbf{V}]-\mathbf{b}[\mathbf{V}]\otimes\mathbf{\nu} \tag{A.15}\]
_for \(h\to 0\)._
Proof.: In the following, we omit the argument \(\mathbf{X}\) in square brackets. The thin film parameterization (6) yields the frame
\[\partial_{i}\mathbf{\chi} =\partial_{i}\mathbf{X}-\xi I\!I_{i}^{j}\partial_{j}\mathbf{X}\,,\qquad\partial_{\xi}\mathbf{\chi}=\mathbf{\nu}\,. \tag{A.16}\]
Regarding this frame, the covariant thin film proxy of the velocity \(\widehat{\mathbf{V}}=\partial_{t}\mathbf{\chi}=\mathbf{V}-\xi\mathbf{b}[\mathbf{V}]\) is given by
\[\widehat{V}_{i}=\left\langle\widehat{\mathbf{V}},\partial_{t}\mathbf{ \chi}\right\rangle_{\mathrm{T}\mathcal{S}_{h}}=v_{i}-\xi\left(b_{i}[\mathbf{V}]+\mathbf{I }_{ij}v^{j}\right)+\mathcal{O}(\xi^{2})\,, \widehat{V}_{\xi}=\left\langle\widehat{\mathbf{V}},\partial_{\xi}\mathbf{\chi} \right\rangle_{\mathrm{T}\mathcal{S}_{h}}=v_{\perp}\,.\]
The Christoffel symbols of second kind w. r. t. the thin film frame (A.16) are
\[\mathbb{T}^{k}_{ij}=\Gamma^{k}_{ij}+\mathcal{O}(\xi)\,,\quad\mathbb{T}^{\xi}_{ ij}=I\!I_{ij}+\mathcal{O}(\xi)\,,\quad\mathbb{T}^{K}_{\xi\xi}=\mathbb{T}^{\xi}_{l\xi}= \mathbb{T}^{\xi}_{\xi l}=0\,,\quad\text{and}\quad\mathbb{T}^{k}_{i\xi}=\mathbb{ T}^{k}_{\xi i}=-I\!I^{k}_{i}+\mathcal{O}(\xi)\,,\]
where a capital Latin letter \(I,J,K\) comprises a small Latin letter \(i,j,k\) and \(\xi\), see [13] for more details. Therefore the covariant thin film proxy of the velocity gradient \(\widehat{\nabla}\,\widehat{\mathbf{V}}=\delta^{C}_{B}\partial_{C}\widehat{V}^{A}\mathbf{e}_{A}\otimes\mathbf{e}_{B}\) yields
\[[\widehat{\nabla}\,\widehat{\mathbf{V}}]_{ij} =\partial_{j}\widehat{V}_{i}-\mathbb{T}^{K}_{ij}\widehat{V}_{K} =\partial_{j}v_{i}-\Gamma^{k}_{ij}v_{k}-v_{\perp}I\!I_{ij}+\mathcal{O}(\xi)=G_{ij}[\mathbf{V}]+\mathcal{O}(\xi)\,,\]
\[[\widehat{\nabla}\,\widehat{\mathbf{V}}]_{i\xi} =\partial_{\xi}\widehat{V}_{i}-\mathbb{T}^{K}_{i\xi}\widehat{V}_{K} =-b_{i}[\mathbf{V}]+\mathcal{O}(\xi)\,,\]
\[[\widehat{\nabla}\,\widehat{\mathbf{V}}]_{\xi j} =\partial_{j}\widehat{V}_{\xi}-\mathbb{T}^{K}_{\xi j}\widehat{V}_{K} =b_{j}[\mathbf{V}]+\mathcal{O}(\xi)\,,\]
\[[\widehat{\nabla}\,\widehat{\mathbf{V}}]_{\xi\xi} =\partial_{\xi}\widehat{V}_{\xi}-\mathbb{T}^{K}_{\xi\xi}\widehat{V}_{K} =0\,.\]
The orthogonality \(\partial_{i}\mathbf{\chi}\perp\partial_{\xi}\mathbf{\chi}\) and the thin film limit of the covariant tangential proxy of the thin film metric tensor w. r. t. frame (A.16), which is \(g_{ij}\), see [13], finally imply (A.15).
**Lemma 24**.: _The surface Laplace operator \(\Delta_{C}:\mathbb{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\to\mathbb{T}^{2}\mathbb{ R}^{3}|_{\mathcal{S}}\) equals the Cartesian-componentwise Laplace-Beltrami operator, i. e. for all \(\boldsymbol{R}=R^{AB}\boldsymbol{e}_{A}\otimes\boldsymbol{e}_{B}\in\mathbb{T}^{ 2}\mathbb{R}^{3}|_{\mathcal{S}}\) holds_
\[[\Delta_{C}\,\boldsymbol{R}]^{AB}=\Delta R^{AB}=g^{ij}\left(\partial_{i} \partial_{j}R^{AB}-\Gamma^{k}_{ij}\partial_{k}R^{AB}\right)\,.\]
Proof.: Applying product rule yields
\[\left[(\operatorname{Tr}\nabla^{2}_{C})\boldsymbol{R}\right]^{AB}=\delta_{CD }g^{kl}\partial_{l}\left(g^{ij}\partial_{j}R^{AB}\partial_{i}X^{C}\right) \partial_{k}X^{D}=g^{jl}\partial_{l}\partial_{j}R^{AB}+g^{kl}g^{ij}\left\langle \partial_{l}\partial_{i}\boldsymbol{X},\partial_{k}\boldsymbol{X}\right\rangle_ {\mathbb{T}\mathbb{R}^{3}|_{\mathcal{S}}}\partial_{j}R^{AB}+(\partial_{i}g^{ ij})\partial_{j}R^{AB}\,.\]
Substituting \(g^{kl}\left\langle\partial_{l}\partial_{i}\mathbf{X},\partial_{k}\mathbf{X}\right\rangle_{\mathrm{T}\mathbb{R}^{3}|_{\mathcal{S}}}=\Gamma^{l}_{li}\) (A.3) and \(\partial_{i}g^{ij}=-(g^{kj}\Gamma^{i}_{ik}+g^{ki}\Gamma^{j}_{ik})\) (A.7) gives the assertion.
**Lemma 25**.: _The surface Laplace operator \(\Delta_{C}:\mathbb{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\to\mathbb{T}^{2}\mathbb{ R}^{3}|_{\mathcal{S}}\) corresponds to the Bochner-like Laplace operator given by the surface derivative \(\nabla_{C}\), i. e. for all \(\boldsymbol{R}\in\mathbb{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\) holds_
\[\Delta_{C}\,\boldsymbol{R}=-\nabla^{*}_{C}\,\nabla_{C}\,\boldsymbol{R}\,.\]
Proof.: Neglecting any boundary terms, lemma 24 yields
\[\left\langle\Delta_{C}\,\boldsymbol{R},\boldsymbol{\Psi}\right\rangle_{ \mathbb{L}^{2}(\mathbb{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}})}=\left\langle \Delta R^{AB},\Psi_{AB}\right\rangle_{\mathbb{L}^{2}(\mathbb{T}^{2}\mathbb{R} ^{3}|_{\mathcal{S}})}=-\left\langle\nabla R^{AB},\nabla\Psi_{AB}\right\rangle_ {\mathbb{L}^{2}(\mathbb{T}^{1}\mathbb{R}^{3}|_{\mathcal{S}})}^{2}=-\left\langle \nabla_{C}\,\boldsymbol{R},\nabla_{C}\,\boldsymbol{\Psi}\right\rangle_{\mathbb{ L}^{2}(\mathbb{T}^{1}\mathbb{R}^{3}|_{\mathcal{S}})}\]
for all \(\boldsymbol{R},\boldsymbol{\Psi}\in\mathbb{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\).
**Corollary 26**.: _For all \(\boldsymbol{R}=\boldsymbol{r}+\boldsymbol{\eta}_{L}\otimes\boldsymbol{v}+ \boldsymbol{v}\otimes\boldsymbol{\eta}_{R}+\boldsymbol{\phi}\boldsymbol{v} \otimes\boldsymbol{\nu}\in\mathbb{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\), \(\boldsymbol{r}\in\mathbb{T}^{2}S\), \(\boldsymbol{\eta}_{L},\boldsymbol{\eta}_{R}\in\mathbb{T}S\) and \(\phi\in\mathbb{T}^{0}\mathcal{S}\), the surface Laplace operator \(\Delta_{C}:\mathbb{T}^{2}\mathbb{R}^{3}|_{\mathcal{S}}\to\mathbb{T}^{2} \mathbb{R}^{3}|_{\mathcal{S}}\) yields_
\[\Delta_{C}\,\boldsymbol{R} =\Delta\boldsymbol{r}-\left(I\!I^{2}\boldsymbol{r}+\boldsymbol{r}I\!I^{2}\right)-2\left((\nabla\boldsymbol{\eta}_{L})I\!I+I\!I(\nabla\boldsymbol{\eta}_{R})^{T}\right)-\left(\boldsymbol{\eta}_{L}\otimes\nabla\mathcal{H}+\nabla\mathcal{H}\otimes\boldsymbol{\eta}_{R}\right)+2\phi I\!I^{2}\] \[\quad+\left(2(\nabla\boldsymbol{r}):I\!I+(\nabla\mathcal{H})\boldsymbol{r}+\Delta\boldsymbol{\eta}_{L}-\operatorname{Tr}(I\!I^{2})\boldsymbol{\eta}_{L}-I\!I^{2}\left(\boldsymbol{\eta}_{L}+2\boldsymbol{\eta}_{R}\right)-2I\!I\nabla\phi-\phi\nabla\mathcal{H}\right)\otimes\boldsymbol{\nu}\] \[\quad+\boldsymbol{\nu}\otimes\left(2(\nabla\boldsymbol{r}^{T}):I\!I+(\nabla\mathcal{H})\boldsymbol{r}+\Delta\boldsymbol{\eta}_{R}-\operatorname{Tr}(I\!I^{2})\boldsymbol{\eta}_{R}-I\!I^{2}\left(\boldsymbol{\eta}_{R}+2\boldsymbol{\eta}_{L}\right)-2I\!I\nabla\phi-\phi\nabla\mathcal{H}\right)\] \[\quad+\left(2I\!I^{2}:\boldsymbol{r}+2(\nabla\boldsymbol{\eta}_{L}+\nabla\boldsymbol{\eta}_{R}):I\!I+(\boldsymbol{\eta}_{L}+\boldsymbol{\eta}_{R})\nabla\mathcal{H}+\Delta\phi-2\phi\operatorname{Tr}(I\!I^{2})\right)\boldsymbol{\nu}\otimes\boldsymbol{\nu}\,.\] (A.17)
Proof.: In this proof we calculate \(\Delta_{C}\,\boldsymbol{R}=\Delta_{C}\,\boldsymbol{r}+\Delta_{C}(\boldsymbol{\eta}_{L}\otimes\boldsymbol{\nu})+\Delta_{C}(\boldsymbol{\nu}\otimes\boldsymbol{\eta}_{R})+\Delta_{C}(\phi\boldsymbol{\nu}\otimes\boldsymbol{\nu})\) term by term in this order using \([\Delta_{C}\,\boldsymbol{R}]^{AB}=g^{ij}\left(\partial_{i}\partial_{j}R^{AB}-\Gamma^{k}_{ij}\partial_{k}R^{AB}\right)\) (lemma 24). This is a straightforward procedure, where we mainly use \(\partial_{i}\partial_{j}\boldsymbol{X}=\Gamma^{k}_{ij}\partial_{k}\boldsymbol{X}+I\!I_{ij}\boldsymbol{\nu}\) (A.1) and \(\partial_{i}\boldsymbol{\nu}=-I\!I^{j}_{i}\partial_{j}\boldsymbol{X}\) (A.2), without mentioning it every time. Mixed proxy components \([\nabla_{C}\,\boldsymbol{r}]^{AB}_{\ k}=\partial_{k}r^{AB}\) yield
\[\partial_{k}r^{AB} =\partial_{k}\left(r^{ij}\partial_{i}X^{A}\partial_{j}X^{B}\right)=\partial_{k}r^{ij}\partial_{i}X^{A}\partial_{j}X^{B}+r^{ij}\left(\Gamma^{l}_{ik}\partial_{l}X^{A}\partial_{j}X^{B}+\Gamma^{l}_{kj}\partial_{i}X^{A}\partial_{l}X^{B}+I\!I_{ki}\nu^{A}\partial_{j}X^{B}+I\!I_{kj}\partial_{i}X^{A}\nu^{B}\right)\] \[=r^{ij}_{\ |k}\partial_{i}X^{A}\partial_{j}X^{B}+r^{ij}\left(I\!I_{ki}\nu^{A}\partial_{j}X^{B}+I\!I_{kj}\partial_{i}X^{A}\nu^{B}\right)\,.\]
Substituting this into \(\left[\Delta_{\mathrm{C}}\,\mathbf{r}\right]^{AB}\), the product rule gives the summands
\[g^{ij}\partial_{j}\left(r^{lm}\,_{|i}\partial_{l}X^{A}\partial_{m} X^{B}\right) =\left(r^{lm|j}\,_{|j}+g^{ij}\Gamma^{k}_{jj}r^{lm}\,_{|k}\right) \partial_{i}X^{A}\partial_{m}X^{B}+r^{lm}\,_{|j}\left(\mathbf{J}^{ij}_{l}\mathcal{V} ^{A}\partial_{m}X^{B}+\mathbf{I}^{j}_{lm}\partial_{l}X^{A}\mathcal{V}^{B}\right)\] \[g^{ij}\partial_{j}\left(r^{lm}\mathbf{I}_{il}\mathcal{V}^{A}\partial_ {m}X^{B}\right) =r^{lm}\,_{|j}\mathbf{I}^{j}_{l}\mathcal{V}^{A}\partial_{m}X^{B}+r^{lm }\left(\mathbf{I}^{j}_{l|j}+g^{ij}\Gamma^{k}_{jl}\mathbf{I}_{k}\right)\nu^{A}\partial_ {m}X^{B}-r^{lm}\mathbf{I}^{j}_{l}\mathbf{I}^{j}_{l}\partial_{k}X^{A}\partial_{m}X^{B}+r ^{lm}\mathbf{I}^{j}_{l}\mathbf{I}_{jm}\nu^{A}\nu^{B}\] \[g^{ij}\partial_{j}\left(r^{lm}\mathbf{I}_{im}\partial_{l}X^{A}\nu^{B}\right) =r^{lm}\,_{|j}\mathbf{J}^{lm}_{m}\partial_{l}X^{A}\nu^{B}+r^{lm}\left( \mathbf{J}^{ij}_{m|j}+g^{ij}\Gamma^{k}_{jl}\mathbf{I}_{km}\right)\partial_{l}X^{A} \nu^{B}-r^{lm}\mathbf{I}^{j}_{m}\mathbf{I}^{k}_{j}\partial_{l}X^{A}\partial_{k}X^{B}+r ^{lm}\mathbf{I}^{j}_{m}\mathbf{I}_{jl}\nu^{A}\nu^{B}\] \[-g^{ij}\Gamma^{k}_{ij}\partial_{k}\nu^{AB} =-g^{ij}\Gamma^{k}_{ij}\left(r^{lm}\,_{|k}\partial_{l}X^{A}\partial_ {m}X^{B}+r^{lm}\left(\mathbf{I}_{kl}\nu^{A}\partial_{m}X^{B}+\mathbf{I}_{km}\partial_{l }X^{A}\nu^{B}\right)\right)\,,\]
which are adding up to
\[\nabla_{\mathrm{C}}(\phi\nu\otimes\mathbf{\nu})=2\phi\mathbf{I}^{2}-\left(2\mathbf{I}\nabla \phi+\phi\nabla\mathcal{H}\right)\otimes\mathbf{\nu}-\mathbf{\nu}\otimes\left(2\mathbf{I} \nabla\phi+\phi\nabla\mathcal{H}\right)+\left(\Delta\phi-2\phi\operatorname{Tr }(\mathbf{I}^{2})\right)\mathbf{\nu}\otimes\mathbf{\nu}\,.\]
**Lemma 27**.: _For all symmetric tangential 2-tensor fields \(\mathbf{s}\in\mathrm{Sym}^{2}\mathcal{S}:=\{\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\,|\,\bm {r}=\mathbf{r}^{T}\}\) and tangential \(Q\)-tensor fields \(\mathbf{q}\in\mathrm{Q}^{2}\mathcal{S}\) holds_
\[\Pi_{\mathrm{Q}^{2}\mathcal{S}}(\mathbf{s}^{2}\mathbf{q})=\frac{1}{2}\left\|\mathbf{s} \right\|_{\mathrm{Sym}^{2}\mathcal{S}}^{2}\mathbf{q}\,.\]
Proof.: We use the Levi-Civita tensor \(\mathbf{E}\in\mathrm{A}^{2}\mathcal{S}:=\{\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\,|\,\mathbf{r}=-\mathbf{r}^{T}\}\). It is a skew-symmetric tangential 2-tensor field defined by its covariant proxy components \(E_{ij}:=\sqrt{\det\boldsymbol{g}}\,\varepsilon_{ij}\), where \(\{\varepsilon_{ij}\}\) are the Levi-Civita symbols, see [13; 7] for more details. This tensor field is very useful in many situations involving tangential tensor fields. We use the properties that \(\mathbf{r}\bot(\mathbf{r}\mathbf{E})\) is valid for all \(\mathbf{r}\in\mathrm{T}^{2}\mathcal{S}\) and \(\mathbf{q}\mathbf{E}\in\mathrm{Q}^{2}\mathcal{S}\). This yields
\[\left\langle\Pi_{\mathrm{Q}^{2}\mathcal{S}}(\mathbf{s}^{2}\mathbf{q}),\mathbf{q}\mathbf{E} \right\rangle_{\mathrm{Q}^{2}\mathcal{S}}=\left\langle\mathbf{s}^{2}\mathbf{q},\mathbf{q} \mathbf{E}\right\rangle_{\mathrm{T}^{2}\mathcal{S}}=\left\langle\mathbf{s}\mathbf{q},(\bm {s}\mathbf{q})\mathbf{E}\right\rangle_{\mathrm{T}^{2}\mathcal{S}}=0\,.\]
Since \(\mathbf{q}^{2}=\frac{\mathrm{Tr}\,\mathbf{q}^{2}}{2}\mathbf{Id}_{\mathcal{S}}\) is valid, see [13; Cor A.4.], we obtain
\[\left\langle\Pi_{\mathrm{Q}^{2}\mathcal{S}}(\mathbf{s}^{2}\mathbf{q}),\mathbf{q}\right\rangle _{\mathrm{Q}^{2}\mathcal{S}}=\left\langle\mathbf{s}^{2}\mathbf{q},\mathbf{q}\right\rangle_{ \mathrm{T}^{2}\mathcal{S}}=\left\langle\mathbf{s}^{2},\mathbf{q}^{2}\right\rangle_{ \mathrm{T}^{2}\mathcal{S}}=\frac{1}{2}\,\mathrm{Tr}\,\mathbf{s}^{2}\,\mathrm{Tr}\, \mathbf{q}^{2}\,.\]
Assuming \(\mathbf{q}\neq 0\) everywhere without loss of generality, we can span the space of Q-tensor fields by \(\mathrm{Q}^{2}\mathcal{S}=\mathrm{Span}_{\mathrm{T}^{0}\mathcal{S}}[\mathbf{q}, \mathbf{q}\mathbf{E}]\). Due to this we get
\[\Pi_{\mathrm{Q}^{2}\mathcal{S}}(\mathbf{s}^{2}\mathbf{q})=\frac{\left\langle\Pi_{ \mathrm{Q}^{2}\mathcal{S}}(\mathbf{s}^{2}\mathbf{q}),\mathbf{q}\right\rangle_{\mathrm{Q}^ {2}\mathcal{S}}}{\left\|\mathbf{q}\right\|_{\mathrm{Q}^{2}\mathcal{S}}^{2}}\mathbf{q}+ \frac{\left\langle\Pi_{\mathrm{Q}^{2}\mathcal{S}}(\mathbf{s}^{2}\mathbf{q}),\mathbf{q}\mathbf{ E}\right\rangle_{\mathrm{Q}^{2}\mathcal{S}}}{\left\|\mathbf{q}\mathbf{E}\right\|_{\mathrm{Q}^{2} \mathcal{S}}^{2}}\mathbf{q}\mathbf{E}=\frac{1}{2}\left\|\mathbf{s}\right\|_{\mathrm{Sym}^ {2}\mathcal{S}}^{2}\mathbf{q}\,,\]
since \(\mathrm{Tr}\,\mathbf{s}^{2}=\left\|\mathbf{s}\right\|_{\mathrm{Sym}^{2}\mathcal{S}}^{2}\) for all \(\mathbf{s}\in\mathrm{Sym}^{2}\mathcal{S}\).
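Lemma 27 is purely algebraic at each point, so it can be spot-checked numerically. The following sketch (not part of the paper) works in a single tangent plane with the flat metric \(g=\mathrm{Id}\), where the projection onto Q-tensors is the symmetric trace-free part and \(\left\|\boldsymbol{s}\right\|^{2}_{\mathrm{Sym}^{2}\mathcal{S}}=\operatorname{Tr}\boldsymbol{s}^{2}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def q_projection(A):
    """Project a 2x2 matrix onto Q-tensors: the symmetric, trace-free part."""
    S = 0.5 * (A + A.T)
    return S - 0.5 * np.trace(S) * np.eye(2)

for _ in range(100):
    s = rng.normal(size=(2, 2)); s = 0.5 * (s + s.T)     # symmetric tangential 2-tensor
    q = q_projection(rng.normal(size=(2, 2)))            # Q-tensor
    lhs = q_projection(s @ s @ q)
    rhs = 0.5 * np.trace(s @ s) * q                      # (1/2) ||s||^2 q
    assert np.allclose(lhs, rhs)
```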
## Appendix B Outsourced Calculations
### Time Derivative on Scalar Fields
Local observer coordinate parameters \((y_{0}^{1},y_{0}^{2})\) can be given by
\[y_{0}^{i}=y_{0}^{i}(t,y_{m}^{1},y_{m}^{2})=(\mathbf{X}_{0}|_{t}^{-1}\circ\mathbf{X}_{m})(t,y_{m}^{1},y_{m}^{2})\]
depending on the local material coordinate parameters \((y_{m}^{1},y_{m}^{2})\) at time \(t\). Therefore, with relation (3), a scalar field \(f[\mathbf{X}_{m}]\in\mathrm{T}^{0}\mathcal{S}\) and the pullback (4) yield
\[f[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2}) =f[\mathbf{X}_{0}](t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{ m}^{1},y_{m}^{2}))\] \[(\Phi_{t,t}^{x_{0}}f[\mathbf{X}_{m}]|_{t+\tau})(t,y_{m}^{1},y_{m}^{2}) =f[\mathbf{X}_{0}](t+\tau,y_{0}^{1}(t+\tau,y_{m}^{1},y_{m}^{2}),y_{0}^{ 2}(t+\tau,y_{m}^{1},y_{m}^{2}))\,.\]
Taylor expansion of the pullback at \(\tau=0\) gives
\[f[\mathbf{X}_{0}](t+\tau,y_{0}^{1}(t+\tau,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t+\tau,y_{m}^{1},y_{m}^{2}))\] \[=f[\mathbf{X}_{0}](t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))+\tau\partial_{t}f[\mathbf{X}_{0}](t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))\] \[\quad+\tau\partial_{t}y_{0}^{i}(t,y_{m}^{1},y_{m}^{2})\partial_{i}f[\mathbf{X}_{0}](t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))+\mathcal{O}(\tau^{2})\,. \tag{10}\]
To express \(\partial_{t}y_{0}^{i}(t,y_{m}^{1},y_{m}^{2})\) also in terms of \(y_{0}^{i}(t,y_{m}^{1},y_{m}^{2})\) we calculate
\[\partial_{t}y_{0}^{i}(t,y_{m}^{1},y_{m}^{2})\partial_{i}\mathbf{X}_{0}(t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))\] \[\quad=\frac{d}{dt}\mathbf{X}_{0}(t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))-\partial_{t}\mathbf{X}_{0}(t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))\] \[\quad=\partial_{t}\mathbf{X}_{m}(t,y_{m}^{1},y_{m}^{2})-\partial_{t}\mathbf{X}_{0}(t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))\] \[\quad=\mathbf{V}_{m}[\mathbf{X}_{m}](t,y_{m}^{1},y_{m}^{2})-\mathbf{V}_{0}[\mathbf{X}_{0}](t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))\] \[\quad=\mathbf{u}[\mathbf{X}_{0},\mathbf{X}_{m}](t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))\,,\]
i. e. it holds \(\partial_{t}y_{0}^{i}(t,y_{m}^{1},y_{m}^{2})=u^{i}[\mathbf{X}_{0},\mathbf{X}_{m}](t,y_{0}^{1}(t,y_{m}^{1},y_{m}^{2}),y_{0}^{2}(t,y_{m}^{1},y_{m}^{2}))\) w. r. t. the observer frame induced by \(\mathbf{X}_{0}\).
|
2306.07459
|
Log-concavity for partitions without sequences
|
We prove log-concavity for the function counting partitions without
sequences. We use an exact formula for a mixed-mock modular form of weight
zero, explicit estimates on modified Kloosterman sums and the saddle point
method.
|
Lukas Mauth
|
2023-06-12T23:25:12Z
|
http://arxiv.org/abs/2306.07459v1
|
# Log-concavity for partitions without sequences
###### Abstract.
We prove log-concavity for the function counting partitions without sequences. We use an exact formula for a mixed-mock modular form of weight zero, explicit estimates on modified Kloosterman sums and the saddle point method.
Key words and phrases: Circle Method, \(\eta\)-function, partitions. 2020 Mathematics Subject Classification: 11B57, 11F03, 11F20, 11F30, 11F37, 11P82
## 1. Introduction and statement of results
A _partition_ of a non-negative integer \(n\) is a finite non-increasing sequence of positive integers which sum to \(n.\) We denote the number of partitions of \(n\) by \(p(n).\) A classical tool for studying partitions is its generating function [1]
\[P(q):=\sum_{n=0}^{\infty}p(n)q^{n}=\prod_{n=1}^{\infty}\frac{1}{1-q^{n}}=\frac {1}{(q;q)_{\infty}}, \tag{1.1}\]
where for \(a\in\mathbb{C}\) and \(n\in\mathbb{N}\cup\{\infty\}\) we define the \(q\)-Pochhammer symbol \((a)_{n}=(a;q)_{n}:=\prod_{j=0}^{n-1}(1-aq^{j}).\)
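As a quick numerical illustration (not part of the paper), the product in (1.1) can be expanded to read off \(p(n)\) and compared with the Hardy-Ramanujan asymptotic (1.2); the truncation order \(N\) below is an arbitrary choice.

```python
import math

N = 200
p = [1] + [0] * N                      # coefficients of prod_{k>=1} 1/(1-q^k) up to q^N
for k in range(1, N + 1):              # multiply the series by 1/(1-q^k) = 1 + q^k + q^{2k} + ...
    for m in range(k, N + 1):
        p[m] += p[m - k]

def hardy_ramanujan(n):
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * n * math.sqrt(3))

print(p[100])                           # 190569292
print(hardy_ramanujan(100) / p[100])    # the ratio tends to 1, slowly; roughly 1.04 here
```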
This form of the generating function was used by G. H. Hardy and S. Ramanujan to develop the Circle Method and led them to prove the following asymptotics [14]
\[p(n)\sim\frac{1}{4n\sqrt{3}}e^{\pi\sqrt{\frac{2n}{3}}},\quad n\to\infty. \tag{1.2}\]
Later, H. Rademacher improved the Circle Method to obtain an exact formula for \(p(n)\)[16]. To state this formula, define the _Kloosterman sums_
\[A_{k}(n):=\sum_{h\ (\mathrm{mod}\ k)^{*}}\omega_{h,k}e^{\frac{-2\pi inh}{k}},\]
where \(*\) indicates that \(h\) only runs over those residue classes that are coprime to \(k\) and \(\omega_{h,k}\) is a \(24k\)-th root of unity defined by
\[\omega_{h,k}:=\begin{cases}\left(\frac{-k}{h}\right)e^{-\pi i\left(\frac{1}{4 }(2-hk-h)+\frac{1}{12}\left(k-\frac{1}{k}\right)\left(2h-h^{\prime}+h^{2}h^{ \prime}\right)\right)}&\text{ if $h$ is odd},\\ \left(\frac{-h}{k}\right)e^{-\pi i\left(\frac{1}{4}(k-1)+\frac{1}{12}\left(k- \frac{1}{k}\right)\left(2h-h^{\prime}+h^{2}h^{\prime}\right)\right)}&\text{ if $k$ is odd},\end{cases}\]
where \(h^{\prime}\) is a solution to \(hh^{\prime}\equiv-1\ (\mathrm{mod}\ k),\) and \(\left(\cdot\right)\) denotes the Kronecker symbol. Furthermore, let \(I_{\kappa}\) denote the modified Bessel function of order \(\kappa.\) Rademacher's exact formula for \(p(n)\) is then given by
\[p(n)=\frac{2\pi}{(24n-1)^{3/4}}\sum_{k=1}^{\infty}\frac{A_{k}(n)}{k}I_{\frac{ 3}{2}}\left(\frac{\pi\sqrt{24n-1}}{6k}\right).\]
The first term in the sum recovers the asymptotics found by Hardy and Ramanujan. Rademacher's proof made extensive use of the fact that the generating function \(P(q)\) is essentially a modular form. Continuing their study, Rademacher and H. Zuckerman showed exact formulas for the Fourier coefficients (at any cusp) of all modular forms of _negative_ weight of any finite index subgroup of \(\operatorname{SL}_{2}(\mathbb{Z})\)[17, 21].
From a combinatorial point of view it is a classical question to ask whether \(p(n)\) is log-concave, where we call a sequence \(\{a_{n}\}\)_log-concave_ if it satisfies for all \(n\geq 1\) the inequality
\[a_{n}^{2}\geq a_{n-1}a_{n+1}.\]
Many important sequences that arise naturally in combinatorics are known to be log-concave, among them are binomial coefficients, Stirling numbers and Bessel numbers [19]. The first proof that \(p(n)\) is log-concave for \(n\geq 26\) and all even \(n<26\) was given by J.-L. Nicolas [12] and was later reproved independently by S. Desalvo and I. Pak [8]. K. Ono, S. Pujahari and L. Rolen showed in [15] that the function counting plane partitions is log-concave for all \(n\geq 12.\) All these proofs follow the same principle. Using the Circle Method (or an exact formula obtained by the Circle Method) one obtains strong asymptotics with explicit error bounds. The main term is easily seen to be log-concave and the difficulty lies in finding analytically a small bound after which the main term dominates the error term in the log-concavity condition. The remaining cases are then checked directly.
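A minimal numerical check (not from the paper) of this statement for \(p(n)\): compute the partition numbers and list where log-concavity fails; only odd \(n\) below 26 should appear.

```python
N = 300
p = [1] + [0] * N                      # p(n) via the standard coin-change recurrence
for k in range(1, N + 1):
    for m in range(k, N + 1):
        p[m] += p[m - k]

fails = [n for n in range(2, N) if p[n] ** 2 < p[n - 1] * p[n + 1]]
print(fails)                            # [3, 5, 7, ..., 25]: exactly the odd n below 26
```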
In this paper we intend to carry out this program for the function \(p_{2}(n)\) which is defined as the number of partitions of \(n\) that do not contain any consecutive integers as parts. These types of partitions were studied by P. MacMahon [11] and arise in certain probability models and the study of threshold growth in cellular automata [3, 9]. G. Andrews showed [2] the following formula for the generating function:
\[G_{2}(q):=\sum_{n=0}^{\infty}p_{2}(n)q^{n}=\frac{(-q^{3};q^{3})_{\infty}}{(q^{2 };q^{2})_{\infty}}\chi(q)\]
where \(\chi(q)\) denotes the third order mock theta function
\[\chi(q):=\sum_{n=0}^{\infty}\frac{(-q;q)_{n}}{(-q^{3};q^{3})_{n}}q^{n^{2}}.\]
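A numerical sketch (not from the paper) can confirm Andrews' identity to any finite order: expand \((-q^{3};q^{3})_{\infty}\chi(q)/(q^{2};q^{2})_{\infty}\) as a power series with integer arithmetic and compare against a direct dynamic-programming count of partitions with no two consecutive parts. The truncation order \(N\) and the helper names are choices made here for illustration.

```python
N = 40

def mul(a, b):                          # multiply two power series mod q^{N+1}
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i + 1):
                c[i + j] += ai * b[j]
    return c

def inv(a):                             # invert a power series with a[0] = 1
    r = [0] * (N + 1); r[0] = 1
    for m in range(1, N + 1):
        r[m] = -sum(a[k] * r[m - k] for k in range(1, m + 1))
    return r

def poch(sign, step, terms):            # prod_{j=1}^{terms} (1 + sign*q^{step*j}) mod q^{N+1}
    r = [1] + [0] * N
    for j in range(1, terms + 1):
        if step * j > N:
            break
        f = [1] + [0] * N; f[step * j] = sign
        r = mul(r, f)
    return r

chi = [0] * (N + 1)                     # chi(q) = sum_n (-q;q)_n / (-q^3;q^3)_n q^{n^2}
n = 0
while n * n <= N:
    term = mul(poch(+1, 1, n), inv(poch(+1, 3, n)))
    for m in range(N + 1 - n * n):
        chi[m + n * n] += term[m]
    n += 1

G2 = mul(mul(poch(+1, 3, N), inv(poch(-1, 2, N))), chi)

# direct count of partitions without consecutive parts: dp over part sizes k = 1..N,
# tracking whether the part size just processed is used
no, yes = [1] + [0] * N, [0] * (N + 1)
for k in range(1, N + 1):
    new_yes = [0] * (N + 1)
    for m in range(k, N + 1):
        new_yes[m] = no[m - k] + new_yes[m - k]   # the first copy of k requires k-1 unused
    no, yes = [no[m] + yes[m] for m in range(N + 1)], new_yes

assert G2 == [no[m] + yes[m] for m in range(N + 1)]
print(G2[:11])                          # p_2(0), ..., p_2(10)
```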
In contrast to \(p(n)\) the generating function of \(p_{2}(n)\) is no longer a modular form but the product of a mock theta function and a modular form with an overall weight of \(0.\) This increases the difficulty of proving asymptotics for \(p_{2}(n).\) K. Bringmann and K. Mahlburg first succeeded in proving an asymptotic formula for \(p_{2}(n)\)[7] and recent work [5] by W. Bridges and K. Bringmann improves their result and gives an exact formula for \(p_{2}(n),\) which is the first example of an exact formula for the coefficients of a mixed-mock modular form. To state their result we need to introduce some notation. For \(b\in\mathbb{R},k\in\mathbb{N},\) and \(\nu\in\mathbb{Z}\) define
\[\mathcal{I}_{b,k,\nu}(n):=\int_{-1}^{1}\frac{\sqrt{1-x^{2}}I_{1}\left(\frac{2 \pi}{k}\sqrt{2bn(1-x^{2})}\right)}{\cosh\left(\frac{\pi i}{k}\left(\nu-\frac{1 }{6}\right)-\frac{\pi}{k}\sqrt{\frac{b}{3}x}\right)}dx.\]
Moreover, for \(\nu,n\in\mathbb{Z}\) and for all \(k\in\mathbb{N}\) we define the following Kloosterman sum
\[K_{k}^{[4]}(\nu;n) :=\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ 8|h^{\prime}\end{subarray}}\frac{\omega_{h,k}\omega_{h,\frac{k}{2}}\omega_{3h,k }}{\omega_{3h,\frac{k}{2}}}e^{\frac{\pi i}{k}\left(-3\nu^{2}+v\right)h^{\prime} }e^{\frac{2\pi inh}{k}},\quad\gcd(k,6)=2,\] \[K_{k}^{[6]}(\nu;n) :=\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ 8|h^{\prime}\end{subarray}}\frac{\omega_{h,k}\omega_{2h,k}\omega_{h,\frac{k} {3}}}{\omega_{2h,\frac{k}{3}}}e^{\frac{\pi i}{k}\left(-3\nu^{2}+v\right)h^{ \prime}}e^{\frac{2\pi inh}{k}},\quad\gcd(k,6)=3,\] \[K_{k}^{[8]}(\nu;n) :=\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ 24|h^{\prime}\end{subarray}}\frac{\omega_{h,k}\omega_{2h,k}\omega_{3h,k}}{ \omega_{6h,k}}e^{\frac{\pi i}{k}\left(-3\nu^{2}-v\right)h^{\prime}}e^{\frac{ 2\pi inh}{k}},\quad\gcd(k,6)=1,\] \[\mathcal{K}_{k}(n) :=\sum_{h\ (\mathrm{mod}\ k)^{*}}\frac{\omega_{h,k}\omega_{2h,k} \omega_{6h,k}}{\omega_{3h,k}^{3}}e^{-\frac{2\pi inh}{k}},\quad\gcd(k,6)=1.\]
The exact formula for \(p_{2}(n)\) then reads as follows,
**Theorem**.: _For \(n\geq 2\) we have_
\[p_{2}(n) =\frac{\pi}{6\sqrt{n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(k,6)=1\end{subarray}}\frac{\mathcal{K}_{k}(n)}{k^{2}}I_{1}\left(\frac{2 \pi\sqrt{n}}{3k}\right)\] \[+\frac{\pi}{18\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(k,6)=1\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[8]}(\nu;n) \mathcal{I}_{\frac{1}{18},k,\nu}(n)\] \[+\frac{5\pi}{36\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(k,6)=2\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[4]}(v;n) \mathcal{I}_{\frac{5}{36},k,\nu}(n)\] \[+\frac{\pi}{6\sqrt{6n}}\sum_{\begin{subarray}{c}k\geq 1\\ \gcd(k,6)=3\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[6]}(\nu;n) \mathcal{I}_{\frac{1}{6},k,\nu}(n). \tag{1.3}\]
We are going to use this formula to obtain strong asymptotics for \(p_{2}(n)\) in terms of elementary functions with an explicit error term. To carry out this program we need explicit estimates for the Kloosterman sums and the integrals \(\mathcal{I}_{b,k,\nu}.\) We will derive these estimates in Section 2. It turns out that we need a particularly strong estimate for \(\mathcal{I}_{\frac{1}{18},1,0}(n).\) This will be done with the saddle point method in Section 3. In Section 4 we are going to prove our main theorem.
**Theorem**.: _We have for \(n\geq 482\) and all even \(2\leq n<482\) that_
\[p_{2}^{2}(n)-p_{2}(n-1)p_{2}(n+1)\geq 0.\]
## Acknowledgements
The author wishes to thank Kathrin Bringmann and Walter Bridges for suggesting and supervising this project, William Craig, Johann Franke and Badri Vishal Pandey for helpful discussions concerning log-concavity problems. Furthermore, the author wishes to thank Ben Kane, Andreas Mono, Joshua Males and Caner Nazaroglu for helping with the verification of the remaining 450000
cases for log-concavity. The author received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001179).
## 2. Estimates for Kloosterman sums and Bessel Integrals
First we are going to estimate the terms that come from \(k\geq 2\) in (1.3) as they are exponentially smaller than the first term. We consider two different ranges, depending on whether \(k\) is small or large. We say that \(k\) is small if \(k<2\sqrt{n}\) and we say that \(k\) is large if \(k\geq 2\sqrt{n}.\) We state the following preliminary results. First we need explicit upper bounds for the Bessel function \(I_{1}(x)\) for small and large \(x.\)
**Lemma 2.1**.: _The following are true:_
\[I_{1}(x) \leq\sqrt{\frac{2}{\pi x}}e^{x},\quad x>1,\] \[I_{1}(x) \leq x,\quad 0\leq x\leq 1.\]
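These bounds are easy to confirm numerically; the following spot-check (not part of the paper) samples both regimes with SciPy.

```python
import numpy as np
from scipy.special import i1

x_small = np.linspace(1e-6, 1.0, 1000)
x_large = np.linspace(1.0, 60.0, 1000)
assert np.all(i1(x_small) <= x_small)                                   # I_1(x) <= x on [0, 1]
assert np.all(i1(x_large) <= np.sqrt(2.0 / (np.pi * x_large)) * np.exp(x_large))
print("Lemma 2.1 bounds hold on the sampled grids")
```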
We want to write the modified Kloosterman sums in terms of classical ones. To that end, recall the definition of the _classical Kloosterman_ sum for \(a,b\in\mathbb{Z}\) and \(k\in\mathbb{N}\) we define
\[K(a,b,k)=\sum_{h\ (\mathrm{mod}\ k)^{*}}e^{\frac{2\pi i}{k}(ah+b[h]_{k})},\]
where \([h]_{k}\) denotes the inverse of \(h\) mod \(k.\) For classical Kloosterman sums we have the Weil bound, see [10] for a proof. Let \(\tau(n)\) denote the number of divisors of \(n\).
**Theorem 2.2** (Weil bound for Kloosterman sums).: _For \(a,b\in\mathbb{Z}\) and \(k\in\mathbb{N}\) we have_
\[|K(a,b,k)|\leq\tau(k)\sqrt{\gcd(a,b,k)}k^{\frac{1}{2}}.\]
Furthermore, it is well known that \(\tau(n)\ll_{\varepsilon}n^{\varepsilon}.\) The constant, depending on \(\varepsilon\), can be worked out explicitly. We take the following result from [13].
**Lemma 2.3**.: _We have for all \(n\geq 1\) that_
\[\tau(n)\leq 576\left(\frac{n}{21621600}\right)^{\frac{1}{4}}\leq 9n^{\frac{1}{ 4}}.\]
We also need the following result from [5] on the multipliers of the Kloosterman sums \(K_{k}^{[4]},K_{k}^{[6]}\) and \(K_{k}^{[8]}\) to write them in terms of classical Kloosterman sums.
**Lemma 2.4**.: _We have_
\[\frac{\omega_{h,k}\omega_{h,\frac{k}{2}}\omega_{3h,k}}{\omega_{3h,\frac{k}{2}}}=e^{\frac{2\pi i}{k}\left(\frac{k(k+2)}{8}h-\frac{k^{2}+2}{18}h^{\prime}\right)},\] \[\frac{\omega_{h,k}\omega_{2h,k}\omega_{h,\frac{k}{3}}}{\omega_{2h,\frac{k}{3}}}=(-1)^{\frac{k+1}{2}}e^{\frac{4\pi ikh}{9}-\frac{2\pi i}{24k}(k^{2}-3)h^{\prime}},\] \[\frac{\omega_{h,k}\omega_{2h,k}\omega_{3h,k}}{\omega_{6h,k}}=(-1)^{\frac{k+1}{2}}e^{\frac{10\pi i(k^{2}-1)h^{\prime}}{72k}}.\]
We are now ready to estimate the modified Kloosterman sums appearing in (1.3). This is an explicit version of the estimates given in [5] and follows their proof closely.
**Lemma 2.5**.: _For all \(k\geq 1\) and \(0\leq\nu<k,\) we have_
\[\left|K_{k}^{[4]}(\nu;n)\right|\leq 26\sqrt{n}k^{\frac{3}{4}},\quad\gcd(k,6)=2,\]
\[\left|K_{k}^{[6]}(\nu;n)\right|\leq 27\sqrt{n}k^{\frac{3}{4}},\quad\gcd(k,6)=3,\]
\[\left|K_{k}^{[8]}(\nu;n)\right|\leq 9\sqrt{n}k^{\frac{3}{4}},\quad\gcd(k,6)=1,\]
\[|\mathcal{K}_{k}(n)|\leq k.\]
Proof.: Suppose \(\gcd(k,6)=2.\) We then write using Lemma 2.4
\[K_{k}^{[4]}(\nu;n)=\sum_{\begin{subarray}{c}0\leq h<k\\ \gcd(h,k)=1\\ 3|h^{\prime}\end{subarray}}e^{\frac{2\pi i}{k}\left(\left(\frac{k(k+2)}{8}-n\right)h+\left(\frac{-k^{2}+2}{18}-\frac{3\nu^{2}+\nu}{2}\right)h^{\prime}\right)}.\]
We now change variables \(h^{\prime}\mapsto-3h^{\prime}\) and \(h\mapsto[-1]_{k}[3]_{k}h\) to obtain
\[K_{k}^{[4]}(\nu;n)=K\left([-1]_{k}[3]_{k}\left(\frac{k(k+2)}{8}-n\right), \frac{k^{2}+2}{6}+\frac{9\nu^{2}+3\nu}{2},k\right)\]
Using Weil's bound, Theorem 2.2, we estimate this by
\[K_{k}^{[4]}(\nu;n)\leq\tau(k)\sqrt{\gcd\left([-1]_{k}[3]_{k}\left(\frac{k(k+2 )}{8}-n\right),\frac{k^{2}+2}{6}+\frac{9\nu^{2}+3\nu}{2},k\right)}k^{\frac{1}{ 2}}.\]
We continue to estimate
\[\gcd\left([-1]_{k}[3]_{k}\left(\frac{k(k+2)}{8}-n\right),\frac{k^{2}+2}{6}+ \frac{9\nu^{2}+3\nu}{2},k\right)\leq\gcd\left([-1]_{k}[3]_{k}\left(\frac{k(k+ 2)}{8}-n\right),k\right)\]
\[\leq\gcd\left([-1]_{k}[3]_{k}8n,k\right)=\gcd(8n,k)\leq 8n.\]
Combined with Lemma 2.3, this yields the claimed bound
\[|K_{k}^{[4]}(\nu;n)|\leq 9k^{\frac{1}{4}}\sqrt{8n}k^{\frac{1}{2}}\leq 26\sqrt{n }k^{\frac{3}{4}}.\]
The estimates for \(K_{k}^{[6]}(\nu;n)\) and \(K_{k}^{[8]}(\nu;n)\) follow in the same way. The estimate for \(\mathcal{K}_{k}(n)\) is the trivial bound.
In the next step we rewrite the integrals \(\mathcal{I}_{b,k,\nu}(n).\) We start with the following elementary result, whose proof is a straightforward calculation.
**Lemma 2.6**.: _Let \(a,b\in\mathbb{R}\) and \(x\in[0,1].\) Then,_
\[f_{a,b}(x):=\frac{1}{\cosh(ia+bx)}+\frac{1}{\cosh(ia-bx)}=4\frac{\cos(a)\cosh(bx)}{\cos(2a)+\cosh(2bx)}.\]
_In particular, \(f_{a,b}\) is a real valued function with constant sign on \([0,1].\) Furthermore, \(|f_{a,b}(x)|\) is monotonically decreasing on \([0,1].\)_
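A quick numerical spot-check (not part of the paper) of the identity in Lemma 2.6, sampling random admissible values of \(a\), \(b\) and \(x\):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    a, b, x = rng.uniform(-1.4, 1.4), rng.uniform(-3, 3), rng.uniform(0, 1)
    lhs = 1 / np.cosh(1j * a + b * x) + 1 / np.cosh(1j * a - b * x)
    rhs = 4 * np.cos(a) * np.cosh(b * x) / (np.cos(2 * a) + np.cosh(2 * b * x))
    assert np.isclose(lhs, rhs)          # lhs is real up to rounding and equals rhs
```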
**Corollary 2.7**.: _Let \(b\in\mathbb{R},k\in\mathbb{N}\) and \(\nu\in\mathbb{Z}.\) We set \(a:=\frac{\pi}{k}\left(\nu-\frac{1}{6}\right)\) and \(b:=\frac{\pi}{k}\sqrt{\frac{b}{3}}.\) Then, we have the estimate_
\[|\mathcal{I}_{b,k,\nu}(n)|\leq|2\sec(a)|I_{1}\left(\frac{2\pi}{k}\sqrt{2bn} \right).\]
Proof.: We use the previous lemma to write
\[\mathcal{I}_{b,k,\nu}(n)=\int_{0}^{1}f_{a,b}(x)\sqrt{1-x^{2}}I_{1}\left(\frac{2\pi}{k}\sqrt{2bn(1-x^{2})}\right)dx.\]
Recall that \(|f_{a,b}(x)|\) is decreasing. Now, the result follows immediately as the maximum of the integrand in absolute value is at \(x=0\) and by noting that \(f_{a,b}(0)=2\sec(a).\)
### Estimates for large k
In this section we always assume that \(k\geq 2\sqrt{n}.\)
**Lemma 2.8**.: _Let \(n\geq 1.\) Furthermore, let \(b\in\mathbb{R}\) and \(\nu\in\mathbb{N}.\) Then,_
\[\sum_{\nu\ (\mathrm{mod}\ k)}|\mathcal{I}_{b,k,\nu}(n)|\leq 32\pi\log(k)\sqrt{ 2bn}.\]
Proof.: By Corollary 2.7 and Lemma 2.1 we find that
\[|\mathcal{I}_{b,k,\nu}(n)|\leq 2\left|\sec\left(\frac{\pi}{k}\left(\nu-\frac{1} {6}\right)\right)\right|\frac{2\pi}{k}\sqrt{2bn}.\]
For \(x\in[0,\pi],x\neq\frac{\pi}{2}\) one has the inequality
\[\left|\frac{1}{\cos(x)}\right|\leq\frac{\pi}{2}\frac{1}{\left|x-\frac{\pi}{2} \right|}.\]
Using this inequality, one easily sees that
\[\sum_{\nu\ (\mathrm{mod}\ k)}\left|\frac{1}{\cos\left(\frac{\pi}{k}\left(\nu- \frac{1}{6}\right)\right)}\right|\leq 8k\log(k),\]
which finishes the proof immediately.
This allows us to estimate the contribution from large \(k\) to the sum in (1.3).
**Lemma 2.9**.: _Let \(n\geq 1.\) Then, we have_
\[\frac{\pi}{6\sqrt{n}}\left|\sum_{\begin{subarray}{c}k\geq 2\sqrt{n}\\ \gcd(k,6)=1\end{subarray}}\frac{\mathcal{K}_{k}(n)}{k^{2}}I_{1}\left(\frac{2 \pi\sqrt{n}}{3k}\right)\right|\] \[\quad+\frac{\pi}{18\sqrt{6n}}\left|\sum_{\begin{subarray}{c}k\geq 2 \sqrt{n}\\ \gcd(k,6)=1\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[8]}(\nu ;n)\mathcal{I}_{\frac{1}{18},k,\nu}(n)\right|\] \[\quad+\frac{5\pi}{36\sqrt{6n}}\left|\sum_{\begin{subarray}{c}k\geq 2 \sqrt{n}\\ \gcd(k,6)=2\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[4]}(v ;n)\mathcal{I}_{\frac{5}{36},k,\nu}(n)\right|\] \[\quad+\frac{\pi}{6\sqrt{6n}}\left|\sum_{\begin{subarray}{c}k\geq 2 \sqrt{n}\\ \gcd(k,6)=3\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[6]}(\nu ;n)\mathcal{I}_{\frac{1}{6},k,\nu}(n)\right|\leq 25265n^{\frac{7}{8}}. \tag{2.1}\]
Proof.: We start with the first sum. By Lemma 2.1 and Lemma 2.5 we see
\[\sum_{\begin{subarray}{c}k\geq 2\sqrt{n}\\ \gcd(k,6)=1\end{subarray}}\frac{|\mathcal{K}_{k}(n)|}{k^{2}}\left|I_{1}\left( \frac{2\pi\sqrt{n}}{3k}\right)\right|\leq\frac{2\pi}{3}\sum_{k\geq\sqrt{n}} \frac{1}{k^{2}}\leq 2\pi\int_{\sqrt{n}}^{\infty}\frac{1}{x^{2}}dx=\frac{2\pi}{ \sqrt{n}}\leq 2\pi n^{\frac{7}{8}}.\]
For the second sum we find using Lemma 2.5 and Lemma 2.8 that
\[\sum_{\begin{subarray}{c}k\geq 2\sqrt{n}\\ \gcd(k,6)=1\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}\left|K_{k}^{[8]}(\nu ;n)\right|\left|\mathcal{I}_{\frac{1}{18},k,\nu}(n)\right|\leq 9\sqrt{n}\sum_{k \geq 2\sqrt{n}}\frac{1}{k^{\frac{5}{4}}}\sum_{\nu\ (\mathrm{mod}\ k)}\left|\mathcal{I}_{\frac{1}{18},k,\nu}(n)\right|\] \[\qquad\leq 96\pi n\sum_{k\geq\sqrt{n}}\frac{\log(k)}{k^{\frac{5}{4}} }\leq 300\pi n\sum_{k\geq\sqrt{n}}\frac{1}{k^{\frac{9}{8}}}\leq 600\pi n\int_{ \sqrt{n}}^{\infty}\frac{1}{t^{\frac{9}{8}}}dt\leq 2400\pi n^{\frac{7}{8}},\]
where we used \(\log(k)\leq 3k^{\frac{1}{8}}\) for all \(k\geq 1\). The other sums can be estimated similarly and we obtain the bounds
\[\sum_{\begin{subarray}{c}k\geq 2\sqrt{n}\\ \gcd(k,6)=2\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}\left|K_{k}^{[4]}(v ;n)\right|\left|\mathcal{I}_{\frac{5}{36},k,\nu}(n)\right|\leq 2640\pi n^{ \frac{7}{8}},\] \[\sum_{\begin{subarray}{c}k\geq 2\sqrt{n}\\ \gcd(k,6)=3\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}\left|K_{k}^{[6]}(\nu ;n)\right|\left|\mathcal{I}_{1/6,k,\nu}(n)\right|\leq 3000\pi n^{\frac{7}{8}}.\]
Furthermore, we have that
\[\max\left(\frac{5\pi}{36\sqrt{6n}},\frac{\pi}{6\sqrt{6n}},\frac{\pi}{6\sqrt{n}},\frac{\pi}{18\sqrt{6n}}\right)=\frac{\pi}{6\sqrt{n}}\leq\frac{1}{\sqrt{n}}.\]
Combining all estimates proves the claim.
### Estimates for small k
We assume throughout this section that \(k\leq 2\sqrt{n}.\) We start with the result corresponding to Lemma 2.9 for small \(k.\)
**Lemma 2.10**.: _Let \(n\geq 1\) be fixed. Then we have_
\[\frac{\pi}{6\sqrt{n}}\left|\sum_{\begin{subarray}{c}2\leq k\leq 2 \sqrt{n}\\ \gcd(k,6)=1\end{subarray}}\frac{\mathcal{K}_{k}(n)}{k^{2}}I_{1}\left(\frac{2 \pi\sqrt{n}}{3k}\right)\right|\] \[+\frac{\pi}{18\sqrt{6n}}\left|\sum_{\begin{subarray}{c}2\leq k \leq 2\sqrt{n}\\ \gcd(k,6)=1\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[8]}(\nu ;n)\mathcal{I}_{\frac{1}{18},k,\nu}(n)\right|\] \[+\frac{5\pi}{36\sqrt{6n}}\left|\sum_{\begin{subarray}{c}2\leq k \leq 2\sqrt{n}\\ \gcd(k,6)=2\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[4]}(v ;n)\mathcal{I}_{\frac{5}{36},k,\nu}(n)\right|\] \[+\frac{\pi}{6\sqrt{6n}}\left|\sum_{\begin{subarray}{c}2\leq k \leq 2\sqrt{n}\\ \gcd(k,6)=3\end{subarray}}\frac{1}{k^{2}}\sum_{\nu\ (\mathrm{mod}\ k)}(-1)^{\nu}K_{k}^{[6]}(\nu ;n)\mathcal{I}_{\frac{1}{6},k,\nu}(n)\right|\leq 4e^{\frac{\sqrt{3}\pi}{3} \sqrt{n}}. \tag{2.2}\]
Proof.: First of all we start by identifying the dominating term in this sum. We set \(a_{\nu}=\frac{\pi}{2}\left(\nu-\frac{1}{6}\right).\) By Corollary 2.7 for \(k=2\) we have
\[\left|\mathcal{I}_{\frac{1}{18},2,\nu}(n)\right| \leq 2|\sec(a_{\nu})|I_{1}\left(\frac{\pi}{3}\sqrt{n}\right),\] \[\left|\mathcal{I}_{\frac{5}{36},2,\nu}(n)\right| \leq 2|\sec(a_{\nu})|I_{1}\left(\frac{\pi\sqrt{10}}{6}\sqrt{n} \right),\] \[\left|\mathcal{I}_{\frac{1}{6},2,\nu}(n)\right| \leq 2|\sec(a_{\nu})|I_{1}\left(\frac{\sqrt{3}\pi}{3}\sqrt{n} \right).\]
By monotonicity of the Bessel function we see that the last term grows the fastest. Lemma 2.1 gives
\[I_{1}\left(\frac{\sqrt{3}\pi}{3}\sqrt{n}\right)\leq\frac{\sqrt{2}3^{1/4}}{\pi n ^{\frac{1}{4}}}e^{\frac{\sqrt{3}\pi}{3}\sqrt{n}}\leq e^{\frac{\sqrt{3}\pi}{3} \sqrt{n}}.\]
Combining this with the trivial estimate on the Kloosterman sums shows that the expression from the statement is bounded by
\[\frac{2}{\sqrt{n}}\sum_{k=2}^{2\sqrt{n}}e^{\frac{\sqrt{3}\pi}{3}\sqrt{n}} \leq 4e^{\frac{\sqrt{3}\pi}{3}\sqrt{n}}.\]
For two functions \(f,g\) we introduce the notation
\[f=O_{\leq}(g),\]
if \(f=O(g)\) and the implicit constant can be chosen to be \(1.\) Combining the results we obtain the following lemma.
**Lemma 2.11**.: _Let \(n\geq 1.\) Then_
\[p_{2}(n)=\frac{\pi}{6\sqrt{n}}I_{1}\left(\frac{2\pi\sqrt{n}}{3}\right)+\frac{ \pi}{18\sqrt{6n}}\mathcal{I}_{\frac{1}{18},1,0}(n)+O_{\leq}\left(4e^{\frac{ \sqrt{3}\pi}{3}\sqrt{n}}+2565n^{\frac{7}{8}}\right).\]
## 3. Asymptotic formula for \(p_{2}(n)\) and log-concavity
For proving log-concavity strong asymptotics are required. Thus, we will have to use a strong asymptotic formula for the Bessel function and \(\mathcal{I}_{\frac{1}{18},1,0}(n).\) The latter is the more difficult problem. We will find such a formula with the saddle point method. We define the following list of real numbers:
\[a_{1}:=\frac{1}{4\sqrt{3}},\quad a_{2}:=\frac{1}{18\sqrt{2}}, \quad a_{3}:=-\frac{3\sqrt{3}}{64\pi},\quad a_{4}:=-\frac{324+5\pi^{2}}{3888 \sqrt{2}\pi},\quad a_{5}:=-\frac{45\sqrt{3}}{2048\pi^{2}}\quad a_{6}:=\frac{1 080+17\pi^{2}}{186624\sqrt{2}},\] \[a_{7}:=-\frac{945\sqrt{3}}{32768\pi^{3}},\quad a_{8}:=-\frac{3499 20+33048\pi^{2}+455\pi^{4}}{40310784\sqrt{2}\pi},\quad a_{9}:=-\frac{127575 \sqrt{3}}{2097152\pi^{4}}.\]
**Theorem 3.1**.: _Let \(n\geq 1.\) Then,_
\[p_{2}(n)=\bigg{(}\frac{a_{1}}{n^{\frac{3}{4}}}+\frac{a_{2}}{n}+\frac{a_{3}}{ n^{\frac{5}{4}}}+\frac{a_{4}}{n^{\frac{3}{2}}}+\frac{a_{5}}{n^{\frac{7}{4}}}+ \frac{a_{6}}{n^{2}}+\frac{a_{7}}{n^{\frac{9}{4}}}+\frac{a_{8}}{n^{\frac{5}{2} }}+\frac{a_{9}}{n^{\frac{11}{4}}}\bigg{)}e^{\frac{2\pi}{3}\sqrt{n}}+O_{\leq} \left(16\frac{e^{\frac{2\pi}{3}\sqrt{n}}}{n^{3}}\right). \tag{3.1}\]
Proof.: Define
\[a_{k}(\nu):=(-1)^{k}\frac{\left(\frac{1}{2}-\nu\right)_{k}\left(\frac{1}{2}+\nu\right)_{k}}{2^{k}k!},\quad k=0,1,2,\ldots\] \[(\lambda)_{n}:=\frac{\Gamma(\lambda+n)}{\Gamma(\lambda)}.\]
We have for \(\nu\in\mathbb{N}\) the well-known asymptotic formulas (see [20]),
\[I_{\nu}(z)\sim\frac{e^{z}}{\sqrt{2\pi z}}\sum_{k=0}^{\infty}(-1)^{k}\frac{a_{ k}(\nu)}{z^{k}}.\]
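For orientation (not part of the paper), the truncated asymptotic series can be compared with SciPy's Bessel function; the truncation order \(K\) is an arbitrary choice.

```python
import math
from scipy.special import i1, poch        # poch(z, m) = Gamma(z + m) / Gamma(z)

def a(k, nu):
    return (-1) ** k * poch(0.5 - nu, k) * poch(0.5 + nu, k) / (2 ** k * math.factorial(k))

def i1_asymptotic(z, K=8):
    s = sum((-1) ** k * a(k, 1) / z ** k for k in range(K + 1))
    return math.exp(z) / math.sqrt(2 * math.pi * z) * s

for z in (10.0, 20.0, 40.0):
    rel_err = abs(i1(z) - i1_asymptotic(z)) / i1(z)
    print(z, rel_err)                     # relative error shrinks rapidly as z grows
```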
In [4] K. Banerjee worked out explicit error bounds in the asymptotic formulas for the Bessel functions of non-negative order. More explicitly, we have in our cases
\[\frac{\pi}{6\sqrt{n}}I_{1}\left(\frac{2\pi\sqrt{n}}{3}\right) =\bigg{(}\frac{1}{4\sqrt{3}n^{\frac{3}{4}}}-\frac{3\sqrt{3}}{64 \pi n^{\frac{5}{4}}}-\frac{45\sqrt{3}}{2048\pi^{2}n^{\frac{7}{4}}}-\frac{945 \sqrt{3}}{32768\pi^{3}n^{\frac{9}{4}}}-\frac{127575\sqrt{3}}{2097152\pi^{4}n ^{\frac{11}{4}}}\bigg{)}e^{\frac{2\pi}{3}\sqrt{n}}\] \[+O_{\leq}\left(\frac{0.0795351}{n^{\frac{13}{4}}}e^{\frac{2\pi}{ 3}\sqrt{n}}\right). \tag{3.2}\]
By Lemma 2.6 and since the integrand is an even function, we can write with \(a=-\frac{\pi}{6}\) and \(b=\frac{\pi}{3\sqrt{6}}\) that
\[\mathcal{I}_{\frac{1}{18},1,0}(n)=\frac{1}{2}\int_{-1}^{1}f_{a,b}(x)\sqrt{1-x^{2}}I_{1}\left(\frac{2\pi}{3}\sqrt{n(1-x^{2})}\right)dx.\]
We now cut off the integral at \(\frac{1}{8}\) and insert the expansion of \(I_{1}\) up to order \(3\) and obtain
\[\frac{\pi}{18\sqrt{6n}}\mathcal{I}_{\frac{1}{18},1,0}(n) =\frac{1}{72\sqrt{2}n^{\frac{3}{4}}}\int_{-\frac{1}{8}}^{\frac{1} {8}}f_{a,b}(x)\left(1-x^{2}\right)^{\frac{1}{4}}e^{\frac{2\pi}{3}\sqrt{n(1-x^{ 2})}}dx\] \[-\frac{1}{128\sqrt{2}\pi n^{\frac{5}{4}}}\int_{-\frac{1}{8}}^{ \frac{1}{8}}f_{a,b}(x)\left(1-x^{2}\right)^{-\frac{1}{4}}e^{\frac{2\pi}{3} \sqrt{n(1-x^{2})}}dx\] \[-\frac{15}{4096\sqrt{2}\pi^{2}n^{\frac{7}{4}}}\int_{-\frac{1}{8}} ^{\frac{1}{8}}f_{a,b}(x)\left(1-x^{2}\right)^{-\frac{3}{4}}e^{\frac{2\pi}{3} \sqrt{n(1-x^{2})}}dx\] \[-\frac{315}{65536\sqrt{2}\pi^{3}n^{\frac{9}{4}}}\int_{-\frac{1}{8 }}^{\frac{1}{8}}f_{a,b}(x)\left(1-x^{2}\right)^{-\frac{5}{4}}e^{\frac{2\pi}{3 }\sqrt{n(1-x^{2})}}dx\] \[+O_{\leq}\left(\frac{27}{160\sqrt{2}\pi^{4}n^{\frac{11}{4}}} \int_{-\frac{1}{8}}^{\frac{1}{8}}f_{a,b}(x)\left(1-x^{2}\right)^{-\frac{7}{4} }e^{\frac{2\pi}{3}\sqrt{n(1-x^{2})}}dx\right)\] \[+O_{\leq}\left(\frac{1}{2}\int_{-\frac{1}{8}}^{\frac{1}{8}}f_{a,b }(x)\left(1-x^{2}\right)^{\frac{1}{4}}e^{\frac{\sqrt{7}\pi}{4}\sqrt{n}}dx \right). \tag{3.3}\]
We will now use the saddle point method to expand each of these terms into a power series. We illustrate this process for the first term. We introduce the notation \(A:=\frac{2\pi}{3}\sqrt{n},f(x):=f_{a,b}(x)(1-x^{2})^{\frac{1}{4}}\) and \(g(x)=\sqrt{1-x^{2}}.\) Then, we can write
\[\int_{-\frac{1}{8}}^{\frac{1}{8}}f_{a,b}(x)(1-x^{2})^{\frac{1}{4}}e^{\frac{2\pi}{3}\sqrt{n(1-x^{2})}}dx=\int_{-\frac{1}{8}}^{\frac{1}{8}}f(x)e^{Ag(x)}dx=\frac{1}{\sqrt{A}}\int_{-\frac{1}{8}\sqrt{A}}^{\frac{1}{8}\sqrt{A}}f\left(\frac{y}{\sqrt{A}}\right)e^{Ag\left(\frac{y}{\sqrt{A}}\right)}dy. \tag{3.4}\]
The functions \(g(x)\) and \(f(x)\) are analytic on the interval \([-\frac{1}{8},\frac{1}{8}]\) and we can expand them into power series
\[Ag\left(\frac{y}{\sqrt{A}}\right) =A-\frac{y^{2}}{2}-\frac{y^{4}}{8A}-\frac{y^{6}}{16A^{2}}-\frac{5 y^{8}}{128A^{3}}-\frac{7y^{10}}{256A^{4}}-\frac{21y^{12}}{1024A^{5}}+O_{\leq} \left(j\frac{y^{14}}{A^{6}}\right)\] \[e^{Ag\left(\frac{y}{\sqrt{A}}\right)} =e^{A}e^{-\frac{y^{2}}{2}}\bigg{(}1-\frac{y^{4}}{8A}-\frac{y^{6} }{16A^{2}}-\frac{5y^{8}}{128A^{3}}-\frac{7y^{10}}{256A^{4}}-\frac{21y^{12}}{10 24A^{5}}+\frac{jy^{14}}{A6}\] \[+\frac{1}{2}\left(-\frac{y^{4}}{8A}-\frac{y^{6}}{16A^{2}}-\frac{5 y^{8}}{128A^{3}}-\frac{7y^{10}}{256A^{4}}-\frac{21y^{12}}{1024A^{5}}+\frac{jy^{14}}{A ^{6}}\right)^{2}\bigg{)}\] \[+\frac{1}{6}\left(-\frac{y^{4}}{8A}-\frac{y^{6}}{16A^{2}}-\frac{5 y^{8}}{128A^{3}}-\frac{7y^{10}}{256A^{4}}-\frac{21y^{12}}{1024A^{5}}+\frac{jy^{14}}{A ^{6}}\right)^{3}\bigg{)}\] \[+O_{\leq}\left(k\left(-\frac{y^{4}}{8A}-\frac{y^{6}}{16A^{2}}- \frac{5y^{8}}{128A^{3}}-\frac{7y^{10}}{256A^{4}}-\frac{21y^{12}}{1024A^{5}}+ \frac{jy^{14}}{A^{6}}\right)^{4}\right)\right),\]
\[f\left(\frac{y}{\sqrt{A}}\right)=f(0)\bigg{(}1-\frac{y^{2}}{4A}- \frac{5\pi^{2}y^{2}}{324A}-\frac{3y^{4}}{32A^{2}}+\frac{5\pi^{2}y^{4}}{1296A^{2} }+\frac{17\pi^{4}y^{4}}{69984A^{2}}-\frac{7y^{6}}{128A^{3}}+\frac{5\pi^{2}y^{6} }{3456A^{3}}-\frac{17\pi^{4}y^{6}}{279936A^{3}}\] \[-\frac{91\pi^{6}y^{6}}{22674816A^{3}}-\frac{77y^{8}}{2048A^{4}}+ \frac{35\pi^{2}y^{8}}{41472A^{4}}-\frac{17\pi^{4}y^{8}}{746496A^{4}}+\frac{91 \pi^{6}y^{8}}{90699264A^{4}}+\frac{207913\pi^{8}y^{8}}{3085588961280A^{4}}+ \frac{\sqrt{3}ly^{9}}{4A^{\frac{9}{2}}}\bigg{)},\]
where \(j,k\) and \(\ell\) are positive constants given by
\[j =\max_{x\in[-\frac{1}{8},\frac{1}{8}]}\left|\frac{d^{14}}{dx^{14} }\frac{g(x)}{14!}\right|\leq 0.0504,\] \[k =\max_{x\in[-\frac{1}{8},\frac{1}{8}]}\left|\frac{d^{2}}{dx^{2}} \frac{e^{x}}{2}\right|=\frac{e^{\frac{1}{8}}}{2},\] \[\ell =\max_{x\in[-\frac{1}{8},\frac{1}{8}]}\left|\frac{d^{10}}{dx^{10} }\frac{f(x)}{10!}\right|\leq 0.109.\]
We now multiply out the series expansions in the integrand and obtain
\[f\left(\frac{y}{\sqrt{A}}\right)e^{Ag\left(\frac{y}{\sqrt{A}}\right)}=T(y,A)+ O_{\leq}\left(R(y,A)\right),\]
where we have the main term
\[T(y,A)=1 +\frac{-162y^{2}-10\pi^{2}y^{2}-81y^{4}}{648A}+\frac{-26244y^{4} +1080\pi^{2}y^{4}+68\pi^{4}y^{4}-8748y^{6}+540\pi^{2}y^{6}+2187y^{8}}{279936A^{ 2}}\] \[+\frac{-9920232y^{6}+262440\pi^{2}y^{6}-11016\pi^{4}y^{6}-728\pi^ {6}y^{6}-2125764y^{8}+87480\pi^{2}y^{8}}{181398528A^{3}}\] \[+\frac{-5508\pi^{4}y^{8}+1062882y^{10}-21870\pi^{2}y^{10}-59049y^{ 12}}{181398528A^{3}}\]
and the error term
\[R(y,A)=\sum_{n=4}^{N}\frac{P_{n}(y)}{A^{n}},\]
for some \(N>0\) and even polynomials \(P_{n}(y).\) We now extend the range of integration in (3.4) to the whole real line and obtain
\[\frac{1}{\sqrt{A}}\int_{-\frac{1}{8}\sqrt{A}}^{\frac{1}{8}\sqrt{A }}f\left(\frac{y}{\sqrt{A}}\right)e^{Ag\left(\frac{y}{\sqrt{A}}\right)}dy= \frac{f(0)e^{A}}{\sqrt{A}}\int_{-\infty}^{\infty}e^{-\frac{y^{2}}{2}}T(y,A)dy\] \[+O_{\leq}\left(\frac{f(0)e^{A}}{\sqrt{A}}\int_{-\frac{1}{8}\sqrt {A}}^{\frac{1}{8}\sqrt{A}}e^{-\frac{y^{2}}{2}}\left(T(y,A)+E(y,A)\right)dy+ \frac{f(0)e^{A}}{\sqrt{A}}\int_{-\infty}^{\infty}e^{-\frac{y^{2}}{2}}E(y,A)dy \right). \tag{3.5}\]
We have now reduced the integrals to Gaussian integrals, which we can evaluate. Performing these integrations we find that for \(n\geq 435000\)
\[\frac{1}{72\sqrt{2}n^{\frac{3}{4}}}\int_{-\frac{1}{8}}^{\frac{1}{8}}f_{a,b}(x )(1-x^{2})^{\frac{1}{4}}e^{\frac{2\pi}{3}\sqrt{n(1-x^{2})}}dx\]
\[=\left(\frac{1}{18\sqrt{2}}\cdot\frac{1}{n}-\frac{5\left(81+2\pi^{2} \right)}{7776\sqrt{2}\pi}\cdot\frac{1}{n^{\frac{3}{2}}}+\frac{6561+3780\pi^{2}+6 8\pi^{4}}{746496\sqrt{2}\pi^{2}}\cdot\frac{1}{n^{2}}\right.\] \[-\left.\frac{5\left(-1240029+503010\pi^{2}+49572\pi^{4}+728\pi^{6} \right)}{322486272\sqrt{2}\pi^{3}}\cdot\frac{1}{n^{\frac{5}{2}}}\right)e^{ \frac{2\pi}{3}\sqrt{n}}+O_{\leq}\left(\frac{90\sqrt{\frac{2}{\pi}}}{n^{3}}e^{ \frac{2\pi}{3}\sqrt{n}}\right).\]
We apply the exact same method to other integrals and obtain for \(n\geq 435000\)
\[\frac{1}{128\sqrt{2}\pi n^{\frac{5}{4}}}\int_{-\frac{1}{8}}^{ \frac{1}{8}}f_{a,b}(x)(1-x^{2})^{-\frac{1}{4}}e^{\frac{2\pi}{3}\sqrt{n(1-x^{2 })}}dx\] \[=\left(\frac{1}{32\sqrt{2}\pi}\cdot\frac{1}{n^{3/2}}-\frac{81+10 \pi^{2}}{13824\sqrt{2}\pi^{2}}\cdot\frac{1}{n^{2}}+\frac{-10935+1620\pi^{2}+6 8\pi^{4}}{1327104\sqrt{2}\pi^{3}}\cdot\frac{1}{n^{5/2}}\right)e^{\frac{2\pi}{ 3}\sqrt{n}}\] \[+O_{\leq}\left(\frac{5}{64\pi^{3/2}n^{3}}e^{\frac{2\pi}{3}\sqrt{n }}\right),\]
\[\frac{15}{4096\sqrt{2}\pi^{2}n^{\frac{7}{4}}}\int_{-\frac{1}{8}}^ {\frac{1}{8}}f_{a,b}(x)(1-x^{2})^{-\frac{3}{4}}e^{\frac{2\pi}{3}\sqrt{n(1-x^{ 2})}}dx\] \[=\left(\frac{15}{1024\sqrt{2}\pi^{2}}\cdot\frac{1}{n^{2}}+\frac{5 \left(243-10\pi^{2}\right)}{147456\sqrt{2}\pi^{3}}\right)\cdot\frac{1}{n^{5/2 }}e^{\frac{2\pi}{3}\sqrt{n}}+O_{\leq}\left(\frac{75}{2048\pi^{5/2}n^{3}}e^{ \frac{2\pi}{3}\sqrt{n}}\right),\]
\[\frac{315}{65536\sqrt{2}\pi^{3}n^{\frac{9}{4}}}\int_{-\frac{1}{8}}^{\frac{1}{8}}f_{a,b}(x)(1-x^{2})^{-\frac{5}{4}}e^{\frac{2\pi}{3}\sqrt{n(1-x^{2})}}dx\] \[=\frac{315}{16384\sqrt{2}\pi^{3}}\cdot\frac{1}{n^{5/2}}e^{\frac{2\pi}{3}\sqrt{n}}+O_{\leq}\left(\frac{1575}{32768\pi^{7/2}n^{3}}e^{\frac{2\pi}{3}\sqrt{n}}\right),\]
\[\frac{27}{160\sqrt{2}\pi^{4}n^{\frac{11}{4}}}\int_{-\frac{1}{8} }^{\frac{1}{8}}f_{a,b}(x)(1-x^{2})^{-\frac{7}{4}}e^{\frac{2\pi}{3}\sqrt{n(1-x ^{2})}}dx\] \[=O_{\leq}\left(\frac{27}{40\sqrt{2}\pi^{4}n^{3}}e^{\frac{2\pi}{3} \sqrt{n}}+\frac{27}{16\pi^{9/2}}\cdot\frac{1}{n^{7/2}}e^{\frac{2\pi}{3}\sqrt{ n}}\right).\]
Adding up all these terms we obtain for \(n\geq 435000\) the asymptotic formula
\[p_{2}(n) =\bigg{(}\frac{a_{1}}{n^{\frac{3}{4}}}+\frac{a_{2}}{n}+\frac{a_{3}}{n^ {\frac{5}{4}}}+\frac{a_{4}}{n^{\frac{3}{2}}}+\frac{a_{5}}{n^{\frac{7}{4}}}+\frac {a_{6}}{n^{2}}+\frac{a_{7}}{n^{\frac{9}{4}}}+\frac{a_{8}}{n^{\frac{5}{2}}}+ \frac{a_{9}}{n^{\frac{11}{4}}}\bigg{)}e^{\frac{2\pi}{3}\sqrt{n}}\] \[+O_{\leq}\bigg{(}\frac{55296\sqrt{2}+7875\sqrt{\pi}+6000\pi^{3/2 }+12800\pi^{5/2}+204800\pi^{7/2}}{163840\pi^{4}n^{3}}+0.00795351\left(\frac{1} {n}\right)^{13/4}\] \[+\frac{27\left(\frac{1}{n}\right)^{7/2}}{16\pi^{9/2}}\bigg{)}e^{ \frac{2\pi}{3}\sqrt{n}}+O_{\leq}\left(\frac{1}{2}\int_{-\frac{1}{8}}^{\frac{1 }{8}}f_{a,b}(x)(1-x^{2})^{\frac{1}{4}}e^{\frac{\sqrt{7}\pi}{4}\sqrt{n}}dx \right)+O_{\leq}\left(4e^{\frac{\sqrt{3}\pi}{3}\sqrt{n}}+2565n^{\frac{7}{8}} \right).\]
An easy calculation shows that the error term can be bounded by
\[O_{\leq}\left(\frac{1}{n^{3}}e^{\frac{2\pi}{3}\sqrt{n}}\right)\]
for all \(n\geq 435000.\) If we call the main term of our asymptotic formula \(G(n),\) then we find
\[\max_{1\leq n\leq 435000}\frac{n^{3}\left|p_{2}(n)-G(n)\right|}{e^{\frac{2\pi}{3 }\sqrt{n}}}\leq 16. \tag{3.6}\]
This concludes the proof.
**Theorem 3.2**.: _We have for \(n\geq 482\) and all even \(2\leq n<482,\) that_
\[p_{2}^{2}(n)-p_{2}(n-1)p_{2}(n+1)\geq 0.\]
Proof.: We write the asymptotic formula from Theorem 3.1 in the form
\[p_{2}(n)=\mathcal{P}(n)+O_{\leq}(\mathcal{E}(n)).\]
Then, we estimate
\[p_{2}^{2}(n)-p_{2}(n-1)p_{2}(n+1)\geq(\mathcal{P}(n)-\mathcal{E}(n))^{2}-( \mathcal{P}(n-1)+\mathcal{E}(n-1))(\mathcal{P}(n+1)+\mathcal{E}(n+1)).\]
Using Taylor expansion we write this as
\[\frac{\pi}{216\sqrt{6}}\cdot\frac{1}{n^{\frac{13}{4}}}+\frac{\pi}{288n^{3}}-E( n),\]
where
\[|E(n)| \leq\frac{6.055140198378155\times 10^{-6}}{n^{\frac{35}{2}}}+\frac{0.005 35139}{n^{\frac{69}{4}}}+\frac{1.18221}{n^{17}}+\frac{0.0141338}{n^{\frac{67}{4 }}}+\frac{0.000646658}{n^{\frac{33}{2}}}\] \[+\frac{0.000254429}{n^{\frac{65}{4}}}+\frac{0.000742507}{n^{16}} +\frac{0.000588099}{n^{\frac{63}{4}}}+\frac{0.00321441}{n^{\frac{31}{2}}}+ \frac{0.0232749}{n^{\frac{61}{4}}}+\frac{4.30165}{n^{15}}\] \[+\frac{0.0726311}{n^{\frac{59}{4}}}+\frac{0.00242495}{n^{\frac{29} {2}}}+\frac{0.000930963}{n^{\frac{57}{4}}}+\frac{0.00270314}{n^{14}}+\frac{0. 00214691}{n^{\frac{55}{4}}}+\frac{0.0116994}{n^{\frac{27}{2}}}\] \[+\frac{0.0938434}{n^{\frac{53}{4}}}+\frac{17.6703}{n^{13}}+\frac {0.288354}{n^{\frac{51}{4}}}+\frac{0.0100125}{n^{\frac{25}{2}}}+\frac{0.00381 031}{n^{\frac{49}{4}}}+\frac{0.0111939}{n^{12}}+\frac{0.113621}{n^{\frac{47}{ 4}}}\] \[+\frac{23.2169}{n^{\frac{23}{2}}}+\frac{0.447083}{n^{\frac{45}{4 }}}+\frac{25.0755}{n^{11}}+\frac{0.62135}{n^{\frac{43}{4}}}+\frac{0.0296742}{ n^{\frac{21}{2}}}+\frac{0.0169767}{n^{\frac{41}{4}}}+\frac{0.0785117}{n^{10}}\] \[+\frac{0.278106}{n^{\frac{39}{4}}}+\frac{42.2773}{n^{\frac{19}{2} }}+\frac{1.18225}{n^{\frac{37}{4}}}+\frac{40.1092}{n^{9}}+\frac{0.936865}{n^{ \frac{35}{4}}}+\frac{0.0506511}{n^{\frac{17}{2}}}+\frac{0.028851}{n^{\frac{33} {4}}}\] \[+\frac{0.139483}{n^{8}}+\frac{0.76521}{n^{\frac{31}{4}}}+\frac{13 5.021}{n^{\frac{15}{2}}}+\frac{2.49801}{n^{\frac{29}{4}}}+\frac{0.27544}{n^{7} }+\frac{0.748082}{n^{\frac{27}{4}}}+\frac{0.0883287}{n^{\frac{13}{2}}}\] \[+\frac{1.22888}{n^{\frac{25}{4}}}+\frac{513.626}{n^{6}}+\frac{3.51 304}{n^{\frac{23}{4}}}+\frac{0.851624}{n^{\frac{11}{2}}}+\frac{2.50794}{n^{ \frac{21}{4}}}+\frac{0.24984}{n^{5}}+\frac{0.190725}{n^{\frac{19}{4}}}\] \[+\frac{1.04232}{n^{\frac{9}{2}}}+\frac{1.24105}{n^{\frac{17}{4}}} +\frac{1.89133}{n^{4}}+\frac{1119744\sqrt{3}-1620\sqrt{6}-5\sqrt{6}\pi^{2}}{279 936n^{\frac{15}{4}}}+\frac{-1215+16\pi}{62208n^{\frac{7}{2}}}\]
We conclude that
\[\frac{\pi}{216\sqrt{6}}\cdot\frac{1}{n^{\frac{13}{4}}}+\frac{\pi}{288n^{3}}-E( n)>0\]
for \(n\geq 5092\). The finitely many remaining cases are verified directly.
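The finite verification can be scripted along the following lines (a sketch, not the authors' code): compute \(p_{2}(n)\) exactly with a dynamic program over part sizes that forbids consecutive parts, and list the \(n\) at which log-concavity fails; only odd \(n\) below 482 should appear. The bound \(N\) below is kept small for illustration; the full check runs the same code with \(N\) beyond 5092.

```python
N = 1200

# partitions of m into parts <= k with no two consecutive part sizes;
# 'no'/'yes' record whether the part size just processed is used
no, yes = [1] + [0] * N, [0] * (N + 1)
for k in range(1, N + 1):
    new_yes = [0] * (N + 1)
    for m in range(k, N + 1):
        new_yes[m] = no[m - k] + new_yes[m - k]    # the first copy of k requires k-1 unused
    no, yes = [no[m] + yes[m] for m in range(N + 1)], new_yes
p2 = [no[m] + yes[m] for m in range(N + 1)]

fails = [n for n in range(2, N) if p2[n] ** 2 < p2[n - 1] * p2[n + 1]]
print(max(fails), all(n % 2 == 1 for n in fails))   # expect: largest failure below 482, all odd
```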
**Remark.** (i) In [7] Bringmann and Mahlburg obtained the asymptotic formula
\[p_{2}(n)\sim\left(\frac{1}{4\sqrt{3}n^{\frac{3}{4}}}+\frac{1}{18\sqrt{2}n} \right)e^{\frac{2\pi}{3}\sqrt{n}}.\]
However, this formula is not strong enough to prove log-concavity. Indeed, if one expands the log-concavity condition with abstract coefficients one sees that an asymptotic formula with precision to at least order \(n^{-\frac{9}{4}}\) is necessary. We chose to expand up to order \(n^{-\frac{11}{4}}\) to reduce the number of finitely many cases we have to check directly.
(ii) By looking at the quotient (3.6) for large values of \(n\) it seems that the optimal implicit constant can be chosen as \(C\approx 0.69\), while we can optimally show for large \(n\)
\[C\approx\frac{55296\sqrt{2}+7875\sqrt{\pi}+6000\pi^{3/2}+12800\pi^{5/2}+204800 \pi^{7/2}}{163840\pi^{4}}\approx 0.727135.\]
Furthermore, the quotient (3.6) is always larger for odd values of \(n\) than for even values of \(n\), which is in line with the fact that log-concavity fails for small odd \(n\).
|
2304.11442
|
Stabilizer Formalism for Operator Algebra Quantum Error Correction
|
We introduce a stabilizer formalism for the general quantum error correction
framework called operator algebra quantum error correction (OAQEC), which
generalizes Gottesman's formulation for traditional quantum error correcting
codes (QEC) and Poulin's for operator quantum error correction and subsystem
codes (OQEC). The construction generates hybrid classical-quantum stabilizer
codes and we formulate a theorem that fully characterizes the Pauli errors that
are correctable for a given code, generalizing the fundamental theorems for the
QEC and OQEC stabilizer formalisms. We discover hybrid versions of the
Bacon-Shor subsystem codes motivated by the formalism, and we apply the theorem
to derive a result that gives the distance of such codes. We show how some
recent hybrid subspace code constructions are captured by the formalism, and we
also indicate how it extends to qudits.
|
Guillaume Dauphinais, David W. Kribs, Michael Vasmer
|
2023-04-22T16:45:50Z
|
http://arxiv.org/abs/2304.11442v2
|
# Stabilizer Formalism for Operator Algebra Quantum Error Correction
###### Abstract
We introduce a stabilizer formalism for the general quantum error correction framework called operator algebra quantum error correction (OAQEC), which generalizes Gottesman's formulation for traditional quantum error correcting codes (QEC) and Poulin's for operator quantum error correction and subsystem codes (OQEC). The construction generates hybrid classical-quantum stabilizer codes and we formulate a theorem that fully characterizes the Pauli errors that are correctable for a given code, generalizing the fundamental theorems for the QEC and OQEC stabilizer formalisms. We discover hybrid versions of the Bacon-Shor subsystem codes motivated by the formalism, and we apply the theorem to derive a result that gives the distance of such codes. We show how some recent hybrid subspace code constructions are captured by the formalism, and we also indicate how it extends to qudits.
## 1 Introduction
Quantum error correction (QEC) is a central topic in quantum information science. Its origins as an independent field of study go back almost three decades [17, 40, 61, 60], and it now touches on almost every aspect of quantum information, ranging from theoretical to experimental investigations and in recent years as a key facet in the development of new quantum technologies [12, 15, 52, 62, 64, 65, 55]. More recently, developments in QEC included the introduction of a unified approach, called 'operator quantum error correction' (OQEC) [41, 42] that brought together traditional QEC with passive notions such as decoherence-free subspaces and noiseless subsystems, and led to the advent of subsystem codes and advances in fault-tolerant quantum computing [7, 8, 13, 33, 44, 57, 14, 30]. Subsequently, a further generalization was discovered, called 'operator algebra quantum error correction' (OAQEC) [9, 10], which additionally provided an approach for hybrid classical-quantum and infinite-dimensional error correction [11, 36, 37, 45, 35]. The following decade saw limited development of OAQEC theory, perhaps due to a paucity of initial applications.
The last few years have witnessed significant renewed interest in OAQEC, from at least three different but related directions. There have been advances in hybrid classical-quantum information coding theory and error correction [18, 28, 46, 50]. Several small quantum error correcting codes and operations necessary as fundamental components of a scalable fault-tolerant quantum computer have been implemented experimentally [1, 22, 43, 56]. And in black hole theory, recent work [29, 32, 38, 55, 6, 2, 6, 3, 2, 6] has reinterpreted the AdS/CFT correspondence using the language of quantum error correction. In particular, it was argued in [29] that the full machinery of OAQEC is necessary to capture the relevant properties of AdS/CFT.
The stabilizer formalism [16, 24, 25] introduced by Gottesman is a bedrock of QEC, providing a toolbox for the construction and characterization of correctable codes for Pauli error models. This formalism was generalized by Poulin [57] to the OQEC setting, giving a way to construct stabilizer subsystem codes and also a characterization of correctable subsystem codes for Pauli errors. The OQEC formalism further gave
an appropriate framework in which to view the well-known Bacon-Shor subsystem codes [8], which have proved to be important in fault tolerant quantum computing.
In this paper, we introduce a stabilizer formalism for OAQEC, which generalizes Gottesman's formulation for traditional QEC codes and Poulin's for OQEC subsystem codes. The codes constructed include hybrid classical-quantum stabilizer codes, and motivated by this, we discover hybrid versions of the Bacon-Shor codes. We formulate a theorem that fully characterizes the Pauli errors that are correctable for a given stabilizer code, generalizing the fundamental theorems for QEC and OQEC, and we apply the theorem to calculate the distance of hybrid Bacon-Shor codes. Further, we show how some recent hybrid subspace code constructions are captured by the formalism. We also show how it extends to the case of qudits and we present examples in that general context.
This paper is organized as follows. Section 2 includes requisite background material. In Section 3 we give the main details of the formalism, and in Section 4 we formulate and prove the error correction theorem. We present some examples and applications in Section 5, including the hybrid Bacon-Shor codes and a theorem that gives the distance of such codes. Section 6 includes the extension of the formalism to qudits, and Section 7 includes concluding remarks.
## 2 Preliminaries
Given a fixed positive integer \(n\geq 1\), let \(\mathbb{C}^{N}\), with \(N=2^{n}\), be \(N\)-dimensional complex Hilbert space with a fixed orthonormal basis \(\{|0\rangle,\ldots,|N-1\rangle\}\), which alternatively can be identified with \((\mathbb{C}^{2})^{\otimes n}\) and orthonormal basis \(\{|i_{1}\cdots i_{n}\rangle=|i_{1}\rangle\otimes\ldots\otimes|i_{n}\rangle\,: \,i_{j}=0\,\mathrm{or}\,1\}\) via dyadic expansions. Further let \(M_{N}=(M_{2})^{\otimes n}\) be the set of \(N\times N\) complex matrices, which can be viewed as the set of matrix representations of linear transformations \(\mathcal{B}(\mathbb{C}^{N})\) on \(\mathbb{C}^{N}\) with respect to the basis \(\{|k\rangle\}\), and let \(\mathcal{U}(N)\) be the unitary group inside \(M_{N}\).
We let \(\mathcal{P}_{n}\) be the usual \(n\)-qubit Pauli group; that is, the subgroup of \(\mathcal{U}(N)\) generated by \(n\)-tensors of the single qubit bit flip and phase flip Pauli operators \(X\), \(Z\), and \(iI\) (we shall write \(I_{m}\) for the identity operator on \(\mathbb{C}^{m}\), or just \(I\) when the context is clear); that is,
\[X|0\rangle=|1\rangle,\ X|1\rangle=|0\rangle\quad\text{and}\quad Z|0\rangle=|0 \rangle,\ Z|1\rangle=-|1\rangle,\]
and with the corresponding \(n\)-qubit operators defined as \(X_{1}=X\otimes(I^{\otimes(n-1)})\), \(X_{2}=I\otimes X\otimes(I^{\otimes(n-2)})\), etc.
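The following small numpy sketch (not from the paper; the helper name `embed` is ours) illustrates these single-qubit operators and their \(n\)-qubit embeddings \(X_{j}\), \(Z_{j}\).

```python
# A minimal numpy sketch of the single-qubit Pauli operators and their
# n-qubit embeddings X_j, Z_j used throughout.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit flip: X|0> = |1>, X|1> = |0>
Z = np.array([[1, 0], [0, -1]], dtype=complex)  # phase flip: Z|1> = -|1>
I2 = np.eye(2, dtype=complex)

def embed(op, j, n):
    """Return the n-qubit operator acting as `op` on qubit j (1-indexed) and as
    the identity elsewhere, e.g. X_2 = I (x) X (x) I^{(n-2)}."""
    out = np.eye(1, dtype=complex)
    for k in range(1, n + 1):
        out = np.kron(out, op if k == j else I2)
    return out

n = 3
X1, X2, Z1 = embed(X, 1, n), embed(X, 2, n), embed(Z, 1, n)
assert np.allclose(X2 @ Z1, Z1 @ X2)     # operators on different qubits commute
assert np.allclose(X1 @ Z1, -Z1 @ X1)    # X and Z on the same qubit anti-commute
```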
Given a subgroup of unitary operators \(\mathcal{G}\) inside \(\mathcal{B}(\mathcal{H})\), the set of (bounded linear) operators on a Hilbert space \(\mathcal{H}\), we let \(\mathrm{Alg}(\mathcal{G})\) denote the subalgebra of \(\mathcal{B}(\mathcal{H})\) generated by \(\mathcal{G}\); in other words, the set of complex polynomials in the elements of \(\mathcal{G}\). When \(\mathcal{H}\) is finite-dimensional, such an algebra \(\mathcal{A}\) is a (unital) \(\mathrm{C}^{*}\)-algebra [20, 54], and hence from the structure theory of such algebras, it is unitarily equivalent to a direct sum of the form
\[\mathcal{A}\cong\oplus_{k}(I_{m_{k}}\otimes M_{n_{k}})\]
for some positive integers \(m_{k},n_{k}\) with \(\sum_{k}m_{k}n_{k}=\dim\mathcal{H}\). Associated with this unitary equivalence is a decomposition of the Hilbert space \(\mathcal{H}\) as an orthogonal direct sum of subspaces each with its own tensor decomposition, \(\mathcal{H}=\oplus_{k}(A_{k}\otimes B_{k})\), in which the algebra itself decomposes as \(\mathcal{A}=\oplus_{k}(I_{A_{k}}\otimes\mathcal{B}(B_{k}))\). Moreover, the set \(\mathcal{A}^{\prime}\) (also an algebra) of all operators that commute with the algebra, the _commutant_ of \(\mathcal{A}\), is unitarily equivalent to
\[\mathcal{A}^{\prime}\cong\oplus_{k}(M_{m_{k}}\otimes I_{n_{k}}),\]
which again is determined by the structure of the Hilbert space decomposition as \(\mathcal{A}^{\prime}=\oplus_{k}(\mathcal{B}(A_{k})\otimes I_{B_{k}})\).
Open system quantum dynamics gives us _quantum channels_, which are completely positive trace-preserving linear maps \(\mathcal{E}:\mathcal{T}(\mathcal{H})\to\mathcal{T}(\mathcal{H})\) on the set of trace class operators on \(\mathcal{H}\)[54, 34, 51]. To each channel there is an associated dual map \(\mathcal{E}^{\dagger}\) defined on \(\mathcal{B}(\mathcal{H})\) via the equation: \(\mathrm{Tr}(\mathcal{E}^{\dagger}(X)\rho)=\mathrm{Tr}(X\mathcal{E}(\rho))\). (Observe that \(\mathcal{E}\) is trace-preserving exactly when \(\mathcal{E}^{\dagger}\) is unital; \(\mathcal{E}^{\dagger}(I)=I\).) Of course, in the finite-dimensional case the
sets \(\mathcal{T}(\mathcal{H})\) and \(\mathcal{B}(\mathcal{H})\) coincide, but we will still use the different notation to distinguish between the quantum information flow direction under consideration; namely, the Heisenberg and Schrodinger perspectives as discussed in the OAQEC context below.
Every channel \(\mathcal{E}\) has operator-sum representations [19], which are sets of 'Choi-Kraus' operators \(\{E_{k}\}\) inside \(\mathcal{B}(\mathcal{H})\) such that \(\mathcal{E}(\rho)=\sum_{k}E_{k}\rho E_{k}^{\dagger}\) for all \(\rho\in\mathcal{T}(\mathcal{H})\) and \(\sum_{k}E_{k}^{\dagger}E_{k}=I\). In the quantum error context, channels are often referred to as _error_ or _noise models_, and the implementation operators called _error operators_. Most importantly for the present work, the class of Pauli error models are central to quantum error correction, and are the subclass of _mixed unitary channels_ on \(\mathbb{C}^{N}\) of the form \(\mathcal{E}(\rho)=\sum_{k}p_{k}U_{k}\rho U_{k}^{\dagger}\), where \(U_{k}\in\mathcal{P}_{n}\) and the \(p_{k}\) form a classical probability distribution.
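As a minimal illustration, a Pauli error model can be simulated directly from its operator-sum form; the two-qubit example and the probabilities below are hypothetical.

```python
# A small, self-contained sketch (hypothetical probabilities) of a mixed-unitary
# Pauli error model in operator-sum form, E(rho) = sum_k p_k U_k rho U_k^dagger.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)

errors = [np.kron(I2, I2), np.kron(X, I2), np.kron(I2, Z)]  # I, X_1, Z_2
probs = [0.9, 0.05, 0.05]

def channel(rho):
    return sum(p * U @ rho @ U.conj().T for p, U in zip(probs, errors))

rho = np.outer([1, 0, 0, 0], [1, 0, 0, 0]).astype(complex)  # |00><00|
sigma = channel(rho)
assert np.isclose(np.trace(sigma).real, 1.0)                # trace preserving
```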
## 3 Hybrid Stabilizer Code Construction
### Stabilizer Subgroup and Code Subspace
Let \(\mathcal{S}\) be an abelian subgroup of \(\mathcal{P}_{n}\) that does not contain \(-I\), and suppose it has \(s\) linearly independent generators. As all elements of the Pauli group either commute or anti-commute up to some power of \(iI\), and \(\mathcal{S}\) does not contain the subgroup \(\langle iI\rangle\) generated by \(iI\), it is easy to see that the normalizer and centralizer of \(\mathcal{S}\) inside \(\mathcal{P}_{n}\) coincide;
\[\mathcal{N}(\mathcal{S})=\{g\in\mathcal{P}_{n}\ |\ g\mathcal{S}g^{-1}= \mathcal{S}\}=\{g\in\mathcal{P}_{n}\ |\ gh=hg\ \forall h\in\mathcal{S}\}=\mathcal{Z}(\mathcal{S}).\]
Let \(C=C(\mathcal{S})\) be the _stabilizer subspace_ for \(\mathcal{S}\), which is the subspace of \(\mathbb{C}^{N}\) defined as the joint eigenvalue-1 eigenspace for \(\mathcal{S}\); that is,
\[C=\mathrm{span}\{|\psi\rangle\,:\,g|\psi\rangle=|\psi\rangle\ \forall g\in \mathcal{S}\}.\]
We will let \(P\) denote the codespace projector for \(C\), the orthogonal projection of \(\mathbb{C}^{N}\) onto \(C\). It is well known that \(\dim C=2^{n-s}\) (for instance see the motivating example below). The stabilizer subspace is the base code for an OAQEC stabilizer code, which will encode further structure as described below.
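A quick numerical sanity check of this dimension count, using the standard projector formula \(P=\frac{1}{|\mathcal{S}|}\sum_{g\in\mathcal{S}}g\) (a formula recalled for qudits in Section 6); the parameters \(n=4\), \(s=2\) below are arbitrary.

```python
# Sketch (not from the paper) checking dim C = 2^{n-s} for the stabilizer
# subspace of S = <Z_1, ..., Z_s>, via the projector P = (1/|S|) sum_{g in S} g.
import numpy as np
from itertools import product

Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)

def embed(op, j, n):
    mats = [op if k == j else I2 for k in range(1, n + 1)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n, s = 4, 2
gens = [embed(Z, j, n) for j in range(1, s + 1)]      # Z_1, ..., Z_s
# Enumerate S by multiplying subsets of the commuting, order-2 generators.
group = []
for bits in product([0, 1], repeat=s):
    g = np.eye(2 ** n, dtype=complex)
    for b, gen in zip(bits, gens):
        if b:
            g = g @ gen
    group.append(g)
P = sum(group) / len(group)
assert np.isclose(np.trace(P).real, 2 ** (n - s))      # dim C = 2^{n-s} = 4
```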
### Gauge Group and Logical Operations
Let us first discuss some relevant operator theoretic notions. Given any element \(g\) of \(\mathcal{N}(\mathcal{S})=\mathcal{Z}(\mathcal{S})\), the subspace \(C\) is a reducing subspace for \(g\); that is, both the subspace and its orthogonal complement are invariant for \(g\). Indeed, if \(g\) commutes with every element of \(\mathcal{S}\), then \(gP=Pg\) as \(P\) is equal to a polynomial in the elements of \(\mathcal{S}\), which follows from the joint spectral functional calculus for those elements (an explicit formula is given in Section 6). Hence, \(gP=PgP\) and \(gP^{\perp}=P^{\perp}gP^{\perp}\) where \(P^{\perp}=I-P\), which are the invariant subspace conditions for \(C\) and \(C^{\perp}\) as operator relations. Observe that if \(C\) is a reducing subspace for every operator in an algebra \(\mathcal{A}\), then \(\mathcal{A}P\) is a subalgebra of \(\mathcal{B}(\mathbb{C}^{N})\) which is fully supported on \(C\). We will call \(\mathcal{A}P=P\mathcal{A}=P\mathcal{A}P\) the 'compression algebra' of \(\mathcal{A}\) to \(C\). In such a case, as a notational convenience to distinguish between that algebra and the corresponding algebra of operators restricted to \(C\) (so a subalgebra of \(\mathcal{B}(C)\)), we shall write \(\mathcal{A}|_{C}\) for the latter.
We now turn to the subsystem structure generated by a stabilizer subspace. Our formulation here is a little more abstract than that of [57], with an eye toward possible extensions of this formalism as noted in Section 7. Thus, suppose we can find subsets \(\mathcal{G}_{0}\) and \(\mathcal{L}_{0}\) of \(\mathcal{N}(\mathcal{S})=\mathcal{Z}(\mathcal{S})\) with the following properties:
* The compression algebra \(\mathrm{Alg}(\mathcal{G}_{0})P\) is unitarily equivalent to a full matrix algebra \(M_{2^{r}}\) for some positive integer \(r\), and likewise \(\mathrm{Alg}(\mathcal{L}_{0})P\) is unitarily equivalent to \(M_{2^{k}}\) for some positive integer \(k\). (The motivating example presented below shows how this arises through anti-commuting pairs of Pauli operators.)
* The sets \(\mathcal{G}_{0}\) and \(\mathcal{L}_{0}\) are mutually commuting; \([g,L]=0\) for all \(g\in\mathcal{G}_{0}\), \(L\in\mathcal{L}_{0}\).
* The normalizer subgroup \(\mathcal{N}(\mathcal{S})\) is generated by \(\mathcal{S}\), \(iI\), \(\mathcal{G}_{0}\), and \(\mathcal{L}_{0}\).
We will assume these sets are minimal with these properties (in particular, an element of the set cannot be obtained as a product of other elements). The group \(\mathcal{G}\) defined as
\[\mathcal{G}=\langle\mathcal{S},iI,\mathcal{G}_{0}\rangle, \tag{1}\]
is called the _gauge group_ for the code, and the group
\[\mathcal{L}=\langle\mathcal{L}_{0},iI\rangle, \tag{2}\]
is called the _logical group_. The third condition ensures that the normalizer is group isomorphic to the direct product \(\mathcal{N}(\mathcal{S})\cong\mathcal{G}\times\mathcal{L}\). Choices of such subsets can be made using well-known properties of the Pauli group, using (\(r\) and \(k\) respectively) anti-commuting pairs of operators that mutually commute, and that commute with the other set and the stabilizers.
The subsystem structure that these subgroups generate is given in the following result. This can be proved straightforwardly as a consequence of the above formulation, together with the structure theory of algebras and their commutants described in the previous section.
**Lemma 1**.: Let \(C\) be a code subspace with gauge group \(\mathcal{G}\) and logical group \(\mathcal{L}\) as chosen above. Then \(C\) is a reducing subspace for both \(\mathcal{G}\) and \(\mathcal{L}\), and \(C\) decomposes as a tensor product of subsystems \(C=A\otimes B\) with \(A\cong(\mathbb{C}^{2})^{\otimes r}\), \(B\cong(\mathbb{C}^{2})^{\otimes k}\), such that
\[\left\{\begin{array}{rcl}\mathrm{Alg}(\mathcal{G})|_{C}&=&\mathcal{B}(A) \otimes I_{B}\\ \mathrm{Alg}(\mathcal{L})|_{C}&=&I_{A}\otimes\mathcal{B}(B)\end{array}\right.,\]
where \(\mathcal{B}(A)\cong M_{2^{r}}\) and \(\mathcal{B}(B)\cong M_{2^{k}}\).
Note that here the subsystem \(B\) encodes the logical qubits of the code. Further observe that an empty gauge set \(\mathcal{G}_{0}\) leads to a standard subspace code (\(\dim A=1\)), whereas a nonempty \(\mathcal{G}_{0}\) generates subsystem structure in the code (that is, when \(\dim A>1\)).
### Normalizer Cosets and Hybrid Code Sectors
Let us now turn to the notion that generates hybrid codes. As a group theoretic observation that will be relevant below, first note that the left and right cosets of \(\mathcal{N}(\mathcal{S})\) inside \(\mathcal{P}_{n}\) coincide; that is, \(g\mathcal{N}(\mathcal{S})=\mathcal{N}(\mathcal{S})g\) for all \(g\in\mathcal{P}_{n}\). This follows from the anti-commutation relations of \(\mathcal{P}_{n}\) and the fact that \(\mathcal{N}(\mathcal{S})\) contains \(\langle iI\rangle\).
Let \(\mathcal{T}\subseteq\mathcal{P}_{n}\) be a maximal set of coset representatives for \(\mathcal{N}(\mathcal{S})\) (a so-called coset _transversal_ for \(\mathcal{N}(\mathcal{S})\) as a subgroup of \(\mathcal{P}_{n}\)), and without loss of generality assume \(I\in\mathcal{T}\) is the representative for the normalizer itself. Then the full group is equal to the (disjoint) union \(\mathcal{P}_{n}=\cup_{g\in\mathcal{T}}g\mathcal{N}(\mathcal{S})\), and the cardinality of \(\mathcal{T}\) is equal to \(|\mathcal{T}|=|\mathcal{P}_{n}|/|\mathcal{N}(\mathcal{S})|=2^{s}\) (see the motivating example for an explicit calculation).
Taking terminological motivation from other areas, such as the notion of 'charge sectors' in the study of topological codes [39, 59], we shall use the term _code sector_ to refer to the (quantum) code defined by a given \(T\in\mathcal{T}\) and the elements that define the base code: \(\mathcal{S}\), \(\mathcal{L}\), \(\mathcal{G}\). Specifically, the code sector for \(T\) is defined by the collection of operators given by the sets \(T\mathcal{S}T^{-1}\), \(T\mathcal{L}T^{-1}\), \(T\mathcal{G}T^{-1}\), and then the associated codespace \(TC\).
The key observation concerning normalizer cosets in this setting is the following, which shows that the subgroup and coset structure induces orthogonality at the Hilbert space level.
**Lemma 2**.: Let \(\mathcal{S}\) be an abelian subgroup of \(\mathcal{P}_{n}\) that does not contain \(-I\), and let \(C\) be its stabilizer subspace. If \(\mathcal{T}\) is a selection of coset representatives for \(\mathcal{N}(\mathcal{S})\) inside \(\mathcal{P}_{n}\), then for all \(g_{1},g_{2}\in\mathcal{T}\) with \(g_{1}\neq g_{2}\) we have
\[Pg_{1}^{-1}g_{2}P=0,\]
where \(P\) is the orthogonal projection of \(\mathbb{C}^{N}\) onto \(C\); in other words, \(g_{1}|\psi_{1}\rangle\) is orthogonal to \(g_{2}|\psi_{2}\rangle\) for any choice of states \(|\psi_{1}\rangle,|\psi_{2}\rangle\in C\).
Proof.: As \(g_{1}\) and \(g_{2}\) are representatives from different cosets, we have \(g:=g_{1}^{-1}g_{2}\notin\mathcal{N}(\mathcal{S})\). Since the normalizer coincides with \(\mathcal{Z}(\mathcal{S})\), and all elements of \(\mathcal{P}_{n}\) commute modulo a power of \(iI\), it follows that there is some \(E\in\mathcal{S}\) and \(z\in\mathbb{C}\), with \(|z|=1\) and \(z\neq 1\), such that
\[gE=zEg.\]
Note that \(EP=P\) from the definition of \(C\) and because \(E\in\mathcal{S}\). Further, \(EP=PE\) as \(P\) is equal to the product of polynomials in elements of \(\mathcal{S}\) from spectral theory functional calculus as discussed above. Thus we have,
\[PgP=PgEP=P(zEg)P=zPEgP=zEPgP=zPgP,\]
and so \((1-z)PgP=0\). But \(z\neq 1\), and hence \(PgP=0\) as required.
**Remark**.: We thus have stabilizer codes that generalize both the original (subspace) setting of Gottesman, which is captured with the singleton coset representative subset (\(\mathcal{T}_{0}=\{I\}\)) and abelian gauge group (empty gauge set \(\mathcal{G}_{0}=\emptyset\)), and the OQEC (subsystem) setting of Poulin, which is captured with the singleton coset representative subset (\(\mathcal{T}_{0}=\{I\}\)) and non-trivial gauge group (\(\mathcal{G}_{0}\neq\emptyset\)).
Moreover, any code defined by a subset \(\mathcal{T}_{0}\subseteq\mathcal{T}\) with \(|\mathcal{T}_{0}|>1\) will be a hybrid classical-quantum code, which will have a subspace base code (formally \(C=A\otimes B\) with \(A=\mathbb{C}\)) when the gauge group is abelian and a subsystem base code (\(C=A\otimes B\) with \(\dim A>1\)) otherwise. The size of the subset \(\mathcal{T}_{0}\) determines the number of 'classical addresses' associated with the hybrid code. For instance, by Lemma 2, any \(g\notin\mathcal{N}(\mathcal{S})\) gives a coset \(g\mathcal{N}(\mathcal{S})\) for which the subspace \(gC\) is orthogonal to \(C\), and hence it defines a 2-bit, \(k\)-qubit hybrid code (which may have further subsystem structure when the gauge group is non-abelian).
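A small numerical illustration of this orthogonality (a toy case with \(\mathcal{S}=\langle Z_{1},Z_{2}\rangle\) on three qubits; not from the paper):

```python
# Numerical check of Lemma 2: for S = <Z_1, Z_2> on n = 3 qubits, g = X_1 lies
# outside N(S), and the subspace X_1 C is orthogonal to C, i.e. P g P = 0.
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2, dtype=complex)
kron = lambda *ops: reduce(np.kron, ops)

Z1, Z2 = kron(Z, I2, I2), kron(I2, Z, I2)
P = (np.eye(8) + Z1) @ (np.eye(8) + Z2) / 4   # projector onto C(S)
g = kron(X, I2, I2)                            # X_1, which anti-commutes with Z_1
assert np.allclose(P @ g @ P, 0)               # g C is orthogonal to C
```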
In Sections 5 and 6 we will give examples of this general hybrid code construction and discuss them in detail. We next turn to an analysis of the possible errors that a given hybrid stabilizer code can protect against.
## 4 Error Correction Theorem
The code construction above thus defines codes \(C=C(\mathcal{S},\mathcal{G}_{0},\mathcal{L}_{0},\mathcal{T}_{0})\), determined by, respectively, choices of stabilizer subgroup, gauge and logical operators, and subset of coset representatives. We shall characterize what sets of Pauli errors are correctable for a given code, and in doing so, we establish a generalization of the fundamental theorems of [24, 25] and [57] to this setting.
We shall first recall the basic notions and relevant results from OAQEC [9, 10], and then specify to the code framework formulated above. The starting point is the basic definition of OAQEC codes, which is most conveniently introduced in the Heisenberg picture.
**Definition 1**.: An algebra \(\mathcal{A}\subseteq\mathcal{B}(\mathcal{H})\) of operators on \(\mathcal{H}\) is _correctable_ for an error model \(\mathcal{E}\) if there exists a channel \(\mathcal{R}\) such that \(\mathcal{A}\) is conserved by \(\mathcal{R}\circ\mathcal{E}\) on states in \(Q\mathcal{H}\) where \(Q\) is the unit projection of \(\mathcal{A}\); that is,
\[Q(\mathcal{R}\circ\mathcal{E})^{\dagger}(X)Q=X\quad\forall X\in\mathcal{A}. \tag{3}\]
Given the unitary equivalence form for an algebra \(\mathcal{A}\cong\oplus_{i}(I_{m_{i}}\otimes M_{n_{i}})\), the unit element of \(\mathcal{A}\) is the projection \(Q\) in the algebra corresponding under the equivalence to \(\oplus_{i}I_{m_{i}}\otimes I_{n_{i}}\). There is a more general notion of OAQEC code considered in [9, 10], wherein the unit element of \(\mathcal{A}\) is replaced by an arbitrary projection on the Hilbert space, and one can then consider correction with respect to states supported on the corresponding range subspace of the projection. But the notion of correctable we consider here allows us to unambiguously discuss 'correction of an algebra', and is sufficient for our goal to extend the stabilizer formalism to this setting.
The corresponding Schrodinger picture description is given as follows: \(\mathcal{A}\) is correctable for \(\mathcal{E}\) if and only if there exists a channel \(\mathcal{R}\) such that for any density operator \(\rho=\sum_{i}\alpha_{i}(\tau_{i}\otimes\rho_{i})\) with \(\tau_{i}\in\mathcal{T}(A_{i})\), \(\rho_{i}\in\mathcal{T}(B_{i})\), and nonnegative scalars \(\sum_{i}\alpha_{i}=1\), there are density operators \(\tau_{i}^{\prime}\in\mathcal{T}(A_{i})\) for which
\[(\mathcal{R}\circ\mathcal{E})(\rho)=\sum_{i}\alpha_{i}\mathcal{R}\big{(} \mathcal{E}\big{(}\tau_{i}\otimes\rho_{i}\big{)}\big{)}=\sum_{i}\alpha_{i}( \tau_{i}^{\prime}\otimes\rho_{i}). \tag{4}\]
From this perspective, one can see that each of the subsystems \(B_{i}\) (with \(\dim B_{i}>1\)) can be used individually to encode quantum information that can be recovered. Moreover, an extra feature of such a code is that an arbitrary mixture of encoded states, one for each subsystem, can be simultaneously corrected by the same correction operation.
To generalize the main error correction theorem from previous stabilizer formalism settings, we need a description in terms of error operators. The following result from [9, 10] gives such a description, and below we formulate it in a style that we will use.
**Theorem 1**.: Let \(\mathcal{A}\) be a subalgebra of \(\mathcal{B}(\mathcal{H})\) with unit projection \(Q\). The following statements are equivalent:
1. \(\mathcal{A}\) is correctable for \(\mathcal{E}(\rho)=\sum_{k}E_{k}\rho E_{k}^{\dagger}\).
2. \([QE_{k}^{\dagger}E_{l}Q,X]=0\) for all \(X\in\mathcal{A}\) and all \(k,l\).
It follows from Theorem 1, using the structure of finite dimensional algebras and their commutants discussed above, that there is a correction operation \(\mathcal{R}\) for which Eq. (4) is satisfied if and only if for all \(k,l\) there are operators \(X_{kli}\in\mathcal{B}(A_{i})\) such that
\[QE_{k}^{\dagger}E_{l}Q=\sum_{i}X_{kli}\otimes I_{B_{i}}, \tag{5}\]
where here the operators \(X_{kli}\otimes I_{B_{i}}\) are understood to act on \(A_{i}\otimes B_{i}\), and so the sum (when there is more than one term) is an orthogonal direct sum of operators. The case of a sum with a single term captures the well-known Knill-Laflamme error correction conditions [40] when \(\dim A_{1}=1\) (and so the \(X_{kl1}\) are complex scalars), and the OQEC testable conditions [41, 42] when \(\dim A_{1}>1\).
Now let us specify to our setting. Further notation will be introduced in the proof below, but we will note here that the algebras associated with the code constructions of the previous section in their unitary equivalence form satisfy \(m_{i}=m_{j}\) and \(n_{i}=n_{j}\) for any pair of indices \(i,j\). Moreover, by saying a code \(C=C(\mathcal{S},\mathcal{G}_{0},\mathcal{L}_{0},\mathcal{T}_{0})\) is correctable, we mean the algebra determined by it, as in the previous section and the discussion above, is OAQEC-correctable.
**Theorem 2**.: A code \(C=C(\mathcal{S},\mathcal{G}_{0},\mathcal{L}_{0},\mathcal{T}_{0})\), with \(\mathcal{T}_{0}=\{g_{i}\}\), is correctable for a set of operators \(\{E_{k}\}\subseteq\mathcal{P}_{n}\) if and only if for all \(k,l\),
\[E_{k}^{\dagger}E_{l}\notin\Big{(}\mathcal{N}(\mathcal{S})\setminus\mathcal{G }\Big{)}\bigcup\Big{(}\bigcup_{i\neq j}g_{i}\mathcal{N}(\mathcal{S})g_{j}^{-1 }\Big{)}. \tag{6}\]
Proof.: First note that for any \(g\in\mathcal{P}_{n}\), we have equality of the following operator sets:
\[\mathcal{N}(\mathcal{S})\setminus\mathcal{G}=g\Big{(}\mathcal{N}(\mathcal{S}) \setminus\mathcal{G}\Big{)}g^{-1},\]
which follows from basic group properties, the anti-commutation relations, and the fact that \(\mathcal{G}\) (and \(\mathcal{N}(\mathcal{S})\)) includes the scalar operators. Next let us establish some notation. Recall that \(P\) is the projection onto \(C\), and for each \(i\), let \(P_{i}=g_{i}Pg_{i}^{-1}\). This is the projection onto the subspace \(g_{i}C:=A_{i}\otimes B_{i}\), which has subsystem tensor structure \(A_{i},B_{i}\) induced by that of \(C=A_{1}\otimes B_{1}\) and the unitary action of \(g_{i}\). So \(g_{i}(|a\rangle|b\rangle)\) will define an orthonormal basis for the new subspace from a basis \(|a\rangle|b\rangle\) for \(C\), which gives a corresponding identification of operators in \(g_{i}(\mathcal{B}(A)\otimes\mathcal{B}(B))g_{i}^{-1}\) with operators in \(\mathcal{B}(A_{i})\otimes\mathcal{B}(B_{i})\); in particular, for any \(X_{A}\in\mathcal{B}(A)\) this maps \(X_{A}\otimes I_{B}\) to \(X_{A_{i}}\otimes I_{B_{i}}\) for some \(X_{A_{i}}\in\mathcal{B}(A_{i})\).
By Lemma 2, the projections \(P_{i}\) project onto mutually orthogonal subspaces and hence we define the projection \(Q=\sum_{i}P_{i}\) to be the (orthogonal direct) sum of the \(P_{i}\), and with \(P_{1}=P\). The algebra in the background to be corrected and defined by the code is \(\mathcal{A}=\oplus_{i}(I_{A_{i}}\otimes\mathcal{B}(B_{i}))\), which has \(Q\) as its unit projection, and the error correction conditions we make use of are those of Eq. (5).
Throughout the proof we will let \(E:=E_{k}^{\dagger}E_{l}\) for a fixed pair \(k,l\). We shall first prove the 'if' direction of the result. If \(E\) is not in any of the sets in Eq. (6), then in particular for each \(i\), we have
\(E\notin g_{i}(\mathcal{N}(\mathcal{S})\setminus\mathcal{G})g_{i}^{-1}\) and so \(g_{i}^{-1}Eg_{i}\notin\mathcal{N}(\mathcal{S})\setminus\mathcal{G}\). Thus by the OQEC special case of the theorem above (or the Knill-Laflamme theorem when \(\mathcal{G}_{0}=\emptyset\)), we have for some operator \(X_{A}\in\mathcal{B}(A)\),
\[P(g_{i}^{-1}Eg_{i})P=X_{A}\otimes I_{B},\]
and hence for some operator \(X_{A_{i}}\in\mathcal{B}(A_{i})\),
\[P_{i}EP_{i}=g_{i}(Pg_{i}^{-1}Eg_{i}P)g_{i}^{-1}=g_{i}(X_{A}\otimes I_{B})g_{i} ^{-1}=X_{A_{i}}\otimes I_{B_{i}},\]
with the first and last equalities following from the definition of \(P_{i}\) and the induced tensor structure on \(g_{i}C\) from the decomposition \(C=A\otimes B\) and the action of \(g_{i}\) discussed above.
For the off-diagonal blocks, choose \(i\neq j\) and assume \(E\notin g_{i}\mathcal{N}(\mathcal{S})g_{j}^{-1}\). Then we have \(g_{i}^{-1}Eg_{j}\notin\mathcal{N}(\mathcal{S})\), and we claim that \(Pg_{i}^{-1}Eg_{j}P=0\) through what has become a standard stabilizer formalism type argument. Indeed, in general if \(F\notin\mathcal{N}(\mathcal{S})=\mathcal{Z}(\mathcal{S})\), then there is some \(g\in\mathcal{S}\) and \(z\in\mathbb{C}\) with \(|z|=1\) and \(z\neq 1\) such that \(Fg=zgF\). Also, as \(g\in\mathcal{S}\) and from the construction of \(C\), we have \(P=gP=Pg\) (the latter following from spectral theory since \(P\) is a polynomial in the elements of the abelian subgroup \(\mathcal{S}\)). Hence,
\[PFP=PFgP=P(zgF)P=z(PgFP)=zPFP,\]
and so \(PFP=0\). Thus we have, for all \(i\neq j\),
\[P_{i}EP_{j}=g_{i}(Pg_{i}^{-1}Eg_{j}P)g_{j}^{-1}=0.\]
It follows then that \(QEQ=\sum_{i,j}P_{i}EP_{j}=\sum_{i}X_{A_{i}}\otimes I_{B_{i}}\), and hence each of the operators \(QE_{k}^{\dagger}E_{l}Q\) satisfy the form given in Eq. (5). Thus, \(C\) is correctable for the set of error operators \(\{E_{k}\}\).
Conversely, for the 'only if' direction of the proof, suppose \(E\) satisfies \(QEQ=\sum_{i}X_{A_{i}}\otimes I_{B_{i}}\) for some operators \(X_{A_{i}}\in\mathcal{B}(A_{i})\). Then for a fixed \(i\), we have
\[X_{A_{i}}\otimes I_{B_{i}}=P_{i}EP_{i}=g_{i}(Pg_{i}^{-1}Eg_{i}P)g_{i}^{-1},\]
and hence
\[Pg_{i}^{-1}Eg_{i}P=g_{i}^{-1}(X_{A_{i}}\otimes I_{B_{i}})g_{i}=X_{A_{1}} \otimes I_{B_{1}},\]
for some \(X_{A_{1}}\in\mathcal{B}(A_{1})\). It follows from OQEC (and QEC when \(\mathcal{G}_{0}=\emptyset\)) that \(g_{i}^{-1}Eg_{i}\notin\mathcal{N}(\mathcal{S})\setminus\mathcal{G}\), and so \(E\notin g_{i}(\mathcal{N}(\mathcal{S})\setminus\mathcal{G})g_{i}^{-1}=\mathcal{N}(\mathcal{S})\setminus\mathcal{G}\).
Finally, fix a pair \(i\neq j\), and observe from the OAQEC correctable condition in Eq. (5) that the \(i,j\) off-diagonal block in the block diagonal decomposition determined by \(Q\) must be zero; that is, \(P_{i}EP_{j}=0\). Thus we have,
\[g_{i}Pg_{i}^{-1}Eg_{j}Pg_{j}^{-1}=P_{i}EP_{j}=0,\]
and since \(g_{i},g_{j}\) are unitary, in fact we have \(Pg_{i}^{-1}Eg_{j}P=0\). We want to conclude that \(g_{i}^{-1}Eg_{j}\notin\mathcal{N}(\mathcal{S})\). Suppose instead we had \(F:=g_{i}^{-1}Eg_{j}\in\mathcal{N}(\mathcal{S})=\mathcal{Z}(\mathcal{S})\). Then \(FP=PF\), from spectral theory and the construction of \(P\), and so \(C\), the range subspace of \(P\), is a reducing subspace for \(F\). Hence, \(P^{\perp}FP=P^{\perp}PF=0\), and so \(FP=P^{\perp}FP+PFP=0\). But \(F\) is a unitary operator, and so \(F\) restricted to \(C\) must be a norm-preserving map. This contradicts the fact that \(FP=0\), and thus we must have \(g_{i}^{-1}Eg_{j}\notin\mathcal{N}(\mathcal{S})\) as required, and this completes the proof.
**Remark**.: Conceptually, not belonging to the first set of the theorem statement ensures the individual codes (which are OQEC subsystem codes when the subspace has a tensor decomposition) are correctable, with the needed orthogonality between the multiple codes guaranteed by the choice of coset representatives. Not belonging to the operator sets in the second union encapsulates the joint hybrid classical-quantum correctable code conditions. A larger subset of coset representatives corresponds to more sectors and a larger hybrid code. In particular, more sectors generally means larger operator sets in the theorem statement, which makes it harder for error products to avoid these sets and hence leads to smaller sets of correctable errors. These notions will be explored more in the examples below.
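In practice, the membership tests required by Theorem 2 are conveniently carried out in the standard binary-symplectic representation of the Pauli group; the short sketch below (helper names are ours, phases are ignored) shows the commutation test and the resulting normalizer membership test.

```python
# A small sketch (phases ignored) of the binary-symplectic representation: an
# n-qubit Pauli is encoded, up to phase, as a length-2n bit vector (x|z), and
# two Paulis commute iff the symplectic form x1.z2 + x2.z1 vanishes mod 2.
import numpy as np

def pauli_vec(s):
    """Encode a Pauli string such as 'XIZY' as a binary (x|z) vector."""
    x = [1 if c in "XY" else 0 for c in s]
    z = [1 if c in "ZY" else 0 for c in s]
    return np.array(x + z, dtype=int)

def commutes(p, q):
    n = len(p) // 2
    return (p[:n] @ q[n:] + q[:n] @ p[n:]) % 2 == 0

def in_normalizer(p, stabilizer_strings):
    """p lies in N(S) = Z(S) iff it commutes with every stabilizer generator."""
    return all(commutes(p, pauli_vec(s)) for s in stabilizer_strings)

# e.g. for S = <ZII, IZI> on 3 qubits, X_3 normalizes S but X_1 does not:
S = ["ZII", "IZI"]
assert in_normalizer(pauli_vec("IIX"), S)
assert not in_normalizer(pauli_vec("XII"), S)
```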
## 5 Examples and Applications
### Motivating Example
We can build upon the motivating example from the original stabilizer formalism as follows. The following operators are defined on \(n\)-qubit Hilbert space.
* Let \(s\leq n\) be a fixed positive integer and let \(\mathcal{S}=\langle Z_{1},\ldots,Z_{s}\rangle\); the subgroup of \(\mathcal{P}_{n}\) generated by phase flip operators on the first \(s\) qubits. Then \[C=C(\mathcal{S})=\mathrm{span}\big{\{}|\underbrace{0\cdots 0}_{s}i_{1}\cdots i_{n-s} \rangle\,:\,i_{j}=0,1\big{\}},\] and \(\dim C=2^{n-s}\) so that \(C\) can encode \(n-s\) qubits.
* Let \(r\) be a fixed integer with \(0\leq r\leq n-s\), and let \(\mathcal{G}_{0}\) be the set of \(r\) pairs of Pauli operators acting on qubits \(s+1\) to \(r+s\): \[\mathcal{G}_{0}=\big{\{}X_{i},Z_{i}\,:\,s+1\leq i\leq r+s\big{\}}.\] Then the gauge group \(\mathcal{G}\) is generated by \(\mathcal{S}\), \(iI\), and \(\mathcal{G}_{0}\), and includes the full Pauli subgroup of operators acting non-trivially on the \(r\) 'gauge qubits'.
* Let \(k=n-s-r\), and let \(\mathcal{L}_{0}\) be the set of \(k\) pairs of Pauli operators acting on qubits \(r+s+1\) to \(n\): \[\mathcal{L}_{0}=\big{\{}X_{i},Z_{i}\,:\,r+s+1\leq i\leq n\big{\}}.\] Then the logical group \(\mathcal{L}\) is the group generated by \(\mathcal{L}_{0}\) and \(iI\), and includes the full Pauli subgroup of operators acting non-trivially on the \(k\) 'logical qubits'.
* The normalizer \(\mathcal{N}(\mathcal{S})=\mathcal{Z}(\mathcal{S})\) for \(\mathcal{S}\) inside \(\mathcal{P}_{n}\) in this case is given by the following set of operators: \[\mathcal{N}(\mathcal{S})=\Big{\{}i^{c}\,\cdot\,Z_{1}^{b_{1}}\cdots Z_{s}^{b_ {s}}\,\cdot\,X_{s+1}^{a_{s+1}}Z_{s+1}^{b_{s+1}}\cdots X_{n}^{a_{n}}Z_{n}^{b_{ n}}\,:\,0\leq c\leq 3,\ 0\leq a_{j},b_{j}\leq 1\Big{\}}.\]
The size of the normalizer here is thus \(|\mathcal{N}(\mathcal{S})|=4\cdot 2^{s}\cdot 4^{n-s}=2^{2+2n-s}\). The full Pauli group \(\mathcal{P}_{n}\) has \(4^{n+1}\) elements (as every element can be uniquely written in the form \(i^{c}X_{1}^{a_{1}}Z_{1}^{b_{1}}\cdots X_{n}^{a_{n}}Z_{n}^{b_{n}}\)), and hence the number of normalizer cosets is given by,
\[|\mathcal{P}_{n}|/|\mathcal{N}(\mathcal{S})|=2^{2+2n-(2+2n-s)}=2^{s}.\]
Observe that each of the operators \(X_{j}\), \(1\leq j\leq s\), do not belong to \(\mathcal{N}(\mathcal{S})\). Hence we can take as a set of canonical coset representatives, the transversal given by the following \(2^{s}\)-element set:
\[\mathcal{T}=\big{\{}X_{1}^{a_{1}}\cdots X_{s}^{a_{s}}\,:\,0\leq a_{j}\leq 1 \big{\}}.\]
As a proviso, however, we note that there are many other choices of coset representatives, which could have different algebraic properties as it relates to the code generators. As a simple example, note that \(X_{i}\), with \(1\leq i\leq s\), and \(X_{i}N\), for some fixed \(N\in\mathcal{N}(\mathcal{S})\), generate the same coset, and so in particular a transversal need not consist entirely of mutually commuting operators, or even operators that commute with the gauge and logical operators.
Regarding the Hilbert space decomposition generated by this example, notice that the gauge and logical operators induce a tensor decomposition for the base code subspace \(C=A\otimes B\), where \(A\cong(\mathbb{C}^{2})^{\otimes r}\), \(B\cong(\mathbb{C}^{2})^{\otimes k}\), and this tensor structure naturally translates to any of the subspaces \(TC\), for \(T\in\mathcal{T}\), as \(T\) is unitary. (Recall here that \(n-s=r+k\), and the base code encodes \(k\) logical and \(r\) gauge qubits.) The
subspace \(C\) is easily seen to be invariant for each of the gauge and logical operators, and evidently for every \(A\in\mathcal{G}_{0}\) and \(B\in\mathcal{L}_{0}\), there are operators \(A_{1}\in\mathcal{B}(A)\) and \(B_{1}\in\mathcal{B}(B)\) such that
\[\left\{\begin{array}{rcl}A|_{C}&=&A_{1}\otimes I_{B}\\ B|_{C}&=&I_{A}\otimes B_{1}\end{array}\right.,\]
which is all true in general by Lemma 1. Given a (non-trivial) subset of coset representatives \(\mathcal{T}_{0}\subseteq\mathcal{T}\), the subspaces \(TC\), \(T\in\mathcal{T}_{0}\), are mutually orthogonal (in general this is true by Lemma 2) and the corresponding subspace for the hybrid code is \(C_{\mathcal{T}_{0}}=\oplus_{T\in\mathcal{T}_{0}}TC\).
Regarding correctable sets of errors for this code, Theorem 2 gives a full characterization of the possible correctable errors for any given coset subset \(\mathcal{T}_{0}\). As a simple example, consider the case with the two operators \(\mathcal{T}_{0}=\{I,X_{1}\}\) from the transversal \(\mathcal{T}\) above. Here there are two operator sets that the error operator products \(E_{k}^{\dagger}E_{l}\) cannot belong to: (i) \(\mathcal{N}(\mathcal{S})\setminus\mathcal{G}\); and (ii) \(X_{1}\mathcal{N}(\mathcal{S})=\mathcal{N}(\mathcal{S})X_{1}\). From the normalizer and gauge structures above, the first set consists of all elements of the normalizer that act non-trivially on the logical qubits \(r+s+1\) to \(n\) (equivalently, the normalizer elements that do not belong to the gauge group). This set encapsulates the (quantum) error correction conditions for the two quantum codes defined by this code. The second set is simply all elements of the normalizer multiplied by \(X_{1}\), and it corresponds to the cross terms that govern whether the code is hybrid correctable.
As an example of a set of correctable errors for this code, one could take a subset of \(\{I,X_{2},\ldots,X_{s}\}\). Any pairwise products of these operators do not belong to the two sets above and hence are correctable. (In fact, in general, any set of coset representatives not used to define the hybrid code will be correctable errors.) Sets of errors are not correctable by the theorem if any product of two of them belongs to either of the two sets. Thus, any error operator product of the form \(X_{1}N\), with \(N\in\mathcal{N}(\mathcal{S})\), would disrupt any hybrid correction for the error model, whereas any product belonging to the first set would prevent the individual quantum codes from being corrected.
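The sketch below (hypothetical small parameters \(n=4\), \(s=2\), \(r=1\), \(k=1\), with \(\mathcal{T}_{0}=\{I,X_{1}\}\); all helper names are ours, and phases are ignored, which is harmless since the sets in Theorem 2 are closed under phases) mechanically checks these two claims.

```python
# Sketch of checking the Theorem 2 conditions for a small instance of the
# motivating example: {I, X_2} passes, while {I, X_1} breaks hybrid correction.
import numpy as np
from itertools import combinations

def pauli_vec(s):
    """Encode a Pauli string as a binary (x|z) vector (phase ignored)."""
    return np.array([1 if c in "XY" else 0 for c in s]
                    + [1 if c in "ZY" else 0 for c in s], dtype=int)

def commutes(p, q):
    n = len(p) // 2
    return (p[:n] @ q[n:] + q[:n] @ p[n:]) % 2 == 0

def rank_gf2(rows):
    """Rank of a list of binary vectors over GF(2), by Gaussian elimination."""
    m, r = [row.copy() for row in rows], 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m = [row if i == r or not row[c] else (row + m[r]) % 2 for i, row in enumerate(m)]
        r += 1
    return r

stabilizers = [pauli_vec(s) for s in ("ZIII", "IZII")]                  # S = <Z_1, Z_2>
gauge_gens = stabilizers + [pauli_vec(s) for s in ("IIXI", "IIZI")]     # plus G_0 on qubit 3
reps = [pauli_vec(s) for s in ("IIII", "XIII")]                         # T_0 = {I, X_1}

def in_normalizer(p):
    return all(commutes(p, s) for s in stabilizers)

def in_gauge(p):
    return rank_gf2(gauge_gens + [p]) == rank_gf2(gauge_gens)

def correctable(error_strings):
    errors = [pauli_vec(e) for e in error_strings]
    for e1, e2 in combinations(errors, 2):
        prod = (e1 + e2) % 2                          # E_1^dagger E_2, up to phase
        if in_normalizer(prod) and not in_gauge(prod):
            return False                              # falls in N(S) \ G
        for t1, t2 in combinations(reps, 2):
            if in_normalizer((prod + t1 + t2) % 2):
                return False                          # falls in g_i N(S) g_j^{-1}
    return True

assert correctable(["IIII", "IXII"])                  # {I, X_2} is correctable
assert not correctable(["IIII", "XIII"])              # {I, X_1} fails the hybrid condition
```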
### Hybrid Subspace Codes
When the gauge group is abelian (\(\mathcal{G}_{0}=\emptyset\)), the codes constructed above have no subsystem structure and are subspaces. Further, when additionally the coset representative set is non-trivial (\(\{I\}\subsetneq\mathcal{T}_{0}\)), the codes generated by the formalism are 'hybrid subspace codes'. From the OAQEC perspective, hybrid subspace codes are those associated with algebras \(\mathcal{A}\) that are unitarily equivalent to a direct sum of full matrix algebras; i.e., of the form \(\mathcal{A}\cong\oplus_{k=1}^{M}M_{n_{k}}\) for some positive integers \(n_{k}\) (and \(|\mathcal{T}_{0}|=M\) in our notation above). These are precisely the algebras with an abelian commutant, \(\mathcal{A}^{\prime}\cong\oplus_{k=1}^{M}\mathbb{C}I_{n_{k}}\). Each summand thus can be used to encode quantum information (when \(n_{k}>1\)) as a traditional quantum (subspace) code, and overall the collection of codes defined by \(\mathcal{A}\) make up a hybrid subspace code that can be corrected for error sets given by Theorem 2.
The testable conditions of Eq. (5) take on a particularly transparent form in this case. As before, let \(Q=\sum_{i}P_{i}\) be the unit projection of \(\mathcal{A}\), with \(P_{i}\) the projection onto the \(i\)th matrix block of \(\mathcal{A}\). Then the code is correctable for a set of error operators \(\{E_{k}\}\) if and only if there are complex scalars \(\lambda_{kl}^{(i)}\) such that for all \(k,l\),
\[QE_{k}^{\dagger}E_{l}Q=\sum_{i}\lambda_{kl}^{(i)}P_{i}. \tag{7}\]
These conditions can be cast into vector state form (as discussed in [48]) as follows: Given \(1\leq i\leq M\), choose orthonormal states \(\{|\psi_{i,j}\rangle\}_{j=1}^{n_{i}}\) such that \(P_{i}=\sum_{j}|\psi_{ij}\rangle\!\langle\psi_{ij}|\). We then have
\[\langle\psi_{i_{1}j_{1}}|E_{k}^{\dagger}E_{l}|\psi_{i_{2}j_{2}}\rangle=\langle \psi_{i_{1}j_{1}}|QE_{k}^{\dagger}E_{l}Q|\psi_{i_{2}j_{2}}\rangle=\sum_{i} \lambda_{kl}^{(i)}\langle\psi_{i_{1}j_{1}}|P_{i}|\psi_{i_{2}j_{2}}\rangle= \lambda_{kl}^{(i_{1})}\delta_{i_{1}i_{2}}\delta_{j_{1}j_{2}}.\]
One can reverse this argument to observe that Eq. (7) is equivalent to the orthogonality conditions
\[\langle\psi_{i_{1}j_{1}}|E_{k}^{\dagger}E_{l}|\psi_{i_{2}j_{2}}\rangle=\lambda _{kl}^{(i_{1})}\delta_{i_{1}i_{2}}\delta_{j_{1}j_{2}}, \tag{8}\]
for any choice of orthonormal basis states \(|\psi_{ij}\rangle\) for the range subspaces of the \(P_{i}\).
Recently, a distinguished special case of hybrid subspace codes was considered in [28]. In OAQEC language (with notation used in [28]), the algebras of focus there are of the form \(\mathcal{A}\cong\oplus_{\nu=1}^{M}M_{K}\); i.e., the direct sum of \(M\) copies of \(K\times K\) complex matrices. The full code subspace is thus an orthogonal direct sum \(\oplus_{\nu=1}^{M}C^{(\nu)}\) of \(K\)-dimensional subspaces, and there are unitary 'translation' operators that connect the individual code subspaces, \(C^{(\nu)}=T^{(\nu)}C^{(1)}\). Interestingly, the orthogonality conditions of Eq. (8) were independently discovered in [28] for this subclass of hybrid subspace codes. The codes constructed in [28] are captured by the stabilizer formalism presented above; in particular, they are special cases of hybrid codes with (in our notation above) trivial gauge generator group (\(\mathcal{G}_{0}=\emptyset\)) and non-trivial coset representatives (\(\{I\}\subsetneq\mathcal{T}_{0}\)). As an illustration, let us consider one of the codes presented there.
The following describes a single qubit hybrid code on 7-qubit space as presented in Table (34) of [28]. The first six rows are the stabilizer subgroup generators, the next two are logical operators on the base code space \(C^{(1)}\), and the final row is a translation operator which we shall denote by \(T\).
\begin{tabular}{c|c c c c c c c} \hline \hline \(S_{1}\) & \(X\) & \(I\) & \(I\) & \(Z\) & \(Y\) & \(Y\) & \(Z\) \\ \(S_{2}\) & \(Z\) & \(I\) & \(I\) & \(I\) & \(I\) & \(I\) & \(X\) \\ \(S_{3}\) & \(I\) & \(X\) & \(I\) & \(X\) & \(Z\) & \(I\) & \(I\) \\ \(S_{4}\) & \(I\) & \(Z\) & \(I\) & \(Z\) & \(I\) & \(X\) & \(X\) \\ \(S_{5}\) & \(I\) & \(I\) & \(X\) & \(X\) & \(I\) & \(Z\) & \(I\) \\ \(S_{6}\) & \(I\) & \(I\) & \(Z\) & \(Z\) & \(X\) & \(I\) & \(X\) \\ \hline \(\overline{X}\) & \(I\) & \(I\) & \(I\) & \(X\) & \(Z\) & \(Z\) & \(X\) \\ \(\overline{Z}\) & \(I\) & \(I\) & \(I\) & \(Z\) & \(X\) & \(X\) & \(I\) \\ \hline \(T\) & \(I\) & \(I\) & \(I\) & \(I\) & \(X\) & \(Y\) & \(Y\) \\ \hline \hline \end{tabular}
In our notation, the parameters for this example are \(n=7\), \(s=6\), and \(k=1\) (with \(r=0\) as there is no subsystem structure here). The choice of coset representatives given by the table is \(\mathcal{T}_{0}=\{I,T\}\); and indeed, one can see that \(T\) does not commute with \(S_{k}\) for \(k=2,3,5,6\), so \(T\mathcal{N}(\mathcal{S})\neq\mathcal{N}(\mathcal{S})\) is a different coset than that defined by the identity operator. Observe that there are \(2^{s}=64\) cosets for \(\mathcal{N}(\mathcal{S})\) in this case, and so there are several other potential coset representative subset choices. As a simple example, we could choose \(X_{1}=XIIIIII\), which does not commute with \(S_{2}\), and so defines a different coset than the identity operator and that defined by \(T\) (as \(TX_{1}\notin\mathcal{N}(\mathcal{S})\) since it does not commute with \(S_{3}\)); that is, \(\mathcal{N}(\mathcal{S})\neq X_{1}\mathcal{N}(\mathcal{S})\neq T\mathcal{N}( \mathcal{S})\).
Additionally, Theorem 2 gives a characterization of sets of possible Pauli errors that can be corrected by such codes. Correctable errors can be viewed through this lens; for instance, while the operator \(X_{1}\) can act as a new coset representative for this hybrid code, it is also a correctable error for the code, as can be seen through an application of Theorem 2 and the group theoretic conditions displayed there. The relevant operator sets given by the theorem are (recalling that there are no noncommutative gauge operators here): (i) \(\mathcal{N}(\mathcal{S})\setminus\langle\mathcal{S},iI\rangle\); and, (ii) \(T\mathcal{N}(\mathcal{S})=\mathcal{N}(\mathcal{S})T\). The fact that \(X_{1}\) does not belong to either of these sets shows the set \(\{I,X_{1}\}\) is a correctable set of errors for the code.
Note that Theorem 2 also tells us what sets of errors are not correctable for this hybrid code. As another simple example, consider the error set \(\{I,T\}\). Observe that \(TI=T\in T\mathcal{N}(\mathcal{S})\), and so the error set fails the hybrid correctable condition. (In fact, as noted in the previous example, the same is true for any error set consisting of transversal operators.) This error set is interesting in the sense that, while it does not satisfy the hybrid correctable condition, each of the individual quantum subspace codes are correctable for the error set, which follows since \(T\notin\mathcal{N}(\mathcal{S})\supseteq\mathcal{N}(\mathcal{S})\setminus \langle\mathcal{S},iI\rangle\).
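These coset statements are easy to verify mechanically; the following self-contained check (phases ignored) uses the fact that two Pauli strings commute exactly when the number of positions where both act non-trivially with different letters is even.

```python
# Quick check of the coset claims above for the 7-qubit hybrid code.
def commutes(p, q):
    """Pauli strings commute iff they differ on an even number of non-identity sites."""
    return sum(a != "I" and b != "I" and a != b for a, b in zip(p, q)) % 2 == 0

S = ["XIIZYYZ", "ZIIIIIX", "IXIXZII", "IZIZIXX", "IIXXIZI", "IIZZXIX"]
T = "IIIIXYY"
X1 = "XIIIIII"

# T anti-commutes with S_2, S_3, S_5, S_6, so T N(S) is a coset distinct from N(S):
assert [commutes(T, s) for s in S] == [True, False, False, True, False, False]
# X_1 anti-commutes with S_2, and T*X_1 anti-commutes with S_3 (commutation signs
# multiply), so N(S), X_1 N(S) and T N(S) are three distinct cosets:
assert not commutes(X1, S[1])
assert commutes(X1, S[2]) and not commutes(T, S[2])
```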
### Hybrid Bacon-Shor Code
The (two-dimensional) Bacon-Shor code [8, 47] is a subsystem code defined on an \(\ell\times\ell\) grid of qubits, with gauge group \(\mathcal{G}\) generated by \(\mathcal{G}_{0}\) and \(iI\), where
\[\mathcal{G}_{0}=\{X_{(i,j)}X_{(i,j+1)}:1\leq i\leq\ell,1\leq j<\ell\}\cup\{Z_{ (i,j)}Z_{(i+1,j)}:1\leq i<\ell,1\leq j\leq\ell\}. \tag{9}\]
We use the notation \(X_{(i,j)}\) to denote a Pauli \(X\) operator acting on the qubit at coordinate \((i,j)\) (and analogously for Pauli \(Z\) operators). See Fig. 1 for a visual depiction of the operators in \(\mathcal{G}_{0}\). The stabilizer group \(\mathcal{S}\) is generated by the set
\[\big{\{}X_{(*,j)}X_{(*,j+1)},Z_{(i,*)}Z_{(i+1,*)}:1\leq i,j<\ell\big{\}}, \tag{10}\]
where \(X_{(*,j)}=X_{(1,j)}X_{(2,j)}\dots X_{(\ell,j)}\). The logical group \(\mathcal{L}\) is generated by \(\mathcal{L}_{0}\) and \(iI\), where
\[\mathcal{L}_{0}=\{X_{(*,j)},Z_{(i,*)}:1\leq i,j\leq\ell\}, \tag{11}\]
and so the code distance is \(d=\ell\).
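For concreteness, the sketch below (helper names are ours, with a small hypothetical grid size) lists the gauge and stabilizer generators of a Bacon-Shor grid as Pauli strings and confirms that the stabilizers commute with every gauge generator.

```python
# Sketch listing the Bacon-Shor gauge and stabilizer generators for an L x L grid,
# storing qubit (i, j) at index (i-1)*L + (j-1) in row-major order.
L = 3  # hypothetical small grid side length

def string(ops):
    """Build an L*L-qubit Pauli string from a dict {(i, j): letter}."""
    s = ["I"] * (L * L)
    for (i, j), p in ops.items():
        s[(i - 1) * L + (j - 1)] = p
    return "".join(s)

def commutes(p, q):
    """Pauli strings commute iff they differ on an even number of non-identity sites."""
    return sum(a != "I" and b != "I" and a != b for a, b in zip(p, q)) % 2 == 0

# Two-qubit gauge generators: horizontal XX pairs and vertical ZZ pairs.
gauge = ([string({(i, j): "X", (i, j + 1): "X"}) for i in range(1, L + 1) for j in range(1, L)]
         + [string({(i, j): "Z", (i + 1, j): "Z"}) for i in range(1, L) for j in range(1, L + 1)])

# Stabilizer generators: X on two adjacent columns, Z on two adjacent rows.
stabilizers = ([string({(i, c): "X" for i in range(1, L + 1) for c in (j, j + 1)}) for j in range(1, L)]
               + [string({(r, j): "Z" for r in (i, i + 1) for j in range(1, L + 1)}) for i in range(1, L)])

assert len(gauge) == 2 * L * (L - 1) and len(stabilizers) == 2 * (L - 1)
assert all(commutes(s, g) for s in stabilizers for g in gauge)
```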
Let us consider some example choices of subsets of coset representatives. First, let \(\mathcal{T}_{0}\) be generated by \(\prod_{i=1}^{\lfloor\ell/2\rfloor}X_{(2i,1)}\) and \(\prod_{j=1}^{\lfloor\ell/2\rfloor}Z_{(1,2j)}\) (see Fig. 1(a) for the \(\ell=8\) case). With this choice of \(\mathcal{T}_{0}\) we get a \(4\)-bit hybrid Bacon-Shor code. We can equivalently index our subset of coset representatives by their error syndromes. Consider the \(X\)-type stabilizer generators given in Eq. (10). We write the error syndrome as a binary string, where the \(j\)'th entry is \(0\) if the stabilizer \(X_{(*,j)}X_{(*,j+1)}\) is satisfied, and \(1\) if it is unsatisfied. Then the error syndromes corresponding to the coset representatives \(I\) and \(\prod_{j=1}^{\lfloor\ell/2\rfloor}Z_{(1,2j)}\) are respectively \(00\dots 0\) and \(11\dots 1\); i.e. the codewords of the (\(\ell-1\))-bit repetition code. In general, let \(C_{c}\) be an (\(\ell-1\))-bit linear code with basis \(\{v_{i}\}\). We are free to choose \(Z\)-type generators \(g_{i}\in\mathcal{T}_{0}\) such that \(\sigma(g_{i})=v_{i}\), where \(\sigma(g_{i})\) denotes the (binary) error syndrome. Fig. 1(b) illustrates the case when \(C_{c}\) is the Hamming code. If we make the same choice for the \(X\)-type generators of \(\mathcal{T}_{0}\) we get a \(16\)-bit hybrid Bacon-Shor code. The following theorem characterizes the distance of our hybrid Bacon-Shor codes.
**Theorem 3**.: Let \(C=C(\mathcal{S},\mathcal{G}_{0},\mathcal{L}_{0},\mathcal{T}_{0}=\{I\})\) be an \(\llbracket n,k,d\rrbracket\) stabilizer subsystem code. Fix a generating set \(\{S_{j}:1\leq j\leq s\}\) for \(\mathcal{S}\). Suppose that every single-qubit error operator anti-commutes with at most \(m\) of the \(S_{j}\). Then for any \([s,k_{c},d_{c}]\) linear code \(C_{c}\), there exists a hybrid subsystem code \(C^{\prime}=C(\mathcal{S},\mathcal{G}_{0},\mathcal{L}_{0},\mathcal{T}_{0}^{ \prime})\) encoding \(k\) logical qubits into \(n\) qubits with \(|\mathcal{T}_{0}^{\prime}|=k_{c}\) and distance \(d^{\prime}\geq\min(d,\lceil d_{c}/m\rceil)\).
Proof.: Let \(\{v_{i}\}\) be a basis for the codewords of \(C_{c}\). For each \(v_{i}\), we construct a corresponding coset representative \(g_{i}\) such that \(\sigma(g_{i})=v_{i}\), giving \(\mathcal{T}_{0}^{\prime}=\langle g_{i}:1\leq i\leq k_{c}\rangle\). This can be done for example using a basis of pure errors. Now we apply Theorem 2 to \(C^{\prime}=C(\mathcal{S},\mathcal{G}_{0},\mathcal{L}_{0},\mathcal{T}_{0}^{\prime})\). First note that \(\mathcal{N}(\mathcal{S})\setminus\mathcal{G}\) contains only operators of weight at least \(d\). Now consider a single term in the union \(\bigcup_{i\neq j}g_{i}\mathcal{N}(\mathcal{S})g_{j}^{-1}\). Every operator in \(g_{i}\mathcal{N}(\mathcal{S})g_{j}^{-1}\) has syndrome equal to \(u=v_{i}+v_{j}\), which is a nonzero codeword of \(C_{c}\) and hence has Hamming weight at least \(d_{c}\). Because every single-qubit error anti-commutes with at most \(m\) stabilizer generators, any operator with syndrome equal to \(u\) must have weight at least \(\lceil d_{c}/m\rceil\). The same is true for every term in \(\bigcup_{i\neq j}g_{i}\mathcal{N}(\mathcal{S})g_{j}^{-1}\), so every operator in this set has weight at least \(\lceil d_{c}/m\rceil\). Hence every operator in the union of Eq. (6) has weight at least \(\min(d,\lceil d_{c}/m\rceil)\), and the claimed distance bound follows from Theorem 2.
Applying Theorem 3, for the hybrid Bacon-Shor code with \(C_{c}\) equal to the repetition code we have \(d^{\prime}=\lceil(\ell-1)/2\rceil\), and for the \(\ell=8\) Bacon-Shor code with \(C_{c}\) equal to the Hamming code we have \(d^{\prime}=2\). We can achieve better distance scaling using a different initial quantum code, e.g., the toric code [39] defined on an \(\ell\times\ell\) lattice with the canonical stabilizer generators. Here we have \(d=\ell\), \(s=\ell^{2}-2\) and \(m=4\). If we choose a (good) linear code with parameters \([s,\alpha s,\beta s]\) where \(\beta\geq 4\ell/(\ell^{2}-2)\), we can construct a hybrid code with \(\alpha s\) coset representatives and distance \(d^{\prime}=\ell\).

Fig. 1: Hybrid Bacon-Shor code. Qubits are indicated by black circles, \(XX\) gauge generators by red lines and \(ZZ\) gauge generators by blue lines. (a) Example coset representative whose error syndrome is \(11\dots 1\). (b) Example coset representatives whose error syndromes generate the Hamming code (each colour denotes a different representative).
## 6 Extension to Qudits
In this section we discuss the extension of the stabilizer formalism presented above to the case of qudits; that is, what happens when one replaces the base qubit space \(\mathbb{C}^{2}\) with \(\mathbb{C}^{d}\) for fixed positive integer \(d>2\). We begin by recalling the basic setup for the standard qudit stabilizer formalism, as described in several other places (see for instance [26, 49, 23]).
### The \(n\)-Qudit Pauli Group
Let \(\{|0\rangle,\ldots,|d-1\rangle\}\) be a fixed basis for \(\mathbb{C}^{d}\), and given a fixed positive integer \(n\geq 1\) consider the corresponding basis for \((\mathbb{C}^{d})^{\otimes n}\) written as \(\{|i_{1}\cdots i_{n}\rangle=|i_{1}\rangle\otimes\ldots\otimes|i_{n}\rangle\,: \,0\leq i_{j}\leq d-1,\,1\leq j\leq n\}\). Further let \(\omega=e^{2\pi i/d}\) be a primitive \(d\)th root of unity, and define the following generalized Pauli operators:
\[X=\sum_{k=0}^{d-1}|k+1\rangle\!\langle k|\quad\text{and}\quad Z=\sum_{k=0}^{d- 1}\omega^{k}|k\rangle\!\langle k|,\]
where in the definition of \(X\) we use modulo \(d\) arithmetic with \(|d\rangle\equiv|0\rangle\). Some of the relevant properties of the so-called 'shift' (\(X\)) and 'clock' (\(Z\)) operators include: \(X^{d}=I=Z^{d}\) and the anti-commutation relation
\[ZX=\omega XZ.\]
Note that \(X\) and \(Z\) are no longer self-adjoint for \(d>2\), but they are unitary with \(X^{-1}=X^{d-1}=X^{\dagger}\) (and the same for \(Z\)). The single qudit Pauli group is the unitary subgroup of \(\mathcal{U}(\mathbb{C}^{d})\) given by
\[\mathcal{P}_{d,1}=\langle\sqrt{\omega}I,X,Z\rangle,\]
and so the generic element of \(\mathcal{P}_{d,1}\) can be written (using the anti-commutation relation for \(X\) and \(Z\)) in the form \(\omega^{a/2}X^{b}Z^{c}\) for some \(a,b,c\in\mathbb{N}\).
Observe that for \(d=2\) we have \(\sqrt{\omega}=i\), and so this definition agrees with the qubit case. But one may ask, why include the phase factor \(\sqrt{\omega}\) as a generator, instead of \(\omega\) for instance? The reason is that including it allows for many more eigenvalue-\(1\) operators, which is crucial in the context of the stabilizer formalism. Indeed, one can show using standard linear algebra tools that for any operator \(X^{a}Z^{b}\), with \(a,b\in\mathbb{N}\), there is a \(U\in\mathcal{P}_{d,1}\) that is proportional to the operator such that \(U\) has \(1\) as an eigenvalue.
As in the single qubit case, for arbitrary \(n\geq 1\), we define the \(n\)_-qudit Pauli group_\(\mathcal{P}_{d,n}\) to be the subgroup of \(\mathcal{U}(N)\), with \(N=d^{n}\), generated by \(n\)-tensors of the single qudit Pauli operators \(X\), \(Z\), and \(\sqrt{\omega}I\); that is, the unitary group generated by \(Z_{1}=Z\otimes(I^{(\otimes(n-1)})\), \(Z_{2}=I\otimes Z\otimes(I^{\otimes(n-2)})\), etc. Hence it follows, again applying the anti-commutation relations, that a generic element of \(\mathcal{P}_{d,n}\) belongs to the set:
\[\Big{\{}(\sqrt{\omega})^{c}X_{1}^{a_{1}}Z_{1}^{b_{1}}\cdots X_{n}^{a_{n}}Z_{n} ^{b_{n}}\,:\,0\leq c\leq 2d-1,\,\,0\leq a_{j},b_{j}\leq d-1\Big{\}}.\]
Observe that the cardinality of \(\mathcal{P}_{d,n}\) is: \(2d\times d^{n}\times d^{n}=2d^{2n+1}\).
For stabilizer formalism related calculations, it is useful to know that every element of \(\mathcal{P}_{d,n}\) that is not a multiple of the identity operator has trace equal to \(0\). Indeed, one can use the anti-commutation relation and cyclic property of the trace to show that \(\operatorname{Tr}(X^{a}Z^{b})\neq 0\) if and only if \(X^{a}Z^{b}\) is a multiple of the identity. Moreover, given an abelian subgroup \(\mathcal{S}\) of \(\mathcal{P}_{d,n}\), there is a well-known and useful formula for the orthogonal projection \(P_{\mathcal{S}}\) onto the stabilizer subspace defined by \(\mathcal{S}\) given by \(P_{\mathcal{S}}=\frac{1}{|\mathcal{S}|}\sum_{S\in\mathcal{S}}S.\)
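A small numpy sketch (a toy single-qudit case, not from the paper) of the shift and clock operators and of this projector formula:

```python
# Qudit shift/clock operators and the stabilizer projector P_S = (1/|S|) sum_{S in S} S,
# for the toy case d = 3, n = 1, S = <Z> (so the stabilizer subspace is span{|0>}).
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d, dtype=complex), 1, axis=0)     # X|k> = |k+1 mod d>
Z = np.diag([omega ** k for k in range(d)])           # Z|k> = w^k |k>

assert np.allclose(Z @ X, omega * X @ Z)              # ZX = w XZ
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.isclose(np.trace(X @ Z), 0)                 # non-identity Paulis are traceless

S = [np.linalg.matrix_power(Z, k) for k in range(d)]  # the group <Z> = {I, Z, Z^2}
P = sum(S) / len(S)
assert np.allclose(P, P @ P) and np.isclose(np.trace(P).real, 1)
```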
### Hybrid Qudit Stabilizer Formalism
The OAQEC stabilizer formalism presented above for the qubit base space, extends fully to the case of qudits, including the main error correction theorem. Here we briefly point out the main pieces, following along the presentation above.
* The starting point is again an abelian subgroup \(\mathcal{S}\) of \(\mathcal{P}_{d,n}\) that does not contain the scalar operator \(\omega I\). Even though the generating operators are no longer self-adjoint, it is still the case that the normalizer and centralizer coincide; that is, \(\mathcal{N}(\mathcal{S})=\mathcal{Z}(\mathcal{S})\). (This follows because elements of \(\mathcal{P}_{d,n}\) either commute or commute up to a power of \(\omega\), and \(\omega I\) is not in \(\mathcal{S}\).) The stabilizer subspace \(C=C(\mathcal{S})=\operatorname{span}\{|\psi\rangle\,:\,g|\psi\rangle=|\psi \rangle\,\,\forall g\in\mathcal{S}\}\) is defined in the same way and satisfies \(\dim C=d^{n-s}\).
* The \(r\)-qudit gauge group and \(k\)-qudit logical group are analogously defined, with \(\sqrt{\omega}I\) replacing \(iI\). Lemma 1 holds, with \(\mathbb{C}^{d}\) replacing \(\mathbb{C}^{2}\) in the Hilbert space and subsystem decompositions.
* Further, regarding the normalizer cosets and hybrid code sectors, Lemma 2 still holds, with the replacement of \(\mathcal{P}_{d,n}\) and \(\omega I\) in the statement (with the same basic ingredients in the proof, as noted just above). Thus, given a subset of a coset transversal for \(\mathcal{N}(\mathcal{S})\) inside \(\mathcal{P}_{d,n}\), we will have an associated hybrid code \(C=C(\mathcal{S},\mathcal{G}_{0},\mathcal{L}_{0},\mathcal{T}_{0})\) with code sectors as in the qubit case, and subsystem structure defined by the gauge group (when it is non-trivial), which is carried to the different code spaces by the transversal operators.
In terms of the error correction conditions, first note that the OAQEC framework and results are not qubit dependent; they are based on the general theory of operator algebras on Hilbert space. This, together with using analogous properties of the generalized Pauli group, allows us to generalize Theorem 2, essentially with the same proof. We state the result here for completeness.
**Theorem 4**.: A code \(C=C(\mathcal{S},\mathcal{G}_{0},\mathcal{L}_{0},\mathcal{T}_{0})\), with \(\mathcal{T}_{0}=\{g_{i}\}\), is correctable for a set of operators \(\{E_{k}\}\subseteq\mathcal{P}_{d,n}\) if and only if for all \(k,l\),
\[E_{k}^{\dagger}E_{l}\notin\Big{(}\mathcal{N}(\mathcal{S})\setminus\mathcal{G} \Big{)}\bigcup\Big{(}\bigcup_{i\neq j}g_{i}\mathcal{N}(\mathcal{S})g_{j}^{-1} \Big{)}. \tag{12}\]
A pair of examples are discussed below.
**Example 1**.: The motivating example presented above generalizes straightforwardly as follows.
* Let \(s\leq n\) be a fixed positive integer and let \(\mathcal{S}=\langle Z_{1},\dots,Z_{s}\rangle\subseteq\mathcal{P}_{d,n}\). Then \[C=C(\mathcal{S})=\operatorname{span}\bigl{\{}|\underbrace{0\cdots 0}_{s}i_{1} \cdots i_{n-s}\rangle\,:\,0\leq i_{j}\leq d-1\bigr{\}},\] and \(\dim C=d^{n-s}\) so that \(C\) can encode \(n-s\) qudits.
* Let \(r\) be a fixed integer with \(0\leq r\leq n-s\), and let \(\mathcal{G}_{0}\) be the set of \(r\) pairs of generating Pauli operators acting on qudits \(s+1\) to \(r+s\): \[\mathcal{G}_{0}=\bigl{\{}X_{i},Z_{i}\,:\,s+1\leq i\leq r+s\bigr{\}}.\] Then the gauge group \(\mathcal{G}\) is generated by \(\mathcal{S}\), \(\sqrt{\omega}I\), and \(\mathcal{G}_{0}\), and includes the full subgroup of operators in \(\mathcal{P}_{d,n}\) acting non-trivially on the \(r\) gauge qudits.
* Let \(k=n-s-r\), and let \(\mathcal{L}_{0}\) be the set of \(k\) pairs of generating Pauli operators acting on qudits \(r+s+1\) to \(n\): \[\mathcal{L}_{0}=\bigl{\{}X_{i},Z_{i}\,:\,r+s+1\leq i\leq n\bigr{\}}.\] Then the logical group \(\mathcal{L}\) is the group generated by \(\mathcal{L}_{0}\) and \(\sqrt{\omega}I\), and includes the full subgroup of operators in \(\mathcal{P}_{d,n}\) acting non-trivially on the \(k\) logical qudits.
* The normalizer \(\mathcal{N}(\mathcal{S})=\mathcal{Z}(\mathcal{S})\) for \(\mathcal{S}\) inside \(\mathcal{P}_{d,n}\) is given by: \[\mathcal{N}(\mathcal{S})=\Big{\{}\omega^{c/2}\,\cdot\,Z_{1}^{b_{1}}\cdots Z_{s}^{b_{s}}\,\cdot\,X_{s+1}^{a_{s+1}}Z_{s+1}^{b_{s+1}}\cdots X_{n}^{a_{n}}Z_{n}^{b_{n}}\,:\,0\leq c\leq 2d-1,\ 0\leq a_{j},b_{j}\leq d-1\Big{\}}.\]
The size of the normalizer here is: \(|\mathcal{N}(\mathcal{S})|=2d\times d^{s}\times(d^{2})^{n-s}=2d^{2n-s+1}\). Hence the number of normalizer cosets is given by,
\[|\mathcal{P}_{d,n}|/|\mathcal{N}(\mathcal{S})|=d^{s}.\]
As in the qubit case, each operator \(X_{j}\), \(1\leq j\leq s\), does not belong to \(\mathcal{N}(\mathcal{S})\), and nor does any product \(X_{j}X_{j^{\prime}}^{-1}\) of operators from this set. So a coset transversal of maximal size is given by the \(d^{s}\)-element set:
\[\mathcal{T}=\big{\{}X_{1}^{a_{1}}\cdots X_{s}^{a_{s}}\,:\,0\leq a_{j}\leq d-1 \big{\}}.\]
Thus, given a (non-trivial) subset of coset representatives \(\mathcal{T}_{0}\subseteq\mathcal{T}\), the subspaces \(TC\), \(T\in\mathcal{T}_{0}\), are mutually orthogonal and the corresponding subspace for the hybrid code is \(C_{\mathcal{T}_{0}}=\oplus_{T\in\mathcal{T}_{0}}TC\), with \(C\) (in the case of a non-trivial gauge group) having subsystem structure that is carried to the subspaces \(TC\) by the transversal operators. A full characterization of the possible correctable errors for any given coset subset \(\mathcal{T}_{0}\) is given by Theorem 4. The simple example of a two element transversal set and set of correctable errors discussed above in the qubit case, carries through analogously.
As another example, we give a hybrid version of a (subspace) code presented in the seminal work [27]. In contrast to the qubit (\(d=2\)) case, this example shows how for larger \(d\), even a single mode (\(n=1\)) can generate interesting hybrid code structures.
**Example 2**.: Let \(d=18\) and \(n=1\). A single qubit subspace code, which can be viewed as a 'pre-GKP code', is given in [27] as the stabilizer subspace \(C\) generated by the (abelian) group \(\mathcal{S}=\langle X^{6},Z^{6}\rangle\), which one can readily calculate is spanned by the two states:
\[|\overline{0}\rangle=\frac{1}{\sqrt{3}}(|0\rangle+|6\rangle+|12\rangle)\qquad \text{and}\qquad|\overline{1}\rangle=\frac{1}{\sqrt{3}}(|3\rangle+|9\rangle+| 15\rangle).\]
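As a numerical sanity check (numpy, not from the paper), one can verify directly that \(X^{6}\) and \(Z^{6}\) fix these two states and that \(X^{3}\) acts as the logical bit flip on their span:

```python
# Check that X^6 and Z^6 stabilize the two code states of the d = 18 example,
# and that the logical operator X^3 swaps them.
import numpy as np

d = 18
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d, dtype=complex), 1, axis=0)   # X|k> = |k+1 mod d>
Z = np.diag([omega ** k for k in range(d)])         # Z|k> = w^k |k>

zero = np.zeros(d, dtype=complex); zero[[0, 6, 12]] = 1 / np.sqrt(3)
one = np.zeros(d, dtype=complex); one[[3, 9, 15]] = 1 / np.sqrt(3)

X6 = np.linalg.matrix_power(X, 6)
Z6 = np.linalg.matrix_power(Z, 6)
for psi in (zero, one):
    assert np.allclose(X6 @ psi, psi) and np.allclose(Z6 @ psi, psi)

X3 = np.linalg.matrix_power(X, 3)                   # logical bit flip on span{|0bar>, |1bar>}
assert np.allclose(X3 @ zero, one) and np.allclose(X3 @ one, zero)
```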
The anti-commutation relations involving these two operators are given, for any positive integers \(a\), \(b\), as follows:
\[(X^{a}Z^{b})X^{6} = \omega^{6b}X^{6}(X^{a}Z^{b})\] \[(X^{a}Z^{b})Z^{6} = \omega^{6a}Z^{6}(X^{a}Z^{b}).\]
In particular, \(X^{a}Z^{b}\) commutes with \(\mathcal{S}\) if and only if \(a\) and \(b\) are both divisible by \(3\). Thus, in this case we have
\[\mathcal{N}(\mathcal{S})=\mathcal{Z}(\mathcal{S})=\big{\{}\omega^{c/2}X^{a}Z^ {b}\,\,\big{|}\,\,\,0\leq c\leq 35,\,a,b\in\{0,3,6,9,12,15\}\big{\}}.\]
Hence, \(|\mathcal{N}(\mathcal{S})|=36\times 6\times 6\), and from \(|\mathcal{P}_{18,1}|=36\times 18\times 18\) it follows that the number of cosets is given by: \(|\mathcal{P}_{18,1}|/|\mathcal{N}(\mathcal{S})|=9\).
Returning to the original code construction, the logical operators were identified as \(\overline{X}=X^{3}\) and \(\overline{Z}=Z^{3}\). Moreover, as noted in [27], the 9 operators belonging to the set \(\mathcal{T}=\{X^{a}Z^{b}\,:\,|a|,|b|\leq 1\}\) form a correctable error set for the code (in the classic Knill-Laflamme sense, which, recall, is captured as a special case of OAQEC), as one can show they map \(C\) to 9 mutually orthogonal subspaces. This set of operators is also of interest here, as \(\mathcal{T}\) forms a coset transversal for \(\mathcal{N}(\mathcal{S})\). Indeed, one can easily verify using the anti-commutation relations that any two distinct elements from this set define different cosets of \(\mathcal{N}(\mathcal{S})\) inside \(\mathcal{P}_{18,1}\) (as any product \(T_{1}^{-1}T_{2}\notin\mathcal{N}(\mathcal{S})\) for distinct \(T_{1},T_{2}\in\mathcal{T}\)).
We can thus consider hybrid versions of this code, by taking a subset \(\mathcal{T}_{0}\) of elements from \(\mathcal{T}\) and their corresponding code sectors, and then Theorem 4 can tell us what are the correctable error sets for the code.
Consider for example the set \(\mathcal{T}_{0}=\{I,X,X^{-1}\}\subseteq\mathcal{T}\). Here the gauge group is generated by \(\mathcal{S}\) and \(\sqrt{\omega}I\), and for this particular \(\mathcal{T}_{0}\) the first set in the union of Eq. (12) is equal to:
\[\mathcal{N}(\mathcal{S})\setminus\mathcal{G}=\big{\{}\omega^{c/2}X^{a}Z^{b}\;\;\big{|}\;\;0\leq c\leq 35,\,a,b\in\{0,3,6,9,12,15\},\text{ with }a\text{ or }b\notin\{0,6,12\}\big{\}},\]
which follows from elements commuting modulo a power of \(\omega\), together with the description \(\mathcal{G}=\{\omega^{c/2}X^{a}Z^{b}\,:\,0\leq c\leq 35,\,a,b\in\{0,6,12\}\}\). The cross-term union of Eq. (12) is defined by 6 sets, but collapses (from the anti-commutation relations) to the union of 4 sets given by:
\[\bigcup_{a\in\{-2,-1,1,2\}}X^{a}\mathcal{N}(\mathcal{S})=\big{\{}\omega^{c/2}X^{a}Z^{b}\;\;\big{|}\;\;0\leq c\leq 35,\,a\not\equiv 0\ (\mathrm{mod}\ 3),\,b\in\{0,3,6,9,12,15\}\big{\}}.\]
Thus, by Theorem 4, the possible correctable errors for this code are precisely those operator sets \(\mathcal{E}\subseteq\mathcal{P}_{18,1}\) such that \(g_{1}^{-1}g_{2}\) belongs to neither of these two sets for any choice of \(g_{1},g_{2}\in\mathcal{E}\). For example, one can easily check that the set \(\mathcal{E}=\{Z^{2b+1}\,:\,0\leq b\leq 8\}\) satisfies this condition and hence forms a correctable set of errors for the code.
Note that the hybrid code in this particular instance is 6-dimensional, as it is determined by a qubit base code and the 3 code sectors defined by \(\mathcal{T}_{0}=\{I,X,X^{-1}\}\); namely, the direct sum \(C_{\mathcal{T}_{0}}=C\oplus XC\oplus X^{-1}C\). Thus, as we are in an 18-dimensional space, we can have a maximum of 3 (non-degenerate) errors that can be correctable for this code. One might then be concerned that the error set \(\mathcal{E}\) includes 9 operators; however, there is no contradiction here, as these operators act degenerately on \(C_{\mathcal{T}_{0}}\). Indeed, one can check directly that \(Z,Z^{3},Z^{5}\) map this 6-dimensional subspace to 3 mutually orthogonal subspaces. Moreover, \(Z^{6}\) acts as the identity on \(C\), and from the anti-commutation relations it acts as \(\omega^{6}I\) on \(XC\) and \(\overline{\omega}^{6}I\) on \(X^{-1}C\). It follows that \(ZC_{\mathcal{T}_{0}}=Z^{7}C_{\mathcal{T}_{0}}=Z^{13}C_{\mathcal{T}_{0}}\), where this is equality of subspaces, and analogous statements hold for the operator triples \(\{Z^{3},Z^{9},Z^{15}\}\) and \(\{Z^{5},Z^{11},Z^{17}\}\) on the other two (orthogonal) 6-dimensional subspaces defined by \(Z^{3}\) and \(Z^{5}\). So the 9-operator error set actually degenerates in this case to 3 different errors when one restricts to the hybrid code space.
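The degeneracy statement can likewise be spot-checked numerically; the sketch below (using the same generalized-Pauli conventions as the previous snippet, again our own illustrative assumptions) verifies that \(Z\), \(Z^{7}\) and \(Z^{13}\) map \(C_{\mathcal{T}_{0}}\) to the same subspace.

```python
import numpy as np

d = 18
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))
mpow = np.linalg.matrix_power

ket = lambda j: np.eye(d)[:, j]
zero_L = (ket(0) + ket(6) + ket(12)) / np.sqrt(3)
one_L = (ket(3) + ket(9) + ket(15)) / np.sqrt(3)
C = np.column_stack([zero_L, one_L])                      # base qubit code
C_T0 = np.column_stack([C, X @ C, mpow(X, d - 1) @ C])    # hybrid code space

def projector(A):
    """Orthogonal projector onto the column space of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.conj().T

# Z, Z^7 and Z^13 move C_T0 to one and the same subspace (degeneracy on the hybrid code)
P = projector(Z @ C_T0)
assert np.allclose(P, projector(mpow(Z, 7) @ C_T0))
assert np.allclose(P, projector(mpow(Z, 13) @ C_T0))
```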
## 7 Concluding Remarks
This work opens up a number of potential new lines of investigation and the possible extension of some others. Further consideration of the hybrid Bacon-Shor subsystem codes introduced here is warranted, given the wide applicability of the subsystem versions [57, 8] in fault-tolerant quantum computing and beyond, and in particular with NISQ-era quantum computers likely to involve hybrid forms of classical and quantum information processing [58]. It would be interesting to explore possible implications of our stabilizer formalism on classes of recently constructed hybrid subspace codes; for instance, we expect new light can be shed on codes constructed in works such as [28, 46, 50], and one can ask if the formalism allows for construction of more codes with useful properties following the approaches introduced there. One could also consider generalizations of this formalism to a variety of other settings of relevance in quantum information, such as other generalized Pauli error models, or continuous QEC and infinite-dimensional settings such as in [27]. Regarding the connection with black hole theory, it may be possible to use our OAQEC stabilizer formalism to construct toy models of AdS/CFT capturing the properties missed by the celebrated tensor-network models of [31, 53], which are subsystem codes. We plan to pursue these investigations elsewhere and we invite others to do so as well.
_Acknowledgements._ We dedicate this paper to the memory of our friend and mentor, David Poulin. We thank Priya Nadkarni and Rafael Alexander for stimulating discussions. D.W.K. was partly supported by NSERC Discovery Grant 400160. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
|
2307.07770
|
randomHAR: Improving Ensemble Deep Learners for Human Activity
Recognition with Sensor Selection and Reinforcement Learning
|
Deep learning has proven to be an effective approach in the field of Human
activity recognition (HAR), outperforming other architectures that require
manual feature engineering. Despite recent advancements, challenges inherent to
HAR data, such as noisy data, intra-class variability and inter-class
similarity, remain. To address these challenges, we propose an ensemble method,
called randomHAR. The general idea behind randomHAR is training a series of
deep learning models with the same architecture on randomly selected sensor
data from the given dataset. Besides, an agent is trained with the
reinforcement learning algorithm to identify the optimal subset of the trained
models that are utilized for runtime prediction. In contrast to existing work,
this approach optimizes the ensemble process rather than the architecture of
the constituent models. To assess the performance of the approach, we compare
it against two HAR algorithms, including the current state of the art, on six
HAR benchmark datasets. The result of the experiment demonstrates that the
proposed approach outperforms the state-of-the-art method, ensembleLSTM.
|
Yiran Huang, Yexu Zhou, Till Riedel, Likun Fang, Michael Beigl
|
2023-07-15T10:51:03Z
|
http://arxiv.org/abs/2307.07770v1
|
randomHAR: Improving Ensemble Deep Learners for Human Activity Recognition with Sensor Selection and Reinforcement Learning
###### Abstract
Deep learning has proven to be an effective approach in the field of Human activity recognition (HAR), outperforming other architectures that require manual feature engineering. Despite recent advancements, challenges inherent to HAR data, such as noisy data, intra-class variability and inter-class similarity, remain. To address these challenges, we propose an ensemble method, called randomHAR. The general idea behind randomHAR is training a series of deep learning models with the same architecture on randomly selected sensor data from the given dataset. Besides, an agent is trained with the reinforcement learning algorithm to identify the optimal subset of the trained models that are utilized for runtime prediction. In contrast to existing work, this approach optimizes the ensemble process rather than the architecture of the constituent models. To assess the performance of the approach, we compare it against two HAR algorithms, including the current state of the art, on six HAR benchmark datasets. The result of the experiment demonstrates that the proposed approach outperforms the state-of-the-art method, ensembleLSTM.
human activity recognition, deep-learning, ensemble methods
## I Introduction
Human Activity Recognition (HAR) is a field that infers human activities from raw time-series signals acquired through embedded sensors of smartphones and wearable devices [1]. It has emerged as a revolutionary technology for real-time and autonomous monitoring in behavior analysis, ambient assisted living, activities of daily living (ADL), elderly care, rehabilitation, entertainment, and surveillance in smart home environments [2]. Deep learning-based approaches, because of their automatic feature extraction capabilities, have been widely adopted [3]. Recent studies have shown that deep learning approaches outperform classical machine learning algorithms in many HAR tasks [4]. Despite some game-changing achievements, several challenges in HAR remain unresolved.
* **Noisy data** Sensor data inevitably includes substantial noise owing to the inherent imperfections of sensors, and often contains missing or even incorrect readings due to sensor failure [5, 6].
* **Intra-class variability** The same human activity, such as walking, may exhibit variations among different subjects or even for the same subject in different recording sessions [7].
* **Inter-class similarity** Sensor data from different human activities may exhibit high similarities, which can make it challenging for a deep learning model to distinguish between them [7].
EnsembleLSTM [6] has demonstrated the ability of ensemble methods to solve the problem of high variability in model output due to data quality. Multiple LSTM-based networks [8] with the same structure, but different parameter values, are trained by using mini-batches and adjusting the size and position of the sliding window episode-wise. Then, a subset of the trained models is selected with the 'TopK' strategy. The final prediction is obtained by aggregating the predictions of the selected models through the mode operation. While this method achieves good results, we hypothesize that it can be further improved by considering the following aspects.
_(i)_ A fundamental assumption of the bagging ensemble algorithm is that the trained models in the method are sufficiently independent of each other [9]. Increasing the randomness of each model is a way to ensure this independence [9]. EnsembleLSTM [6] introduces randomness through mini-batches and varying sliding-window parameters, which can be further improved through sensor selection. Therefore, we hypothesize that a random selection of sensors [9] can further improve the final performance of the method. Also, due to the selection, the method is less computationally intensive by construction.
_(ii)_ The "TopK" strategy in EnsembleLSTM [6] chooses the best performing models. However, this strategy does not consider if the selected models are sufficiently independent and lead to improvements when combined in an ensemble. Besides, the parameter \(k\) in 'TopK' strategy is hard to initialize. By selecting model combinations with reinforcement learning rather than 'TopK', we hypothesize that we can further increase the ensemble performance and avoid the parameter initialization problem.
Besides, it is worth noting that having more sensor data does not necessarily lead to better performance [10]. Instead, well-designed sensor selection can reduce the effect of intra-class variability and inter-class similarity. For instance, jogging and running may have similar vertical acceleration patterns but differ in horizontal acceleration. In such cases, removing certain sensor data can enhance the performance of the learning algorithm.
To address the challenges highlighted by the aforementioned observations, we propose a HAR framework called randomHAR. The framework combines sensor randomization and deep learning, while performing effective model selection through reinforcement learning. Our contribution can be summarized as follows:
* The proposed approach outperforms the state-of-the-art HAR ensemble method [6] on six publicly available datasets.
* The proposed approach can avoid the parameter initialization hard problem in the state-of-the-art method.
* The proposed approach can be applied to any HAR model that takes sensor signal as input.
* The PyTorch code of the proposed approach can be found in http://github for further study.
## II Related work
Ensembles have traditionally been applied to decision trees as weak learners. The advantage over other classical models is that decision trees react sensitively to, e.g., different bootstraps or random feature selection [9]. However, the work of [11] still shows good performance of Random Forest on HAR. While machine learning-based approaches [12, 13, 14] have been successfully applied in the HAR domain over the past decade, they also face many challenges, for example extracting complex features and recognizing time-series patterns. The advances of deep learning have led to a point where deep learning models achieve better results than other machine learning-based algorithms on many HAR benchmark datasets. Simple convolutional neural networks (CNNs) can be used to extract complex signal patterns. Alemayoh et al. [15] encode the collected sensor signals into a 14x16 virtual image and then use a CNN on the generated images to classify eight different activities. In [16], CNNs are applied directly to the raw signal, while [17] combines frequency-domain convolution with time-domain convolution to achieve better performance. On the other hand, [18] tries to improve the performance by tuning hyperparameters on top of a CNN, [19] by pretraining and [20] by employing PCA. Compared with CNNs, LSTMs have received more attention in HAR owing to their ability to extract sequence information from raw signals. Chung et al. [21] start to experiment with LSTMs to solve simple classification problems, and Zhao et al. [22] propose a bidirectional LSTM-based architecture to extract information from signals in both directions. The works of [6] and [4] have shown top performance and are thus used as references for our work. All methods mentioned above aim to reduce the high variance of the prediction results due to the strong noise of the HAR data and the variability of the subjects by optimizing the network parameters and architecture. In contrast, we try to solve the problem by optimizing an ensemble of LSTMs.
## III Methodology
Given a HAR dataset \(\mathcal{D}=\{s_{0},\cdots,s_{n-1}\}\), where \(n\) is the number of sensors contained in the dataset and \(s_{i},0\leq i<n\), denotes the data collected by the corresponding sensor, the objective of a HAR task is to train a model \(f\) on a specific subset \(\mathcal{D}^{\prime}\subset\mathcal{D}\) of the given data such that, for any instance \(x\) from \(\mathcal{D}\setminus\mathcal{D}^{\prime}\), \(f(x)=y\), where \(y\) is the ground-truth class of the instance \(x\).
Figure 1 illustrates the pipeline of the proposed approach. It consists of two essential components, namely sensor selection and model selection. In the sensor selection stage, various subsets of the provided data are generated based on the included sensors. Several models with the same base model architecture are then trained on these generated subsets. In the model selection stage, a subset of the trained models is chosen. The outputs of these selected models are then aggregated using the 'mode' (i.e. majority voting) function to produce the final prediction.
### _Sensor selection_
Given a training set \(\mathcal{D}^{\prime}=\{s_{0},\cdots,s_{n-1}\}\) and a HAR model architecture, we first generate \(k\) binary vectors \(m_{i}\in\left\{0,1\right\}^{n},0\leq i<k\), from a Bernoulli distribution, where \(k\) is the number of subsets to generate. These vectors are then utilized to generate \(k\) subsets \(\{\mathcal{D}_{0},\cdots,\mathcal{D}_{k-1}\}\) by selectively incorporating only the sensor data whose corresponding value in the vector is 1. Each generated vector represents a unique subset. Each of these subsets is utilized to train a model with the given model architecture, resulting in the generation of \(k\) models (classifiers) \(\{g_{0},\cdots,g_{k-1}\}\).
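A minimal sketch of this masking step is given below. The helper names, the keep-probability of 0.5, and the choice of zeroing out dropped channels (rather than physically removing them) are our own illustrative assumptions and are not taken from the released randomHAR code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sensor_masks(n_sensors, k, keep_prob=0.5):
    """Draw k binary masks m_i ~ Bernoulli(keep_prob); resample all-zero masks."""
    masks = []
    while len(masks) < k:
        m = rng.binomial(1, keep_prob, size=n_sensors)
        if m.any():                      # every subset must keep at least one sensor
            masks.append(m)
    return np.stack(masks)               # shape (k, n_sensors)

def apply_mask(batch, mask):
    """Zero out the channels belonging to dropped sensors.

    batch: array of shape (batch, time, n_sensors * channels_per_sensor)
    """
    channels_per_sensor = batch.shape[-1] // mask.shape[0]
    channel_mask = np.repeat(mask, channels_per_sensor)
    return batch * channel_mask

# each mask m_i defines one training subset D_i; one classifier g_i is trained per mask
masks = make_sensor_masks(n_sensors=9, k=10)
```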
### _Model selection_
An episode-based reinforcement learning algorithm is applied to the generated models to find the best model combination. Finally, 'mode' aggregation (i.e. majority voting) is applied to obtain the final prediction.
The objective function for the model combination task is modeled as
\[J(\mu)=\int p_{\mu}(\theta)R(\theta)d\theta,\quad\mu^{\star}=\underset{\mu} {\operatorname{argmax}}\ J(\mu), \tag{1}\]
Fig. 1: Framework of the proposed randomHAR algorithm.
where \(p_{\mu}(\theta)=\mathcal{N}(\mu,I)\) is the search distribution used to find the binary parameter vector \(\theta=\left\{\theta^{0},\cdots,\theta^{k}\right\}\) that maximizes the reward
\[\begin{split} R(\theta)&=\mathbb{E}_{x\sim\mathcal{D}^{\prime\prime}}\left[\mathbbm{1}_{y,\,\text{ensemble}\left(\{\theta^{0}\cdot g_{0}(x),\cdots,\theta^{k}\cdot g_{k}(x)\}\right)}\right]\\ &\approx\frac{1}{N_{1}}\sum_{i=0}^{N_{1}}\mathbbm{1}_{y,\,\text{ensemble}\left(\{\theta^{0}\cdot g_{0}(x_{i}),\cdots,\theta^{k}\cdot g_{k}(x_{i})\}\right)},\end{split} \tag{2}\]
where \(N_{1}\) is the number of samples generated to estimate the reward, \(\mathcal{D}^{\prime\prime}\) is the validation set, \(g\) represents a trained model (classifier) and \(\theta\) represents the binary vector that is used to select the classifiers for the final prediction. \(\text{Ensemble}(\cdot)\) is the method to combine the predictions of the selected classifiers. We make use of the 'mode' aggregation function in the experiment.
The gradient of the objective function can be calculated through
\[\begin{split}\nabla J(\mu)&=\int R(\theta)\nabla_{\mu}\log p_{\mu}(\theta)\,d\theta\\ &\approx\frac{1}{N_{2}}\sum_{i=1}^{N_{2}}R(\theta_{i})\nabla_{\mu}\log p_{\mu}(\theta_{i})\\ &\approx\frac{1}{N_{2}}\sum_{i=1}^{N_{2}}R(\theta_{i})(\theta_{i}-\mu),\end{split} \tag{3}\]
where \(N_{2}\) is the number of samples in Monte Carlo estimation.
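A compact sketch of this episode-based search is given below: the reward follows Eq. (2) with mode aggregation, and the update follows the gradient estimator of Eq. (3). The thresholding of the sampled \(\theta\) at zero to obtain a binary selection, the learning rate, and the function names are illustrative assumptions on our part rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_predict(preds, select):
    """Majority vote ('mode') over the selected classifiers.

    preds:  (k, N) integer class predictions of the k trained models
    select: (k,) binary vector choosing which models take part
    """
    chosen = preds[select.astype(bool)]
    if chosen.size == 0:                       # empty selection: no prediction
        return None
    votes = np.apply_along_axis(np.bincount, 0, chosen, None, preds.max() + 1)
    return votes.argmax(axis=0)

def reward(preds, labels, select):
    agg = ensemble_predict(preds, select)
    return 0.0 if agg is None else float((agg == labels).mean())

def search_selection(preds, labels, episodes=200, n2=10, lr=0.1):
    """Episode-based policy-gradient search over binary model selections (Eqs. 1-3)."""
    k = preds.shape[0]
    mu = np.zeros(k)
    for _ in range(episodes):
        thetas = mu + rng.standard_normal((n2, k))            # theta_i ~ N(mu, I)
        rs = np.array([reward(preds, labels, (t > 0).astype(int)) for t in thetas])
        grad = (rs[:, None] * (thetas - mu)).mean(axis=0)     # Monte Carlo Eq. (3)
        mu += lr * grad
    return (mu > 0).astype(int)
```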
## IV Experiment
In this section, we design experiments to investigate several essential questions pertaining to the proposed approach: _(i)_ Can the proposed approach outperform the state-of-the-art ensemble method, ensembleLSTM [6]? _(ii)_ Is the reinforcement learning-based model selection process in the proposed approach necessary? _(iii)_ How general is the proposed approach?
### _Experiment setting_
In the first two experiments, we use the deepConvLSTM variant model proposed in the ISWC 2021 best paper [4] as the base model and follow most of the experimental settings in that paper. In the last experiment, we utilize a deep convolutional neural network (CNN) as the base model to test the generality of the proposed approach.
We evaluate the model on six publicly available datasets, namely a preprocessed version of the Opportunity (OPPO) dataset [23] as well as five popular HAR datasets: DSADS [24], HAPT [25], PAMAP2 [26], RealWorld HAR (RWHAR) [27] and SKODAR [28]. The descriptions of these datasets are summarized in Table I. In addition, the macro F1-score and the Leave-One-Subject-Out (LOSO) cross-validation method are applied to all the datasets (except SKODAR) to assess the performance of the model. The SKODAR dataset contains only one subject, so 5-fold cross-validation is applied to this dataset.
All experiments are repeated 5 times to make the results more reliable. We use the Adam optimizer with an initial learning rate of 1e-4 to train the model. The learning rate decays by a factor of 10 with a patience of 5 epochs. The maximum number of training epochs is 50, with early stopping whose patience is set to 10. For the model combination selection, the parameter \(N_{1}\) is set to the size of the validation set and \(N_{2}\) is set to 10.
When training the ensembleLSTM model in our experiments, instead of using episode-wise bagging, we take the data utilized in each epoch (after injecting the randomness) to train an individual model. By adopting this approach, each variant of the model is fully trained at the expense of increased resource usage. At the same time, it improves the comparability of the method with the proposed approach, while preserving its fundamental principle. The performance of the trained method is compared to that reported in the corresponding paper [6], and no degradation in performance was observed.
### _Evaluation and discussion_
To assess the efficacy of the proposed approach, a comparative analysis of the following four distinct methods is conducted: _(i)_ the base deepConvLSTM model (base), _(ii)_ the deepConvLSTM model with the ensemble method proposed in [6] (ensembleLSTM), _(iii)_ the deepConvLSTM model with the proposed ensemble approach without trained-model selection (randomHAR-all), and _(iv)_ the deepConvLSTM model with the proposed ensemble approach with reinforcement learning model selection (randomHAR-rl). Ten models are trained in both ensemble methods. For the 'ensembleLSTM' method, five of the generated models are utilized for the final prediction.
We summarize the results in Table II. We can see that 'randomHAR-rl' achieves the best performance on all datasets. Among them, by employing a significance test (with the 'scipy' package), 'randomHAR-rl' is shown to significantly outperform the other methods on four out of six datasets, namely DSADS, HAPT, RWHAR and OPPO. It reduces the variance while achieving an improvement in F1 scores. This confirms our hypothesis: strategies to increase the randomness of the trained models (sensor selection) and to improve the quality of the model selection (reinforcement learning) can lead to enhanced performance. When comparing 'ensembleLSTM' and 'randomHAR-all', we find that 'randomHAR-all' does not exhibit any significant advantage in terms of results. This indicates that model selection is imperative in the proposed approach. This observation is reasonable, since an inappropriate sensor selection can severely limit the performance of the corresponding trained model.
To evaluate the effectiveness of the reinforcement learning model selection process, we compare the performance of randomHAR using the "TopK" strategy (Top5 + ss) with that of the randomHAR method using the reinforcement learning strategy (R1 + ss). According to the results in Fig. 2, the reinforcement learning strategy brings performance gains on all datasets, although it should be noted that the improvement it brings on three of the datasets is not statistically significant. Nevertheless, since reinforcement learning can learn the optimal number of models to be selected, the reinforcement learning strategy avoids the need for hyperparameter initialization, which is a challenging problem in the "TopK" strategy.
To demonstrate the generality of the proposed approach, we conduct an experiment wherein the base model in randomHAR is replaced with a CNN model1. The results are summarized in Table III. As shown, the proposed approach leads to improved performance across all datasets, although the overall performance is not as good as that obtained using the ConvLSTM-based model. This finding provides evidence for the generality of the proposed approach and indicates its potential applicability to other base models.
Footnote 1: The architecture of the model can be found in http://github.
Besides, random forests and neural networks are currently widely employed in various fields. An intuitive question is why not simply utilize them to generate the final decision, using the predictions of all the trained models as input. To address this concern, we explore the efficacy of a Multilayer Perceptron (MLP), as a substitute for model selection, to generate the final prediction. However, the results reveal that the performance of the MLP is unstable and inferior to that of the proposed approach, which may be due to over-fitting.
Fig. 2: RandomHAR Performance with and without reinforcement learning model selection strategy.
## V Conclusion and future work
In this paper, we propose a novel ensemble approach for HAR and improve upon the performance of existing HAR ensemble methods. This is achieved by increasing the randomness of individual models in the trained model set and improving the model combination selection strategy. Although the proposed approach yields promising results, there are still many avenues for further research. For example, _(i)_ can we optimize the reward function for the model selection process, e.g., can we accelerate the convergence by subtracting the average reward of the old agent? _(ii)_ Instead of randomly selecting sensors, can meta-features be utilized to enhance the targeted performance of the trained models? _(iii)_ Can the original HAR problem be split into two consecutive sub-problems, and multiple models trained on biased datasets be combined to make predictions?
|
2305.16004
|
The Milstein scheme for singular SDEs with Hölder continuous drift
|
We study the $L^p$ rate of convergence of the Milstein scheme for SDEs when
the drift coefficients possess only H\"older regularity. If the diffusion is
elliptic and sufficiently regular, we obtain rates consistent with the additive
case. The proof relies on regularisation by noise techniques, particularly
stochastic sewing, which in turn requires (at least asymptotically) sharp
estimates on the law of the Milstein scheme, which may be of independent
interest.
|
Máté Gerencsér, Gerald Lampl, Chengcheng Ling
|
2023-05-25T12:45:24Z
|
http://arxiv.org/abs/2305.16004v1
|
# The Milstein scheme for singular SDEs with Holder continuous drift
###### Abstract.
We study the \(L^{p}\) rate of convergence of the Milstein scheme for SDEs when the drift coefficients possess only Holder regularity. If the diffusion is elliptic and sufficiently regular, we obtain rates consistent with the additive case. The proof relies on regularisation by noise techniques, particularly stochastic sewing, which in turn requires (at least asymptotically) sharp estimates on the law of the Milstein scheme, which may be of independent interest.
Mathematics Subject Classification (2020): Primary 60H35, 60H10; Secondary 60H50, 60L90, 35B65.
Keywords: Singular SDEs; Malliavin calculus; strong approximation; Milstein scheme; regularisation by noise; stochastic sewing; Zvonkin's transformation.
## 1. Introduction
The term _regularisation by noise_ classically refers to well-posedness of stochastic differential equations (SDEs)
\[dX_{t}=b(X_{t})\;\mathrm{d}t+\sigma(X_{t})\;\mathrm{d}W_{t},\quad X_{0}=x_{0}, \tag{1.1}\]
beyond the (stochastic version of the) Cauchy-Lipschitz theorem: at the price of some nondegeneracy assumption on \(\sigma\), the classical Lipschitz continuity condition on \(b\) can be dramatically reduced [40, 37, 18, 39]. Recently numerous studies focused on leveraging these regularisation effects in the analysis of approximation of SDEs with irregular coefficients [8, 22, 28, 7, 10, 13, 4]. From the long list of recent works [6, 2, 3, 5, 11, 24, 27, 29, 21, 31, 35, 36, 38, 1, 20] and references therein, let us highlight two features. On the one hand, when \(b\) is merely bounded, then with nondegenerate and sufficiently regular \(\sigma\), the Euler-Maruyama scheme is shown to converge in \(L^{p}\) with rate \(1/2\) in [5]. On the other hand, if the noise is additive (i.e. \(\sigma\) is constant and nondegenerate), then this can be improved: if in addition, \(b\) has regularity \(\alpha\in(0,1)\) in either a Holder or a Sobolev sense (with sufficiently high integrability), \(L^{p}\) rate \((1+\alpha)/2\) is proved [2, 5]. A natural question is whether the rates beyond \(1/2\) can also be achieved in the multiplicative case. As far as weak convergence is concerned, this was affirmatively answered in [12] in the case of Holder \(b\). For strong convergence, however, the rate \(1/2\) is known to be sharp for the Euler-Maruyama scheme, see e.g. [15, 14, 26], and therefore higher order methods are needed to even hope for a superior rate. The goal of this paper is to show that the standard Milstein scheme does achieve the \((1+\alpha)/2\) rate in the full range \(\alpha\in(0,1)\) in the Holder drift case.
The Milstein scheme for (1.1) is defined as
\[dX_{t}^{n}=b(X_{k_{n}(t)}^{n})\,\mathrm{d}t+\Big{(}\sigma(X_{k_{n}(t)}^{n})+ \nabla\sigma\sigma(X_{k_{n}(t)}^{n})(W_{t}-W_{k_{n}(t)})\Big{)}\,\mathrm{d}W_{t },\quad X_{0}^{n}=x_{0}^{n}, \tag{1.2}\]
with \(k_{n}(t)=\frac{\lfloor nt\rfloor}{n}\), \(n\in\mathbb{N}\), \(t\in[0,1]\). This scheme was originally designed by Milstein [30] to produce a \(O(n^{-1})\)-error (in \(L^{p}\)) for SDEs with \(C^{2}\) coefficients, just like the standard Euler scheme for deterministic ODEs. However in the framework of SDEs, as we see in (1.2), such scheme is of second order compared to the Euler scheme. This can pose challenges in implementation, see [33, Section 7.5.2], but since these issues are well studied in the literature, we only focus on the error analysis of the scheme.
In the sequel \((X_{t})_{t\in[0,1]}\) and \((X_{t}^{n})_{t\in[0,1]}\) are \(\mathbb{R}^{d}\)-valued stochastic processes, \(b:\mathbb{R}^{d}\to\mathbb{R}^{d}\), \(\sigma:\mathbb{R}^{d}\to\mathbb{R}^{d\times d_{1}}\), and \(W\) is a \(d_{1}\)-dimensional standard Brownian motion. The dimensions \(d\) and \(d_{1}\) are in principle arbitrary, but the ellipticity condition will imply \(d_{1}\geqslant d\) automatically. The way products of matrices and higher order tensors are understood is always clear from the context, so we often omit indices. To illustrate this (and to make the scheme itself completely precise): the \((i,k)\)-th coordinate of the \(d\times d_{1}\) matrix \(\nabla\sigma\sigma(X_{k_{n}(t)}^{n})(W_{t}-W_{k_{n}(t)})\) is given by
\[\sum_{j=1}^{d}\sum_{\ell=1}^{d_{1}}\partial_{j}\sigma^{ik}\sigma^{j\ell}(X_{k_{n}(t)}^{n})\big{(}W_{t}^{\ell}-W_{k_{n}(t)}^{\ell}\big{)}.\]
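For readers who wish to experiment numerically, a scalar (\(d=d_{1}=1\)) implementation of the scheme (1.2) is sketched below. The particular coefficients, \(b(x)=\operatorname{sign}(x)\sqrt{|x|}\) (an \(\alpha=1/2\) Hölder drift) and \(\sigma(x)=1+0.1\sin(x)\) (smooth and uniformly elliptic), are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def b(x):               # alpha = 1/2 Hoelder drift (illustrative choice)
    return np.sign(x) * np.sqrt(np.abs(x))

def sigma(x):           # smooth, uniformly elliptic diffusion (illustrative choice)
    return 1.0 + 0.1 * np.sin(x)

def dsigma(x):          # derivative of sigma
    return 0.1 * np.cos(x)

def milstein_paths(x0, n, n_paths):
    """Simulate scheme (1.2) on [0,1] with n uniform steps, for d = d_1 = 1."""
    h = 1.0 / n
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n):
        dW = np.sqrt(h) * rng.standard_normal(n_paths)
        # one Milstein step: coefficients frozen at the left grid point;
        # 0.5*sigma'*sigma*(dW^2 - h) is the integrated form of the
        # nabla(sigma)sigma(X_{k_n(t)})(W_t - W_{k_n(t)}) dW_t correction for d = 1
        x = (x + b(x) * h + sigma(x) * dW
             + 0.5 * dsigma(x) * sigma(x) * (dW**2 - h))
    return x

X_end = milstein_paths(x0=1.0, n=2**6, n_paths=10_000)
```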
We now state our assumptions and main result, assuming some standard notation; all of it is defined precisely in the paragraph _Notation_ below. One important notion is the nondegeneracy of the noise (though this can be relaxed on regions where the drift is regular, see [5, Section 1.4]): we say a matrix-valued function \(A:\mathbb{R}^{d}\mapsto\mathbb{R}^{d\times d}\) is _uniformly elliptic_ if there exists a \(\lambda>0\) such that for all \(x,\xi\in\mathbb{R}^{d}\)
\[\lambda|\xi|^{2}\leqslant\langle A(x)\xi,\xi\rangle\leqslant\lambda^{-1}|\xi| ^{2}. \tag{1.3}\]
_Assumption 1.1_.: For some \(\alpha\in(0,1]\), assume
* \(b\in\mathcal{C}^{\alpha}\)
* \(\sigma\sigma^{*}\) is uniformly elliptic and \(\sigma\in\mathcal{C}^{3}\).
It is well-known (see e.g. [37]) that under Assumption 1.1 a unique strong solution to (1.1) exists. Our main result is as follows.
**Theorem 1.2**.: _Let \((X_{t})_{t\in[0,1]}\) and \((X_{t}^{n})_{t\in[0,1]}\) be the solutions to (1.1) and (1.2), respectively, for \(n\in\mathbb{N}\). For any \(p\geqslant 1\), if Assumption 1.1 holds, then for all \(\epsilon>0\) the bound_
\[\big{\|}\sup_{t\in[0,1]}|X_{t}-X_{t}^{n}|\big{\|}_{L_{\alpha}^{p}}\leqslant N |x_{0}-x_{0}^{n}|+Nn^{-\frac{1+\alpha}{2}+\epsilon} \tag{1.4}\]
_holds, where the constant \(N\) depends on \(\|b\|_{\mathcal{C}^{\alpha}}\), \(\|\sigma\|_{\mathcal{C}^{3}},\alpha,p,d,d_{1},\lambda\), \(\epsilon\)._
_Remark 1.3_.: As mentioned above, the rate \(\frac{1+\alpha}{2}\) above agrees with the additive case [2]. For additive noise, however, the same rate is obtained with \(b\) having only Sobolev regularity: \(b\in W^{\alpha,p}\) with \(p>\max(2,d)\)[5, 31]. For the Milstein scheme the treatment of Sobolev drift is
currently beyond the scope of the available density estimates in Section 2; we leave this for future investigations.
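To see the rate of Theorem 1.2 empirically, one can couple a coarse Milstein path to a fine reference path driven by the same Brownian increments and regress the strong error against the step size; using the fine path as a proxy for the exact solution is a common heuristic, not a procedure from the paper. The sketch below does this for the illustrative coefficients of the previous snippet (again our own choices).

```python
import numpy as np

rng = np.random.default_rng(1)
b = lambda x: np.sign(x) * np.sqrt(np.abs(x))      # alpha = 1/2 Hoelder drift
sigma = lambda x: 1.0 + 0.1 * np.sin(x)            # elliptic, smooth diffusion
dsigma = lambda x: 0.1 * np.cos(x)

def step(x, dW, h):
    """One Milstein step of (1.2) for d = d_1 = 1."""
    return (x + b(x) * h + sigma(x) * dW
            + 0.5 * dsigma(x) * sigma(x) * (dW**2 - h))

def strong_error(n_coarse, n_fine=2**12, n_paths=5_000, x0=1.0):
    """L^2 distance at t = 1 between coarse and fine paths on the same noise."""
    ratio, h_f = n_fine // n_coarse, 1.0 / n_fine
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    acc = np.zeros(n_paths)                 # accumulates fine increments per coarse step
    for i in range(n_fine):
        dW = np.sqrt(h_f) * rng.standard_normal(n_paths)
        xf = step(xf, dW, h_f)
        acc += dW
        if (i + 1) % ratio == 0:
            xc = step(xc, acc, ratio * h_f)
            acc[:] = 0.0
    return np.sqrt(np.mean((xf - xc) ** 2))

ns = np.array([2**4, 2**5, 2**6, 2**7])
errs = np.array([strong_error(n) for n in ns])
# empirical rate; Theorem 1.2 predicts roughly (1 + alpha)/2 = 0.75 for alpha = 1/2
rate = -np.polyfit(np.log(ns), np.log(errs), 1)[0]
print(ns, errs, rate)
```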
### Related works
The first analysis of a higher order scheme in the case of irregular drift is, to the best of our knowledge, [27] (recently extended to SDEs with finite activity jumps [35]). Therein the scalar \(d=d_{1}=1\) case is considered, and the irregularity of the drift is in the form of jump discontinuities. This corresponds to \(\alpha=1/2,p=2\) in the context of Remark 1.3, and therefore the strong rate \(3/4\) proven in [27] is consistent with \(\frac{1+\alpha}{2}\). The regularity assumption on \(\sigma\) is weaker in [27] than ours, and the diffusion may also degenerate away from the irregularities. However, the scheme is less direct and involves the knowledge of a rather nontrivial function of the coefficients (denoted by \(G^{-1}\) therein), which, if not available, may also need to be approximated, introducing further errors. Nevertheless, an interesting fact is that for this class of coefficients the rate \(3/4\) is sharp among all approximation methods based on evaluations of \(W\) on a deterministic grid [29], but can be improved by using an adaptive algorithm [38].
### Main idea of the proof
Let us briefly outline the strategy of the proof. The aim is to estimate the difference \(X_{t}-X_{t}^{n}\) which has the representation
\[X_{t}-X_{t}^{n} =x_{0}-x_{0}^{n}+\int_{0}^{t}[b(X_{r})-b(X_{k_{n}(r)}^{n})]\;\mathrm{ d}r\] \[\quad+\int_{0}^{t}[\sigma(X_{r})-\sigma(X_{k_{n}(r)}^{n})-(\nabla \sigma\sigma)(X_{k_{n}(r)}^{n})(W_{r}-W_{k_{n}(r)})]\;\mathrm{d}W_{r}=:I_{0}+I _{1}+I_{2}.\]
What is perhaps not immediate at first sight is that the novel difficulties with the Milstein scheme do not lie with \(I_{2}\). Indeed, writing
\[I_{2}= \int_{0}^{t}[\sigma(X_{r})-\sigma(X_{r}^{n})]\;\mathrm{d}W_{r}\] \[\quad+\int_{0}^{t}[\sigma(X_{r}^{n})-\sigma(X_{k_{n}(r)}^{n})-( \nabla\sigma\sigma)(X_{k_{n}(r)}^{n})(W_{r}-W_{k_{n}(r)})]\;\mathrm{d}W_{r}=:I_ {21}+I_{22},\]
the Milstein scheme is designed precisely so that \(I_{22}\) is of order \(n^{-1}\). The term \(I_{21}\) can be treated easily by an appropriate version of Gronwall's lemma. As for \(I_{1}\), we write
\[I_{1}= \int_{0}^{t}[b(X_{r})-b(X_{r}^{n})]\;\mathrm{d}r+\int_{0}^{t}[b(X_{r}^{n})-b(X_{k_{n}(r)}^{n})]\;\mathrm{d}r=:I_{11}+I_{12}.\]
The treatment of \(I_{11}\) relies on a version of _Zvonkin's transformation_[40, 37], more precisely in the form of an Ito-Tanaka trick. This transformation gives rise to further quantities similar to \(I_{12}\); they (and \(I_{12}\) itself) are handled via _stochastic sewing_[23]. An important ingredient for stochastic sewing is the behavior of the law of the process, and the required estimates on the law of the Milstein approximation \(X^{n}\) are significantly more challenging than in the case of the Euler-Maruyama scheme (in fact, some "usual" bounds like two-sided heat kernel estimates do not even hold). These estimates (see Section 2), which can be of independent interest, are derived via a Malliavin calculus toolbox.
Structure of the paper. Based on the above, the article is organised as follows: we start with Section 2.1 on Malliavin calculus, which gives general criteria for deriving density estimates for a process in Ito integral form, followed by Section 2.2, which gives the estimates on the law of the Milstein scheme and some auxiliary processes by applying the theorems from Section 2.1. In Section 3 we derive bounds on additive functionals of the Milstein scheme via stochastic sewing. In Section 4 we combine these bounds with Zvonkin's transformation to conclude the proof of the main theorem. In Appendix A we include some estimates on PDEs which are used in the proof in Section 4.
Notation. For \(k\in\mathbb{N}\), \(f:\mathbb{R}^{d}\mapsto\mathbb{R}\), denote \(\partial_{k}f(x):=\frac{\partial f(x)}{\partial x_{k}}\) for \(x\in\mathbb{R}^{d}\) and \(\nabla f(x):=(\partial_{i}f(x))_{1\leqslant i\leqslant d}\); the derivatives are understood in the weak sense. For vector-valued \(f\) we use the same notation, and \(\nabla^{k}f\) is defined via \(\nabla(\nabla^{k-1}f)\) iteratively. For a multi-index \(k=(k_{1},\ldots,k_{d})\in\mathbb{N}^{d}\), denote \(\partial^{k}f(x):=\frac{\partial^{|k|}f(x)}{\partial x_{1}^{k_{1}}\cdots\partial x_{d}^{k_{d}}}\). If \(k=(0,\ldots,0)\), we use the convention \(\partial^{k}f=f\). We denote by \(C_{0}^{\infty}\) (\(C_{p}^{\infty}\), resp.) the set of all infinitely continuously differentiable functions that, along with all of their partial derivatives, are compactly supported (of polynomial growth, resp.).
For \(\alpha\in(0,1]\), we set \(\mathcal{C}^{\alpha}(\mathbb{R}^{d})\) to be the space of continuous functions such that
\[\|f\|_{\mathcal{C}^{\alpha}}:=\sup_{x,y\in\mathbb{R}^{d},x\neq y}\frac{|f(x)-f (y)|}{|x-y|^{\alpha}}+\sup_{x\in\mathbb{R}^{d}}|f(x)|<\infty.\]
Here, and often below, we write \(\mathcal{C}^{\alpha}\) instead of \(\mathcal{C}^{\alpha}(\mathbb{R}^{d})\) for simplicity. For \(\alpha\in(0,\infty)\), we define \(\mathcal{C}^{\alpha}(\mathbb{R}^{d})\) the space of all functions \(f\) defined on \(\mathbb{R}^{d}\) having bounded derivatives \(\partial^{k}f\) for multi-indices \(k\in\mathbb{N}^{d}\) with \(|k|\leqslant\alpha\) so that
\[\|f\|_{\mathcal{C}^{\alpha}}:=\sum_{|k|<\alpha}\sup_{x\in\mathbb{R}^{d}}|\partial^{k}f(x)|+\sum_{\alpha-1\leqslant|k|<\alpha}\|\partial^{k}f\|_{\mathcal{C}^{\alpha-|k|}}<\infty.\]
Note that the \(\mathcal{C}^{\alpha}\)-norm always includes the supremum of the function. We also denote by \(\mathcal{C}^{0}(\mathbb{R}^{d})\) the space of bounded measurable functions with the supremum norm. Note that the functions in \(\mathcal{C}^{0}\) need not be continuous. For convenience and to emphasise the distinction of \(\mathcal{C}^{0}\) from continuous functions, we also use the notation \(\mathbb{B}\) for the space of bounded measurable functions endowed with the supremum norm. For \(\alpha<0\), we denote by \(\mathcal{C}^{\alpha}(\mathbb{R}^{d})\) the space of all Schwartz distributions such that
\[\|f\|_{\mathcal{C}^{\alpha}}:=\sup_{y\in(0,1]}y^{-\frac{\alpha}{2}}\|P_{y}f\|_{\mathcal{C}^{0}}<\infty,\]
where \(P_{t}f:=p_{t}*f\) and \(p_{t}(x):=\frac{1}{\sqrt{2\pi t}}e^{-\frac{|x|^{2}}{2t}}\).
On finite dimensional vector spaces we always use the Euclidean norm.
In proofs, the notation \(a\lesssim b\) abbreviates the existence of \(C>0\) such that \(a\leqslant Cb\), where \(C\) depends only on the parameters claimed in the corresponding statement. If the constant depends on any further parameter \(c\), we incorporate it in the notation by writing \(a\lesssim_{c}b\).
## 2. Estimates on the law of \(X^{n}\) and related processes
### Preliminaries of Malliavin calculus
Let \(H=L^{2}([0,1],\mathbb{R}^{d_{1}})\) with inner product \(\langle\cdot,\cdot\rangle_{H}\) and for \(h\in H\) let us use the shorthand \(W(h)=\int_{0}^{1}h_{t}\,dW_{t}\). By \(\mathcal{S}\) we denote the class of random variables \(X\) for which there exists an \(n\in\mathbb{N}\), vectors \(h_{1},\ldots,h_{n}\in H\) and a function \(f\in C_{p}^{\infty}(\mathbb{R}^{d})\) such that
\[X=f(W(h_{1}),\ldots,W(h_{n})).\]
We call the elements of \(\mathcal{S}\)_smooth random variables_. More generally, let \(V\) be a Hilbert space and denote by \(\mathcal{S}_{V}\) the space of \(V\)-valued smooth random variables of the form
\[X=\sum_{j=1}^{n}X_{j}v_{j},\]
where \(v_{j}\in V,\,X_{j}\in\mathcal{S}\). Without loss of generality, we may assume that there exist \(h_{1},\ldots,h_{m}\in H\) and functions \(f_{1},\ldots,f_{n}\in C_{p}^{\infty}(\mathbb{R}^{d})\) such that \(X_{j}=f_{j}(W(h_{1}),\ldots,W(h_{m}))\). The _Malliavin derivative_ of such a random variable \(X\in\mathcal{S}_{V}\) is the \(H\otimes V\)-valued variable \(DX\), defined by
\[DX=\sum_{j=1}^{n}\sum_{i=1}^{m}\partial_{i}f_{j}(W(h_{1}),\ldots,W(h_{m}))h_{i }\otimes v_{j}. \tag{2.1}\]
In the sequel any vector space \(U\) is identified with \(U\otimes\mathbb{R}\); in particular, if \(X\in\mathcal{S}\), then \(DX\in\mathcal{S}_{H}\). The \(k\)_-th Malliavin derivative_ can be defined recursively by the above. We then have that \(D^{k}X\) is an \(H^{\otimes k}\otimes V\)-valued random variable. From this point on we will only take \(V\) to be a finite dimensional vector space, and so we drop it from the notation whenever it does not cause confusion (and one can simply understand every operation componentwise).
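As a simple illustration of (2.1) (a standard example, added here only for orientation): taking \(n=1\), \(f(x)=x\) and \(h_{1}=\mathbbm{1}_{[0,t]}e_{j}\in H\) gives \(X=W(h_{1})=W_{t}^{j}\), and (2.1) reduces to \(DX=h_{1}\), i.e.
\[D_{s}W_{t}^{j}=\mathbbm{1}_{[0,t]}(s)\,e_{j},\qquad s\in[0,1],\]
which is the prototype of the derivative computations, such as (2.9) below, used throughout Section 2.2.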
Recall from [32, Chapter 1] that for \(p\in[1,\infty)\) and \(k\geqslant 1\), the operator \(D^{k}:\mathcal{S}\subset L^{p}(\Omega)\to L^{p}(\Omega;H^{\otimes k})\) is closable. For \(p\geqslant 1\) and \(k\geqslant 1\) we define the seminorms
\[\|X\|_{k,p}=\left(\mathbb{E}\|X\|^{p}+\sum_{i=1}^{k}\mathbb{E}\|D^{i}X\|_{H^{ \otimes i}}^{p}\right)^{\frac{1}{p}} \tag{2.2}\]
We define \(\mathbb{D}^{k,p}\) as the completion of the space \(\mathcal{S}\) in \(L^{p}(\Omega)\) with respect to \(\|\cdot\|_{k,p}\). Furthermore, we denote for \(k\geqslant 0\) the spaces
\[\mathbb{D}^{k,\infty}:=\bigcap_{p\geqslant 1}\mathbb{D}^{k,p},\qquad\mathbb{D}^ {\infty}:=\bigcap_{k\geqslant 1}\mathbb{D}^{k,\infty},\]
where the latter is a metric space that is complete and countably normed.
The adjoint of \(D\) is denoted by \(\delta\): the domain of \(\delta\) is those elements \(u\in L^{2}(\Omega,H)\) such that there exists \(Y\in L^{2}(\Omega)\) such that \(\mathbb{E}\langle u,DX\rangle_{H}=\mathbb{E}(YX)\) for every \(X\in\mathcal{S}\). We then write \(Y=\delta u\). If \(u\) is an adapted (to the filtration generated by \(W\)) process such that \(\int_{0}^{1}\mathbb{E}|u_{t}|^{2}\,\mathrm{d}t<\infty\), then
\(\delta u\) is its Ito integral \(\delta u=\int_{0}^{1}u_{t}\,\mathrm{d}W_{t}\). If furthermore \(u_{t}\in\mathbbm{D}^{1,2}\) and \(\int_{0}^{1}\mathbbm{E}|D_{t}u_{s}|^{2}\,\mathrm{d}s<\infty\) then we have the following identity (see [9, Proposition 3.8])
\[D_{t}(\delta(u))=u_{t}+\delta(D_{t}u)=u_{t}+\int_{0}^{1}D_{t}u_{s}\,\mathrm{d }W_{s}. \tag{2.3}\]
The _Malliavin matrix_ is defined as
\[\mathcal{M}^{ij}:=\langle DX^{i},DX^{j}\rangle_{H},\quad 1\leqslant i,j\leqslant d\]
whenever it makes sense. We say that a random vector \(X=(X^{1},\ldots,X^{d})\) whose components are in \(\mathbbm{D}^{1,2}\) is _nondegenerate_ if its Malliavin matrix \(\mathcal{M}\) is a.s. invertible and \((\det\mathcal{M})^{-1}\in L^{p}(\Omega)\) for all \(p\geqslant 1\).
**Theorem 2.1**.: _[_32_, Proposition 2.1.4]_ _Let \(X=(X^{1},\ldots,X^{d})\) be a nondegenerate random vector and fix \(k\geqslant 1\). Suppose that \(X^{j}\in\mathbbm{D}^{k+1,\infty}\) for \(j=1,\ldots,d\). Then for any \(Y\in\mathbbm{D}^{k,\infty}\) and any multi-index \(\alpha\in\mathbbm{N}^{d}\) such that \(|\alpha|=k\) there exists a random variable \(Z_{\alpha}(X,Y)\) such that for every \(\varphi\in C_{p}^{\infty}(\mathbb{R}^{d})\) we have_
\[\mathbbm{E}[(\partial_{\alpha}\varphi)(X)Y]=\mathbbm{E}[\varphi(X)Z_{\alpha}]. \tag{2.4}\]
_The random variables \(Z_{\alpha}\) are given by recursion_
\[Z_{\epsilon_{j}}(X,Y) =\delta\Big{(}\sum_{l=1}^{d}(Y\mathcal{M}^{-1})^{jl}DX^{l}\Big{)},\] \[Z_{\alpha+\epsilon_{j}}(X,Y) =Z_{\epsilon_{j}}(X,Z_{\alpha}(X,Y)),\]
_and they satisfy the bounds, for \(1\leqslant p<q<\infty\) with \(\frac{1}{p}=\frac{1}{q}+\frac{1}{r}\),_
\[\|Z_{\alpha}\|_{L_{\infty}^{p}}\leqslant N\,\|\mathcal{M}^{-1}DX\|_{k,2^{k-1} }^{k}\|Y\|_{k,q},\]
_where the constant \(N\) depends on \(p,q,d,k\)._
The following result is an extension of [32, Proposition 2.1.3] and [19, Theorem 3.5, page 300].
**Theorem 2.2**.: _Let \(t\in(0,1]\). Let \((u_{s})_{s\in[0,1]}\) be a \(\mathbb{R}^{d\times d_{1}}\)-valued adapted process. Suppose that_
1. \(\mathbbm{E}\left(\int_{0}^{1}\|u_{s}\|^{2}\,\,\mathrm{d}s\right)<\infty\)_,_ \(u_{s}\in\mathbbm{D}^{1,2}\) _for all_ \(s\in[0,t]\)_, and for some_ \(p\in[2,\infty)\) _it holds that_ \[\mu:=\sup_{s,r\in[0,1]}\mathbbm{E}\left(\|D_{s}u_{r}\|^{p}\right)<\infty,\] (2.5)
2. _there exists a constant_ \(\lambda_{*}>0\) _such that_ \(u_{s}u_{s}^{*}\geqslant\lambda_{*}I\) _for all_ \(s\in[0,t]\)_._
_Set \(X_{t}=\int_{0}^{t}u_{s}\,\,\mathrm{d}W_{s}\) and denote by \(\mathcal{M}_{t}\) the Malliavin matrix of \(X_{t}\). Then for \(\gamma\in(0,\frac{p}{2d})\) we have_
\[\mathbbm{E}\left((\det\mathcal{M}_{t})^{-\gamma}\right)\leqslant N\,t^{-\gamma d} \tag{2.6}\]
_where the constant \(N\) depends only on \(\lambda_{*},d,d_{1},\mu,\gamma\) and \(p\)._
Proof.: Note that for any \(\xi\in\mathbb{R}^{d}\),
\[\xi^{*}\,\mathcal{M}_{t}\xi=\int_{0}^{t}\|D_{s}X_{t}\xi\|^{2}\,\mathrm{d}s.\]
We denote by \(\lambda_{t}\) the smallest eigenvalue of \(\mathcal{M}_{t}\). It is given by
\[\lambda_{t}\coloneqq\inf_{\|\xi\|=1}\xi^{*}\,\mathcal{M}_{t}\xi=\inf_{\|\xi\|= 1}\int_{0}^{t}\|D_{s}X_{t}\xi\|^{2}\,\mathrm{d}s. \tag{2.7}\]
We notice that because of \(\det\mathcal{M}_{t}\geqslant\lambda_{t}^{d}\) we have
\[\mathds{E}((\det\mathcal{M}_{t})^{-\gamma})\leqslant\mathds{E}(\lambda_{t}^{- d\gamma}). \tag{2.8}\]
We therefore estimate this last quantity. Note that (2.3) yields for \(s<t\)
\[D_{s}^{k}X_{t}^{m}=u_{s}^{k,m}+\sum_{l=1}^{d_{1}}\int_{s}^{t}D_{s}^{k}(u_{r}^{ m,l})\,\,\mathrm{d}W_{r}^{l}. \tag{2.9}\]
Using the elementary inequality \((a+b)^{2}\geqslant\frac{1}{2}a^{2}-b^{2}\) we obtain, for any \(\xi\in\mathbb{R}^{d}\) with \(\|\xi\|=1\),
\[\|D_{s}X_{t}\xi\|^{2}\geqslant\frac{1}{2}\|u_{s}\xi\|^{2}-\left\|\left(\int_{ s}^{t}D_{s}u_{r}\,\,\mathrm{d}W_{r}\right)\xi\right\|^{2}\geqslant\frac{1}{2} \lambda_{*}-\left\|\int_{s}^{t}D_{s}u_{r}\,\,\mathrm{d}W_{r}\right\|^{2}. \tag{2.10}\]
The lower bound for the first term is provided by the ellipticity assumption in \((ii)\). After plugging (2.10) into (2.7) we arrive at the following estimates for any \(h\in[0,1]\)
\[\lambda_{t}\geqslant\int_{t(1-h)}^{t}\frac{1}{2}\lambda_{*}-\left\|\int_{s}^{ t}D_{s}u_{r}\,\,\mathrm{d}W_{r}\right\|^{2}\,\mathrm{d}s=:\frac{1}{2}th\lambda_{*}-I _{h}(t). \tag{2.11}\]
Going back to the quantity we want to estimate, we write, for any \(a>0\),
\[\mathds{E}(\lambda_{t}^{-\gamma d}) =\int_{0}^{\infty}\mathds{P}(\lambda_{t}^{-\gamma d}>y)\,\, \mathrm{d}y\lesssim\int_{0}^{\infty}y^{\gamma d-1}\mathds{P}(\lambda_{t}^{-1} >y)\,\,\mathrm{d}y\] \[\lesssim a^{\gamma d}+\int_{a}^{\infty}y^{\gamma d-1}\mathds{P} \big{(}\lambda_{t}<1/y\big{)}\,\mathrm{d}y. \tag{2.12}\]
We now choose the parameters as \(a\coloneqq\frac{4}{t\lambda_{*}}\) and for any \(y\geqslant a\), \(h=\frac{4}{t\lambda_{*}y}\). Note that a \(y\geqslant a\) indeed implies \(h\leqslant 1\). From (2.11) we see
\[\mathds{P}\big{(}\lambda_{t}<1/y\big{)}\leqslant\mathds{P}\big{(}I_{h}(t) \geqslant 1/(2y)\big{)}\lesssim y^{\frac{p}{2}}\mathds{E}(|I_{h}(t)|^{\frac{p}{2}}). \tag{2.13}\]
Furthermore, applying the Jensen and Burkholder-Davis-Gundy (BDG) inequalities yields
\[\mathds{E}(|I_{h}(t)|^{\frac{p}{2}}) \lesssim(th)^{\frac{p}{2}-1}\int_{t(1-h)}^{t}\mathds{E}\Big{|} \int_{s}^{t}D_{s}u_{r}\,\,\mathrm{d}W_{r}\Big{|}^{p}\,\,\mathrm{d}s\] \[\lesssim(th)^{\frac{p}{2}-1}\int_{t(1-h)}^{t}\mathds{E}\Big{(} \int_{s}^{t}\|D_{s}u_{r}\|^{2}\,\,\mathrm{d}r\Big{)}^{\frac{p}{2}}\,\,\mathrm{d}s\]
\[\lesssim(th)^{\frac{p}{2}-1}\int_{t(1-h)}^{t}(t-s)^{\frac{p}{2}-1} \int_{s}^{t}\mathrm{d}r\sup_{r\in[0,1]}\mathds{E}\|D_{s}(u_{r})\|^{p}\ \mathrm{d}s\] \[\lesssim(th)^{p}\sup_{r,s\in[0,1]}\mathds{E}\|D_{s}(u_{r})\|^{p}. \tag{2.14}\]
Since \(th\approx y^{-1}\), we incorporate (2.14) and (2.13) into (2.12) and obtain for \(\gamma\in(0,\frac{p}{2d})\)
\[\mathds{E}(\lambda_{t}^{-\gamma d}) \lesssim t^{-\gamma d}+\int_{\frac{4}{t\lambda_{*}}}^{\infty} \mathds{E}(|I_{h}(t)|^{\frac{p}{2}})\,y^{\gamma d-1+\frac{p}{2}}\,\mathrm{d}y\] \[\lesssim t^{-\gamma d}+\int_{\frac{4}{t\lambda_{*}}}^{\infty}y^{ \gamma d-1-\frac{p}{2}}\,\mathrm{d}y\lesssim t^{-\gamma d}. \tag{2.15}\]
Returning with the above to (2.8) finishes the proof.
_Remark 2.3_.: We will later need a slight extension of Theorem 2.2, to accommodate processes of the form \(X_{t}^{\theta}:=\int_{0}^{t_{1}}u_{s}\,\mathrm{d}W_{s}+\theta\int_{t_{1}}^{t}u _{s}\,\mathrm{d}W_{s}\), \(0<t_{1}\leqslant t\), \(\theta\in[0,1]\). Bounding the Malliavin matrix of \(X\) follows the same lines as above, for the sake of completeness we provide the argument.
\[\|D_{s}X_{t}\xi\|^{2}\] \[= \|(D_{s}X_{t}\mathds{1}_{s\in[0,t_{1}]}+D_{s}X_{t}\mathds{1}_{s \in(t_{1},t]})\xi\|^{2}\] \[\geqslant \Big{(}\frac{1}{2}\|u_{s}\xi\|^{2}-\Big{\|}\Big{(}\int_{s}^{t_{1} }D_{s}u_{r}\,\mathrm{d}W_{r}\Big{)}\xi\Big{\|}^{2}\Big{)}\mathds{1}_{s\in(0,t_ {1})}+\Big{(}\frac{\theta^{2}}{2}\|u_{s}\xi\|^{2}-\Big{\|}\Big{(}\theta\int_{s }^{t}D_{s}u_{r}\,\mathrm{d}W_{r}\Big{)}\xi\Big{\|}^{2}\Big{)}\mathds{1}_{s\in[ t_{1},t]}\] \[\geqslant \Big{(}\frac{1}{2}\lambda_{*}-\Big{\|}\int_{s}^{t_{1}}D_{s}u_{r} \,\mathrm{d}W_{r}\Big{\|}^{2}\Big{)}\mathds{1}_{s\in(0,t_{1})}+\theta^{2} \Big{(}\frac{1}{2}\lambda_{*}-\Big{\|}\int_{s}^{t}D_{s}u_{r}\,\mathrm{d}W_{r} \Big{\|}^{2}\Big{)}\mathds{1}_{s\in[t_{1},t]}\]
which yields for any \(h\in[0,1]\)
\[\lambda_{t}\geqslant \int_{t_{1}(1-h)}^{t_{1}}\Big{(}\frac{1}{2}\lambda_{*}-\Big{\|} \int_{s}^{t_{1}}D_{s}u_{r}\,\mathrm{d}W_{r}\Big{\|}^{2}\Big{)}\mathds{1}_{s\in (0,t_{1})}\ \mathrm{d}r\] \[+\int_{t_{1}}^{t_{1}+(t-t_{1})h}\theta^{2}\Big{(}\frac{1}{2} \lambda_{*}-\Big{\|}\int_{s}^{t}D_{s}u_{r}\,\mathrm{d}W_{r}\Big{\|}^{2}\Big{)} \mathds{1}_{s\in(t_{1},t]}\ \mathrm{d}r\] \[\geqslant \frac{1}{2}\lambda_{*}(t_{1}h+\theta^{2}(t-t_{1})h)-\int_{t_{1}(1 -h)}^{t_{1}}\Big{\|}\int_{s}^{t_{1}}D_{s}u_{r}\,\mathrm{d}W_{r}\Big{\|}^{2} \mathds{1}_{s\in(0,t_{1})}\ \mathrm{d}r\] \[-\int_{t_{1}}^{t_{1}+(t-t_{1})h}\theta^{2}\Big{\|}\int_{s}^{t}D_{s }u_{r}\,\mathrm{d}W_{r}\Big{\|}^{2}\mathds{1}_{s\in(t_{1},t]}\ \mathrm{d}r\] \[= :\frac{1}{2}\lambda_{*}(t_{1}h+\theta^{2}(t-t_{1})h)-I_{1}(t_{1} )-I_{2}^{\theta}(t)=:\frac{1}{2}\lambda_{*}(t_{1}h+\theta^{2}(t-t_{1})h)-I_{h} ^{\theta}(t).\]
Next, we use (2.12), and this time we take \(a:=\frac{4}{t_{1}\lambda_{*}+\theta^{2}(t-t_{1})\lambda_{*}}\) and for any \(y\geqslant a,h=\frac{4}{y(t_{1}\lambda_{*}+\theta^{2}(t-t_{1})\lambda_{*})}\leqslant 1\). Similarly to (2.13), we have
\[\mathds{P}\big{(}\lambda_{t}<1/y\big{)}\leqslant\mathds{P}\big{(}I_{h}^{ \theta}(t)\geqslant 1/(2y)\big{)}\lesssim y^{\frac{p}{2}}\mathds{E}(|I_{h}^{ \theta}(t)|^{\frac{p}{2}}).\]
Similarly to (2.14) we have
\[\mathds{E}(|I_{1}(t_{1})|^{\frac{p}{2}})\lesssim(t_{1}h)^{p}\sup_{r,s \in[0,1]}\mathds{E}\|D_{s}(u_{r})\|^{p}\lesssim y^{-p},\] \[\mathds{E}(|I_{2}(t)|^{\frac{p}{2}})\lesssim\theta^{p}((t-t_{1})h )^{p}\sup_{r,s\in[0,1]}\mathds{E}\|D_{s}(u_{r})\|^{p}\lesssim y^{-p}.\]
Finally, similarly to (2.15), we get for any \(\gamma\in(0,\frac{p}{2d})\)
\[\mathds{E}(\lambda_{t}^{-\gamma d})\lesssim(t_{1}+\theta^{2}(t-t_{1}))^{- \gamma d}.\]
It shows that instead of (2.6), the Malliavin matrix \(\mathcal{M}_{t}^{\theta}\) of \(X_{t}^{\theta}\) satisfies the following bound:
\[\mathds{E}\left((\det\mathcal{M}_{t}^{\theta})^{-\gamma}\right)\lesssim N \left(t_{1}+\theta^{2}(t-t_{1})\right)^{-\gamma d}. \tag{2.16}\]
Define for \(j\in\mathbb{N}\), \(p\geqslant 1\), and random variables \(X\), the quantity
\[\mathcal{D}(j,p)(X)\coloneqq\operatorname*{ess\,sup}_{s_{1},\ldots,s_{j}\in[0,1]}\mathds{E}\|D_{s_{1},\ldots,s_{j}}X\|^{p}, \tag{2.17}\]
whenever it is finite. Define the following class of processes
\[\mathfrak{D}^{k}=\{(X_{t})_{t\in[0,1]} : \mathcal{D}(j,p)(X_{t})<\infty\ \forall 1\leqslant j\leqslant k,p \geqslant 2,t\in[0,1]\] \[\text{and}\ D_{s}X_{t}=0\ \forall s>t\}.\]
**Proposition 2.4**.: _Let \(k\in\mathbb{N}\) and \((X_{t})_{t\in[0,1]}\in\mathfrak{D}^{k+1}\). Then for all \(p\geqslant 2\), \(t\in(0,1]\), one has_
\[\mathds{E}\|D^{k+1}X_{t}\|_{H^{\otimes(k+1)}}^{p} \lesssim\mathcal{D}(k+1,p)(X_{t})\,t^{\frac{p}{2}(k+1)}, \tag{2.18}\] \[\mathds{E}\|D^{k}\mathcal{M}_{t}\|_{H^{\otimes k}}^{p} \lesssim N\,t^{\frac{p}{2}(k+2)}, \tag{2.19}\]
_where \(N\) depends only on \(k,d\), and finitely many \(\mathcal{D}(j,r)(X_{t})\) with \(j=1,\ldots,k+1\) and \(r\geqslant 2\)._
_Remark 2.5_.: For the above statement (and several below) \(X_{t}\) does not actually need to be a process, the statement holds with any random variable \(X\) satisfying the condition \(D_{s}X=0\), \(s>t\), for some fixed \(t\). We choose to formulate the statements like this to stay closer to the standard literature, and since in their applications we will use them with processes.
Proof.: We first note that because of our assumption, we have \(D_{s_{1},\ldots,s_{k+1}}X_{t}=0\) if \(s_{1}\vee\cdots\lor s_{k+1}>t\). Applying Jensen's inequality we then obtain
\[\mathds{E}\|D^{k+1}X_{t}\|_{H^{\otimes k+1}}^{p} =\mathds{E}\Big{(}\int_{0}^{t}\cdots\int_{0}^{t}|D_{s_{1},\ldots,s_{k+1}}X_{t}|^{2}\,\mathrm{d}s_{1}\ldots\,\mathrm{d}s_{k+1}\Big{)}^{\frac{p} {2}}\] \[\leqslant t^{(\frac{p}{2}-1)(k+1)}\,\int_{0}^{t}\cdots\int_{0}^{t} \mathds{E}\|D_{s_{1},\ldots,s_{k+1}}X_{t}\|^{p}\,\mathrm{d}s_{1}\ldots\, \mathrm{d}s_{k+1},\]
from which the first bound follows immediately.
For the second bound note that for any \(m,q\in\{1,\ldots,d\}\), by applying Jensen's inequality twice we get
\[\mathds{E}\|D^{k}\mathcal{M}_{t}^{mq}\|_{H^{\otimes k}}^{p}\] \[=\mathds{E}\Big{(}\int_{0}^{t}\cdots\int_{0}^{t}\Big{|}D_{s_{1},\ldots,s_{k}}\int_{0}^{t}\langle D_{s}X_{t}^{m},D_{s}X_{t}^{q}\rangle_{\mathds{R}^{d_{1}}}\,\mathrm{d}s\Big{|}^{2}\,\mathrm{d}s_{1}\ldots\,\mathrm{d}s_{k}\Big{)}^{\frac{p}{2}}\] \[\leqslant\mathds{E}\Big{(}\int_{0}^{t}\cdots\int_{0}^{t}t\int_{0}^{t}\Big{|}D_{s_{1},\ldots,s_{k}}\langle D_{s}X_{t}^{m},D_{s}X_{t}^{q}\rangle\Big{|}^{2}\,\mathrm{d}s\,\mathrm{d}s_{1}\ldots\,\mathrm{d}s_{k}\Big{)}^{\frac{p}{2}}\] \[\lesssim t^{\frac{p}{2}(k+2)}\sup_{s,s_{1},\ldots,s_{k}\in(0,t)}\mathds{E}\big{|}D_{s_{1},\ldots,s_{k}}\langle D_{s}X_{t}^{m},D_{s}X_{t}^{q}\rangle\big{|}^{p}.\]
We can then use the following Leibniz rule:
\[D_{s_{1},\ldots,s_{k}}^{k}(FG)=\sum_{I\subset\{s_{1},\ldots,s_{k}\}}D_{I}^{|I| }(F)\,\,D_{I^{c}}^{k-|I|}(G),\quad F,G\in\mathds{D}^{k,p}\]
for any \(k\geqslant 1\), \(p\geqslant 2\). Applying this with \(F=D_{s}X_{t}^{m}\) and \(G=D_{s}X_{t}^{q}\) and subsequently using Holder's inequality leads to (2.19).
**Proposition 2.6**.: _Let \(k\in\mathbb{N}\) and let \((X_{t})_{t\in[0,1]}\in\mathfrak{D}^{k+1}\) be nondegenerate. Then for all \(p\geqslant 2\), \(t\in(0,1]\), one has_
\[\mathds{E}\|D^{k}\mathcal{M}_{t}^{-1}\|_{H^{\otimes k}}^{p}\leqslant N\,t^{\frac{p}{2}k}\,\sum_{j=1}^{k}[\mathds{E}|\det\mathcal{M}_{t}|^{-2(1+2j)p}]^{\frac{j+1}{2(1+2j)}}\,t^{p(d(j+1)-1)} \tag{2.20}\]
_where \(N\) depends only on \(k\), \(d\), and finitely many \(\mathcal{D}(j,r)(X_{t})\) with \(j=1,\ldots,k+1\) and \(r\geqslant 2\)._
Proof.: Because of [32, Lemma 2.1.6] we have
\[D\mathcal{M}_{t}^{-1}=-\mathcal{M}_{t}^{-1}(D\mathcal{M}_{t})\mathcal{M}_{t}^ {-1}.\]
For higher order derivatives it follows by induction that there are constants \(C(k,j,c_{1},\ldots,c_{j})\) such that
\[D^{k}\mathcal{M}_{t}^{-1}=\sum_{j=1}^{k}\sum_{\begin{subarray}{c}1\leqslant c _{1},\ldots,c_{j}\leqslant k\\ c_{1}+\cdots+c_{j}=k\end{subarray}}C(k,j,c_{1},\ldots,c_{j})\left[\mathcal{M }_{t}^{-1}\prod_{i=1}^{j}\big{(}(D^{c_{i}}\mathcal{M}_{t})\mathcal{M}_{t}^{-1} \big{)}\right].\]
By the above, (2.19) and Holder's inequality, we deduce
\[\mathds{E}\|D^{k}\mathcal{M}_{t}^{-1}\|_{H^{\otimes k}}^{p} \lesssim\sum_{j=1}^{k}[\mathds{E}\|\mathcal{M}_{t}^{-1}\|^{p(1+2j)}]^{\frac{j+1}{1+2j}}\sum_{\begin{subarray}{c}1\leqslant c_{1},\ldots,c_{j}\leqslant k\\ c_{1}+\cdots+c_{j}=k\end{subarray}}\prod_{i=1}^{j}[\mathds{E}\|D^{c_{i}}\mathcal{M}_{t}\|_{H^{\otimes c_{i}}}^{p(1+2j)}]^{\frac{1}{1+2j}}\] \[\lesssim\sum_{j=1}^{k}[\mathds{E}\|\mathcal{M}_{t}^{-1}\|^{p(1+2j)}]^{\frac{j+1}{1+2j}}\sum_{\begin{subarray}{c}1\leqslant c_{1},\ldots,c_{j}\leqslant k\\ c_{1}+\cdots+c_{j}=k\end{subarray}}t^{\frac{p}{2}\sum_{i=1}^{j}(c_{i}+2)}\]
\[\lesssim\sum_{j=1}^{k}[\mathds{E}\|\mathcal{M}_{t}^{-1}\|^{p(1+2j)} ]^{\frac{j+1}{1+2j}}\,t^{\frac{p}{2}(k+2j)} \tag{2.21}\]
Recall that there is a constant \(C=C(d)\) such that for a non-degenerate \(d\times d\) matrix \(A\), \(\|A^{-1}\|\leqslant C\,|\det(A)|^{-1}\|A\|^{d-1}\). Therefore, we have
\[\mathds{E}\|\mathcal{M}_{t}^{-1}\|^{p(1+2j)} \leqslant\,[\mathds{E}|\det\mathcal{M}_{t}|^{-2(1+2j)p}]^{\frac{1 }{2}}\,[\mathds{E}\|\mathcal{M}_{t}\|^{2(1+2j)p(d-1)}]^{\frac{1}{2}}\] \[\lesssim\,[\mathds{E}|\det\mathcal{M}_{t}|^{-2(1+2j)p}]^{\frac{1 }{2}}\,t^{(1+2j)p(d-1)}.\]
Plugging the above inequality into (2.21) leads promptly to (2.20).
**Theorem 2.7**.: _Let \(k\in\mathbb{N}\) and let \((X_{t})_{t\in[0,1]}\in\mathfrak{D}^{k+1}\) be nondegenerate. Furthermore, assume that such that for all \(p\geqslant 1\) there exists \(C_{p}>0\) such that for all \(t\in(0,1]\)_
\[\mathds{E}|\det\mathcal{M}_{t}|^{-p}\leqslant C_{p}t^{-pd}. \tag{2.22}\]
_Then for all \(Y\in\mathds{D}^{k,q}\) with \(q>1\), \(\varphi\in C_{p}^{\infty}(\mathbb{R}^{d})\), and multiindex \(\alpha\) with \(|\alpha|=k\) one has_
\[|\mathds{E}\partial_{\alpha}\varphi(X_{t})Y|\leqslant N\|\varphi \|_{C^{0}}\|Y\|_{k,q}\,t^{-\frac{k}{2}}, \tag{2.23}\]
_for all \(t\in(0,1]\), where \(N\) depends only on \(k\), \(d\), and finitely many \(C_{r}\) and \(\mathcal{D}(j,r)(X_{t})\) with \(j=1,\ldots,k+1\) and \(r\geqslant 2\)._
Proof.: Using \(Z_{\alpha}\) from Theorem 2.1, we can write
\[|\mathds{E}\partial_{\alpha}\varphi(X_{t})Y|\leqslant\|\varphi \|_{C^{0}}\,\|Z_{\alpha}\|_{L^{1}_{\alpha}}\lesssim\|\varphi\|_{C^{0}}\,\| \mathcal{M}_{t}^{-1}DX_{t}\|_{k,p}^{k}\|Y\|_{k,q}, \tag{2.24}\]
where \(p=2^{k-1}\frac{q-1}{q}\). In the first step, we note that
\[D^{k}(\mathcal{M}_{t}^{-1}DX_{t})=\sum_{j=0}^{k}\binom{k}{j}D^{j} \mathcal{M}_{t}^{-1}\otimes D^{k-j+1}X_{t}.\]
Furthermore by (2.20) and (2.22) for \(t\in[0,1]\) we obtain
\[\mathds{E}\|D^{k}\mathcal{M}_{t}^{-1}\|_{H^{\otimes k}}^{p}\lesssim t^{\frac{p}{2}k}\sum_{i=1}^{k}t^{-p}\lesssim t^{\frac{p}{2}(k-2)}.\]
Putting it together with (2.18) leads to
\[\mathds{E}\|D^{k}(\mathcal{M}_{t}^{-1}DX_{t})\|_{H^{\otimes k}}^{p} \lesssim\sum_{j=0}^{k}\big{\|}\big{\|}D^{j}\mathcal{M}_{t}^{-1}\big{\|}_{H^{\otimes j}}^{p}\big{\|}_{L^{2}_{\alpha}}\,\big{\|}\big{\|}D^{k-j+1}X_{t}\big{\|}_{H^{\otimes(k-j+1)}}^{p}\big{\|}_{L^{2}_{\alpha}}\] \[\lesssim\sum_{j=0}^{k}t^{\frac{p}{2}(j-2)}\,t^{\frac{p}{2}(k-j+1)}\lesssim\sum_{j=0}^{k}t^{\frac{p}{2}(k-1)}\lesssim t^{\frac{p}{2}(k-1)}.\]
Therefore
\[\|\mathcal{M}_{t}^{-1}DX_{t}\|_{k,p}\lesssim\sum_{j=0}^{k}t^{\frac{1}{2}(j-1)} \lesssim t^{-\frac{1}{2}}, \tag{2.25}\]
and therefore from (2.24) we get (2.23).
### Estimates on the laws of certain processes
Throughout this subsection we fix \(1\geqslant\lambda>0\) and \(K>0\) and assume that \(\sigma\sigma^{*}\) satisfies (1.3) with \(\lambda\), and that \(\|\sigma\|_{C^{1}}\leqslant K\). For now we consider the Milstein scheme without drift term:
\[d\bar{X}_{t}^{n}=\left(\sigma(\bar{X}_{k_{n}(t)}^{n})+\nabla\sigma\sigma(\bar{ X}_{k_{n}(t)}^{n})(W_{t}-W_{k_{n}(t)})\right)\mathrm{d}W_{t},\quad\bar{X}_{0}^{n}=y. \tag{2.26}\]
Estimates on the law of \(\bar{X}^{n}\) will later be transferred to \(X^{n}\) by a Girsanov transform. Fix a function \(\chi\in\mathcal{C}_{0}^{\infty}(\mathbb{R};\mathbb{R})\) such that \(|\chi(x)|\leqslant|x|\) and
\[\chi(x)=\left\{\begin{array}{ll}x&\text{ if }\quad|x|\leqslant\frac{\kappa}{2}, \\ 0&\text{ if }\quad|x|\geqslant\kappa.\end{array}\right.\qquad\qquad\kappa:= \frac{\lambda}{4Kd^{2}}, \tag{2.27}\]
Note that for any \(k\in\mathbb{N}\), \(|\nabla^{k}\chi|\leqslant N(\lambda,K,d,k)\). Introduce the truncated Milstein scheme corresponding to (2.26): for any \(y\in\mathbb{R}^{d}\) define the process \((\hat{X}_{t}^{n}(y))_{t\in[0,1]}\) by
\[\mathrm{d}\hat{X}_{t}^{n,i}=\sum_{j=1}^{d_{1}}\left(\sigma^{ij}(\hat{X}_{k_{n} (t)}^{n})+\sum_{k=1}^{d_{1}}(\nabla\sigma\sigma)^{ijk}(\hat{X}_{k_{n}(t)}^{n}) \chi(W_{t}^{k}-W_{\kappa_{n}(t)}^{k})\right)\mathrm{d}W_{t}^{j},\quad\hat{X}_ {0}^{n}=y. \tag{2.28}\]
Define the event \(\hat{\Omega}\subset\Omega\) by
\[\hat{\Omega}:=\left\{\begin{array}{ll}\sup_{t\in\left[\frac{k-1}{n},\frac{k}{n}\right]}|W_{t}^{l}-W_{\frac{k-1}{n}}^{l}|\leqslant\kappa/2,\ \forall k=1,\ldots,n,\quad l=1,\ldots,d_{1}\end{array}\right\}. \tag{2.29}\]
Note that by the assumptions on \(\sigma\), \((\hat{X}_{t}^{n})_{t\in[0,1]}\) coincides with \((\bar{X}_{t}^{n})_{t\in[0,1]}\) on \(\hat{\Omega}\). Analogously to [2, Proposition 5.3] we know that there exist constants \(N\) and \(c>0\) which depend only on \(d\) and \(\kappa\), such that
\[\mathrm{P}(\hat{\Omega})\geqslant 1-Ne^{-cn}. \tag{2.30}\]
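To make the construction concrete, the following minimal Python sketch (our own illustration, not part of the proofs) simulates one path of the driftless Milstein scheme (2.26) together with its truncated version (2.28) for a scalar equation, \(d=d_{1}=1\); the smooth cutoff of (2.27) is replaced by a simple clipping function, and all function names are ours.

```python
import numpy as np

def chi(x, kappa):
    # Stand-in for the smooth cutoff (2.27): identity for |x| <= kappa/2, clipped beyond.
    return np.clip(x, -kappa / 2, kappa / 2)

def milstein_pair(sigma, dsigma, y0, n, kappa, rng, substeps=50):
    """One path of the scheme (2.26) and of its truncated variant (2.28), d = d1 = 1."""
    dt = 1.0 / (n * substeps)
    x_bar, x_hat = y0, y0                 # bar X^n and hat X^n
    for k in range(n):
        xb0, xh0 = x_bar, x_hat           # coefficients frozen at the grid point k/n
        w = 0.0                           # W_t - W_{k/n}
        for _ in range(substeps):
            dw = rng.normal(0.0, np.sqrt(dt))
            x_bar += (sigma(xb0) + dsigma(xb0) * sigma(xb0) * w) * dw
            x_hat += (sigma(xh0) + dsigma(xh0) * sigma(xh0) * chi(w, kappa)) * dw
            w += dw
    return x_bar, x_hat

rng = np.random.default_rng(1)
sigma = lambda x: 1.0 + 0.4 * np.sin(x)   # uniformly elliptic
dsigma = lambda x: 0.4 * np.cos(x)
print(milstein_pair(sigma, dsigma, 0.0, n=64, kappa=0.5, rng=rng))
# On the event (2.29), i.e. when all Brownian increments stay small, the two paths coincide.
```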
Further, we define the auxiliary processes, \(\bar{Z}_{t}(x)=(\bar{Z}_{t}^{i}(x))_{i=1,\ldots,d}\), for \(x\in\mathbb{R}^{d}\), by
\[\bar{Z}_{t}^{i}(x):=\sum_{j=1}^{d_{1}}\int_{0}^{t}\sigma^{ij}(x)\,\mathrm{d}W_ {r}^{j}+\sum_{j,l=1}^{d_{1}}\int_{0}^{t}[\nabla\sigma\sigma]^{ijl}(x)\chi(W_{r }^{j})\,\mathrm{d}W_{r}^{l}. \tag{2.31}\]
**Lemma 2.8**.: _Let \(q>1\), \(k\in\mathbb{N}\), \(Y\in\mathbb{D}^{k,q}\), and \(G\in\mathcal{C}^{k}(\mathbb{R}^{d})\). Then there exists a constant \(N\) depending on \(\kappa,d,d_{1},\lambda,K,k\), such that for any multi-index \(\alpha\) with \(|\alpha|=k\) and \(t\in(0,1]\) one has the bound_
\[\sup_{x\in\mathbb{R}^{d}}|\mathbb{E}[\partial^{\alpha}G(\bar{Z}_{t}(x))Y]| \leqslant N\|G\|_{\infty}\|Y\|_{k,q}\,t^{-\frac{k}{2}}. \tag{2.32}\]
Proof.: We apply Theorem 2.2 and Theorem 2.7. Fix \(x\in\mathbb{R}^{d}\) and for simplicity we drop it from the notation of \(\bar{Z}(x)\). Denote \(A:=\sigma(x),B:=(\nabla\sigma\sigma)(x)\), \(\chi(W_{t}):=(\chi(W_{t}^{j}))_{1\leqslant j\leqslant d_{1}}\). It is evident that \(|A|,|B|,|\chi|,|\chi^{\prime}|\lesssim 1\). Let \(u_{t}=A+B\chi(W_{t})\). Firstly we have
\[\mathds{E}\big{[}\int_{0}^{1}|u_{t}|^{2}\,\mathrm{d}t\big{]}<\infty,\quad\mu:=\sup_{s,t\in[0,1]}\mathds{E}[|D_{s}u_{t}|^{p}]=\sup_{t\in[0,1]}\mathds{E}[|B\chi^{\prime}(W_{t})|^{p}]<\infty,\quad\forall p\geqslant 2.\]
Therefore \(u_{t}\in\mathds{D}^{1,2}\) for all \(t\in[0,1]\). Moreover
\[u_{t}u_{t}^{*}=AA^{*}+AB^{*}\chi(W_{t})+B\chi(W_{t})A^{*}+B\chi(W_{t})B^{*} \chi(W_{t})\geqslant\frac{1}{4}\lambda I.\]
Then Theorem 2.2 implies that for the Malliavin matrix \(\mathcal{M}_{t}\) of \(\bar{Z}_{t}\), we have for any \(\gamma\in(0,\infty)\), \(t\in(0,1]\)
\[\mathds{E}[(\det\mathcal{M}_{t})^{-\gamma}]\lesssim_{\gamma}t^{-\gamma d}.\]
Moreover, as in (2.9), we have for \(s_{1},s_{2},s_{3},\ldots,s_{k+1}<t\),
\[D_{s_{1}}\bar{Z}_{t}= u_{s_{1}}+\int_{s_{1}}^{t}B\chi^{\prime}(W_{r})\,\mathrm{d}W_{r},\] \[D_{s_{2}s_{1}}\bar{Z}_{t}= B\chi^{\prime}(W_{s_{1}\lor s_{2}})+\int_{s_{1}\lor s_{2}}^{t}B\chi^{\prime\prime}(W_{r})\,\mathrm{d}W_{r},\quad\cdots\] \[D_{s_{k+1}\ldots s_{3}s_{2}s_{1}}\bar{Z}_{t}= B\partial^{k}\chi(W_{s_{1}\lor s_{2}\lor s_{3}\lor\ldots\lor s_{k+1}})+\int_{s_{1}\lor s_{2}\lor s_{3}\ldots\lor s_{k+1}}^{t}B\partial^{k+1}\chi(W_{r})\,\mathrm{d}W_{r}.\]
From here it follows that \(\bar{Z}\in\mathfrak{D}^{k+1}\) and therefore Theorem 2.7 yields (2.32).
**Lemma 2.9**.: _Let \(q\geqslant 2\), \(m\in\mathbb{N}\). Let \(Y\in\mathds{D}^{m,q},G\in\mathcal{C}^{m}(\mathbb{R}^{d})\), and assume \(\sigma\in\mathcal{C}^{m+2}\). Let \((\hat{X}_{t}^{n})_{t\in[0,1]}\) be the solution to (2.28). Then there exists a constant \(N\) depending on \(\kappa,d,d_{1},\lambda,K,m\), and \(\|\sigma\|_{\mathcal{C}^{m+2}}\) such that for any multi-index \(\alpha\) with \(|\alpha|=m\) and all \(t\in(0,1]\) one has the bound_
\[\sup_{x\in\mathbb{R}^{d}}\big{|}\mathds{E}\big{[}\partial^{\alpha}G(\hat{X}_{t }^{n}(x))Y\big{]}\big{|}\lesssim N\|G\|_{L^{\infty}_{x}}\|Y\|_{m,q}\,t^{-\frac{ m}{2}}. \tag{2.33}\]
Proof.: We want to conclude (2.33) by applying Theorem 2.2 and Theorem 2.7. First we want to show that \(\sigma\in\mathcal{C}^{m+2}\) implies \((\hat{X}_{t}^{n})_{t\in[0,1]}\in\mathfrak{D}^{m+1}\), with furthermore
\[\mathcal{D}(1,r)(\hat{X}_{t}^{n}),\ldots,\mathcal{D}(m+1,r)(\hat{X}_{t}^{n}) \lesssim_{r}1. \tag{2.34}\]
Since \(D_{s}\hat{X}_{t}^{n}=0\) for \(s>t\) is obvious, one only needs to show (2.34). We only detail the argument for showing \(\mathcal{D}(1,r)(\hat{X}_{t}^{n})\lesssim_{r}1\), which corresponds to \(m=0\) (for \(m>0\) the bound can be obtained by a similar induction argument as in e.g. [9, Proof of Proposition 5.2]). The argument will actually give more: for any \(s\) that is not a gridpoint, we show
\[\mathds{E}\sup_{t\in[s,1]}\|D_{s}\hat{X}_{t}^{n}\|^{r}\lesssim_{r}1. \tag{2.35}\]
Set \(k_{n}^{+}(s)=k_{n}(s)+1/n\). First note that for \(t\in[s,k_{n}^{+}(s)]\) we have
\[D_{s}\hat{X}_{t}^{n}=\sigma(\hat{X}_{k_{n}(s)}^{n})+\nabla\sigma \sigma(\hat{X}_{k_{n}(s)}^{n})\chi(W_{s}-W_{k_{n}(s)})+\int_{s}^{t}\nabla\sigma \sigma(\hat{X}_{k_{n}(s)}^{n})\chi^{\prime}(W_{r}-W_{k_{n}(s)})\;\mathrm{d}W_{ r}.\]
From \(\sigma\in\mathcal{C}^{2}\) (here even \(\sigma\in\mathcal{C}^{1}\) suffices) it is clear that
\[\mathds{E}\sup_{t\in[s,k_{n}^{+}(s)]}\|D_{s}\hat{X}_{t}^{n}\|^{r}\lesssim_{r}1. \tag{2.36}\]
If \(t>k_{n}^{+}(s)\), one can inductively obtain
\[D_{s}\hat{X}_{t}^{n}= D_{s}\hat{X}_{k_{n}(t)}^{n}+\int_{k_{n}(t)}^{t}\big{(}\nabla \sigma(\hat{X}_{k_{n}(t)}^{n})+\nabla(\nabla\sigma\sigma)(\hat{X}_{k_{n}(t)}^{ n})\chi(W_{r}-W_{k_{n}(t)})\big{)}^{*}D_{s}\hat{X}_{k_{n}(t)}^{n}\;\mathrm{d}W_{r}\] \[= D_{s}\hat{X}_{k_{n}^{+}(s)}^{n}+\int_{k_{n}^{+}(s)}^{t}\big{(} \nabla\sigma(\hat{X}_{k_{n}(r)}^{n})+\nabla(\nabla\sigma\sigma)(\hat{X}_{k_{n }(r)}^{n})\chi(W_{r}-W_{k_{n}(r)})\big{)}^{*}D_{s}\hat{X}_{k_{n}(r)}^{n}\; \mathrm{d}W_{r}.\]
This is simply a linear delay equation for \(t\mapsto D_{s}\hat{X}_{t}^{n}\) with bounded coefficients, since \(\sigma\in\mathcal{C}^{2}\). From here it is classical (by BDG and Gronwall inequalities) to get
\[\mathds{E}\sup_{t\in[k_{n}^{+}(s),1]}\|D_{s}\hat{X}_{t}^{n}\|^{r}\lesssim_{r} \mathds{E}\|D_{s}\hat{X}_{k_{n}^{+}(s)}^{n}\|^{r}, \tag{2.37}\]
which, combined with (2.36) yields the claimed bound (2.35).
Now we denote
\[A_{t}\coloneqq\sigma(\hat{X}_{k_{n}(t)}^{n}),\qquad B_{t}\coloneqq(\nabla\sigma\sigma)(\hat{X}_{k_{n}(t)}^{n})\chi(W_{t}-W_{k_{n}(t)})\]
and set \(u_{t}\coloneqq A_{t}+B_{t}\). By the definition of \(\chi,A_{t}\) and \(B_{t}\) one then has
\[u_{t}u_{t}^{*}=(A_{t}+B_{t})(A_{t}+B_{t})^{*}\geqslant\frac{\lambda}{4}I. \tag{2.38}\]
Condition (2.5) holds for any \(p\geqslant 2\) since \(\sigma\in\mathcal{C}^{2}\), and one can apply Theorem 2.2 to \(u_{t}\). Thus, for the Malliavin matrix \(\mathcal{M}_{t}\) of \(\hat{X}_{t}^{n}=y+\int_{0}^{t}u_{r}\,\mathrm{d}W_{r}\), it holds that
\[\mathds{E}[(\det\mathcal{M}_{t})^{-\gamma}]\lesssim_{\gamma}t^{-\gamma d}\]
for \(t\in(0,1]\) and all \(\gamma\geqslant 2\). Hence \(\hat{X}_{t}^{n}\) is non-degenerate and \(\hat{X}_{t}^{n}\in\mathds{D}^{1,p}\) for all \(p>2\), and (2.34) holds for \(m=0\) as claimed. By Theorem 2.7 we get (2.33).
## 3. Intermediate estimates on the Milstein scheme
Before showing the desired estimates we recall the following version of the _stochastic sewing lemma_, originating from [23]. Let
\[[S,T]_{\leqslant}^{2} \coloneqq\left\{(s,t)\in[S,T]^{2}:S\leqslant s\leqslant t \leqslant T\right\},\] \[[S,T]_{\leqslant}^{3} \coloneqq\left\{(s,u,t)\in[S,T]^{3}:S\leqslant s\leqslant u \leqslant t\leqslant T\right\}.\]
Given a two-parameter process \((s,t)\mapsto A_{s,t}\), we set for \((s,u,t)\in[S,T]_{\leqslant}^{3}\)
\[\delta A_{s,u,t}\coloneqq A_{s,t}-A_{u,t}-A_{s,u}.\]
**Lemma 3.1**.: _[_23_, Theorem 2.4]_ _Let \(p\geqslant 2\), \(0\leqslant S\leqslant T\leqslant 1\). Let \((A_{s,t})_{(s,t)\in[S,T]_{\leqslant}^{2}}\) be a two-parameter process with values in \(\mathbb{R}^{d}\) such that \(A_{s,t}\in L_{p}(\Omega)\) is \(\mathcal{F}_{t}\)-measurable for all \((s,t)\in[S,T]_{\leqslant}^{2}\). Suppose that for some \(\epsilon_{1},\epsilon_{2}>0\) and \(C_{1},C_{2}\) the bounds_
\[\|A_{s,t}\|_{L_{\omega}^{p}} \leqslant C_{1}|t-s|^{\frac{1}{2}+\epsilon_{1}}, \tag{3.1}\] \[\|\mathrm{E}_{s}\delta A_{s,u,t}\|_{L_{\omega}^{p}} \leqslant C_{2}|t-s|^{1+\epsilon_{2}}, \tag{3.2}\]
_hold for all \((s,u,t)\in[S,T]_{\leqslant}^{3}\). Then there exists a unique (up to modification) \((\mathcal{F}_{t})_{t\in[0,1]}\)-adapted process \(\mathcal{A}:[S,T]\to L_{p}(\Omega)\) such that \(\mathcal{A}_{S}=0\) and the following bounds hold for some constants \(K_{1},K_{2}>0\):_
\[\|\mathcal{A}_{t}-\mathcal{A}_{s}-A_{s,t}\|_{L_{\omega}^{p}} \leqslant K_{1}|t-s|^{\frac{1}{2}+\epsilon_{1}}+K_{2}|t-s|^{1+ \epsilon_{2}},\quad(s,t)\in[S,T]_{\leqslant}^{2} \tag{3.3}\] \[\|\mathrm{E}_{s}[\mathcal{A}_{t}-\mathcal{A}_{s}-A_{s,t}]\|_{L_{ \omega}^{p}} \leqslant K_{2}|t-s|^{1+\epsilon_{2}},\quad(s,t)\in[S,T]_{ \leqslant}^{2}. \tag{3.4}\]
_Moreover, there exists a constant \(K\) depending only on \(\epsilon_{1}\), \(\epsilon_{2}\), and \(d\) such that \(\mathcal{A}\) satisfies the bound_
\[\|\mathcal{A}_{t}-\mathcal{A}_{s}\|_{L_{\omega}^{p}}\leqslant KpC_{1}|t-s|^{ \frac{1}{2}+\epsilon_{1}}+KpC_{2}|t-s|^{1+\epsilon_{2}},\quad(s,t)\in[S,T]_{ \leqslant}^{2}. \tag{3.5}\]
### Estimates on additive functionals
**Lemma 3.2**.: _Let \(\alpha\in(0,1),\epsilon\in(0,\frac{1}{2}),\alpha^{\prime}\in(1-2\epsilon,1),p \geqslant 2\). Suppose that \((H^{\sigma})\) in Assumption 1.1 holds and that \((\hat{X}_{t}^{n}(y))_{t\in[0,1]}\) is the solution to (2.28), \(y\in\mathbb{R}^{d}\). Then for all functions \(h\in\mathcal{C}^{\alpha^{\prime}},f\in\mathcal{C}^{\alpha}\), \((s,t)\in[0,1]_{\leqslant}^{2}\), the following holds_
\[\Big{\|}\int_{s}^{t}h(\hat{X}_{r}^{n})\big{(}f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{ n}(r)}^{n})\big{)}\,\mathrm{d}r\Big{\|}_{L_{\omega}^{p}}\leqslant N\|f\|_{ \mathcal{C}^{\alpha}}\|h\|_{\mathcal{C}^{\alpha^{\prime}}}n^{-\frac{\alpha+1} {2}+\epsilon}|t-s|^{\frac{1}{2}+\epsilon} \tag{3.6}\]
_with \(N=N(p,d,d_{1},\|\sigma\|_{\mathcal{C}^{3}},\alpha,\alpha^{\prime},\lambda,\epsilon)\)._
Proof.: We partially follow and partially refine the arguments of [5, Lemma 3.1] (see also [2, Lemma 6.1]). Define \(k\) by \(\frac{k}{n}=k_{n}(s)\) for \(s\in[0,1]\). Let
\[A_{s,t}\coloneqq\mathrm{E}_{s}[\mathcal{A}_{s,t}]:=\mathrm{E}_{s}\int_{s}^{t }h(\hat{X}_{s}^{n})(f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{n}(r)}^{n}))\,\mathrm{d}r.\]
In order to apply Lemma 3.1, we are going to verify (3.1) first. Using the \(\mathcal{F}_{s}\)-measurability of \(h(\hat{X}_{s}^{n})\), we write
\[\|A_{s,t}\|_{L_{\omega}^{p}}\leqslant\|h\|_{\mathrm{B}}\tilde{A}_{s,t}\coloneqq \|h\|_{\mathrm{B}}\int_{s}^{t}\Big{\|}\mathrm{E}_{s}\big{[}f(\hat{X}_{r}^{n})- f(\hat{X}_{k_{n}(r)}^{n})\big{]}\big{\|}_{L_{\omega}^{p}}\,\mathrm{d}r.\]
Depending on the relation of the various variables, there are several trivial cases, which we deal with first. If \(t\in[s,\frac{k+4}{n}]\), then using the bound
\[\Big{\|}\sup_{r\in[0,1]}|\hat{X}_{r}^{n}-\hat{X}_{k_{n}(r)}^{n}|\Big{\|}_{L_{\omega}^{m}}\lesssim_{m}n^{-\frac{1}{2}}, \tag{3.7}\]
for any \(m\in(0,\infty)\), we get for any \(\epsilon\in(0,\frac{1}{2})\)
\[\tilde{A}_{s,t}\lesssim\|f\|_{\mathcal{C}^{\alpha}}\int_{s}^{t}n^{-\frac{\alpha}{2}}\,\mathrm{d}r\lesssim\|f\|_{\mathcal{C}^{\alpha}}n^{-\frac{1+\alpha}{2}+\epsilon}|t-s|^{\frac{1}{2}+\epsilon} \tag{3.8}\]
since \(|t-s|\leqslant 4n^{-1}\). In the sequel we assume \(t>\frac{k+4}{n}\). We write
\[\tilde{A}_{s,t}\leqslant\Big{(}\int_{s}^{\frac{k+4}{n}}+\int_{\frac{k+4}{n}}^{t} \Big{)}\big{\|}\mathds{E}_{s}\big{[}f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{n}(r)}^{n} )\big{]}\big{\|}_{L^{p}_{\omega}}\,\mathrm{d}r\eqqcolon S_{1}+S_{2}. \tag{3.9}\]
The term \(S_{1}\) is as simple as before:
\[S_{1}\lesssim\|f\|_{C^{\alpha}}n^{-\frac{1+\alpha}{2}+\epsilon}|t-s|^{\frac{1 }{2}+\epsilon}. \tag{3.10}\]
For \(S_{2}\), notice that
\[S_{2}=\int_{\frac{k+4}{n}}^{t}\big{\|}\mathds{E}_{s}\mathds{E}_{\frac{k+1}{n} }(\mathds{E}_{k_{n}(r)}f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{n}(r)}^{n}))\big{\|}_{ L^{p}_{\omega}}\,\mathrm{d}r.\]
For \(x,y\in\mathbb{R}^{d}\), let
\[\Sigma(x,y) :=\nabla\sigma\sigma(x)\chi(y), A(x)\coloneqq(\sigma\sigma^{*})(x),\] \[B(x,y) :=(\Sigma\Sigma^{*})(x,y), C(x,y)\coloneqq\sigma(x)\Sigma(x,y),\]
and define
\[g(x)\coloneqq g_{r}^{n}(x)\coloneqq\mathds{E}\Big{(}f\big{(}x+\int_{k_{n}(r)}^ {r}[\sigma(x)+\Sigma(x,W_{\xi}-W_{k_{n}(r)})]\,\mathrm{d}W_{\xi}\big{)}-f(x) \Big{)}. \tag{3.11}\]
Then by the Markov property
\[S_{2}=\int_{\frac{k+4}{n}}^{t}\big{\|}\mathds{E}_{s}\mathds{E}_{\frac{k+1}{n}}g(\hat{X}_{k_{n}(r)}^{n})\big{\|}_{L_{\omega}^{p}}\,\mathrm{d}r. \tag{3.12}\]
For simplicity let
\[Y_{k_{n}(r),r}(x)\coloneqq\int_{k_{n}(r)}^{r}[\sigma(x)+\Sigma(x,W_{\xi}-W_{k _{n}(r)})]\,\mathrm{d}W_{\xi}.\]
By Ito's formula we have
\[g(x)=\sum_{i,j=1}^{d}\mathds{E}\int_{k_{n}(r)}^{r}\big{[}\big{(} \frac{1}{2}A^{ij}(x)+\frac{1}{2}B^{ij}(x,W_{u}-W_{k_{n}(r)})\] \[\qquad\qquad\qquad\qquad+C^{ij}(x,W_{u}-W_{k_{n}(r)})\big{)}\partial _{ij}f(x+Y_{k_{n}(r),u}(x))\big{]}\,\mathrm{d}u. \tag{3.13}\]
We aim to show for \(\alpha\in(0,1)\)
\[\big{|}\mathds{E}_{\frac{k+1}{n}}g(\hat{X}_{k_{n}(r)}^{n}(y))\big{|}\lesssim \|f\|_{C^{\alpha}}n^{-\frac{1+\alpha}{2}}(r-s)^{-\frac{1}{2}}. \tag{3.14}\]
We claim that it suffices to show (3.14) in the cases \(\alpha=0,1\). Indeed, the general case of (3.14) then follows via interpolation (cf. applying [25, Theorem 1.6] and [25, 1.1.1. Examples, Example 1.8]). First we treat the case \(\alpha=0\). First by the Newton-Leibniz formula we write
\[\mathds{E}_{\frac{k+1}{n}}g(\hat{X}_{k_{n}(r)}^{n}(y))=\mathds{E}_{\frac{k+1} {n}}\Big{[}\int_{0}^{1}\nabla f\big{(}x+\theta Y_{k_{n}(r),r}(x)\big{)}\cdot Y _{k_{n}(r),r}(x)\,\mathrm{d}\theta\Big{|}_{x=\hat{X}_{k_{n}(r)}^{n}(y)}\Big{]}.\]
We want to get rid of \(\nabla\) by Malliavin integration by parts. More precisely, setting \(\hat{X}^{n,\theta}_{r}(y)=\hat{X}^{n}_{k_{n}(r)}(y)+\theta Y_{k_{n}(r),r}(\hat{X} ^{n}_{k_{n}(r)}(y))\), we wish to use (2.23) conditionally on \(\mathcal{F}_{\frac{k+1}{n}}\), with \(k=1\), \(q=2\), \(\hat{X}^{n,\theta}_{r}(y)\) in place of \(X\), and \(Y_{k_{n}(r),r}(\hat{X}^{n}_{k_{n}(r)}(y))\) in place of \(Y\). It is easy to verify that \(\hat{X}^{n,\theta}(y)\in\mathfrak{D}^{2}\): indeed, \(D_{s}\hat{X}^{n,\theta}_{t}(y)=0\) for \(s>t\) is obvious, while to bound \(\mathcal{D}(j,p)(\hat{X}^{n,\theta}_{t}(y))\), we can proceed as follows. For \(s\in(k_{n}(t),t]\) we have
\[D_{s}\hat{X}^{n,\theta}_{t}(y) =D_{s}\hat{X}^{n}_{k_{n}(t)}+\theta D_{s}Y_{k_{n}(t),t}(\hat{X}^{ n}_{k_{n}(t)}(y))\] \[=\theta\big{(}\nabla\sigma\sigma(\hat{X}^{n}_{k_{n}(s)})\chi(W_{s }-W_{k_{n}(s)})+\int_{s}^{t}\nabla\sigma\sigma(\hat{X}^{n}_{k_{n}(t)})\chi^{ \prime}(W_{\xi}-W_{k_{n}(t)})\,\mathrm{d}W_{\xi}\big{)},\]
and for \(s<k_{n}(t)\),
\[D_{s}\hat{X}^{n,\theta}_{t}(y)=D_{s}\hat{X}^{n}_{k_{n}(t)}+\theta\int_{k_{n}( t)}^{t}\nabla\big{(}\sigma(X^{n}_{k_{n}(t)})+\nabla\sigma\sigma(\hat{X}^{n}_{k_{n} (t)})\chi(W_{\xi}-W_{k_{n}(t)})\big{)}D_{s}\hat{X}^{n}_{k_{n}(t)}\,\mathrm{d}W_ {\xi}.\]
From \(\hat{X}^{n}\in\mathfrak{D}^{2}\), BDG inequality, and \(\sigma\in\mathcal{C}^{2}\) we get \(\mathcal{D}(1,p)(\hat{X}^{n,\theta}_{t}(y))\lesssim_{p}1\). Bounding \(\mathcal{D}(2,p)(\hat{X}^{n,\theta}_{t}(y))\) is very similar. This verifies \(\hat{X}^{n,\theta}(y)\in\mathfrak{D}^{2}\). To verify (2.22), we apply Remark 2.3. Letting \(u_{\xi}:=\sigma(\hat{X}^{n}_{k_{n}(\xi)})+\Sigma(\hat{X}^{n}_{k_{n}(\xi)},W_{ \xi}-W_{k_{n}(\xi)})\), the conditions of Remark 2.3 are satisfied with \(t_{1}=k_{n}(r)\). Therefore the Malliavin matrix \(\mathcal{M}^{\theta}_{r}\) of \(\hat{X}^{n,\theta}_{r}\) satisfies
\[\mathbb{E}_{\frac{k+1}{n}}|\det\mathcal{M}^{\theta}_{r}|^{-p}\lesssim C_{p} \big{(}k_{n}(r)-\frac{k+1}{n}+\theta^{2}(r-k_{n}(r))\big{)}^{-pd}\lesssim C_{p }(r-s)^{-pd}.\]
Therefore \(\hat{X}^{n,\theta}\) fulfills (2.22). Finally, it remains to bound the \(\|\cdot\|_{1,2}\) norm of \(Y_{k_{n}(r),r}(\hat{X}^{n}_{k_{n}(r)}(y))\). We have
\[D_{s}Y_{k_{n}(r),r}(\hat{X}^{n}_{k_{n}(r)}(y))=\mathds{1}_{s\in[k_{n}(r),r]}\Big{(}\sigma(\hat{X}^{n}_{k_{n}(r)})+\Sigma(\hat{X}^{n}_{k_{n}(r)},W_{s}-W_{k_{n}(r)})\] \[\qquad\qquad\qquad\qquad\qquad+\int_{s}^{r}\nabla\sigma\sigma(\hat{X}^{n}_{k_{n}(r)})\chi^{\prime}(W_{\xi}-W_{k_{n}(r)})\,\mathrm{d}W_{\xi}\Big{)}\] \[\qquad\qquad\qquad+\mathds{1}_{s<k_{n}(r)}\int_{k_{n}(r)}^{r}\nabla[\sigma(\hat{X}^{n}_{k_{n}(r)}(y))+\Sigma(\hat{X}^{n}_{k_{n}(r)}(y),W_{\xi}-W_{k_{n}(r)})]D_{s}\hat{X}^{n}_{k_{n}(r)}(y)\,\mathrm{d}W_{\xi}\]
and
\[\|Y_{k_{n}(r),r}(\hat{X}^{n}_{k_{n}(r)}(y))\|_{1,2}\] \[\qquad\lesssim\|Y_{k_{n}(r),r}(\hat{X}^{n}_{k_{n}(r)}(y))\|_{L^{2 }_{\omega}}+\big{\|}\big{(}\int_{0}^{1}|D_{s}Y_{k_{n}(r),r}(\hat{X}^{n}_{k_{n }(r)}(y))|^{2}\,\mathrm{d}s\big{)}^{\frac{1}{2}}\big{\|}_{L^{2}_{\omega}}\] \[\qquad\lesssim\sup_{x\in\mathbb{R}^{d}}\|Y_{k_{n}(r),r}(x)\|_{L^{2 }_{\omega}}+\big{(}\int_{0}^{1}\mathds{1}_{s\in[k_{n}(r),r]}(s)\,\mathrm{d}s \big{)}^{\frac{1}{2}}\] \[\qquad\qquad+\big{\|}\big{(}\int_{k_{n}(r)}^{r}\big{|}\int_{s}^{r }\nabla\sigma\sigma(\hat{X}^{n}_{k_{n}(r)})\chi^{\prime}(W_{\xi}-W_{k_{n}(r)}) \,\mathrm{d}W_{\xi}\big{|}^{2}\,\mathrm{d}s\big{)}^{\frac{1}{2}}\big{\|}_{L^{2 }_{\omega}}\]
\[\leq\|f\|_{C^{\alpha}}n^{-\frac{1+\alpha}{2}}\int_{\frac{k+4}{n}}^{t}(r-s)^{- \frac{1+\alpha}{2}}dr\lesssim\|f\|_{C^{\alpha}}n^{-\frac{1+\alpha}{2}}|t-s|^{ \frac{1}{2}+\epsilon}. \tag{3.16}\]
Combining (3.9), (3.10) and (3.16), we get
\[\|A_{s,t}\|_{L^{p}_{\omega}}\leqslant\|h\|_{\mathbf{B}}\tilde{A}_{s,t}\leqslant N \|f\|_{C^{a}}\|h\|_{\mathbf{B}}n^{-\frac{1+a}{2}+\epsilon}|t-s|^{\frac{1}{2}+ \epsilon}. \tag{3.17}\]
We conclude that (3.1) holds with \(C_{1}=N\|h\|_{C^{\alpha^{\prime}}}\|f\|_{C^{\alpha}}n^{-\frac{1+\alpha}{2}+\epsilon}\).
Now we move on to verifying (3.2). We have
\[\delta A_{s,u,t}=\mathbb{E}_{s}\int_{u}^{t}h(\hat{X}_{s}^{n})\big{[}f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{n}(r)}^{n})\big{]}\,\mathrm{d}r-\mathbb{E}_{u}\int_{u}^{t}h(\hat{X}_{u}^{n})\big{[}f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{n}(r)}^{n})\big{]}\,\mathrm{d}r,\]
and thus
\[\mathbb{E}_{s}\delta A_{s,u,t}=\int_{u}^{t}\mathbb{E}_{s}\Big{(}\big{(}h(\hat {X}_{s})-h(\hat{X}_{u})\big{)}\mathbb{E}_{u}\big{[}f(\hat{X}_{r}^{n})-f(\hat{X }_{k_{n}(r)}^{n})\big{]}\Big{)}\,\mathrm{d}r.\]
Using the preceding discussion, particularly the notation from (3.8) and the estimate (3.17), we can write
\[\big{\|}\mathbb{E}_{s}\delta A_{s,u,t}\big{\|}_{L^{p}_{\omega}} \lesssim\int_{u}^{t}\big{\|}h(\hat{X}_{s}^{n})-h(\hat{X}_{u}^{n})\big{\|}_{L^{2p}_{\omega}}\big{\|}\mathbb{E}_{u}\big{[}f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{n}(r)}^{n})\big{]}\big{\|}_{L^{2p}_{\omega}}\,\mathrm{d}r\] \[\lesssim|u-s|^{\frac{\alpha^{\prime}}{2}}\|h\|_{C^{\alpha^{\prime}}}\int_{u}^{t}\big{\|}\mathbb{E}_{u}\big{[}f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{n}(r)}^{n})\big{]}\big{\|}_{L^{2p}_{\omega}}\,\mathrm{d}r\] \[\lesssim|u-s|^{\frac{\alpha^{\prime}}{2}}\|h\|_{C^{\alpha^{\prime}}}\tilde{A}_{u,t}\] \[\lesssim|u-s|^{\frac{\alpha^{\prime}}{2}}\|h\|_{C^{\alpha^{\prime}}}\|f\|_{C^{\alpha}}n^{-\frac{1+\alpha}{2}+\epsilon}|t-u|^{\frac{1}{2}+\epsilon}\] \[\lesssim|t-s|^{1+\epsilon^{\prime}}\|h\|_{C^{\alpha^{\prime}}}\|f\|_{C^{\alpha}}n^{-\frac{1+\alpha}{2}+\epsilon}\]
with \(\epsilon^{\prime}\coloneqq\frac{\alpha^{\prime}-1}{2}+\epsilon\), which is positive by assumption. Therefore (3.2) holds as well with \(C_{2}=N\|h\|_{C^{\alpha^{\prime}}}\|f\|_{C^{\alpha}}n^{-\frac{1+\alpha}{2}+\epsilon}\).
So all of the conditions of Lemma 3.1 are satisfied. Notice that the process
\[\hat{\mathcal{A}}_{t}\coloneqq\int_{0}^{t}h(\hat{X}_{r}^{n})\big{(}f(\hat{X}_ {r}^{n})-f(\hat{X}_{k_{n}(r)}^{n})\big{)}\,\mathrm{d}r\]
is \(\mathcal{F}_{t}\)-adapted; moreover it clearly satisfies the following bounds
\[\big{\|}\hat{\mathcal{A}}_{t}-\hat{\mathcal{A}}_{s}-A_{s,t}\big{\|}_{L^{p}_{\omega}}=\big{\|}\hat{\mathcal{A}}_{s,t}-A_{s,t}\big{\|}_{L^{p}_{\omega}}\lesssim|t-s|,\] \[\big{\|}\mathbb{E}_{s}\big{[}\hat{\mathcal{A}}_{t}-\hat{\mathcal{A}}_{s}-A_{s,t}\big{]}\big{\|}_{L^{p}_{\omega}}=\big{\|}\mathbb{E}_{s}\big{[}\hat{\mathcal{A}}_{s,t}-A_{s,t}\big{]}\big{\|}_{L^{p}_{\omega}}\lesssim|t-s|^{1+\frac{\alpha^{\prime}}{2}}.\]
Hence Lemma 3.1 shows that \(\hat{\mathcal{A}}=\mathcal{A}\) and the desired estimate (3.6) follows from (3.5).
After applying the Kolmogorov continuity theorem to Lemma 3.2, we obtain the following corollary.
**Corollary 3.3**.: _Let \(\alpha\in(0,1),\,\epsilon\in(0,\frac{1}{2}),\,\alpha^{\prime}\in(1-2\epsilon,1),p\geqslant 2.\) Suppose that \((H^{\sigma})\) in Assumption 1.1 holds and that \((\hat{X}_{t}^{n}(y))_{t\in[0,1]}\) is the solution to (2.28), \(y\in\mathbb{R}^{d}.\) Then for all functions \(h\in C^{\alpha^{\prime}},f\in C^{\alpha}\), \((s,t)\in[0,1]_{\leqslant}^{2}\), the following holds_
\[\Big{\|}\sup_{t\in[0,1]}\Big{|}\int_{0}^{t}h(\hat{X}_{r}^{n})\big{(}f(\hat{X}_{r}^{n})-f(\hat{X}_{k_{n}(r)}^{n})\big{)}\,\mathrm{d}r\Big{|}\Big{\|}_{L^{p}_{\omega}}\leqslant N\|h\|_{C^{\alpha^{\prime}}}\|f\|_{C^{\alpha}}n^{-\frac{\alpha+1}{2}+\epsilon} \tag{3.18}\]
_with \(N=N(p,d,d_{1},\|\sigma\|_{C^{3}},\alpha,\alpha^{\prime},\lambda,\epsilon).\)_
### Girsanov transform
We add back the drift via a Girsanov transform, first still in the truncated diffusion case. To this end we use yet another auxiliary process \((\tilde{X}_{t}^{n}(y))_{t\in[0,1]}=(\tilde{X}_{t}^{n})_{t\in[0,1]}\), for \(y\in\mathbb{R}^{d}\), defined by the following recursion
\[\tilde{X}_{t}^{n}=\tilde{X}_{k_{n}(t)}^{n}+\int_{k_{n}(t)}^{t}\big{[}\sigma(\tilde{X}_{k_{n}(r)}^{n})+\nabla\sigma\sigma(\tilde{X}_{k_{n}(r)}^{n})\chi(W_{r}-W_{k_{n}(r)})\big{]}\,\mathrm{d}W_{r}+\int_{k_{n}(t)}^{t}b(\tilde{X}_{k_{n}(r)}^{n})\,\mathrm{d}r, \tag{3.19}\]
\(\tilde{X}_{0}^{n}(y)=y.\) Here \(\chi:\mathbb{R}\to\mathbb{R}\) is defined as in (2.27); we again use the convention \(\chi(x)=(\chi(x_{i}))_{1\leqslant i\leqslant d_{1}}\) for \(x\in\mathbb{R}^{d_{1}}\), so \(\chi(W_{r}-W_{k_{n}(r)}):=(\chi(W_{r}^{i}-W_{k_{n}(r)}^{i}))_{1\leqslant i\leqslant d_{1}}\).
**Corollary 3.4**.: _Let \(\alpha\in(0,1)\), \(\epsilon\in(0,\frac{1}{2})\), \(\alpha^{\prime}\in(1-2\epsilon,1),p\geqslant 2.\) Suppose Assumption 1.1 holds and that \((\tilde{X}_{t}^{n}(y))_{t\in[0,1]}\) is the solution to (3.19), \(y\in\mathbb{R}^{d}.\) Then for all functions \(h\in C^{\alpha^{\prime}},f\in C^{\alpha}\), \((s,t)\in[0,1]_{\leqslant}^{2}\), the following holds_
\[\Big{\|}\sup_{t\in[0,1]}\Big{|}\int_{0}^{t}h(\tilde{X}_{r}^{n})\big{(}f(\tilde{X}_{r}^{n})-f(\tilde{X}_{k_{n}(r)}^{n})\big{)}\,\mathrm{d}r\Big{|}\Big{\|}_{L^{p}_{\omega}}\leqslant N\|h\|_{C^{\alpha^{\prime}}}\|f\|_{C^{\alpha}}n^{-\frac{\alpha+1}{2}+\epsilon}\]
_with \(N=N(p,d,d_{1},\|\sigma\|_{C^{3}},\alpha,\alpha^{\prime},\|b\|_{C^{\alpha}}, \lambda,\epsilon).\)_
Proof.: For any continuous process \(Z,\) we define
\[\mathcal{H}(Z)=\sup_{t\in[0,1]}\Big{|}\int_{0}^{t}h(r,Z_{r})[f(r,Z_{r})-f(r,Z_ {k_{n}(r)})]\,\mathrm{d}r\Big{|}. \tag{3.20}\]
From Corollary 3.3 we have \(\|\mathcal{H}(\hat{X}^{n})\|_{L^{m}_{\omega}}\lesssim_{m}\|h\|_{C^{\alpha^{ \prime}}}\|f\|_{C^{\alpha}}n^{-\frac{\alpha+1}{2}+\epsilon}\) for any \(m<\infty.\) Let \(B_{k_{n}(r),r}(\cdot):=\nabla\sigma\sigma(\cdot)\chi(W_{r}-W_{k_{n}(r)}).\) Define
\[\rho\coloneqq\exp\Big{(}-\int_{0}^{1}((\sigma+ B_{k_{n}(r),r})^{-1}b)(\hat{X}_{k_{n}(r)}^{n})\,\mathrm{d}W_{r}\] \[-\frac{1}{2}\int_{0}^{1}\Big{|}((\sigma+B_{k_{n}(r),r})^{-1}b)( \hat{X}_{k_{n}(r)}^{n})\Big{|}^{2}\,\mathrm{d}r\Big{)}.\]
By construction of \(\chi\), \((\sigma+B_{k_{n}(r),r})^{-1}b\) is bounded, and therefore \(\rho\) is a probability density. Moreover, \(\|\rho\|_{L^{m}_{\omega}}\lesssim_{m}1\) for any \(m<\infty.\) It follows from the Girsanov theorem that under the
measure \(\rho\,d\mathbb{P}\), \(\hat{X}^{n}\) is distributed the same as \(\tilde{X}^{n}\) under \(\mathbb{P}\); therefore by Hölder's inequality we can write
\[\mathds{E}\mathcal{H}(\tilde{X}^{n})^{p}=\mathds{E}(\rho\mathcal{H}(\hat{X}^{n} )^{p})\leqslant\big{[}\mathds{E}\mathcal{H}(\hat{X}^{n})^{2p}\big{]}^{1/2} \big{[}\mathds{E}\rho^{2}\big{]}^{1/2}\lesssim\big{(}\|h\|_{C^{\alpha^{\prime }}}\|f\|_{C^{\alpha}}n^{-\frac{\alpha+1}{2}+\epsilon}\big{)}^{p}.\]
**Corollary 3.5**.: _Let \(\alpha\in(0,1)\), \(\epsilon\in(0,\frac{1}{2})\), \(\alpha^{\prime}\in(1-2\epsilon,1),p\geqslant 2.\) Suppose Assumption 1.1 holds and that \((X^{n}_{t})_{t\in[0,1]}\) is the solution to (1.2) with initial condition \(x^{n}_{0}\in\mathbb{R}^{d}.\) Then for all functions \(h\in\mathcal{C}^{\alpha^{\prime}},f\in\mathcal{C}^{\alpha}\), \((s,t)\in[0,1]^{2}_{\leqslant}\), the following holds_
\[\Big{\|}\sup_{t\in[0,1]}\big{|}\int_{0}^{t}h(X^{n}_{r})\big{(}f(X^{n}_{r})-f( X^{n}_{k_{n}(r)})\big{)}\;\mathrm{d}r\Big{\|}_{L^{p}_{\omega}}\leqslant N\|h\|_{C^{ \alpha^{\prime}}}\|f\|_{C^{\alpha}}n^{-\frac{\alpha+1}{2}+\epsilon} \tag{3.21}\]
_with \(N=N(p,d,d_{1},\|\sigma\|_{C^{3}},\alpha,\alpha^{\prime},\|b\|_{C^{\alpha}}, \lambda,\epsilon).\)_
Proof.: Recall that \((X^{n}_{t})_{t\in[0,1]}\) coincides with \((\tilde{X}^{n}_{t})_{t\in[0,1]}\) on \(\hat{\Omega}\) and \(\mathbb{P}(\hat{\Omega}^{c})\lesssim e^{-cn}.\) Using the notation from (3.20) and applying Corollary 3.4, we can write
\[\|\mathcal{H}(X^{n})\|_{L^{p}_{\omega}} \leqslant\|\mathcal{H}(\tilde{X}^{n})\|_{L^{p}_{\omega}}+\big{\|}\mathds{1}_{\hat{\Omega}^{c}}\big{(}\mathcal{H}(X^{n})-\mathcal{H}(\tilde{X}^{n})\big{)}\big{\|}_{L^{p}_{\omega}}\] \[\lesssim\|h\|_{C^{\alpha^{\prime}}}\|f\|_{C^{\alpha}}n^{-\frac{\alpha+1}{2}+\epsilon}+\|f\|_{\mathds{B}}\|h\|_{\mathds{B}}\,\mathbb{P}(\hat{\Omega}^{c})^{1/p}.\]
This implies (3.21).
## 4. Proof of the main result
The final ingredient of the proof is an appropriate form of the Zvonkin transformation, for which we need some regularity results for the PDE associated to (1.1).
_Assumption 4.1_.: Let \(a:\mathbb{R}^{d}\to\mathbb{R}^{d\times d}.\) Assume
* there exists a constant \(\lambda>0\) such that (1.3) holds,
* \((a_{ij})_{1\leqslant i,j\leqslant d}\) is uniformly continuous with modulus of continuity \(h\), in the sense that \[|a(x)-a(y)|\leqslant h(|x-y|)\quad\forall(x,y)\in\mathbb{R}^{2d}.\] (4.1)
For some parameter \(\theta>0\) we consider the elliptic equation
\[\frac{1}{2}\sum_{i,j=1}^{d}a^{ij}\partial_{ij}u+\nabla u\cdot b-\theta u=f. \tag{4.2}\]
We then have the following results on solutions of (4.2). While such statements are fairly standard, in these particular forms we have not found an exact reference, so for the sake of completeness short proofs are provided in Appendix A.
**Theorem 4.2**.: _Let \(f,b\in\mathcal{C}^{\alpha}\) with \(\alpha\in(0,\infty).\) Suppose Assumption 4.1 holds with \(h(x)=x^{\alpha}\) for \(x\in\mathbb{R}_{+}.\) There exists \(\theta^{*}>0\) depending on \(\alpha,\lambda\) and \(\|b\|_{\mathcal{C}^{\alpha}}\) such that for \(\theta\geqslant\theta^{*}\), there exists
a unique solution \(u\in\mathcal{C}^{2+\alpha}\) to (4.2) such that for any \(\gamma\in[\alpha,\alpha+2)\), there exists a constant \(C\) depending on \(\alpha\), \(\|b\|_{\mathcal{C}^{\alpha}}\) and \(\gamma\), independent of \(\theta\), such that_
\[\|u\|_{\mathcal{C}^{\gamma}}\leqslant C\theta^{\frac{\gamma-(2+\alpha)}{2}}\|f \|_{\mathcal{C}^{\alpha}}. \tag{4.3}\]
In addition, let us recall two elementary properties of the approximation \(X^{n}\) defined by (1.2). For proofs, see [33, Section 7.8.8].
**Proposition 4.3**.: _Assume that \(b\in\mathcal{C}^{0}\) and \(\sigma\in\mathcal{C}^{1}\). Then for any \(m\in[1,\infty)\) there exists a constant \(N=N(m,d,d_{1},\|b\|_{\mathcal{C}^{0}},\|\sigma\|_{\mathcal{C}^{1}})\) such that for all \(n\in\mathbb{N}\), \(0\leqslant s\leqslant t\leqslant 1\), \(f\in\mathcal{C}^{2}\) one has_
\[\left\|X^{n}_{t}-X^{n}_{s}\right\|_{L^{m}_{\omega}}\leqslant N(t-s)^{1/2} \tag{4.4}\]
_and_
\[\left\|f(X^{n}_{t})-f(X^{n}_{k_{n}(t)})-[\nabla f\sigma](X^{n}_{k_{n}(t)})(W_{t}-W_{k_{n}(t)})\right\|_{L^{m}_{\omega}}\leqslant N\|f\|_{\mathcal{C}^{2}}n^{-1}. \tag{4.5}\]
Now we are in a position to give the proof of the main theorem.
Proof of Theorem 1.2.: Take \(u\) to be the solution to (4.2) with \(f=-b\), \(a=\sigma\sigma^{*}\), and \(\theta\) large enough, to be determined later. By Theorem 4.2 we know \(u\in\mathcal{C}^{2+\alpha}\). Then by Itô's formula we have
\[\int_{0}^{t}b(X_{r})dr= u(x_{0})-u(X_{t})+\int_{0}^{t}[\nabla u\cdot\sigma](X_{r})dW_{r}+ \theta\int_{0}^{t}u(X_{r})dr. \tag{4.6}\]
Similarly (using the summation convention for repeated indices \(i,j,k\))
\[\int_{0}^{t}b(X^{n}_{r})\,\mathrm{d}r= u(x^{n}_{0})-u(X^{n}_{t})+\theta\int_{0}^{t}u(X^{n}_{r})\,\mathrm{d}r\] \[+\int_{0}^{t}\nabla u(X^{n}_{r})\cdot[\sigma(X^{n}_{k_{n}(r)})+ \nabla\sigma\sigma(X^{n}_{k_{n}(r)})(W_{r}-W_{k_{n}(r)})]\,\mathrm{d}W_{r}\] \[+\int_{0}^{t}\left(\nabla u(X^{n}_{r})\cdot[b(X^{n}_{k_{n}(r)})- b(X^{n}_{r})]\right)\mathrm{d}r\] \[+\frac{1}{2}\int_{0}^{t}\left(\partial_{ij}u(X^{n}_{r})[a^{ij}(X^ {n}_{k_{n}(r)})-a^{ij}(X^{n}_{r})]+B^{ij}_{r}(X^{n}_{k_{n}(r)})\partial_{ij}u( X^{n}_{r})\right)\mathrm{d}r\] \[+\int_{0}^{t}\left(\partial_{ij}u(X^{n}_{r})[\sigma^{ki}(X^{n}_{k _{n}(r)})\Sigma^{kj}_{r}(X^{n}_{k_{n}(r)})]\right)\mathrm{d}r, \tag{4.7}\]
where \((B^{ij}_{r}(x))_{1\leqslant i\leqslant d,1\leqslant j\leqslant d}:=\Sigma_{r} (x)\Sigma_{r}(x)^{*}\) and \(\Sigma_{r}(x)\coloneqq\nabla\sigma\sigma(x)(W_{r}-W_{k_{n}(r)})\). From equations (1.1) and (1.2) we have
\[X_{t}-X^{n}_{t}=x_{0}-x^{n}_{0} +\int_{0}^{t}[b(X_{r})-b(X^{n}_{r})]\,\mathrm{d}r+\int_{0}^{t}[b( X^{n}_{r})-b(X^{n}_{k_{n}(r)})]\,\mathrm{d}r\] \[+\int_{0}^{t}[\sigma(X_{r})-\sigma(X^{n}_{r})]\,\mathrm{d}W_{r}\] \[+\int_{0}^{t}[\sigma(X^{n}_{r})-\sigma(X^{n}_{k_{n}(r)})-(\nabla \sigma\sigma)(X^{n}_{k_{n}(r)})(W_{r}-W_{k_{n}(r)})]\,\mathrm{d}W_{r}.\]
We use (4.6) and (4.7) to rewrite the first integral on the right-hand side. We raise to the \(p\)-th power for \(p\geqslant 2\) and get
\[\phi_{t}:=\sup_{s\in[0,t]}|X_{s}-X_{s}^{n}|^{p}\leqslant N\big{(}|x_{0}-x_{0}^{n}|^{p}+\sum_{\ell=1}^{5}V_{t}^{\ell}+\sum_{\ell=1}^{2}I_{t}^{\ell}\big{)},\]
with
\[V_{t}^{1}: =|u(x_{0})-u(x_{0}^{n})|^{p}+\sup_{s\in[0,t]}\Big{[}|u(X_{s}^{n})-u(X_{s})|^{p}+\theta^{p}\Big{(}\int_{0}^{s}|u(X_{r})-u(X_{r}^{n})|\,\mathrm{d}r\Big{)}^{p}\Big{]},\] \[V_{t}^{2}: =\sup_{s\in[0,t]}\Big{|}\int_{0}^{s}\Big{(}(\nabla u(X_{r}^{n})+I)(b(X_{r}^{n})-b(X_{k_{n}(r)}^{n}))\Big{)}\,\mathrm{d}r\Big{|}^{p},\] \[V_{t}^{3}: =\sup_{s\in[0,t]}\Big{|}\int_{0}^{s}\partial_{ij}u(X_{r}^{n})[a^{ij}(X_{r}^{n})-a^{ij}(X_{k_{n}(r)}^{n})]\,\mathrm{d}r\Big{|}^{p},\] \[V_{t}^{4}: =\sup_{s\in[0,t]}\Big{|}\int_{0}^{s}\Big{(}B_{r}^{ij}(X_{k_{n}(r)}^{n})\partial_{ij}u(X_{r}^{n})\Big{)}\,\mathrm{d}r\Big{|}^{p},\] \[V_{t}^{5}: =\sup_{s\in[0,t]}\Big{|}\int_{0}^{s}\Big{(}\partial_{ij}u(X_{r}^{n})[\sigma^{ki}(X_{k_{n}(r)}^{n})\Sigma_{r}^{kj}(X_{k_{n}(r)}^{n})]\Big{)}\,\mathrm{d}r\Big{|}^{p},\] \[I_{t}^{1}: =\sup_{s\in[0,t]}\Big{|}\int_{0}^{s}\Big{(}[(\nabla u+I)\cdot\sigma](X_{r})-[(\nabla u+I)\cdot\sigma](X_{r}^{n})\Big{)}\,\mathrm{d}W_{r}\Big{|}^{p},\] \[I_{t}^{2}: =\sup_{s\in[0,t]}\Big{|}\int_{0}^{s}\Big{(}[\nabla u(X_{r}^{n})+I]\cdot\big{[}\sigma(X_{r}^{n})-\sigma(X_{k_{n}(r)}^{n})-\nabla\sigma\sigma(X_{k_{n}(r)}^{n})(W_{r}-W_{k_{n}(r)})\big{]}\Big{)}\,\mathrm{d}W_{r}\Big{|}^{p}.\]
Now by (4.3) we can take \(\theta\) to be large enough so that \(N\|\nabla u\|_{\mathrm{B}}\leqslant 1/4\), then we have the obvious bound
\[NV_{t}^{1}\leqslant\frac{1}{2}\sup_{s\in[0,t]}|X_{s}-X_{s}^{n}|^{p}+\theta^{p} \|u\|_{C^{1}}\int_{0}^{t}\sup_{s\in[0,r]}|X_{s}-X_{s}^{n}|^{p}\,\mathrm{d}r= \frac{1}{2}\phi_{t}+\theta^{p}\int_{0}^{t}\phi_{r}\,\mathrm{d}r. \tag{4.8}\]
Applying Corollary 3.5 with \(h=\nabla u+I\) and \(f=b\), we get
\[\|V_{t}^{2}\|_{L_{\omega}^{1}}\lesssim\big{(}n^{-\frac{1+\alpha}{2}+\epsilon} \big{)}^{p}. \tag{4.9}\]
Similarly, but with the roles played by \(h=\partial_{ij}u\), \(f=a^{ij}\), we get
\[\|V_{t}^{3}\|_{L_{\omega}^{1}}\lesssim \big{(}n^{-\frac{1+\alpha}{2}+\epsilon}\big{)}^{p}. \tag{4.10}\]
Since \(\mathds{E}|B^{ij}(X_{k_{n}(r)}^{n})|^{p}\lesssim\mathds{E}|W_{r}-W_{k_{n}(r)} |^{2p}\lesssim n^{-p}\), we also have
\[\|V_{t}^{4}\|_{L_{\omega}^{1}}\lesssim\int_{0}^{t}\mathds{E}|B^{ij}(X_{k_{n}(r )}^{n})|^{p}\,\mathrm{d}r\lesssim Nn^{-p}. \tag{4.11}\]
We manipulate the term \(V^{5}\) as
\[\int_{0}^{t}\Bigl{(}\partial_{ij}u(X^{n}_{r})\bigl{[}\sigma^{ki}(X^{n}_{k_{n}(r)})\Sigma^{kj}_{r}(X^{n}_{k_{n}(r)})\bigr{]}\Bigr{)}\,\mathrm{d}r\] \[= \int_{0}^{t}\Big{(}\bigl{[}\partial_{ij}u(X^{n}_{r})\sigma^{ki}(X^{n}_{r})\bigl{(}\sigma^{kj}(X^{n}_{r})-\sigma^{kj}(X^{n}_{k_{n}(r)})\bigr{)}\bigr{]}\] \[\qquad\quad-\bigl{[}\partial_{ij}u(X^{n}_{r})\bigl{(}\sigma^{ki}(X^{n}_{r})-\sigma^{ki}(X^{n}_{k_{n}(r)})\bigr{)}\bigl{(}\sigma^{kj}(X^{n}_{r})-\sigma^{kj}(X^{n}_{k_{n}(r)})\bigr{)}\bigr{]}\] \[\qquad\quad-\bigl{[}\partial_{ij}u(X^{n}_{r})\sigma^{ki}(X^{n}_{k_{n}(r)})\bigl{(}\sigma^{kj}(X^{n}_{r})-\sigma^{kj}(X^{n}_{k_{n}(r)})-\Sigma^{kj}_{r}(X^{n}_{k_{n}(r)})\bigr{)}\bigr{]}\Big{)}\,\mathrm{d}r\] \[= :v_{t}^{51}-v_{t}^{52}-v_{t}^{53}.\]
Applying Corollary 3.5 with \(h=\partial_{ij}u\,\sigma^{ki}\) and \(f=\sigma^{kj}\), we get
\[\bigl{\|}\sup_{s\in[0,t]}|v_{s}^{51}|^{p}\bigr{\|}_{L^{1}_{\omega}}\lesssim \bigl{(}n^{-\frac{1+\alpha}{2}+\epsilon}\bigr{)}^{p}.\]
From (4.4) it is immediate that
\[\bigl{\|}\sup_{s\in[0,t]}|v_{s}^{52}|^{p}\bigr{\|}_{L^{1}_{\omega}} \lesssim n^{-p}.\]
Finally, from (4.5) with \(f=\sigma^{kj}\)
\[\bigl{\|}\sup_{s\in[0,t]}|v_{s}^{53}|^{p}\bigr{\|}_{L^{1}_{\omega}} \lesssim n^{-p}.\]
Therefore
\[\|V_{t}^{5}\|_{L^{1}_{\omega}}\lesssim \bigl{(}n^{-\frac{1+\alpha}{2}+\epsilon}\bigr{)}^{p}. \tag{4.12}\]
From the pathwise BDG inequality [34, Theorem 5] it follows that there exist martingales \(M^{1}\) and \(M^{2}\) such that with probability one
\[|I^{1}_{t}|\lesssim N\Big{(}\int_{0}^{t}\bigl{|}\nabla\bigl{[}(\nabla u+I) \cdot\sigma\bigr{]}\bigr{|}^{2}\bigl{|}X_{r}-X^{n}_{r}\bigr{|}^{2}\,\mathrm{d }r\Big{)}^{\frac{p}{2}}+M^{1}_{t}\leq N\int_{0}^{t}\phi_{r}\,\mathrm{d}r+M^{1} _{t}, \tag{4.13}\]
and
\[|I^{2}_{t}| \lesssim N\Big{(}\int_{0}^{t}\Bigl{|}\bigl{[}\nabla u(X^{n}_{r})+I \bigr{]}\cdot\bigl{[}\sigma(X^{n}_{r})-\sigma(X^{n}_{k_{n}(r)})-\nabla\sigma \sigma(X^{n}_{k_{n}(r)})(W_{r}-W_{k_{n}(r)})\bigr{]}\Bigr{|}^{2}\,\mathrm{d}r \Big{)}^{\frac{p}{2}}\] \[\qquad+M^{2}_{t}\] \[\leq N\int_{0}^{t}\Big{|}\sigma(X^{n}_{r})-\sigma(X^{n}_{k_{n}(r) })-\nabla\sigma\sigma(X^{n}_{k_{n}(r)})(W_{r}-W_{k_{n}(r)})\Bigr{|}^{p}\, \mathrm{d}r+M^{2}_{t}\] \[=: I^{2,1}_{t}+M^{2}_{t}. \tag{4.14}\]
Once again from (4.5) we have the bound
\[\|I^{2,1}_{t}\|_{L^{1}_{\omega}}\lesssim n^{-p}. \tag{4.15}\]
Now we let
\[V_{t}:=V_{t}^{2}+V_{t}^{3}+V_{t}^{4}+V_{t}^{5}+I_{t}^{2,1},\quad M_{t}:=M_{t}^{1}+M_{t}^{2},\]
then with (4.8), (4.13) and (4.14) we can write
\[\phi_{t}\leqslant N\big{(}|x_{0}-x_{0}^{n}|^{p}+\int_{0}^{t}\phi_{r}\,\mathrm{d }r+V_{t}\big{)}+M_{t}.\]
From (4.9), (4.10), (4.11), (4.12) and (4.15) we have the estimate
\[\|V_{t}\|_{L_{\omega}^{1}}\lesssim\big{(}n^{-\frac{1+\alpha}{2}+\epsilon}\big{)}^{p}.\]
The claimed bound (1.4) follows from an appropriate version of Gronwall's inequality (for such a version see e.g. [24, Lemma 3.5]).
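For intuition only (this is not part of the proof), the rate \(n^{-\frac{1+\alpha}{2}+\epsilon}\) of Theorem 1.2 can be probed numerically for a scalar equation with a bounded \(\alpha\)-Hölder drift by comparing the Milstein scheme against a fine Euler-Maruyama approximation driven by the same Brownian increments; the rough sketch below makes these (and only these) assumptions, and all names in it are ours.

```python
import numpy as np

# Rough Monte Carlo probe of the strong rate n^{-(1+alpha)/2} for
# dX = b(X) dt + sigma(X) dW with a bounded alpha-Hoelder drift (illustration only;
# the "reference" solution is itself a fine Euler-Maruyama approximation).
rng = np.random.default_rng(2)
alpha = 0.5
b = lambda x: np.sign(x) * min(abs(x), 1.0) ** alpha   # bounded, alpha-Hoelder drift
sigma = lambda x: 1.0 + 0.3 * np.cos(x)                # uniformly elliptic, smooth
dsig = lambda x: -0.3 * np.sin(x)

def strong_error(n, n_ref=1024, n_paths=100):
    err = 0.0
    for _ in range(n_paths):
        dw = rng.normal(0.0, np.sqrt(1.0 / n_ref), size=n_ref)
        # fine Euler-Maruyama reference on the same Brownian path
        x_ref = 0.0
        for d in dw:
            x_ref += b(x_ref) / n_ref + sigma(x_ref) * d
        # Milstein scheme (1.2) on the coarse grid with n steps
        x, r = 0.0, n_ref // n
        for k in range(n):
            x0, w = x, 0.0
            for d in dw[k * r:(k + 1) * r]:
                x += b(x0) / n_ref + (sigma(x0) + dsig(x0) * sigma(x0) * w) * d
                w += d
        err += abs(x_ref - x)
    return err / n_paths

for n in (4, 8, 16, 32):
    print(n, strong_error(n))   # expected decay roughly like n^{-(1+alpha)/2}
```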
## Appendix A Proof for regularity estimates of PDEs
Proof of Theorem 4.2.: As for existence and uniqueness, this was already shown in [16, Theorem 4.3.1]. Therefore it is enough to show (4.3). We start with the case where \(a\) is a constant matrix and the first-order term of (4.2) vanishes. We stress that all proportionality constants in the relations \(\lesssim\) below depend on \(d,\alpha,\|b\|_{C^{\alpha}},\lambda\), and not on \(\theta\).
Case I. \(a\) is a constant positive definite matrix and \(b\equiv 0\).
As explained in [17, Proof of Ch. 1, Section 6, 2. Lemma], for a general non-degenerate constant matrix \(a\), by a change of coordinates it is enough to consider the following resolvent equation
\[\frac{1}{2}\Delta u-\theta u=f.\]
The solution to the above resolvent equation can be represented as
\[u=\int_{0}^{\infty}e^{-\theta t}P_{t}f\,\mathrm{d}t.\]
Then simply
\[\|u\|_{C^{\gamma}}\leqslant\int_{0}^{\infty}e^{-\theta t}\|P_{t}f\|_{C^{\gamma}}\,\mathrm{d}t\lesssim\int_{0}^{\infty}e^{-\theta t}t^{-\frac{\gamma-\alpha}{2}}\,\mathrm{d}t\,\|f\|_{C^{\alpha}}\lesssim\theta^{\frac{\gamma-(2+\alpha)}{2}}\|f\|_{C^{\alpha}}.\] (A.1)
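For completeness, the elementary computation behind the last step of (A.1) is the substitution \(s=\theta t\) (the Gamma factor is finite because \(\gamma<2+\alpha\)), combined with the standard heat-semigroup estimate \(\|P_{t}f\|_{C^{\gamma}}\lesssim t^{-\frac{\gamma-\alpha}{2}}\|f\|_{C^{\alpha}}\):
\[\int_{0}^{\infty}e^{-\theta t}t^{-\frac{\gamma-\alpha}{2}}\,\mathrm{d}t=\theta^{\frac{\gamma-\alpha}{2}-1}\int_{0}^{\infty}e^{-s}s^{-\frac{\gamma-\alpha}{2}}\,\mathrm{d}s=\Gamma\Big{(}1-\frac{\gamma-\alpha}{2}\Big{)}\,\theta^{\frac{\gamma-(2+\alpha)}{2}}.\]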
Case II. \(a\) satisfies Assumption 4.1 and \(b\equiv 0\).
Here we apply the frozen-coefficient method. Let \(\xi\) be a nonnegative, nonzero smooth function with support in \(B_{\delta}=\big{\{}x\in\mathbb{R}^{d}:|x|\leqslant\delta\big{\}}\), where \(\delta>0\) is to be chosen later. Define for \(z\in\mathbb{R}^{d}\)
\[\xi_{z}(x)\coloneqq\xi(x-z),\quad a_{z}\coloneqq a(z),\quad u_{z}(x) \coloneqq\xi_{z}(x)u(x),\quad f_{z}(x)\coloneqq\xi_{z}(x)f(x).\] (A.2)
Then we can observe that
\[a_{z}^{ij}\partial_{ij}u_{z}-\theta u_{z}=g_{z}\]
where
\[g_{z}\coloneqq f_{z}-(a^{ij}\partial_{ij}u)\xi_{z}+a_{z}^{ij}\partial_{ij}u_{z}\] \[= f_{z}-(a^{ij}-a_{z}^{ij})\partial_{ij}u\cdot\xi_{z}+a_{z}^{ij}( \partial_{i}u\partial_{j}\xi_{z}+\partial_{j}u\partial_{i}\xi_{z}+u\partial_{ ij}\xi_{z}).\]
Using the fact that \(a\) satisfies Assumption 4.1, we get
\[\|g_{z}\|_{C^{\alpha}}\lesssim \big{(}\|f_{z}\|_{C^{\alpha}}+\|a\|_{C^{\alpha}}\delta^{\alpha}\|u_ {z}\|_{C^{2+\alpha}}+\lambda^{-1}(\|u_{z}\|_{C^{1+\alpha}}\|\partial_{i}\xi_{z} \|_{C^{\alpha}}+\|\partial_{ij}\xi_{z}\|_{C^{\alpha}}\|u_{z}\|_{C^{\alpha}}) \big{)}.\] \[\lesssim \big{(}\|f_{z}\|_{C^{\alpha}}+\delta^{\alpha}\|u_{z}\|_{C^{2+ \alpha}}+\lambda^{-1}(\delta^{-(1+\alpha)}\|u_{z}\|_{C^{1+\alpha}}+\delta^{-(2 +\alpha)}\|u_{z}\|_{C^{\alpha}})\big{)}.\]
Besides, by interpolation and Young's inequality for products, we have that for any \(\epsilon>0\) there exists \(C_{\epsilon}>0\) such that
\[\|u_{z}\|_{C^{1+\alpha}}\leqslant C_{\epsilon}\|u_{z}\|_{C^{\alpha}}+\epsilon\| u_{z}\|_{C^{2+\alpha}},\]
which implies that
\[\|g_{z}\|_{C^{\alpha}}\lesssim\big{(}\|f_{z}\|_{C^{\alpha}}+(\delta^{\alpha}+ \delta^{-(1+\alpha)}\epsilon)\|u_{z}\|_{C^{2+\alpha}}+(C_{\epsilon}\delta^{- (1+\alpha)}+\delta^{-(2+\alpha)})\|u_{z}\|_{C^{\alpha}}\big{)}.\] (A.3)
First, by taking \(\gamma=2+\alpha\) in (A.1), we obtain
\[\|u_{z}\|_{C^{2+\alpha}} \leqslant C_{1}\|g_{z}\|_{C^{\alpha}}\] \[\leqslant C_{2}\big{(}\|f_{z}\|_{C^{\alpha}}+(\delta^{\alpha}+ \delta^{-(1+\alpha)}\epsilon)\|u_{z}\|_{C^{2+\alpha}}+(C_{\epsilon}\delta^{-( 1+\alpha)}+\delta^{-(2+\alpha)})\|u_{z}\|_{C^{\alpha}}\big{)}.\]
with some \(C_{1},C_{2}\lesssim 1\). If we choose \(\delta\) and then \(\epsilon\) small enough so that \(C_{2}\big{(}\delta^{\alpha}+\delta^{-(1+\alpha)}\epsilon\big{)}<\frac{1}{2}\), then we get
\[\|u_{z}\|_{C^{2+\alpha}}\lesssim\|f_{z}\|_{C^{\alpha}}+\|u_{z}\|_{C^{\alpha}}.\] (A.4)
Plugging the above into (A.3), we get
\[\|g_{z}\|_{C^{\alpha}}\lesssim\|f_{z}\|_{C^{\alpha}}+\|u_{z}\|_{C^{\alpha}}.\] (A.5)
We again use (A.1), now with \(\gamma=\alpha\), together with (A.5):
\[\|u_{z}\|_{C^{\alpha}}\leqslant C_{3}\theta^{-1}\|g_{z}\|_{C^{\alpha}}\leqslant C_{4}\theta^{-1}\big{(}\|f_{z}\|_{C^{\alpha}}+\|u_{z}\|_{C^{\alpha}}\big{)},\]
where again the positive constants \(C_{3},C_{4}\lesssim 1\). Take \(\theta\) large enough so that \(C_{4}\theta^{-1}\leqslant\frac{1}{2}\); then we have
\[\|u_{z}\|_{C^{\alpha}}\lesssim\|f_{z}\|_{C^{\alpha}}.\] (A.6)
Combining (A.1), (A.5) and (A.6) yields
\[\|u_{z}\|_{C^{\gamma}}\lesssim \theta^{\frac{\gamma-(2+\alpha)}{2}}\|g_{z}\|_{C^{\alpha}}\lesssim \theta^{\frac{\gamma-(2+\alpha)}{2}}\|f_{z}\|_{C^{\alpha}}.\]
Taking the supremum over \(z\in\mathbb{R}^{d}\), we get
\[\|u\|_{C^{\gamma}}\lesssim\theta^{\frac{\gamma-(2+\alpha)}{2}}\|f\|_{C^{\alpha}}.\] (A.7)
Case III. \(a\) satisfies Assumption 4.1 and \(b\in C^{\alpha}\).
By (A.7), for \(\theta>\theta_{0}\) the following holds
\[\|u\|_{C^{\gamma}}\lesssim\theta^{\frac{\gamma-(2+\alpha)}{2}}\big{(}\|f\|_{C^ {\alpha}}+\|b\cdot\nabla u\|_{C^{\alpha}}\big{)}.\] (A.8)
By taking \(\gamma=1+\alpha\), it is evident that
\[\|u\|_{C^{1+\alpha}}\leqslant C\theta^{-\frac{1}{2}}\big{(}\|f\|_{C^{\alpha}}+\|b\cdot \nabla u\|_{C^{\alpha}}\big{)}\leqslant C\theta^{-\frac{1}{2}}\big{(}\|f\|_{C^{ \alpha}}+\|b\|_{C^{\alpha}}\|u\|_{C^{1+\alpha}}\big{)}\]
with some \(C\lesssim 1\). Now we take \(\theta\) large enough so that
\[C\theta^{-\frac{1}{2}}\|b\|_{C^{\alpha}}\leq\frac{1}{2},\]
and we denote this threshold by \(\theta^{*}\) (which depends on \(\|b\|_{C^{\alpha}},\alpha,\lambda\)). Therefore, for \(\theta\geqslant\theta^{*}\), we get from (A.8) that
\[\|u\|_{C^{\gamma}}\lesssim\theta^{\frac{\gamma-(2+\alpha)}{2}}\|f\|_{C^{ \alpha}}.\]
We get the desired estimate (4.3).
## Acknowledgements
This research is supported by the Austrian Science Fund (FWF) Stand-Alone programme P 34992. CL acknowledges the financial support by DFG via Research Unit FOR 2402 when this project started.
|
2304.10984
|
IBBT: Informed Batch Belief Trees for Motion Planning Under Uncertainty
|
In this work, we propose the Informed Batch Belief Trees (IBBT) algorithm for
motion planning under motion and sensing uncertainties. The original stochastic
motion planning problem is divided into a deterministic motion planning problem
and a graph search problem. We solve the deterministic planning problem using
sampling-based methods such as PRM or RRG to construct a graph of nominal
trajectories. Then, an informed cost-to-go heuristic for the original problem
is computed based on the nominal trajectory graph. Finally, we grow a belief
tree by searching over the graph using the proposed heuristic. IBBT interleaves
between batch state sampling, nominal trajectory graph construction, heuristic
computing, and search over the graph to find belief space motion plans. IBBT is
an anytime, incremental algorithm. With an increasing number of batches of
samples added to the graph, the algorithm finds motion plans that converge to
the optimal one. IBBT is efficient by reusing results between sequential
iterations. The belief tree searching is an ordered search guided by an
informed heuristic. We test IBBT in different planning environments. Our
numerical investigation confirms that IBBT finds non-trivial motion plans and
is faster compared with previous similar methods.
|
Dongliang Zheng, Panagiotis Tsiotras
|
2023-04-21T14:31:19Z
|
http://arxiv.org/abs/2304.10984v1
|
# IBBT: Informed Batch Belief Trees for Motion Planning Under Uncertainty
###### Abstract
In this work, we propose the Informed Batch Belief Trees (IBBT) algorithm for motion planning under motion and sensing uncertainties. The original stochastic motion planning problem is divided into a _deterministic_ motion planning problem and a graph search problem. We solve the deterministic planning problem using sampling-based methods such as PRM or RRG to construct a graph of nominal trajectories. Then, an informed cost-to-go heuristic for the original problem is computed based on the nominal trajectory graph. Finally, we grow a belief tree by searching over the graph using the proposed heuristic. IBBT interleaves between batch state sampling, nominal trajectory graph construction, heuristic computing, and search over the graph to find belief space motion plans. IBBT is an anytime, incremental algorithm. With an increasing number of batches of samples added to the graph, the algorithm finds motion plans that converge to the optimal one. IBBT is efficient by reusing results between sequential iterations. The belief tree searching is an ordered search guided by an informed heuristic. We test IBBT in different planning environments. Our numerical investigation confirms that IBBT finds non-trivial motion plans and is faster compared with previous similar methods.
## I Introduction
For safe and reliable autonomous robot operation in a real-world environment, consideration of various uncertainties becomes necessary. These uncertainties may arise from an inaccurate motion model, actuation or sensor noise, partial sensing, and the presence of other agents moving in the same environment. In this paper, we study the safe motion planning problem for robot systems with nontrivial dynamics, motion uncertainty, and state-dependent measurement uncertainty in an environment with non-convex obstacles.
Planning under uncertainty is referred to as belief space planning (BSP), where the state of the robot is characterized by a probability distribution function (pdf) over all possible states. This pdf is commonly referred to as the _belief_ or information state [1, 2]. A BSP problem can be formulated as a partially observable Markov decision process (POMDP) problem [3]. Solving POMDPs for continuous state, control, and observation spaces is, however, intractable. Existing methods based on discretization are resolution-limited [4, 5]. Optimization over the entire discretized belief space to find a path is computationally expensive and does not scale well to large-scale problems. Online POMDP algorithms are often limited to short-horizon planning, have challenges when dealing with local minima, and are not suitable for global planning in large environments [6].
Planning in infinite-dimensional distributional (e.g., belief) spaces can become more tractable by using sampling-based methods [7]. For example, belief roadmap methods [8] build a belief roadmap to reduce estimation uncertainty; the rapidly-exploring random belief trees (RRBT) algorithm [9] has been proposed to grow a tree in the belief space. Owing to their advantages in avoiding local minima, dealing with nonconvex obstacles and high-dimensional state spaces, along with their anytime property, sampling-based methods have gained increased attention in the robotics community [10, 11, 12, 13, 14].
Robot safety under uncertainty can be also formulated as a chance-constrained optimization problem [15, 16, 17, 9]. In addition to minimizing the cost function, one also wants the robot not to collide with obstacles, with high probability. By approximating the chance constraints as deterministic constraints, references [15, 16, 17] solve the problem using an optimization-based framework. However, those approaches lack scalability with respect to problem complexity [18], and the explicit representation of the obstacles is usually required.
In this paper, we focus on sampling-based approaches similar to [9, 10, 19]. One challenge of sampling-based algorithms for planning under uncertainty is the lack of the optimal substructure property, which has been discussed in [9, 14, 20]. The lack of the optimal substructure property is further explained by the lack of total ordering on paths based on cost. Specifically, it is not enough to only minimize the usual cost function; explicitly finding paths that reduce the uncertainty of the robot is also important (see Figure 1(a)). The RRBT algorithm proposed in [9] overcomes the lack of optimal substructure property by introducing a partial-ordering of belief nodes and by keeping all non-dominated nodes in the belief tree. Note that without this partial-ordering, the methods in [10, 11, 12, 19] may not be able to find a solution, even if one exists. Minimizing the cost and checking the chance constraints can only guarantee that the existing paths in the tree satisfy the chance constraints. Without searching for paths that explicitly reduce state uncertainty, it will be difficult for future paths to satisfy the chance constraints.
In this paper, we propose the Informed Batch Belief Tree (IBBT) algorithm, which improves over the RRBT algorithm with the introduction of _batch sampling_ and _ordered graph
search guided by an informed heuristic_. Firstly, IBBT uses the partial ordering of belief nodes as in [9]. Compared to [10, 11, 12, 19], IBBT is able to find sophisticated plans that visit and revisit the information-rich region to gain information. Secondly, RRBT uses unordered search like RRT* while IBBT uses batch sampling and ordered search. RRBT adds one sample each time to the graph randomly. As shown in [21] and [22], ordered searches such as FMT* and BIT* perform better than RRT*. Thirdly, RRBT only uses the cost-to-come cost to guide the belief tree search while IBBT introduces a cost-to-go heuristic and uses the total path cost heuristic for informed belief tree search. After adding a sample, RRBT performs an exhaustive graph search. Thus all non-dominated belief nodes are added to the belief tree. With batch sampling and informed graph search, IBBT avoids adding unnecessary belief nodes. Thus, IBBT is able to find the initial solution in a shorter time and has better cost-time performance compared to RRBT.
## II Related Works
In [8], the problem of finding the minimum estimation uncertainty path for a robot from a starting position to a goal is studied by building a roadmap. In [9, 23], it was noted that the true _a priori_ probability distribution of the state should be used for motion planning instead of assuming maximum likelihood observations [8, 24]. A linear-quadratic Gaussian (LQG) controller along with the RRT algorithm [25] were used for motion planning in [23]. To achieve asymptotic optimality, the authors in [9] incrementally construct a graph and search over the graph to find all non-dominated belief nodes. Given the current graph, the Pareto frontier of belief nodes at each vertex is saved, where the Pareto frontier is defined by considering both the path cost and the node uncertainty.
In [11] high-frequency replanning is shown to be able to better react to uncertainty during plan execution. Monte Carlo simulation and importance sampling are used in [12] to compute the collision probability. Moving obstacles are considered in [18]. In [26], state dependence of the collision probability is considered and incorporated with chance-constraint RRT* [10, 27]. In [28], a roadmap search method is proposed to deal with localization uncertainty; however, solutions for which the robot needs to revisit a position to gain information are ruled out. Distributionally robust RRT is proposed in [19, 29], where moment-based ambiguity sets of distributions are used to enforce chance constraints instead of assuming Gaussian distributions. Similarly, a moment-based approach that considers non-Gaussian state distributions is studied in [30]. In [31], the Wasserstein distance is used as a metric for Gaussian belief space planning. The algorithm is compared with RRBT. However, from the simulation results, RRBT usually finds better (lower cost) plans and thus has a better convergence performance.
Other works that are not based on sampling-based methods formulate the chance-constrained motion planning problem as an optimization problem [15, 16, 17]. In those methods, the explicit representation of the obstacles is usually required. The obstacles may be represented by convex constraints or polynomial constraints. The chance constraints are then approximated as deterministic constraints and the optimization problem is solved by convex [16] or nonlinear programming [17]. Differential dynamic programming has also been used to solve motion planning under uncertainty [2, 32, 33]. These algorithms find a locally optimal trajectory in the neighborhood of a given reference trajectory. The algorithms iteratively linearize the system dynamics along the reference trajectory and solve an LQG problem to find the next reference trajectory.
## III Problem formulation
We consider the problem of planning for a robot with nontrivial dynamics, model uncertainty, measurement uncertainty from sensor noise, and obstacle constraints. The state-space \(\mathcal{X}\) is decomposed into free space \(\mathcal{X}_{\text{free}}\) and obstacle space \(\mathcal{X}_{\text{obs}}\). The motion planning problem is given by
\[\operatorname*{arg\,min}_{u_{k}}\ \mathbb{E}\left[\sum_{k=0}^{N-1}J(x_{k },u_{k})\right], \tag{1}\] \[\text{s.t.}\ \ x_{0}\sim\mathcal{N}(\bar{x}_{s},P_{0}),\ \bar{x}_{N}=\bar{x}_{g},\] (2) \[P(x_{k}\in\mathcal{X}_{\text{obs}})<\delta,\ k=0,\cdots,N,\] (3) \[x_{k+1}=f(x_{k},u_{k},w_{k}),\ k=0,\ldots,N-1,\] (4) \[y_{k}=h(x_{k},v_{k}),\ k=0,\ldots,N-1, \tag{5}\]
where (4) and (5) are the motion and sensing models, respectively. Furthermore, \(x_{k}\in\mathbb{R}^{u_{k}}\) is the state, \(u_{k}\in\mathbb{R}^{u_{k}}\) is the control input, and \(y_{k}\in\mathbb{R}^{u_{y}}\) is the measurement at time step \(k=0,1,\ldots,N-1\), where the steps of the noise processes \(w_{k}\in\mathbb{R}^{u_{k}}\) and \(v_{k}\in\mathbb{R}^{u_{y}}\) are i.i.d standard Gaussian random vectors, respectively. We assume that \((w_{k})_{k=0}^{N-1}\) and \((v_{k})_{k=0}^{N-1}\) are independent. Expression (2) is the boundary condition for the motion planning problem. The goal is to steer the system from some initial distribution to a goal state. Since the robot state is uncertain, the mean of the final state \(\bar{x}_{N}\) is constrained to be equal to the goal state \(\bar{x}_{g}\). Condition (3) is a chance constraint that enforces safety of the robot.
Similar to [9], the motion plan considered in this paper is formed by a nominal trajectory and a feedback controller that stabilizes the system around the nominal trajectory. Specifically, we will use a Connect function that returns a nominal trajectory and a stabilizing controller between two states \(\bar{x}^{a}\) and \(\bar{x}^{b}\),
\[(\bar{X}^{a,b},\bar{U}^{a,b},K^{a,b})=\texttt{Connect}(\bar{x}^{a},\bar{x}^{b }), \tag{6}\]
\(\bar{X}^{a,b}\) and \(\bar{U}^{a,b}\) are the sequence of states and controls of the nominal trajectory, and \(K^{a,b}\) is a sequence of the corresponding feedback control gains. The nominal trajectory can be obtained by solving a deterministic optimal control problem with boundary conditions \(\bar{x}^{a}\) and \(\bar{x}^{b}\), and system dynamics \(\bar{x}_{k+1}=f(\bar{x}_{k},\bar{u}_{k},0)\). The stabilizing controller can be computed using, for example, finite-time LQR design [14].
A Kalman filter is used for online state estimation, which gives the state estimate \(\hat{x}_{k}\) of \((x_{k}-\bar{x}_{k})\). Thus, the control at
time \(k\) is given by
\[u_{k}=\bar{u}_{k}+K_{k}\hat{x}_{k}. \tag{7}\]
With the introduction of the Connect function, the optimal motion planning problem (1)-(5) is reformulated as finding the sequence of intermediate states \((\bar{x}^{0},\bar{x}^{1},\cdots,\bar{x}^{t})\). The final control is given by
\[(u_{k})_{k=0}^{N-1}=(\texttt{Connect}(\bar{x}^{0},\bar{x}^{1}),\cdots,\texttt{Connect}(\bar{x}^{t-1},\bar{x}^{t})). \tag{8}\]
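To illustrate what one possible instantiation of the Connect function in (6) could look like for a linear system with the finite-time LQR design mentioned above, the Python sketch below returns an interpolated nominal trajectory together with time-varying feedback gains from the backward Riccati recursion. All function and variable names here are ours, and a real implementation would solve a two-point boundary value or optimal control problem between the two states instead of interpolating.

```python
import numpy as np

def connect(x_a, x_b, N, A, B, Q, R, Qf):
    """Sketch of Connect: nominal trajectory (linear interpolation) plus finite-horizon LQR gains."""
    # Nominal states/controls: simple interpolation for illustration only.
    X_bar = [x_a + (k / N) * (x_b - x_a) for k in range(N + 1)]
    U_bar = [np.linalg.pinv(B) @ (X_bar[k + 1] - A @ X_bar[k]) for k in range(N)]
    # Backward Riccati recursion for the time-varying feedback gains K_k.
    S = Qf
    K = [None] * N
    for k in reversed(range(N)):
        K[k] = -np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        S = Q + A.T @ S @ (A + B @ K[k])
    return X_bar, U_bar, K

# Example: 1D double integrator with time step dt.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
X_bar, U_bar, K = connect(np.array([0.0, 0.0]), np.array([1.0, 0.0]), N=20,
                          A=A, B=B, Q=np.eye(2), R=np.eye(1), Qf=10 * np.eye(2))
print(K[0])
```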
The remaining problem is to find the optimal sequence of intermediate states and enforce the chance constraints (3).
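As an aside, a common way to approximately evaluate a chance constraint such as (3) for a Gaussian state estimate and a single half-plane obstacle boundary is through the Gaussian CDF along the constraint normal; the snippet below is a generic sketch of this idea and not necessarily the exact mechanism used later in this paper. All names are illustrative.

```python
import math

def halfplane_violation_prob(mean, cov, a, b):
    """P(a^T x > b) for x ~ N(mean, cov): probability of crossing the half-plane a^T x <= b."""
    mu = sum(ai * mi for ai, mi in zip(a, mean))
    var = sum(a[i] * cov[i][j] * a[j] for i in range(len(a)) for j in range(len(a)))
    z = (b - mu) / math.sqrt(max(var, 1e-12))
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # = 1 - Phi(z)

# Example: planar position with covariance P_k and obstacle boundary x_1 <= 2.0;
# the plan is feasible at this step if the returned probability is below delta.
p = halfplane_violation_prob(mean=[1.0, 0.5], cov=[[0.1, 0.0], [0.0, 0.1]],
                             a=[1.0, 0.0], b=2.0)
print(p)
```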
## IV Covariance Propagation
We assume that the system given by (4) and (5) is locally well approximated by its linearization along the nominal trajectory. This is a common assumption as the system will stay close to the nominal trajectory using the feedback controller [14, 20]. Define
\[\tilde{x}_{k} =x_{k}-\tilde{x}_{k}, \tag{9a}\] \[\tilde{u}_{k} =u_{k}-\tilde{u}_{k},\] (9b) \[\tilde{y}_{k} =y_{k}-h(\tilde{x}_{k},0). \tag{9c}\]
By linearizing along \((\bar{x}_{k},\bar{u}_{k})\), the error dynamics is
\[\tilde{x}_{k} =A_{k-1}\tilde{x}_{k-1}+B_{k-1}\tilde{u}_{k-1}+G_{k-1}w_{k-1}, \tag{10}\] \[\tilde{y}_{k} =C_{k}\tilde{x}_{k}+D_{k}v_{k}.\]
We will consider this linear time-varying system hereafter. A Kalman filter is used for estimating \(\tilde{x}_{k}\) and is given by
\[\hat{x}_{k} =\hat{x}_{k^{\prime}}+L_{k}(\tilde{y}_{k}-C_{k}\hat{x}_{k^{\prime}}), \tag{11}\] \[\hat{x}_{k^{\prime}} =A_{k-1}\hat{x}_{k-1}+B_{k-1}\tilde{u}_{k-1}, \tag{12}\]
where,
\[L_{k} =\tilde{P}_{k^{\prime}}C_{k}^{\tau}(C_{k}\tilde{P}_{k^{\prime}}C_ {k}^{\tau}+D_{k}D_{k}^{\tau})^{-1}, \tag{13a}\] \[\tilde{P}_{k} =(I-L_{k}C_{k})\tilde{P}_{k^{\prime}},\] (13b) \[\tilde{P}_{k^{\prime}} =A_{k-1}\tilde{P}_{k-1}A_{k-1}^{\tau}+G_{k-1}G_{k-1}^{\tau}, \tag{13c}\]
and \(L_{k}\) is the Kalman gain. The covariances of \(\tilde{x}_{k}\), \(\hat{x}_{k}\) and \(\check{x}_{k}\triangleq\tilde{x}_{k}-\hat{x}_{k}\) are denoted as \(P_{k}=\mathbb{E}[\tilde{x}_{k}\tilde{x}_{k}^{\tau}]\), \(\hat{P}_{k}=\mathbb{E}[\hat{x}_{k}\hat{x}_{k}^{\tau}]\) and \(\tilde{P}_{k}=\mathbb{E}[\check{x}_{k}\check{x}_{k}^{\tau}]\), respectively. Note that the covariance of \(x_{k}\) is also given by \(P_{k}\) and the estimation error covariance \(\tilde{P}_{k}\) is computed from (13b). From (10)-(12), it can be verified that \(\mathbb{E}[\tilde{x}_{k}]=\mathbb{E}[\hat{x}_{k}]=\mathbb{E}[\hat{x}_{k^{\prime}}]\). Since \(\mathbb{E}[\tilde{x}_{0}]=0\), by choosing \(\hat{x}_{0}=0\), we have \(\mathbb{E}[\tilde{x}_{k}]=0\) for \(k=0,\cdots,N\). Using (11) and (12) we also have that
\[\hat{P}_{k} =\mathbb{E}[\hat{x}_{k}\hat{x}_{k}^{\tau}]\] \[=\mathbb{E}[\hat{x}_{k^{\prime}}\hat{x}_{k^{\prime}}^{\tau}]+L_{k}(C_{k}\tilde{P}_{k^{\prime}}C_{k}^{\tau}+D_{k}D_{k}^{\tau})L_{k}^{\tau}\] \[=(A_{k-1}+B_{k-1}K_{k-1})\hat{P}_{k-1}(A_{k-1}+B_{k-1}K_{k-1})^{\tau}+L_{k}C_{k}\tilde{P}_{k^{\prime}}. \tag{14}\]
Using the fact that \(\mathbb{E}[\hat{x}_{k}(\tilde{x}_{k}-\hat{x}_{k})^{\tau}]=0\), it can be verified that \(P_{k}=\hat{P}_{k}+\tilde{P}_{k}\). Thus, given the feedback gains \(K_{k}\) and the Kalman filter gain \(L_{k}\), we can predict the covariances of the state estimation error and the state along the trajectory, which also provides the state distributions in the case of a Gaussian distribution.
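As an illustration, the recursion (13a)-(14) can be implemented as a short routine that predicts the covariances along one edge of the graph. This is a sketch under the assumption that the time-varying matrices have already been collected along the nominal trajectory.

```python
import numpy as np

def propagate_covariances(A, B, C, D, G, K, P_hat0, P_tilde0):
    """Propagate covariances along an edge using (13a)-(14).

    A, B, G, K hold A_{k-1}, B_{k-1}, G_{k-1}, K_{k-1} and C, D hold C_k, D_k
    for k = 1, ..., N.  Returns the total covariances P_k = P_hat_k + P_tilde_k.
    """
    P_hat, P_tilde = P_hat0, P_tilde0
    n = P_tilde.shape[0]
    P_total = [P_hat + P_tilde]
    for k in range(len(A)):
        # (13c) prior estimation-error covariance
        P_tilde_prior = A[k] @ P_tilde @ A[k].T + G[k] @ G[k].T
        # (13a)-(13b) Kalman gain and posterior estimation-error covariance
        S = C[k] @ P_tilde_prior @ C[k].T + D[k] @ D[k].T
        L = P_tilde_prior @ C[k].T @ np.linalg.inv(S)
        P_tilde = (np.eye(n) - L @ C[k]) @ P_tilde_prior
        # (14) covariance of the state estimate under the feedback gains K
        A_cl = A[k] + B[k] @ K[k]
        P_hat = A_cl @ P_hat @ A_cl.T + L @ C[k] @ P_tilde_prior
        P_total.append(P_hat + P_tilde)
    return P_total
```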
## V Informed Batch Belief Tree Algorithm
### _Motivation_
The motivation of IBBT is shown in Figure 1. Two paths reach point \(B\) in Figure 1(a). The red path reaches \(B\) with a large cost but with low uncertainty. The blue path reaches \(B\) with a small cost but with high uncertainty. In this case, the blue path cannot dominate the red path, as it will incur a high probability of chance constraint violation for future segments of the path. Thus, in RRBT, both paths are preserved in the belief tree. More specifically, RRBT will find all non-dominated belief nodes by exhaustively searching the graph.
However, IBBT avoids exhaustive graph search and hence avoids adding unnecessary belief nodes. In Figure 1(b), if the blue path \(\overline{BG}\) (starting anywhere inside the blue ellipse) satisfies the chance constraint, the blue path \(\overline{SBG}\) will be the solution of the problem since it satisfies the chance constraints and has a lower cost than \(\overline{SABG}\). The operation of searching the current graph to find more paths reaching \(B\) with less uncertainty (but a higher cost), including the red one, becomes redundant.
Here, we assume that the cost of the nominal trajectory, \(\sum_{k=0}^{N-1}J(\bar{x}_{k},\bar{u}_{k})\), accounts for most of the cost in (1). That is, for the path \(\overline{BG}\), starting from either the red ellipse or the blue ellipse incurs a similar cost. Reducing the uncertainty at node \(B\) is mainly for satisfying the chance constraint of the future trajectory. Such an assumption can also be found, for example, in [13].
RRBT performs an exhaustive search to find all non-dominated nodes whenever a vertex is added to the graph. Specifically, RRBT will spend a lot of effort finding nodes with low uncertainty but a high cost-to-come. Such nodes are only necessary if they are indeed part of the optimal path. If the blue path in Figure 1(b) is the solution, we do not need to search for other non-dominated nodes (red ellipse). However, since we do not know if the future blue path \(\overline{BG}\) will satisfy the chance constraint or not, the red node may still be needed. Thus, IBBT explores the graph and adds
Fig. 1: (a) RRBT: Two paths reach the same point \(B\). The red path detours to an information-rich region to reduce uncertainty. Both paths are explored and preserved in the belief tree in RRBT as it finds all non-dominated belief nodes. (b) IBBT avoids exploring unnecessary belief nodes. If the blue path \(\overline{BG}\) satisfies the chance constraint, the whole blue path \(\overline{SBG}\) satisfies the chance constraint and has a lower cost than the red path \(\overline{SABG}\). The operation of finding more paths reaching \(B\) with less uncertainty (but larger cost), including the red one, becomes redundant.
belief nodes to the belief tree only when it is necessary. This is done by batch sampling and using an informed heuristic.
### _Nominal Trajectory Graph_
The stochastic motion planning problem (1)-(5) is divided into a simpler deterministic planning problem and a belief tree search problem. The deterministic planning problem is given by
\[\operatorname*{arg\,min}_{\bar{u}_{k}}\ \sum_{k=0}^{N-1}J(\bar{x}_{k}, \bar{u}_{k}), \tag{15}\] \[\text{s.t.}\ \ \bar{x}_{0}=\bar{x}_{s},\ \bar{x}_{N}=\bar{x}_{g},\] (16) \[\bar{x}_{k}\notin\mathcal{X}_{\text{obs}},\ k=0,\cdots,N,\] (17) \[\bar{x}_{k+1}=f(\bar{x}_{k},\bar{u}_{k},0). \tag{18}\]
The deterministic planning problem can be solved using sampling-based methods. The Rapidly-exploring Random Graph (RRG) [7] algorithm is adopted to add a batch of samples and maintain a graph of nominal trajectories. Similarly, the PRM algorithm [34] may be used in place of RRG.
```
for \(i=1:m\) do
    \(\bar{x}_{\text{rand}}\leftarrow\textsc{SampleFree}\);
    \(v_{\text{nearest}}\leftarrow\textsc{Nearest}(V,\bar{x}_{\text{rand}})\);
    \(e_{\text{nearest}}\leftarrow\textsc{Connect}(v_{\text{nearest}}.\bar{x},\bar{x}_{\text{rand}})\);
    if \(\mathit{ObstacleFree}(e_{\text{nearest}})\) then
        \(V_{\text{near}}\leftarrow\textsc{Near}(V,\bar{x}_{\text{rand}})\);
        \(V\leftarrow V\cup\{v(\bar{x}_{\text{rand}})\}\);
        \(E\leftarrow E\cup\{e_{\text{nearest}}\}\);
        \(e\leftarrow\textsc{Connect}(\bar{x}_{\text{rand}},v_{\text{nearest}}.\bar{x})\);
        if \(\mathit{ObstacleFree}(e)\) then
            \(E\leftarrow E\cup\{e\}\);
        foreach \(v_{\text{near}}\in V_{\text{near}}\) do
            \(e\leftarrow\textsc{Connect}(v_{\text{near}}.\bar{x},\bar{x}_{\text{rand}})\);
            if \(\mathit{ObstacleFree}(e)\) then
                \(E\leftarrow E\cup\{e\}\);
            \(e\leftarrow\textsc{Connect}(\bar{x}_{\text{rand}},v_{\text{near}}.\bar{x})\);
            if \(\mathit{ObstacleFree}(e)\) then
                \(E\leftarrow E\cup\{e\}\);
return \(G,V_{\text{new}}\);
```
**Algorithm 1**RRG-D\((G,m)\)
The RRG-D algorithm given by Algorithm 1 follows the RRG algorithm developed in [7] with the additional consideration of system dynamics. RRG-D uses the Connect function introduced in Section III to build a graph of nominal trajectories. An edge is added to the graph only if the nominal trajectory is obstacle-free, which is indicated by the ObstacleFree check in Algorithm 1. RRG-D draws \(m\) samples whenever it is called by the IBBT algorithm. The \(m\) samples constitute one batch. The sampled states \(\bar{x}\), along with the edges \(e\) connecting them, generate a graph in the search space. For belief space planning, each vertex \(v\) has both state information \(v.\bar{x}\) and belief information \(v.N\). We use \(v(\bar{x})\) to refer to the vertex \(v\) whose state is \(v.\bar{x}\). RRG-D returns the updated graph and the newly added vertex set \(V_{\text{new}}\).
### _Ibbt_
The Informed Batch Belief Tree algorithm repeatedly performs two main operations: it first builds a graph of nominal trajectories to explore the state space of the robot, and then it searches over this graph to grow a belief tree in the belief space. The IBBT algorithm is given by Algorithm 2 and Algorithm 3.
```
1   \(n.P\leftarrow P_{0}\); \(n.\tilde{P}\leftarrow\tilde{P}_{0}\); \(n.c\leftarrow 0\); \(n.h\leftarrow\text{Inf}\); \(n.\text{parent}\leftarrow\text{null}\);
2   \(v_{s}.N\leftarrow\{n\}\); \(v_{g}.N\leftarrow\emptyset\);
3   \(v_{s}.\bar{x}\leftarrow\bar{x}_{s}\); \(v_{g}.\bar{x}\leftarrow\bar{x}_{g}\);
4   \(v_{s}.h\leftarrow\text{Inf}\); \(v_{g}.h\leftarrow 0\);
5   \(V\leftarrow\{v_{s},v_{g}\}\); \(E\leftarrow\emptyset\); \(G\leftarrow(V,E)\);
6   \(Q\leftarrow\{n\}\); \(\mathit{Cost}\leftarrow\text{Inf}\);
7   repeat
8       \((G,V_{\text{new}})=\textsc{RRG-D}(G,m)\);
9       \(G=\textsc{ValueIteration}(G)\);
10      foreach \(v_{\text{new}}\in V_{\text{new}}\) do
11          foreach \(v_{\text{neighbor}}\) of \(v_{\text{new}}\) do
12              \(Q\leftarrow Q\cup v_{\text{neighbor}}.N\);
13      \(\textsc{Prune}(Q,\mathit{Cost})\);
14      \(Q\leftarrow Q\cup v_{g}.N\);
15      \((G,Q,\text{flag})=\textsc{GraphSearch}(G,Q)\);
16      if flag then
17          \(\mathit{Cost}=\min\{n.c\,|\,\forall n\in v_{g}.N\}\);
18  until Stop;
19  return \(G\), flag;
```
**Algorithm 2**Informed Batch Belief Tree
Additional variables are needed to define a belief tree. A belief node \(n\) is defined by a state covariance \(n.P\)
an estimation error covariance \(n.\tilde{P}\), a cost-to-come \(n.c\), a heuristic cost-to-go \(n.h\), and a parent node index \(n.\text{parent}\). A vertex \(v\) is defined by a state \(v.\bar{x}\), a set of belief nodes \(v.N\), and a vertex cost \(v.h\).
The graph search given by Algorithm 3 repeats two primitive procedures to grow a belief tree: _Belief node selection_ which selects the best node in the belief queue for expansion; _Belief propagation_ which propagates the selected belief node to its neighbor vertices to generate new belief nodes. The metric to rank the belief nodes in the belief queue is vital for efficient graph search.
Based on the nominal trajectory graph, we can compute the cost-to-go for all vertices. A nominal trajectory graph is shown in Figure 2(a). Every edge in the graph is computed by solving a deterministic optimal control problem with edge cost given by (15). We compute the cost-to-go \(v_{i}.h\) using value iteration for every vertex in the graph. \(v_{i}.h\) is the true cost-to-go for the nominal trajectory graph and is an informed, admissible cost-to-go heuristic for the belief tree search problem. Here we assume that \(J(\bar{x}_{k},\bar{u}_{k})\leq\mathbb{E}\left[J(x_{k},u_{k})\right]\), thus \(\sum_{k=0}^{N-1}J(\bar{x}_{k},\bar{u}_{k})\leq\mathbb{E}\left[\sum_{k=0}^{N-1} J(x_{k},u_{k})\right]\). This assumption is true when \(J(x_{k},u_{k})\) is a convex function by using Jensen's inequality [35]. For example, a quadratic cost is very common in robotics applications where we want to minimize control effort and state uncertainty (covariance). Moreover, additional chance constraint checking is performed in the belief tree search. Therefore, \(\sum_{k=0}^{N-1}J(\bar{x}_{k},\bar{u}_{k})\) is an underestimate of the actual cost.
The nodes in the belief node queue are ranked based on the total heuristic cost \(n.f=n.c+n.h\). All belief nodes at the same vertex have the same heuristic cost-to-go and \(n.h=v.h\). In Figure 2(b), two belief nodes \(n_{1}\), \(n_{2}\) are shown at vertex \(v_{i}\). Their total heuristic costs are \(n_{1}.f=n_{1}.c+v_{i}.h\) and \(n_{2}.f=n_{2}.c+v_{i}.h\), respectively.
The partial ordering of belief nodes is defined as follows [9]. Let \(n_{a}\) and \(n_{b}\) be two belief nodes of the same vertex \(v\). We use \(n_{a}<n_{b}\) to denote that belief node \(n_{b}\) is dominated by \(n_{a}\). \(n_{a}<n_{b}\) is true if
\[(n_{a}.f<n_{b}.f)\wedge(n_{a}.P<n_{b}.P)\wedge(n_{a}.\tilde{P}<n_{b}.\tilde{P}). \tag{19}\]
In this case, \(n_{a}\) is better than \(n_{b}\) since it traces back a path that reaches \(v\) with less cost and less uncertainty compared with \(n_{b}\). Next, we summarize some primitive procedures used in the IBBT algorithm.
**Pop:** \(\texttt{Pop}(Q)\) selects the best belief node, in terms of the lowest total cost \(n.f\), from the belief queue \(Q\) and removes it from \(Q\).
**Propagate:** The Propagate procedure implements three operations: covariance propagation, chance constraint evaluation, and cost calculation. Propagate\((e,n)\) performs the covariance propagation using (13a)-(14). It takes an edge \(e\) and a belief node \(n\) at the starting vertex of the edge as inputs. Chance constraints are evaluated using the state covariance \(P_{k}\) along the edge. If there are no chance constraint violations, a new belief \(n_{\text{new}}\) is returned, which is the final belief at the end vertex of the edge. Otherwise, the procedure returns no belief. The cost-to-come of \(n_{\text{new}}\) is the sum of \(n.c\) and the cost of edge \(e\) by applying the controller (7) associated with \(e\).
**Append Belief:** The function \(\texttt{AppendBelief}(G,v,n_{\text{new}})\) decides if the new belief \(n_{\text{new}}\) should be added to vertex \(v\) or not. If \(n_{\text{new}}\) is not dominated by any existing belief nodes in \(v.N\), \(n_{\text{new}}\) is added to \(v.N\). Note that adding \(n_{\text{new}}\) means extending the current belief tree such that \(n_{\text{new}}\) becomes a leaf node of the current belief tree. Next, we also check if any existing belief node in \(v.N\) is dominated by \(n_{\text{new}}\). If an existing belief is dominated, its descendant and the node itself are pruned.
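For illustration, the domination test (19) used by AppendBelief can be sketched as follows; the covariance inequalities are checked in the positive-semidefinite (Loewner) sense, and the attribute names mirror the notation above (this is a sketch, not the paper's implementation).

```python
import numpy as np

def psd_leq(P_a, P_b, tol=1e-9):
    """Return True if P_b - P_a is positive semidefinite (P_a "smaller" than P_b)."""
    return np.all(np.linalg.eigvalsh(P_b - P_a) > -tol)

def dominates(n_a, n_b):
    """Partial ordering (19): n_a dominates n_b (same vertex, lower cost, less uncertainty)."""
    return (n_a.f < n_b.f) and psd_leq(n_a.P, n_b.P) and psd_leq(n_a.P_tilde, n_b.P_tilde)
```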
**Prune Node Queue:** The function \(\texttt{Prune}(Q,Cost)\) removes nodes in \(Q\) whose total heuristic cost is greater than \(Cost\). \(Cost\) is the cost of the current solution found.
**Value Iteration:** The function \(\texttt{ValueIteration}(G)\) computes the cost-to-go for all vertices in \(G\) using value iteration. The value iteration is performed on the nominal trajectory graph. For vertices whose cost-to-go values were computed in the previous iteration (before calling this function), those values are reused as the initialization for faster convergence.
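A minimal sketch of ValueIteration on the nominal trajectory graph is shown below; `out_edges[v]` is assumed to hold the outgoing `(neighbor, edge_cost)` pairs of vertex `v`, and previously computed `v.h` values act as a warm start.

```python
def value_iteration(vertices, out_edges, v_goal, tol=1e-6):
    """Compute the cost-to-go v.h of every vertex on the nominal trajectory graph."""
    for v in vertices:
        if getattr(v, "h", None) is None:
            v.h = float("inf")               # cold start; previously computed values are reused
    v_goal.h = 0.0
    changed = True
    while changed:                            # Bellman updates until no value improves
        changed = False
        for v in vertices:
            if v is v_goal:
                continue
            best = min((c + w.h for (w, c) in out_edges.get(v, [])), default=float("inf"))
            if best < v.h - tol:
                v.h = best
                changed = True
    return vertices
```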
In Algorithm 2, Line 1-5 initializes the graph and the belief tree. The initial condition of the motion planning problem is given by the starting state \(\bar{x}_{s}\), state covariance \(P_{0}\), and estimation error covariance \(\tilde{P}_{0}\). The goal state is \(\bar{x}_{g}\). In Line 6, the queue \(Q\) is initialized with the initial node \(n\) and the cost of the current solution is set as infinity. In Line 8, the RRG-D is called to add \(m\) samples and maintain a graph of nominal trajectories; \(V_{\text{new}}\) is the set of newly added vertices after calling RRG-D. Based on the nominal trajectory graph, the cost-to-go for all vertices in \(G\) is computed using value iteration (Line 9). Line 10-12 update the belief node queue after batch sampling. For every vertex that has an outgoing edge towards \(v_{\text{new}}\), all the belief nodes at that vertex are added to the queue.
In Algorithm 3, the belief \(n\) is propagated outwards to all the neighbor vertices of \(v(n)\) to grow the belief tree in Line 7-11. \(v(n)\) refers to the vertex associated with \(n\). \(v_{\text{neighbor}}\) is a neighbor of \(v(n)\) when there is an edge \(e_{\text{neighbor}}\) from \(v(n)\) to \(v_{\text{neighbor}}\) in the graph. The new belief \(n_{\text{new}}\) is added to the
Fig. 2: (a) Nominal trajectory graph. Each edge is computed by solving a deterministic optimal control problem with edge cost given by (15). (b) Two belief nodes are shown at vertex \(v_{i}\).
\(v_{\text{neighbor}}.N\) and \(Q\) if the belief tree extension is successful. Then, \(n\) is marked as the parent node of \(n_{\text{new}}\). Note that each belief node traces back a unique path from the initial belief node. For every belief node in the belief tree, we have already found a feasible path (satisfying the chance constraints) to this node. Algorithm 3 terminates when the belief node at \(\bar{x}_{g}\) is selected for expansion (Line 4-6, Algorithm 3) or \(Q\) is empty. In the first case, the best solution is found. In the second case, no solution exists given the current graph.
## VI Experimental Results
In this section, we test the IBBT algorithm on different motion planning problems and compare the results with the RRBT algorithm [9].
### _Double Integrator_
The first planning environment is shown in Figure 3. The gray areas are obstacles and the blue region is the information-rich region, that is, the measurement noise is small when the robot is in this region. We use the 2D double integrator dynamics with motion and sensing uncertainties as an example. The system model is linear and is given by
\[\begin{split} x_{k+1}&=A_{k}x_{k}+B_{k}u_{k}+G_{k} w_{k},\\ y_{k}&=C_{k}x_{k}+D_{k}v_{k},\end{split} \tag{20}\]
where the system state includes position and velocity, the control input is the acceleration. The system matrices are given by
\[\begin{split} A_{k}=\begin{bmatrix}1&0&\Delta t&0\\ 0&1&0&\Delta t\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix},\quad B_{k}=\begin{bmatrix}\Delta t^{2}/2&0\\ 0&\Delta t^{2}/2\\ \Delta t&0\\ 0&\Delta t\end{bmatrix},\quad C_{k}=I_{4},\end{split} \tag{21}\]
\(G_{k}=\sqrt{\Delta t}\operatorname{diag}(0.03,0.03,0.02,0.02)\), and \(D_{k}=0.01I_{4}\) when the robot is in an information-rich region, otherwise \(D_{k}=I_{4}\).
To compute the nominal trajectories, the analytical solution is available [20]. An LQG controller is used to compute the feedback gain \(K\) in the Connect function. The collision probability in the chance constraint is approximated using Monte Carlo simulations. We sample from the state distribution and count the number of samples that collide with the obstacles. The ratio of collided samples to the total samples is the approximate collision probability.
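The Monte Carlo approximation of the collision probability can be sketched as below; `in_collision` is a placeholder for the obstacle test, and the position is assumed to occupy the first two components of the state, as in the double-integrator model above.

```python
import numpy as np

def collision_probability(mean, cov, in_collision, n_samples=1000, rng=None):
    """Estimate P(collision) by sampling states from N(mean, cov) along the edge."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    hits = sum(bool(in_collision(s[:2])) for s in samples)  # first two states = position
    return hits / n_samples
```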
We compared the performance of RRBT and IBBT in finding the first solution. The belief tree from RRBT is shown in Figure 3(a), and the belief tree from IBBT is shown in Figure 3(b). Both algorithms use the same set of states and find the same solution, which is given in Figure 4. The robot first goes down to the information-rich region to reduce its uncertainty, since moving directly toward the goal would violate the chance constraint.
Fewer belief nodes are searched and added to the tree using IBBT compared with RRBT, even though they return the same solution. IBBT uses batch sampling and computes the informed cost-to-go heuristic to guide the belief tree search, while RRBT only uses the cost-to-come. RRBT tries to find all non-dominated belief nodes whenever a vertex is added to the graph. Thus, it will find belief nodes that have low uncertainty but high cost-to-come (shown as small ellipses in Figure 3(a)). However, if such a node is not part of the solution path, this computation is not necessary. The comparison of the results is shown in Figure 5. The solving time for IBBT and RRBT is around 0.05 sec and 0.14 sec respectively.
Fig. 4: First solution found by both algorithms.
Fig. 5: Comparison between the IBBT and the RRBT algorithms. IBBT is faster to find the first solution.
Fig. 3: (a) Nominal trajectory graph and belief tree from the RRBT algorithm. (b) Nominal trajectory graph and belief tree from the IBBT algorithm. Both algorithms stop when they find the first solution. The extra ellipses in the left figure indicate that RRBT adds more nodes to the belief tree by exhaustive search. IBBT avoids unnecessary belief node expansion and finds the same solution faster.
The second planning environment is shown in Figure 6. The problem setting is similar to the first environment except that more obstacles and information-rich regions are added. The first solution and the improved solution are shown in Figure 6(a) and Figure 6(b), respectively. The green lines are the mean trajectories. The gray lines around the green lines are the Monte-Carlo simulation results. The comparison with the RRBT algorithm is given in Figure 7. After finding the initial solution, both algorithms are able to improve their solution when more samples are added to the graph but IBBT is able to find a better solution in a much shorter time.
### _Dubins Vehicle_
Finally, we tested our algorithm using the Dubins vehicle model. The deterministic discrete-time model is given by
\[\begin{split} x_{k+1}&=x_{k}+\cos\theta_{k}\Delta t,\\ y_{k+1}&=y_{k}+\sin\theta_{k}\Delta t,\\ \theta_{k+1}&=\theta_{k}+u_{k}\Delta t.\end{split} \tag{22}\]
The nominal trajectory for the Dubins vehicle is chosen as the minimum length path connecting two configurations of the vehicle. The analytical solution for the nominal trajectory is available in [36].
After linearization, the error dynamics around the nominal path is given by (20), where the system matrices are
\[A_{k}=\begin{bmatrix}1&0&-\sin\theta_{k}\Delta t\\ 0&1&\cos\theta_{k}\Delta t\\ 0&0&1\end{bmatrix},\quad B_{k}=\begin{bmatrix}0\\ 0\\ \Delta t\end{bmatrix},\quad C_{k}=I_{3}. \tag{23}\]
\(G_{k}=\sqrt{\Delta t}\operatorname{diag}(0.02,0.02,0.02)\), \(D_{k}=0.1I_{3}\) when the robot is in an information-rich region, otherwise \(D_{k}=2I_{3}\). An LQG controller is used to compute the feedback gain \(K\); the weighting matrices of the LQG cost are \(Q=2I_{3}\) and \(R=1\).
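For reference, the linearized matrices in (23) can be assembled directly from the nominal heading; a small sketch:

```python
import numpy as np

def dubins_error_dynamics(theta_bar, dt):
    """Linearized error dynamics (23) of the Dubins vehicle about a nominal heading."""
    A = np.array([[1.0, 0.0, -np.sin(theta_bar) * dt],
                  [0.0, 1.0,  np.cos(theta_bar) * dt],
                  [0.0, 0.0,  1.0]])
    B = np.array([[0.0], [0.0], [dt]])
    C = np.eye(3)
    G = np.sqrt(dt) * np.diag([0.02, 0.02, 0.02])
    return A, B, C, G
```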
The first solution and the improved solution are shown in Figure 8(a) and Figure 8(b), respectively. The green line is the mean trajectory. The gray lines around the green lines are the Monte-Carlo simulations. The comparison with the RRBT algorithm is given in Figure 9. After finding the initial solution, both algorithms are able to improve their current solution when more samples are added to the graph. Again, IBBT has better cost vs. time performance.
## VII Conclusion
We developed an online, anytime, incremental algorithm, IBBT, for motion planning under uncertainties. The algorithm considers a robot that is partially observable, has motion uncertainty, and operates in a continuous domain. The algorithm interleaves batch sampling, which builds a graph of nominal trajectories in the state space, with a search over this graph that grows a belief tree. The heuristic cost-to-go is computed using the nominal trajectory graph along
Fig. 8: Planning results of the Dubins vehicle. (a) First solution. (b) Final solution.
Fig. 6: Planning results of double integrator. (a) The first solution returned by IBBT; (b) Final solution with solving time less than 2 sec.
Fig. 7: Comparison between IBBT and RRBT. IBBT has better cost-time performance and finds the first solution with less time.
with value iteration. This cost-to-go, along with the cost-to-come, provides an informed heuristic to guide the belief tree search. The algorithm finds motion plans that converge to the optimal one as more batches of samples are added to the graph. We have tested the IBBT algorithm in different planning environments. The proposed algorithm finds non-trivial motion plans and provides better solutions in less time than previous methods.
|
2304.01137
|
Resource Allocation in IRS-aided Optical Wireless Communication Systems
|
One of the main challenges facing optical wireless communication (OWC)
systems is service disconnection in high blockage probability scenarios where
users might lose the line of sight (LoS) connection with their corresponding
access points (APs). In this work, we study the deployment of passive
reflecting surfaces referred to as Intelligent Reflecting Surfaces (IRSs) in
indoor visible light communication (VLC) to boost users signal to noise ratio
(SNR) and ensure service continuity. We formulate an optimization problem to
allocate APs and the mirrors of IRSs to users such that the sum rate is
increased. The results show a 35% increase in the sum rate of the IRS-aided OWC
system compared to the sum rate achieved by only considering the LoS channel
components. The results also show that the deployment of IRSs improves the sum
rate under LoS blockage.
|
Ahrar N. Hamad, Ahmad Adnan Qidan, Taisir E. H. El-Gorashi, Jaafar M. H. Elmirghani
|
2023-04-03T17:03:11Z
|
http://arxiv.org/abs/2304.01137v1
|
# Resource Allocation in IRS-aided Optical Wireless Communication Systems
###### Abstract
One of the main challenges facing optical wireless communication (OWC) systems is service disconnection in high blockage probability scenarios where users might lose the line of sight (LoS) connection with their corresponding access points (APs). In this work, we study the deployment of passive reflecting surfaces referred to as Intelligent Reflecting Surfaces (IRSs) in indoor visible light communication (VLC) to boost the users' signal-to-noise ratio (SNR) and ensure service continuity. We formulate an optimization problem to allocate APs and the mirrors of IRSs to users such that the sum rate is increased. The results show a 35% increase in the sum rate of the IRS-aided OWC system compared to the sum rate achieved by only considering the LoS channel components. The results also show that the deployment of IRSs improves the sum rate under LoS blockage.
**Keywords**: optical wireless communication (OWC), intelligent reflecting surface (IRS), resource allocation, optimization, beam blockage.
## 1 Introduction
In recent years, the need for high-speed wireless connectivity has grown significantly leading researchers from both academia and industry to investigate new technologies capable of supporting the escalating traffic of data intensive internet-based applications [1, 2]. In this context, optical wireless communication (OWC) has emerged as a promising 6G technology complementing radio frequency (RF) wireless systems to relieve the spectrum shortage [3, 4, 5, 6]. The large, unregulated optical spectrum bandwidth has the potential to support data rates beyond 20 Gbit/s per user. Other advantages of OWC include high energy efficiency, enriched security and low cost [7, 8, 9, 10, 11, 12]. However, OWC faces a number of challenges including inter-symbol interference (ISI) caused by multipath propagation and power constraints imposed by eye and skin health and safety. An angle diversity receiver (ADR) composed of multiple photodiodes, each with narrow field of view (FoV) [7, 8, 13], can be used to overcome Inter Symbol Interference (ISI) while maintaining users' connectivity. OWC systems can also suffer from severe performance degradation due to line of sight (LoS) blockage by objects in the environments. Moreover, in visible light communication (VLC) [14], where LED-based optical APs are used for illumination and communication, the confined coverage area of APs results in the need for a large number of APs to ensure full coverage.
Recently, Intelligent Reflecting Surfaces (IRSs) have been proposed to expand the connectivity of RF networks, while using a limited number of base stations, by focusing the base station signals towards users to reduce ISI and overcome LoS blockage. An IRS is composed of passive reflecting elements made from metasurfaces or mirrors. Typically, these reflecting elements are arranged in a planar surface to independently reflect the incident signals at different angles by different amplitude and/or phase [15].The deployment of IRS in conventional RF networks was shown to enhance performance in terms of spectral and energy efficiency [16].
In the context of indoor OWC, IRSs can be deployed as a key solution to ease the impact of LoS blockage, broaden the coverage and enhance the achievable user rate. In [17], the use of multiple reconfigurable intelligent surfaces in OWC was shown to improve the system performance by reducing the outage probability. In [18], the performance of meta surface and mirror array-based reflectors is studied in an OWC system and the results show that the power received by users is determined by the number of reflecting elements and their orientation and position. In addition, a novel physical layer security technique for an IRS-aided indoor OWC system is proposed in [19]. The orientation of the mirrors is optimized such that the IRS-based optical channel of the legitimate user is enhanced, while ensuring that eavesdroppers experience a weak IRS-based optical channel. Moreover, the optimization of IRS reflection coefficient using greedy algorithm to maximize the sum rate was considered in [20]. It is also shown that IRS based beam steering can improve signal reception in VLC systems by steering the incident light beam using meta lens and crystal liquid [21].
In this paper, we investigate the improvement of sum rate in mirror-based IRS-aided OWC systems. We optimize the allocation of APs and mirrors to users such that the sum rate of users is maximized. We solve the optimization problem using exhaustive search and compare the sum rate in the IRS-aided OWC system to the sum rate achievable by the LoS channel components only and LoS and diffuse non-line-of-sight (NLoS) channel components under varying transmitted optical power and LoS blockage ratios. The rest of this paper is organized as follows: In Section 2, the system model is presented. The simulation results are given and discussed in Section 3. Finally, conclusions are presented in Section 4.
## 2 System Model
A downlink VLC system is considered in an indoor environment with \(L\) multi-LED-based optical APs deployed on the ceiling of the room to provide illumination and communication for \(K\) users distributed randomly on the communication plane, as shown in Fig. 1 (a). Each AP is composed of multiple LED transmitters to ensure an expanded coverage area under eye safety power constraints. Each user is equipped with an ADR set to ensure that the user can be served by all the APs in the room. The direction of the ADR photodiodes is determined by their azimuth angle (\(AZ\)) and the elevation angle (\(EL\)). The walls, ceiling, and floor are modelled as Lambertian reflectors. More information on the calculation of the channel responses can be found in [22, 23, 24, 25, 26, 27, 28]. The reflected signals from the walls, ceiling and floor of the room cannot be controlled to maximize the gain of users. To enhance the SNR, mirror arrays are mounted on the walls to act as IRSs specular reflecting the APs signal to the APs assigned users. Each array of mirrors is composed of \(width_{m}\times height_{m}\) identical, passive, smooth reflecting rotational \(m\) mirrors. Each mirror is fixed to a randomly selected rotation determined by two independent angles; the roll angle around the x-axis and the yaw angle around the z-axis, as shown in Fig. 1 (b). Note that, given the large number of mirrors and the limited area of the indoor environment, random rotations of mirrors will most probably result in full coverage of the room, i.e., each user will most probably find a mirror that enhances its received signal.
Typically, OWC is intensity modulation/direct detection (IM/DD). Hence, on-off Key (OOK) modulation is considered to avoid complexity. All the APs are connected to a central unit (CU) that collects information on the network status in terms of resource availability and users demands. The CU uses this information to optimize the allocation of APs and mirrors to users such that the users sum rate is maximized. The optical channel is composed of a LoS component, which is a direct link from the AP to the user, and NLoS components made of diffuse reflections by objects in the environment. In this work, without loss of generality, we model up to the second reflections, ignoring higher reflections components of the NLoS channel. In addition to the LoS and diffuse NLoS components, the users in an IRS-aided -OWC system receive mirror-reflected components. Note that in this work we only consider first reflections by the mirrors, i.e., a signal from an AP reflected by a mirror to a user. The signal received by user \(k\) from optical AP \(l\) can be written as
\[P_{k}=P_{t}\left[O_{k,l}h_{k}^{LoS}+h_{k}^{NLoS}+h_{k}^{IRS}\right]+n_{k}, \tag{1}\]
where \(P_{t}\) is the transmitted power, \(h_{k}^{LoS}\) is the LoS channel impulse response, \(h_{k}^{NLoS}\) is the impulse response of the NLoS channel components reflected by the walls, ceiling and floor, \(h_{k}^{IRS}\) is the IRS channel impulse response of the components reflected by the mirrors within the array, \(n_{k}\) is the real-valued Additive White Gaussian Noise (AWGN) of user \(k\) with zero mean and a given variance, and \(O_{k,l}\) is a binary variable with \(O_{k,l}=1\) if there is no obstacle between user \(k\) and AP \(l\), and \(O_{k,l}=0\) otherwise.
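For clarity, the noise-free part of the received signal model (1) amounts to gating the LoS term by the blockage indicator; a minimal sketch (variable names are illustrative):

```python
def received_signal_power(P_t, h_los, h_nlos, h_irs, los_unblocked):
    """Noise-free received power of (1): the LoS component is switched by O_{k,l}."""
    O_kl = 1.0 if los_unblocked else 0.0
    return P_t * (O_kl * h_los + h_nlos + h_irs)
```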
## 3 Simulation Results
To demonstrate the improvement in OWC systems sum rate obtained by optimizing resource allocation in an IRS-aided OWC system, we consider the system model with room dimensions of \(5m\times 5m\times 3m\) with four LED-based APs (\(L=4\)) deployed on the ceiling and two mirror arrays mounted on opposite walls. Each mirror array contains \(5\times 5\) reflective elements, each with an effective area of \(25\ cm\times 15\ cm\). As mentioned earlier, each
Figure 1: (a) IRS-aided downlink VLC system model, (b) Rotational Mirror.
mirror is fixed to a randomly selected rotation. On the receiving plane, four active users (\(K=4\)) are randomly distributed. Other simulation parameters are listed in Table 1.
An optimization problem is formulated to allocate resources among APs and mirrors in each mirror array to users, with the objective of maximizing the aggregate sum rate of all users. It is worth mentioning that the resources considered in this work are assigned on a fractional time basis, i.e., resources are devoted to a user for the time required to send its data. In this context, a utility-based objective function in a logarithmic form is defined to maximize the sum rate and to preserve proportional fairness among users. The optimization problem is solved through exhaustive search.
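A minimal sketch of the exhaustive search over AP and mirror assignments is given below; `rate(k, ap, mirror)` is a placeholder for the achievable rate of user k under an assignment, and the log-utility objective promotes the sum rate while preserving proportional fairness. The joint search shown here grows exponentially with the number of users; as noted in Section 4, complexity can be reduced by allocating APs first and mirrors afterwards.

```python
import itertools
import math

def exhaustive_allocation(rate, n_users, n_aps, n_mirrors):
    """Try every (AP, mirror) assignment per user and keep the log-utility maximiser."""
    best_util, best_assign = -math.inf, None
    for aps in itertools.product(range(n_aps), repeat=n_users):
        for mirrors in itertools.product(range(n_mirrors), repeat=n_users):
            rates = [rate(k, aps[k], mirrors[k]) for k in range(n_users)]
            if min(rates) <= 0.0:
                continue                        # log-utility undefined for a zero rate
            util = sum(math.log(r) for r in rates)
            if util > best_util:
                best_util = util
                best_assign = list(zip(aps, mirrors))
    return best_assign, best_util
```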
| Parameter | Value |
| --- | --- |
| **Room configuration** | |
| Length x Width x Height | 5 x 5 x 3 m³ |
| Reflectivity of walls, floor, ceiling | 0.8, 0.3, 0.8 |
| Area of diffuse reflecting element (1st / 2nd reflection) | 5 cm x 5 cm / 20 cm x 20 cm |
| **LED transmitter** | |
| Quantity | 4 |
| Location | (1.5, 1.5, 3), (1.5, 3.5, 3), (3.5, 1.5, 3), (3.5, 3.5, 3) |
| Transmitted power | 2 W |
| Half-power semi-angle | 60° |
| **MA-IRS** | |
| Number of mirror arrays | 2 |
| Reflectivity of mirror | 0.95 |
| **ADR receiver** | |
| Quantity | 4 |
| Responsivity | 0.4 A/W |
| Physical area of a PD | 20 mm² |
| Bandwidth | 20 MHz |
| Azimuth angles of branches 1-4 | 0°, 90°, 180°, 270° |
| Elevation angle of each branch | 60° |
| Field of view of each branch | 25° |

Table 1: SYSTEM PARAMETERS [29]
Figure 2: Impulse channel responses for a user located in the centre of the room.
In Fig. 2, the impulse channel responses of LoS, diffuse NLoS and IRS-NLoS specular reflected components are depicted for a user located in the center of the room to illustrate their power contributions. It can be seen that the power contribution of the LoS component is significant compared to the diffuse components, which means that if a user loses the LoS link with its corresponding AP, it experiences very low SNR, and therefore the deployment of IRS is essential in OWC. Moreover, the reflected signal from the mirror has higher power compared to the power contributions of the reflected signals from the walls, ceiling and floor due to the ability of IRS to focus the reflected signals towards users.
Figure 3 compares the IRS-aided OWC system sum rate to the sum rate achievable by the LoS channel components only and by the LoS and diffuse non-line-of-sight (NLoS) channel components. The figure shows that the sum rate increases as the transmitted optical power increases. However, the transmitted power is subject to illumination and eye safety regulations. Deploying the IRS improved the sum rate by 35% and 29% compared to LoS only and LoS and diffuse NLoS, respectively.
Figure 4 shows the sum rate against the LoS blockage ratio. As expected, the achievable rate decreases as the LoS blockage ratio increases. However, the use of IRS relieves the blockage problem in OWC. The use of two 5x5 mirror arrays achieves a sum rate of up to 4.8 bps/Hz at a blockage ratio of 1, while the sum rate is limited to 1.8 bps/Hz considering the LoS and diffuse NLoS components. The results also show improvement in the sum rate (2.4 bps/Hz at a blockage ratio of 1) by using a single 5x5 mirror array compared to LoS only.
## 4 Conclusions
This paper investigated the performance of an IRS-aided indoor OWC system composed of multiple LED-based APs serving multiple users and arrays of rotational mirrors acting as the IRSs. An optimization problem was formulated
Figure 4: Sum rate versus blockage ratio.
Figure 3: Sum rate versus transmit optical power.
to maximize the sum rate through the optimal allocation of APs and IRS mirrors. The allocation of APs and mirrors is solved through exhaustive search. To reduce complexity, APs are first allocated to users, followed by the allocation of mirrors. The results show that deploying two 5x5 mirror arrays increases the sum rate by 35% and 30% compared to the sum rate achieved by the LoS channel component only and the LoS and diffuse NLoS channel components, respectively. The results also show that the deployment of mirror-based IRSs improves the sum rate under LoS blockage.
## Acknowledgements
This work has been supported in part by the Engineering and Physical Sciences Research Council (EPSRC), in part by the INTERNET project under Grant EP/H040536/1, and in part by the STAR project under Grant EP/K016873/1 and in part by the TOWS project under Grant EP/S016570/1. All data are provided in full in the results section of this paper.
|
2306.00450
|
Exploring Open-Vocabulary Semantic Segmentation without Human Labels
|
Semantic segmentation is a crucial task in computer vision that involves
segmenting images into semantically meaningful regions at the pixel level.
However, existing approaches often rely on expensive human annotations as
supervision for model training, limiting their scalability to large, unlabeled
datasets. To address this challenge, we present ZeroSeg, a novel method that
leverages the existing pretrained vision-language (VL) model (e.g. CLIP) to
train open-vocabulary zero-shot semantic segmentation models. Although these
VL models have acquired extensive knowledge of visual concepts, it is non-trivial
to exploit this knowledge for the task of semantic segmentation, as they are
usually trained at the image level. ZeroSeg overcomes this by distilling the visual
concepts learned by VL models into a set of segment tokens, each summarizing a
localized region of the target image. We evaluate ZeroSeg on multiple popular
segmentation benchmarks, including PASCAL VOC 2012, PASCAL Context, and COCO,
in a zero-shot manner (i.e., no training or adaptation on target segmentation
datasets). Our approach achieves state-of-the-art performance when compared to
other zero-shot segmentation methods under the same training data, while also
performing competitively compared to strongly supervised methods. Finally, we
also demonstrated the effectiveness of ZeroSeg on open-vocabulary segmentation,
through both human studies and qualitative visualizations.
|
Jun Chen, Deyao Zhu, Guocheng Qian, Bernard Ghanem, Zhicheng Yan, Chenchen Zhu, Fanyi Xiao, Mohamed Elhoseiny, Sean Chang Culatana
|
2023-06-01T08:47:06Z
|
http://arxiv.org/abs/2306.00450v1
|
# Exploring Open-Vocabulary Semantic Segmentation without Human Labels
###### Abstract
Semantic segmentation is a crucial task in computer vision that involves segmenting images into semantically meaningful regions at the pixel level. However, existing approaches often rely on expensive human annotations as supervision for model training, limiting their scalability to large, unlabeled datasets. To address this challenge, we present ZeroSeg, a novel method that leverages an existing pretrained vision-language (VL) model (_e.g._, CLIP [39]) to train open-vocabulary zero-shot semantic segmentation models. Although these VL models have acquired extensive knowledge of visual concepts, it is non-trivial to exploit this knowledge for the task of semantic segmentation, as they are usually trained at the image level. ZeroSeg overcomes this by distilling the visual concepts learned by VL models into a set of segment tokens, each summarizing a localized region of the target image. We evaluate ZeroSeg on multiple popular segmentation benchmarks, including PASCAL VOC 2012, PASCAL Context, and COCO, in a zero-shot manner (_i.e._, no training or adaptation on target segmentation datasets). Our approach achieves state-of-the-art performance when compared to other zero-shot segmentation methods under the same training data, while also performing competitively compared to strongly supervised methods. Finally, we also demonstrate the effectiveness of ZeroSeg on open-vocabulary segmentation, through both human studies and qualitative visualizations.
## 1 Introduction
Semantic segmentation involves dividing an image into distinct regions and assigning each area a corresponding label, and the open-vocabulary setting targets performing segmentation with an unrestricted vocabulary. This process typically necessitates human-generated annotations, such as per-pixel label supervision [55, 19, 24, 40, 45, 53, 56, 11], or image-level supervision, e.g. human natural language [20, 16, 48]. However, it can be time-consuming and expensive to obtain these annotations, and thus the resulting model can not be trained on large amounts of data. Recently, new developments in the field of vision and language learning [39, 26, 1, 52, 8, 58] have emerged. Although some of these approaches have demonstrated impressive open-vocabulary image/object classification capabilities, their performance for open-vocabulary semantic segmentation has been less promising. Nonetheless, they provide a potential alternative solution to overcome the limitations of traditional supervised methods.
To improve the scalability of semantic segmentation for a large or open vocabulary, researchers have explored models
Figure 1: **ZeroSeg overview.** ZeroSeg is a zero-shot open-vocabulary method for semantic segmentation. The approach begins by dividing the input image into a set of multi-scale views. Each view is then individually processed by a pretrained CLIP visual encoder model to extract visual concepts. These visual concepts are then distilled into our ZeroSeg model via the proposed segment matching loss. After training, our ZeroSeg model can be directly transferred to downstream semantic segmentation tasks in a zero-shot manner (_i.e._, no training or adaptation on target datasets). The entire training process does not require any human labels.
that can learn directly from tens of millions of text samples [20, 48, 16]. However, these vision-language (VL) models are prohibitively expensive to train and thus it is best to be able to exploit pretrained VL model weights (_e.g_., CLIP) for downstream segmentation tasks. However, to directly adapt CLIP for per-pixel semantic segmentation is not trivial, since CLIP has only been trained using coarse-grained image-level supervision, even though it has learned extensive visual concepts.
Initial attempts have been made to also leverage pre-trained vision-language models for open-vocabulary semantic segmentation, such as those discussed in [50, 33]. However, these previous attempts primarily treated CLIP as a zero-shot segment-level classifier or as a visual backbone for improved initialization. They usually still require expensive per-pixel labels or extensive image-text pairs for training. In contrast, our proposed method treats CLIP as a teacher model and distills its knowledge into our newly designed segmentation model, named ZeroSeg, to facilitate semantic segmentation. This process enables the direct transfer of various learned visual concepts into ZeroSeg without the need for any human labels, thereby naturally extending CLIP for open-vocabulary semantic segmentation.
One of the main challenges in using a large pretrained vision-language model for per-pixel level supervision is how to effectively group and categorize semantically consistent pixels. To tackle this problem, we have incorporated a segments-grouping approach [48] into our ZeroSeg model. This approach automates the grouping of pixels into more significant, arbitrary-shaped segments. With these segments, it then becomes much easier to distill semantic information from the CLIP visual encoder to these localized image regions. As illustrated in Fig. 1, ZeroSeg divides the input image into multiple scaled regions and extracts their semantic features via the CLIP visual encoder. Each of those regional features will be distilled into a set of learnable segment tokens both locally and globally. The visual segments will finally emerge to match the consistency with the different scales of semantic information from CLIP. Additionally, to improve the efficiency of training, our model also incorporates a masked autoencoder [21].
To assess the efficacy of our proposed model, we trained ZeroSeg using only the ImageNet 1k dataset [13], without any human label supervision. Our findings reveal that our model is comparable in performance to those that were trained with human-label supervision. Specifically, we achieved a mean intersection over union (mIoU) of 40.8 on PASCAL VOC 2012 [18], a mIoU of 20.6 on PASCAL Context [34], and a mIoU of 20.4 on the COCO dataset [29] in a zero-shot manner. These results are comparable to models such as GroupViT [48] and MaskCLIP [16], which were pretrained on 26M and 20M image-text pairs, respectively, indicating the efficiency and effectiveness of our approach. Additionally, our model has performed well in a larger-vocabulary (1000 classes) semantic segmentation task. Our work is the first to enable open-vocabulary semantic segmentation by only distilling knowledge from the pretrained vision-language without any human labels.
**Contributions.** We make the following contributions:
* Our research introduces ZeroSeg, a model that enables efficient open-vocabulary semantic segmentation without relying on human annotations. By distilling knowledge from a pretrained vision-language model, ZeroSeg bypasses the need for training on a large dataset of image-text pairs.
* The success of ZeroSeg is attributed to its carefully-designed architecture, which includes segment matching loss and multi-scaled feature distillation loss. These components are crucial for achieving accurate segmentation without human labels.
* Despite being pretrained on only ImageNet-1k, which has almost 20 times fewer samples than the other baseline models trained on text supervision, ZeroSeg achieves comparable results. As a result, our model provides a significant increase in training efficiency without sacrificing performance.
## 2 Related Works
**Supervised semantic segmentation.** Fully supervised semantic segmentation methods rely on per-pixel level supervision and have achieved significant success. Many such methods have been proposed, including [10, 31, 55, 19, 24, 40, 45, 53, 56, 11]. They have achieved strong performance for in-domain semantic segmentation. However, these methods often struggle to generalize to new visual concepts that were not present in the training dataset. This limitation can be attributed to the fact that fully supervised methods require pixel-level annotations for all object classes of interest, making them impractical for scenarios where new object classes are encountered at test time.
**Semantic segmentation with less supervision.** Obtaining dense per-pixel labels is often costly and time-consuming, leading to a trend of research on learning to segment with less supervision. Some works leverage image-level labels, such as classification labels [46, 38, 49], captions [20, 16, 48], or pseudo-masks [28]. Few-shot methods [32, 14, 30, 35, 42, 51] have also been proposed to perform segmentation with fewer pixel-wise labels. In addition, zero-shot semantic segmentation approaches [5, 47, 23, 2, 27] have been developed to segment unseen visual concepts by aligning with language embeddings, but they still require per-pixel label supervision on seen categories at the beginning. Our approach differs from previous methods in that
we rely solely on a CLIP vision encoder as the teacher without any per-pixel labels or language signals as supervision, allowing our strategy to train on any images. This enables more flexible and efficient semantic segmentation learning.
**Open-vocabulary segmentation.** Open-vocabulary segmentation aims to segment images beyond a closed-set vocabulary. Early attempts at open-vocabulary segmentation involved linking pixels to word concepts from WordNet [54]. However, recent developments in CLIP-based methods have significantly improved the ability to perform open-vocabulary segmentation. For example, Xu _et al._[50] propose using CLIP to classify mask segments generated by a pretrained mask generator [12]. Li _et al._ encode pixel embeddings from a pretrained visual encoder and classify each embedding with the CLIP text encoder [39]. MaskCLIP+ [57] adapts a frozen CLIP model and leverages pseudo-per-pixel labeling for semantic segmentation. Additionally, GroupViT [48] and OpenSeg [20] learn segmentation masks from large-scale text supervision. In contrast to these approaches, we generate segments by only distilling the knowledge from CLIP vision encoder.
**Denoising autoencoder.** Denoising autoencoders [21, 9, 3, 15] have gained popularity as a means of reconstructing original images from corrupted inputs. This technique is widely used in representation learning. There are various denoising strategies including jigsaw puzzles [36], inpainting [37], and color restoration [25], etc. Among these strategies, MAE [21], or masked autoencoder, stands out for its ability to reconstruct missing patches with superior performance. MAE also improves training efficiency by reducing the number of input tokens in the encoder. Our ZeroSeg also builds upon the success of MAE and incorporates a masked autoencoder to improve the training efficiency and semantic representation for those segments.
## 3 Method
This section presents our proposed architecture, ZeroSeg, which learns to perform semantic segmentation by only distilling the knowledge from the CLIP vision encoder. The architecture of ZeroSeg is illustrated in Figure 2. ZeroSeg incorporates a masked encoder [21] as the main backbone, and it has two different heads, the first one is the reconstruction decoder for reconstructing the masked patches. The other one is the segmentation head to learn the semantic segmentation task. By incorporating the masked encoder-decoder, we empirically found that it can generate more reliable segmentations while being more efficient. During training, only a fraction (40%) of the visual patches are fed into the encoder, while the masked decoder reconstructs the remaining patches. We divide the full image into grids of multiple scales, and then compute images features from these grids. Next, we distill the grid features into the ZeroSeg model with mainly two losses. The first one is a multiscale feature distillation loss, while the other one is a segment matching loss to promote the semantic consistency between the segments and the visual concepts from the CLIP visual encoder.
### Architecture
We build our ZeroSeg model based upon the recent masked autoencoder (MAE) work [21], which aims to learn semantically meaningful representations through reconstructing masked-out image pixels. Similar to MAE, ZeroSeg leverages an asymmetric encoder-decoder architecture (Fig. 2 left). When presented with an image, the encoder divides it into a sequence of non-overlapping patches. The encoder then selects a subset of visual tokens from each patch as input and generates the corresponding latent representation. Subsequently, the decoder utilizes this latent representation to reconstruct the missing patches, thereby producing a reconstructed image. ZeroSeg then trains the model by minimizing the mean squared error (MSE) between the reconstructed image and the original image in the pixel space, with the expectation that the resulting encoder would produce useful semantic representation that could benefit downstream tasks.
In addition to the encoder-decoder structure tailored for mask autoencoding, we also incorporate an important segmentation head design (Fig. 2 right) to help ZeroSeg learn to perform open-vocabulary semantic segmentation.
To group visual concepts, we build upon the previous work GroupViT [48]. This approach involves organizing grouping layers into a hierarchy of stages, with each stage containing a grouping block to combine smaller groups into larger ones. Specifically, at each grouping layer, learnable segment tokens are used to bring semantically similar tokens together to form a single segment token based on their similarity. Finally, the image segments are merged into a fixed number of segment tokens \(\{\texttt{g}_{1},\texttt{g}_{2},...,\texttt{g}_{m}\}\), each corresponding to a disjoint image region. This grouping process enables the method to organize visual information into arbitrary semantically meaningful image segments.
Though successful, GroupViT requires a large set of image-caption pairs for training, which is cumbersome and, as we will show, introduces bias into the type of data included in the training set that ultimately hurts the performance on the segmentation task. For this reason, we propose a text-free segmentation head in ZeroSeg (shown in Fig. 2). This means that all we need for training is a set of unlabeled images, which simplifies the training and makes our method much more widely applicable. Specifically, to derive the semantic representation for segment tokens, we extract multi-scale image features using a pretrained CLIP visual encoder and distill them into these tokens. Since CLIP visual encoders are trained to produce representations matching the text encoder outputs, we leverage this to pro
duce the "pseudo text supervision" and thus avoid any text annotations.
### Multi-scale image feature distillation
**Multi-scale image feature extraction.** An image can contain complex and diverse semantic information. Since the CLIP model only provides a single global representation for the entire image, it may not be sufficient to extract detailed regional semantic information. As we will show in experiments, it is inadequate to naively adapt the CLIP model to our context, as it fails to capture the concept-specific (_i.e._, objects or stuff) information which is critical for semantic segmentation. To address this limitation, we propose a multi-scale image feature extraction strategy to better capture regional semantic information at different scales. Specifically, this strategy involves dividing the full image into multiple views, such as 2x2 and 3x3 grids, where each view corresponds to a different sub-region of the full image, as illustrated in Fig. 2. We then resize each view into a full-size image, and pass them through the CLIP visual encoder to produce image features of different scales: \(\{\mathrm{v}_{1},\mathrm{v}_{2},...,\mathrm{v}_{n}\}\), which are more likely to capture diverse objects and extract more object-localized semantic information.
**Multi-scale feature distillation loss.** To leverage the semantic information in the multi-scale CLIP visual features, we adopt a Transformer layer to encode all segment tokens, followed by an average pooling and an MLP layer to obtain the global image representation \(z\). We then compute the multi-scale feature distillation loss between \(z\) and the set of multi-scale image features \(\{\mathrm{v}_{1},\mathrm{v}_{2},...,\mathrm{v}_{n}\}\). For each v, we distill its knowledge to \(z\) using an L\({}_{1}\) loss. This process compels the global image feature \(z\) to capture diverse and distinct regional semantic representations, thereby contributing to a more comprehensive semantic understanding of the image.
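A minimal PyTorch-style sketch of the multi-scale feature extraction and the L\({}_{1}\) distillation loss is given below; `clip_visual` is assumed to be the frozen CLIP visual encoder returning one feature vector per view, and the grid sizes follow the 2x2 / 3x3 example above.

```python
import torch
import torch.nn.functional as F

def multiscale_clip_features(image, clip_visual, grids=(1, 2, 3)):
    """Split the image into g x g views, resize each view to full size and encode it."""
    _, _, H, W = image.shape
    feats = []
    with torch.no_grad():                         # the CLIP teacher is frozen
        for g in grids:
            hs, ws = H // g, W // g               # edge pixels may be dropped if not divisible
            for i in range(g):
                for j in range(g):
                    view = image[:, :, i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
                    view = F.interpolate(view, size=(H, W), mode="bilinear",
                                         align_corners=False)
                    feats.append(clip_visual(view))        # (B, d) per view
    return torch.stack(feats, dim=1)                        # (B, n_views, d)

def multiscale_distill_loss(z, view_feats):
    """L1 distillation between the global representation z (B, d) and every view feature."""
    return (view_feats - z.unsqueeze(1)).abs().mean()
```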
**Segment matching loss.** The current top-down approach for learning semantic masks with segment tokens lacks object-grounded constraints, which can potentially result in inconsistent semantic regions being captured by each segment token (_e_.\(g\)., mask pixels leaking into neighboring objects). This inconsistency can lead to incorrect segment classification. To overcome this, we propose a new segment matching loss \(\mathcal{L}_{match}\) as follows:
\[\mathcal{L}_{match}=\sum_{i=1}^{m}\min_{j}|\mathrm{g}_{i}-\mathrm{v}_{j}| \tag{1}\]
\(\mathcal{L}_{match}\) aims to map each segment token \(\mathrm{g}_{i}\) to its most semantically aligned multi-scale image region feature \(\mathrm{v}_{j}\), as illustrated in Fig 2 (right). Note that this segment matching loss is only computed between each segment token \(\mathrm{g}_{i}\) and local-regional features excluding the full-size image features. This design is to encourage each segment token to capture more object-centric semantic information. We
Figure 2: **Training ZeroSeg model.** ZeroSeg architecture consists of a ViT encoder and two heads including a decoder head and a segmentation head. The outputs from the decoder head is used to reconstruct the masked input image during training (_i.e_., masked autoencoding [21]), while the outputs from the segmentation head are transformed into several segment tokens \(\{g\}\) to learn semantic segmentation via distillation. To effectively distill localized semantic information to the segmentation model, ZeroSeg employs a multi-scale feature generation method that divides the input image into multi-scale views, using \(e\)._g_. 2\(\times\)2 and 3\(\times\)3 grids, and pass these views to a pretrained CLIP visual encoder to produce visual features \(\{v_{1}\), \(v_{2}\),...,\(v_{n}\}\). Then, ZeroSeg distills semantic information from these multi-scale features to the segmentation model via two loss functions. The first one is an L\({}_{1}\) distillation loss between \(\{v_{1},v_{2},v_{3},...,v_{n}\}\) and the global feature \(z\). The second one is a segment matching loss to perform distillation between local region features \(\{v_{2},v_{3},...,v_{n}\}\) (excluding \(v_{1}\) since it corresponds to the full-sized image feature) and segment tokens. For each segment token, this loss function searches for its nearest neighbor local region, and minimizes the L\({}_{1}\) distance between them.
We achieve this by finding, for each g\({}_{i}\), its nearest v\({}_{j}\) in L\({}_{1}\) distance and minimizing that distance. As we will show in Sec. 4.4, adding this segment matching loss largely helps improve the semantic segmentation accuracy by avoiding poor matches between segment tokens and image regions during training.
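A minimal sketch of Eq. (1) is given below, assuming PyTorch tensors; the caller is expected to pass only the local-region features (excluding the full-image feature \(v_1\)), and names such as `segment_matching_loss` are illustrative.

```python
import torch

def segment_matching_loss(segment_tokens, local_features):
    """segment_tokens: (m, D) segment tokens g_i;
    local_features: (n, D) local-region CLIP features v_j (full-image feature excluded)."""
    # Pairwise L1 distances |g_i - v_j| between every token and every local region.
    dists = torch.cdist(segment_tokens, local_features, p=1)   # (m, n)
    # Eq. (1): for each segment token keep only its nearest region, then sum over tokens.
    return dists.min(dim=1).values.sum()
```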
## 4 Results
### Implementation details
**Model architecture.** Our proposed model, ZeroSeg, is based on the ViT-base architecture [17]. We use a 12-layer ViT transformer as our encoder. For the reconstruction and segmentation heads, we adopt two transformer decoders consisting of 8 and 5 transformer layers, respectively. Two grouping stages are appended to the segmentation head after the 2nd and 4th transformer layers, employing 32 and 8 learnable group tokens, respectively. To encode the positional information of image patches, we utilize absolute positional encoding [44] for both the encoder and the masked decoder. Multi-scale image features are extracted using a pretrained CLIP-L vision encoder. Details on the specific hyperparameters can be found in our Supplementary Materials.
**Training details.** We mainly train our ZeroSeg model on images from the ImageNet-1K [13] dataset. We also train on CC3M [7] and COCO [29] for the ablation study. We train our model on the ImageNet-1K dataset for 80 epochs, with the first 20 epochs as a warm-up period, during which we use a base learning rate of 1.5e-4. We use the AdamW optimizer and a batch size of 4096. We only employ the center crop without any other augmentation strategies; hence we can pre-compute and cache the multi-scale image features using the CLIP model for better training efficiency. Finally, all training images are rescaled to 224\(\times\)224 during training.
### Comparison to the state of the art
We evaluate ZeroSeg on three benchmark datasets: PASCAL VOC 2012 [18], PASCAL Context [34], and COCO [29]. These datasets consist of 20, 59, and 80 foreground classes, respectively. To generate text embeddings for each class \(c\) during inference, we feed the classes to the CLIP text encoder using a set of predefined prompt templates (_e.g_., "a photo of the {class}") and produce the corresponding class embeddings \(t_{c}\), \(c\in\{1,2,...,C\}\), where \(C\) is the total number of foreground classes. We then compute the cosine similarity between each group token g\({}_{m}\) and class embedding \(t_{c}\). Following [48], we adopt a threshold to filter out the background class and then take the nearest neighbor class as the semantic label for each group token. Specifically, we set the threshold to 0.95 for PASCAL VOC, 0.05 for PASCAL Context and 0.35 for COCO. All images are resized to have a shorter side length of 448 during inference.
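The class-assignment step described above can be sketched as follows; this assumes PyTorch tensors and raw cosine similarities, and the exact thresholding procedure of [48] (e.g., whether a softmax is applied first) may differ from this simplified version.

```python
import torch
import torch.nn.functional as F

def assign_classes(group_tokens, class_embeddings, bg_threshold=0.95):
    """group_tokens: (M, D) segment/group tokens;
    class_embeddings: (C, D) CLIP text embeddings t_c of the class prompts."""
    g = F.normalize(group_tokens, dim=-1)
    t = F.normalize(class_embeddings, dim=-1)
    sims = g @ t.t()                          # (M, C) cosine similarities
    best_sim, best_cls = sims.max(dim=1)
    # Tokens whose best similarity is below the threshold are treated as background (-1).
    best_cls = torch.where(best_sim < bg_threshold,
                           torch.full_like(best_cls, -1), best_cls)
    return best_cls
```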
We compare our ZeroSeg model to various supervised and weakly-supervised semantic segmentation methods, including DeiT [43], DINO [6], MoCo [22], GroupViT [48], MaskCLIP [16], MaskCLIP+ [57] and SegCLIP [33]. Notably, our ZeroSeg model is the only method that does not require any form of human labels during the training process. For fair comparisons, all models use the same ViT architecture as the backbone [17].
Table 1 summarizes the results of our comparison. First, the results demonstrate that ZeroSeg can achieve competitive performance to several non-zero-shot supervised segmentation baselines, despite not using any segmentation label during training. Specifically, ZeroSeg achieved an mIoU
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Pretraining} & \multicolumn{4}{c}{Transfer Learning} \\ Models & Arch & Dataset & Scale & Supervision & Require labels & Zeroshot & VOC & Context & COCO \\ \hline DeiT\({}^{\#}\)[43] & ViT & IN-1K [13] & 1.3M & class & Yes & ✗ & 53.0 & 35.9 & - \\ DINO\({}^{\#}\)[6] & ViT & IN-1K & 1.3M & self & Yes & ✗ & 39.1 & 20.4 & - \\ MoCo\({}^{\#}\)[22] & ViT & IN-1K & 1.3M & self & Yes & ✗ & 34.3 & 21.3 & - \\ MaskCLIP+ [16] & ViT & Context+COCO+IN-22k & 14M & pseudo masks & Yes & ✗ & - & 31.1 & 18.0 \\ \hline GroupViT & ViT & CC12M+YFCC & 26M & text & Yes & ✓ & 52.3 & 22.4 & 24.3 \\ \hline CLIP & ViT & LAION-20M [16] & 20M & text & Yes & ✓ & - & 13.5 & 8.2 \\ MaskCLIP [16] & ViT & LAION-20M [16] & 20M & text & Yes & ✓ & - & 17.7 & 11.8 \\ GroupViT\({}^{*}\)[48] & ViT & CC3M+COCO & 3.4M & text & Yes & ✓ & 28.1 & 14.8 & 12.9 \\ SegCLIP [33] & ViT & CC3M+COCO & 3.4M & text+CLIP\({}_{\text{T}}\) & Yes & ✓ & 33.3 & 19.1 & 15.2 \\ \hline ZeroSeg (Ours) & ViT & CC3M+COCO & 3.4M & CLIP\({}_{\text{V}}\) & No & ✓ & 37.3 & 19.7 & 17.8 \\ ZeroSeg (Ours) & ViT & IN-1K & 1.3M & CLIP\({}_{\text{V}}\) & No & ✓ & **40.8** & **20.4** & **20.2** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison to state-of-the-art baselines.** In the top section, we compare ZeroSeg to fully supervised segmentation methods. In the middle and bottom sections, we compare ZeroSeg to zero-shot segmentation methods which do not require any finetuning or adaptation on target segmentation datasets. Note that MaskCLIP+ training requires a pretrained MaskCLIP model to generate pseudo segmentation ground truth and an adaptation step on target segmentation datasets. CLIP\({}_{\text{V}}\) and CLIP\({}_{\text{T}}\) denote the visual and text encoder of a pretrained CLIP model, respectively. \(\#\) refers to numbers reported in GroupViT [48], while \(*\) refers to results reported from SegCLIP [33]. All results are reported using the mIoU metric.
score of 40.8 on VOC, surpassing the performance of the supervised segmentation models with DINO and MoCo pretraining by +1.7 and +6.5 mIoU, respectively. Compared to other zero-shot segmentation methods, ZeroSeg outperforms all baselines by a large margin when trained on a similar amount of data. For example, when trained on CC3M+COCO, ZeroSeg outperforms GroupViT and SegCLIP on VOC by +9.2 and +4.0 mIoU, respectively. In fact, ZeroSeg even outperforms MaskCLIP (+2.7 on PASCAL Context, +8.4 on COCO), which is trained on 15\(\times\) more data (1.3M _vs_. 20M). These results demonstrate that our ZeroSeg model not only learns a strong zero-shot segmentation capability, but also does so with high data efficiency. Finally, an interesting observation is that training on 1.3M ImageNet images yields better results than training on 3.4M images from CC3M and COCO. We hypothesize that this is because ImageNet contains more common objects than Conceptual Captions, making it more aligned with the objects seen in popular semantic segmentation benchmarks. This also highlights the advantage of not relying on text during training, as it allows ZeroSeg to be trained on the widest possible range of data sources.
### Open-vocabulary semantic segmentation
Due to high annotation costs, popular semantic segmentation datasets all have relatively small vocabularies (_e.g._, 20 and 59 classes for PASCAL VOC and Context). As a result, how segmentation models perform in an open-vocabulary setting remains relatively unexplored. Although it is an important task with great practical value, conducting an evaluation for open-vocabulary semantic segmentation is non-trivial. Therefore, to facilitate the evaluation, we simulate the open-vocabulary setting by constructing a large vocabulary consisting of 1000 classes from ImageNet [13], and compare ZeroSeg against the GroupViT baseline using this vocabulary. For test images, we randomly sample 200 images from the Conceptual Caption validation set. We generate segmentation masks using both our ZeroSeg model trained on 1.3M ImageNet images and the GroupViT model trained on 26M image-text pairs from CC12M [7]+YFCC [41]. Since there are no ground-truth segmentation labels for Conceptual Caption, we conduct a human study to evaluate the quality of the generated segmentations. Specifically, we resort to Amazon Mechanical Turk for this. We assign each image with overlaid segmentation masks to 5 different workers, and ask each worker to decide which one in the pair has better segmentation quality.
Table 2 displays the evaluation results of our study. The results demonstrate that ZeroSeg received a larger number of votes than GroupViT (68% _vs_. 32%), indicating that ZeroSeg is capable of generating more reliable and human-preferable segmentation, particularly when dealing with a large vocabulary. These findings highlight the open-vocabulary benefits of transferring knowledge from large-pretrained vision-language models.
### Ablation study
**Impact of multi-scale image feature distillation.** In this study, we explore the impact of different designs for the multi-scale image feature distillation method. Specifically, we vary the number and the size of the grids used to compute the multi-scale features. For example, "1\(\times\)1+2\(\times\)2" refers to combining the full image feature (1\(\times\)1) with features computed from each of the 2\(\times\)2 grids. All ablative results are presented in Table 3. Our findings suggest that distilling knowledge to our ZeroSeg model only from a full-sized image feature (1\(\times\)1) is insufficient for producing accurate semantic segmentation, as it fails to capture enough localized semantic features. Therefore, we explore further grid settings such as 2\(\times\)2, 3\(\times\)3, and 4\(\times\)4, which are meant to capture different levels of object detail in the image. When combined with the full image feature (1\(\times\)1), we observe that 3\(\times\)3 grids outperform the other settings (40.2 mIoU), while the best results are obtained when we
\begin{table}
\begin{tabular}{l c} Ablations & VOC (mIoU) \\ \hline Base & 21.1 \\ Base+Multi-scale & 28.5 \\ Base+segment matching & 38.6 \\ Base+Multi-scale+segment matching & **40.8** \\ \end{tabular}
\end{table}
Table 4: **Ablating distillation losses.** ‘Base’ refers to the setting where distillation is applied only between the full image feature and the global image representation \(z\), while ‘Multi-scale’ refers to applying the distillation between all multi-scale features and the global representation \(z\). Finally, ‘segment matching’ refers to turning on the segment matching loss computed between each segment token and the multi-scale image features.
\begin{table}
\begin{tabular}{l c} \hline Model & GroupViT & ZeroSeg (ours) \\ \hline \#votes & 323/1000 & 677/1000 \\ \end{tabular}
\end{table}
Table 2: **Human study for open-vocabulary segmentation.** We compare the number of favoring votes received by ZeroSeg and GroupViT, when asking AMT workers to evaluate the quality of segmentation results on sampled images from Conceptual Caption.
\begin{table}
\begin{tabular}{l c} Window Scales & VOC (mIoU) \\ \hline
1x1 & 21.1 \\
1x1+2x2 & 23.7 \\
1x1+3x3 & 40.2 \\
1x1+4x4 & 32.4 \\
1x1+2x2+3x3+4x4 & **40.8** \\ \end{tabular}
\end{table}
Table 3: **Ablating multi-scale image features.** We dissect the impact of different settings for computing the multi-scale image features. As an example, 2\(\times\)2 refers to the setting where the full image is divided into 2\(\times\)2 non-overlapping grids. Note that the segment matching loss is applied to all settings except for the 1\(\times\)1 grid.
combine all grid sizes to produce multi-scale features for distillation. Overall, Table 3 demonstrates that the multi-scale image feature design has a significant impact on the success of distillation, as it almost doubles the segmentation mIoU on VOC (from 21.1 to 40.8).
**Impact of segment matching loss.** We compare the performance of our model with and without the segment matching loss. The results are presented in Table 4. We first compare the _base_ to the _base_ + _multi-scale_ setting. _base_ refers to the setting in which we only distill knowledge from the full image feature (_i.e._, the 1\(\times\)1 grid) to the global image representation \(z\) (Fig. 2), whereas _multi-scale_ refers to the distillation loss between the multi-scale image features (2\(\times\)2, 3\(\times\)3 and 4\(\times\)4 grid features) and \(z\). Our findings indicate that including the segment matching loss results in a large improvement in the model's performance. Specifically, the addition of the segment matching loss leads to a 17.5 mIoU increase on PASCAL VOC over the _base_ model. Additionally, the segment matching loss also improves the performance of the _base_ + _multi-scale_ setting by 12.3 mIoU. These results suggest that the segment matching loss plays a crucial role in effectively transferring visual concept knowledge from CLIP to segment tokens. Overall, this ablative result highlights the importance of the segment matching loss for our model's success.
**Mask ratio for encoder input.** As shown in [21], the mask ratio for the encoder input plays an important role, affecting both the representation quality and the efficiency. We ablate the impact of different mask ratios on semantic segmentation accuracy in Table 5. The results suggest that a mask ratio of 60% leads to the best accuracy, with an mIoU of 40.8 on VOC, while providing a \(\sim\)36% speedup and an improvement of 5.2 mIoU compared to training without any masking. We therefore choose 60% as the default mask ratio for all experiments. Note that this is lower than the 75% mask ratio used in the MAE paper [21], suggesting that pixel-level tasks require seeing more pixels (_i.e._, a lower mask ratio) to be learned well.
### Qualitative visualizations
**Visualizing open-vocabulary semantic segmentation.** In addition to the human study results described in Sec. 4.3, we present qualitative visualizations for open-vocabulary segmentation in this section. To do this, we apply both our ZeroSeg model and the GroupViT model (using the publicly released weights) to perform zero-shot open-vocabulary semantic segmentation. In Fig. 4, we visualize the results on 4 images randomly sampled from the ImageNet and Conceptual Caption validation sets. From the figure, it is clear that ZeroSeg produces better results under the open-vocabulary setting, as it inherits the CLIP model's strong capability for fine-grained classification. For example, in the top-right image, ZeroSeg accurately predicts the _shovel_ class, rather than simply categorizing everything as _toolkit_, which is the case for GroupViT.
**Visualization of the ablation on loss functions.** To qualitatively observe the impact of different loss functions, we visualize the segmentation masks on two images, selected from the ImageNet validation set, using variants of ZeroSeg models trained with different loss functions. The visualizations are presented in Fig. 3. We observe that the _base_ model is not able to produce meaningful segments, despite producing the correct object class label. With the _multi-scale_ loss added, the model starts to produce localized segments, but still lags behind on the precise delineation of object boundaries. Finally, by integrating both the _multi-scale_ and the _segment matching_ loss, our ZeroSeg model now produces much more accurate object boundaries, demonstrating the effectiveness of both losses.
## 5 Discussion
In this work, we present ZeroSeg as a novel method for training open-vocabulary zero-shot semantic segmentation models without using any human labels. ZeroSeg learns to perform semantic segmentation by distilling knowledge
Figure 3: **Segmentation quality with different losses.** We present qualitative segmentation results from models trained with different loss functions. Specifically, we compare models trained with only the global distillation loss (‘Base’), with the multi-scale loss added (‘Multi-scaled’), and with the combined multi-scale and segment matching losses (‘Multi-scaled + Segment matching’).
\begin{table}
\begin{tabular}{l c c c c c c c c} Mask ratio & 0\% & 20\% & 30\% & 40\% & 50\% & 60\% & 70\% & 80\% \\ \hline mIoU & 35.6 & 37.6 & 38.7 & 40.4 & 39.8 & **40.8** & 33.3 & 32.8 \\ Speedup (\%) & 0 & 15 & 19 & 26 & 32 & 36 & 39 & 43 \\ \end{tabular}
\end{table}
Table 5: **Ablating mask ratios.** We study the impact of different mask ratios on segmentation quality (mIoU on VOC [18]) and on the training speed. The relative speed-up is measured on the full model by comparing to the setting with a mask ratio of 0.
from a large-scale pretrained vision-language model. This is a challenging task since these VL models are usually trained at an image-level and are not designed for pixel-level tasks like semantic segmentation.
To effectively distill visual knowledge from the pretrained VL model to our ZeroSeg model, we designed two loss functions: the multi-scale feature distillation loss and the segment matching loss. The multi-scale feature distillation loss helps ZeroSeg capture object-localized semantic information at different scales. The segment matching loss, in turn, aligns each segment token with its corresponding image region, producing spatially consistent semantic features. Through our experiments, we demonstrated that both losses are critical to achieving good segmentation accuracy and that they are complementary to each other.
We train ZeroSeg on 1.3M ImageNet images and observe that it achieves comparable or better results, compared to those models that are either pretrained on much larger image-text pair datasets, or finetuned with segmentation labels in a supervised manner. Furthermore, through human study and visualizations, we demonstrate that ZeroSeg outperforms GroupViT on the task of open-vocabulary segmentation.
We also discovered that GroupViT struggles with object classes that are defined by sub-words, such as 'ground' (a sub-word of 'background'), or by compound words, such as 'bedclothes', 'keyboard', and 'motorbike' in Table 6, which might stem from a misunderstanding of the language context during model training. In contrast, ZeroSeg performs much better (+18.07 mIoU on average) on those sub-word or compound-word classes, since training ZeroSeg does not rely on textual context.
Overall, with ZeroSeg, we demonstrated that it is possible to effectively train semantic segmentation models by transferring knowledge from a pretrained, general-purpose vision-language model. We hope this opens a new door to leveraging recent efforts on foundation models [4] to benefit pixel-level downstream tasks like semantic segmentation.
**Broader impact.** Our model has the unique capability to learn segmentation from images without human labels, thus enabling use cases across diverse domains. However, it is important to acknowledge that the large pretrained vision-language models on which our model is based may perpetuate biases present in the training data. Therefore,
\begin{table}
\begin{tabular}{l c c c c c} mIoU & bedclothes & ground & keyboard & motorbike & avg \\ \hline GroupViT & 0.91 & 9.33 & 7.39 & 21.47 & 9.78 \\ ZeroSeg & **11.21** & **23.31** & **29.1** & **47.77** & **27.85** \\ \end{tabular}
\end{table}
Table 6: Performance on semantic classes defined by sub-words or compound words.
Figure 4: **Comparing GroupViT and ZeroSeg on open-vocabulary semantic segmentation.** We present a comparison on open-vocabulary semantic segmentation between GroupViT and our ZeroSeg model. To simulate the open-vocabulary setting, we use a large vocabulary comprising 1000 classes from ImageNet. Half of the test images are sampled from the ImageNet validation set (top 2 rows), while the other half are from the Conceptual Caption dataset (bottom 2 rows). For each image, we show the original input, the output from GroupViT, and the output from our ZeroSeg model.
mitigations such as careful training data filtering are crucial to ensure the ethical use of our model.
**Acknowledgements.** We would like to express our sincere appreciation to Xinlei Chen and Saining Xie for providing their thoughtful suggestions.
|
2308.07693
|
A hybrid method of generating spin-squeezed states for quantum-enhanced
atom interferometry
|
We introduce a new spin-squeezing technique that is a hybrid of two well
established spin-squeezing techniques, quantum nondemolition measurement (QND)
and one-axis twisting (OAT). This hybrid method aims to improve spin-squeezing
over what is currently achievable using QND and OAT. In practical situations,
the strength of both the QND and OAT interactions is limited. We found that in
these situations, the hybrid scheme performed considerably better than either
OAT or QND used in isolation. As QND and OAT have both been realised
experimentally, this technique could be implemented in current atom
interferometry setups with only minor modifications to the experiment.
|
Liam Fuderer, Joseph J Hope, Simon A Haine
|
2023-08-15T10:46:19Z
|
http://arxiv.org/abs/2308.07693v2
|
# A hybrid method of generating spin-squeezed states for quantum-enhanced atom interferometry
###### Abstract
We introduce a new spin-squeezing technique that is a hybrid of two well established spin-squeezing techniques, quantum nondemolition measurement (QND) and one-axis twisting (OAT). This hybrid method aims to improve spin-squeezing over what is currently achievable using QND and OAT. In practical situations, the strength of both the QND and OAT interactions is limited. We found that in these situations, the hybrid scheme performed considerably better than either OAT or QND used in isolation. As QND and OAT have both been realised experimentally, this technique could be implemented in current atom interferometry setups with only minor modifications to the experiment.
## I Introduction
Atom interferometry is capable of providing state-of-the-art measurements of gravitational fields [1; 2; 3; 4; 5; 6; 7], gravitational gradients [8; 9; 10; 11; 12; 13], and magnetic fields [14], with future applications such as minerals exploration [15; 16], hydrology [17], inertial navigation [18; 19; 20; 21], and possible tests of candidate theories of quantum gravity [22; 23; 24]. There is considerable recent interest in the use of quantum entanglement in atom interferometry [25; 26; 27], which would improve the precision and measurement rate, and reduce the overall size of these devices [26]. Two of the most promising routes to generating useful entanglement are spin-squeezing via One-Axis Twisting (OAT) [28; 29; 30; 31], or via Quantum non-demolition (QND) measurements [32; 33; 34; 35; 36; 37; 38]. These methods have been used in proof-of-principle experiments [39; 40], but have not yet found utility in state-of-the-art inertial sensors. In particular, typical atom interferometry experiments with ultra-cold atoms have only modest optical densities, and permit only modest levels of squeezing via QND [41]. While the use of an optical cavity can significantly improve the amount of achievable squeezing [36; 37; 39; 40], this adds considerable experimental overhead, increasing the size, weight, and complexity of the experiment [19].
A recent proposal showed that in a freely expanding Bose-Einstein condensate (BEC), strong atom-atom interactions can generate substantial spin-squeezing via OAT between two momentum modes without degrading mode overlap or causing significant phase diffusion, and could potentially allow for a high precision, spin-squeezed gravimetry measurement [42]. However, only a modest level of OAT interactions are achievable via this method, as the interactions are quickly reduced due to the expansion of the atomic clouds. Here, we present a hybrid scheme that utilises both QND and OAT. In particular, by combining both schemes, we can achieve levels of squeezing significantly higher than either scheme on their own. When restricting ourselves to the small levels of QND interaction that free-space QND permits, and the weak OAT interaction that would be generated via the scheme presented in Szigeti _et al._[42], we show that combining these schemes can give significantly better levels of squeezing than either in isolation. Furthermore, these schemes are entirely compatible, and can be implemented together without compromising the performance of either one.
## II Combining QND and OAT to improve spin-squeezing
We will begin by describing the hybrid scheme, and then go into the details of each element, specifically the QND and OAT interactions. Assuming a BEC of \(N\) atoms with two hyperfine states \(|1\rangle\) and \(|2\rangle\), we introduce the pseudo-spin operators \(\hat{J}_{k}=\frac{1}{2}(\hat{a}_{1}^{\dagger}\,\hat{a}_{2}^{\dagger})\sigma_{ k}(\hat{a}_{1}\,\hat{a}_{2})^{T}\), where \(\sigma_{k}\) is the \(k\)th Pauli matrix, and \(\hat{a}_{1}\) and \(\hat{a}_{2}\) are the annihilation operators for atomic states \(|1\rangle\) and \(|2\rangle\) respectively. These two states may also carry an associated momentum difference, such as is the case in an atom interferometer used to measure gravity. When used as the input to a Mach-Zehnder interferometer, the achievable phase-sensitivity is
\[\Delta\phi=\frac{\xi}{\sqrt{N}} \tag{1}\]
where
\[\xi=\sqrt{N}\frac{\sqrt{\text{Var}(\hat{J}_{z})}}{|\langle\hat{J}_{x}\rangle|} \tag{2}\]
where \(\xi\) is the Wineland spin-squeezing parameter [43], with \(\xi<1\) indicating spin-squeezing. The hybrid scheme involves first applying the QND interaction, and then using the OAT interaction to further enhance the squeezing (see figure (1)). Specifically, we initially prepare our system with all atoms in state \(|1\rangle\) (maximum \(\hat{J}_{z}\) eigenstate), before applying a beamsplitter pulse which puts
each atom in an equal coherent superposition of \(|1\rangle\) and \(|2\rangle\). This is a rotation about the \(\hat{J}_{y}\) axis by \(\pi/2\), creating a coherent spin-state (CSS) on the equator of the Bloch sphere, or a maximal \(\hat{J}_{x}\) eigenstate. The QND interaction reduces the uncertainty in the \(\hat{J}_{z}\) axis, while increasing fluctuations in the \(\hat{J}_{y}\) axis. The state is then rotated by an angle \(\theta_{\text{QND}}\) about the \(\hat{J}_{x}\) axis, before the OAT interaction is applied, which causes a non-linear'shearing' of the state. Rotating the QND state before the OAT interaction increases the variance in \(\hat{J}_{z}\), which causes the state to shear faster under the OAT interaction, ultimately increasing the amount of spin-squeezing achievable. The value of \(\theta_{\text{QND}}\) that optimises the spin-squeezing parameter depends on both the amount of QND interaction before, and OAT interaction after the rotation. The QND interaction is achieved via an optical coupling, and can occur on time-scales much faster than the atomic dynamics, while the OAT interaction is achieved via utilisation of the interatomic interactions, and typically takes several milliseconds to achieve significant shearing. As these two interactions utilise different resources, they are entirely compatible and can be applied sequentially as described above. As QND and OAT have both been realised experimentally, this technique could be implemented in current atom interferometry setups with only minor modifications to the experiment. Specifically, as described in [42], OAT can be achieved by replacing the free-expansion time with two additional beamsplitter pulses, implemented via the same laser system as the interferometric pulses. QND is achieved by using an off-resonant laser (or a pair of lasers, one for each hyperfine state) to perform state-dependent number estimation of the sample after an initial additional beamsplitter pulse.
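As a small illustration of how the spin-squeezing parameter of Eq. (2) is evaluated in the simulations that follow, it can be estimated from sampled collective-spin values; the helpers below are a sketch, not the code used to generate the results in this paper.

```python
import numpy as np

def wineland_xi(Jz_samples, Jx_samples, N):
    """Eq. (2): xi = sqrt(N) * sqrt(Var(Jz)) / |<Jx>|; xi < 1 indicates spin-squeezing."""
    return np.sqrt(N) * np.std(Jz_samples) / abs(np.mean(Jx_samples))

def phase_sensitivity(xi, N):
    """Eq. (1): Delta phi = xi / sqrt(N) for a Mach-Zehnder interferometer."""
    return xi / np.sqrt(N)
```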
We will describe the dynamics of this scheme quantitatively in section III. We will now briefly review the principles and performance of the OAT and QND interactions individually, before assessing the performance of the hybrid scheme.
Figure 1: a) Hybrid method schematic. The model is a modification to the current OAT state-preparation sequence. The output state from the hybrid model is used as the initial state for the interferometer sequence. b) Bloch sphere representation of OAT (top row) and the proposed hybrid method (bottom row) for an initial CSS. In OAT, the initial CSS (i) undergoes nonlinear shearing creating a state with reduced variance in some direction (ii). This state is then rotated about the \(J_{x}\) axis by an amount \(\theta_{\text{OAT}}\) in order to create a state with reduced variance in the \(J_{z}\) axis (iii). In the hybrid method, the initial state first undergoes QND squeezing, creating a state with reduced variance in the \(J_{z}\) axis (iv). This state is then rotated about the \(J_{x}\) axis by an amount \(\theta_{\text{QND}}\) (v), such that it undergoes more rapid shearing under OAT dynamics (vi). This state is then rotated by amount \(\theta_{\text{OAT}}\) to reduce the variance in the \(J_{z}\) direction. When the degree of OAT or QND interaction is limited, the hybrid scheme can produce better spin squeezing than either OAT or QND used in isolation.
### QND-Squeezing
By illuminating the atomic sample with a laser detuned from some excited state \(|e\rangle\), the population in each of the hyperfine states \(|a_{1}\rangle\) and \(|a_{2}\rangle\) is imprinted on the phase of the light. Consequently, a measurement of the phase allows one to infer information about the population difference, collapsing the atomic state into a spin-squeezed state (SSS), with reduced variance in \(\hat{J}_{z}\)[25; 44]. As the collapse is random, feedback (via a rotation around the \(\hat{J}_{y}\) axis of magnitude proportional to the measurement result) is used to re-center the state on the equator, such that \(\langle\hat{J}_{z}\rangle=0\). The amount of spin-squeezing depends on the strength of the atom-light entanglement. QND can be achieved using monochromatic (one-colour) [45], or dichromatic (two-colour) laser light [41; 46]. One-colour QND is susceptible to dephasing of the atoms due to the inhomogeneous spatial profile of the laser, which degrades spin-squeezing [47]. Two-colour QND rectifies this issue and suppresses experimental noise, such as vibrations in mirror positions, to first order. Assuming the laser is spatially homogeneous, both methods can be shown to have the same spin-squeezing. For simplicity, we will therefore proceed with a one-colour QND model without loss of generality, although a two-colour scheme may be favourable for experimental implementation.
A schematic of the atom-light interaction is shown in Figure 2. A single laser is detuned from resonance by an amount \((-)\Delta\) for the \(|a_{1(2)}\rangle\rightarrow|e\rangle\) transition. Adiabatically eliminating the excited state \(|e\rangle\) [44] gives the effective Hamiltonian
\[\hat{H}_{\text{QND}} = -\hbar\chi_{\text{QND}}\hat{J}_{z}\hat{b}^{\dagger}\hat{b}, \tag{3}\]
where \(\hat{b}\) is the annihilation operator for a pulse of light of duration \(t_{p}\), and \(\chi_{\text{QND}}\) is the atom-light interaction strength, which for the detuning \(\Delta\) shown in figure 2 is
\[\chi_{\text{QND}}=\frac{2\sigma_{0}\Gamma}{A\Delta t_{p}}. \tag{4}\]
Here, \(\sigma_{0}\) is the resonant scattering cross-section, \(\Gamma\) is the transition's spontaneous emission rate, and \(A\) is the cross-sectional area of the laser incident on the atomic sample.
The degree of spin-squeezing is well characterised by the resonant optical depth, \(d\), and the inelastic scattering rate integrated over the pulse duration, \(\eta\), which can be written in terms of the system parameters as
\[d = \frac{\sigma_{0}N}{A}, \tag{5}\] \[\text{and }\eta = \frac{2\sigma_{0}}{A}\left(\frac{\Gamma}{\Delta}\right)^{2}N_{p}, \tag{6}\]
where \(N_{p}\) is the total photon number. Due to the narrow momentum linewidth of the Bose-Einstein condensate, any spontaneous emissions causes the atom to be scattered into a distinguishable momentum state, and is therefore lost from the interference measurement [41]. We treat these spontaneous emission events by coupling vacuum noise fluctuations into the atomic operators [48].
For a fixed resonant optical depth, there is a trade-off between the amount of information inferred about the population difference and information lost due to spontaneous emission, as seen in the inset of Figure 3. Optimising over the loss fraction gives [49; 32; 46] an optimum squeezing of
\[\xi_{\text{opt}}\approx d^{-1/4}. \tag{7}\]
Figure 3: Optimum spin-squeezing parameter \(\xi_{\text{opt}}\) from QND on the D1 line of a system of \(10^{5}\)\({}^{87}\)Rb atoms, calculated via Truncated Wigner (black stars), compared to analytic scaling \(\xi_{\text{opt}}=d^{-1/4}\) (red solid line). Inset figure: Scaling of the spin-squeezing parameter, \(\xi\), with the loss fraction \(\eta\) for a fixed resonant optical depth. Outer figure: The optimal spin-squeezing parameter as a function of optical depth, found by minimisation of \(\xi\) with respect to \(\eta\), for \(d=d_{1}=75\) (blue solid line) and \(d=d_{2}=387\) (green dashed line). \(\xi_{\text{opt}}\) scales as \(d^{-1/4}\).
Figure 2: a) Energy-level diagram of quantum non-demolition measurements. The atoms are illuminated by a laser (annihilation operator \(\hat{b}\)) equally detuned from the two atomic levels, \(|a_{1}\rangle\) and \(|a_{2}\rangle\), to an excited state \(|e\rangle\). This imprints the difference in the number of atoms in each state, \(\hat{J}_{z}\), into the phase of the laser. b) Schematic of experimental quantum non-demolition measurement in free-space. After the laser passes through the atoms, the phase is measured by a homodyne detector, allowing for an inference of \(\hat{J}_{z}\) to be made. Such an inference reduces the variance in \(\hat{J}_{z}\) and thus creates a SSS.
For Bose-condensed atoms, the atomic density is limited by three-body recombination, which ultimately limits the achievable optical depth. For fixed atomic density, adjusting the aspect ratio of the BEC to a cigar-shaped cloud with the long axis aligned with the tightly focused probe beam increases the optical density. In this geometry, the limiting factor becomes the Rayleigh length of the beam. Confining the atoms to a cylinder of diameter equal to the beam waist and length equal to the Rayleigh length, and optimising the size of the beam waist, gives a maximum optical depth of
\[d\leq\sigma_{0}\sqrt{\frac{N\rho}{\lambda}}\,, \tag{8}\]
where \(\rho\) is the atomic density, and \(\lambda\) is the optical wavelength. For an \(N=10^{5}\)\({}^{87}\)Rb BEC using the D1 line, optimising the optical depth for a fixed atomic density of \(10^{14}/\mathrm{cm}^{3}\) gives an optical depth of \(d=387\), and \(\xi_{\mathrm{opt}}\approx 0.23\). To further improve the effective optical depth, and therefore the achievable level of QND squeezing, a high-finesse optical cavity could be employed.
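A rough numerical check of Eqs. (7)-(8) is sketched below; the resonant cross-section \(\sigma_{0}\) is left as an input, since its value depends on the transition and polarisation, and the printed number only verifies the \(d^{-1/4}\) scaling for the optical depth quoted above.

```python
import numpy as np

def max_optical_depth(sigma_0, N, rho, wavelength):
    """Eq. (8): d <= sigma_0 * sqrt(N * rho / lambda) (consistent units assumed)."""
    return sigma_0 * np.sqrt(N * rho / wavelength)

def optimal_qnd_squeezing(d):
    """Eq. (7): xi_opt ~ d**(-1/4)."""
    return d ** -0.25

# For the optical depth quoted in the text, the scaling gives xi_opt ~ 0.23.
print(optimal_qnd_squeezing(387))   # ~0.225
```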
### One-axis twisting.
One-axis twisting (OAT) dynamics is caused by a Hamiltonian of the form
\[\hat{H}_{\mathrm{OAT}}=\hbar\chi_{\mathrm{OAT}}(t)\hat{J}_{z}^{2}, \tag{9}\]
and leads to correlations between the relative number difference and relative phase degrees of freedom [28]. This results in a 'shearing' of the quantum state on the Bloch sphere, and a narrowing of the spin distribution along one axis (figure 4). In a two-component BEC, OAT dynamics naturally arises from the inter-atomic interactions [50, 51, 52, 53, 54, 55, 29]. Introducing the usual bosonic field operators for hyperfine state \(|j\rangle\), \(\hat{\psi}_{j}(\mathbf{r})\), which obey the usual commutation relations
\[\left[\hat{\psi}_{i}(\mathbf{r}),\hat{\psi}_{j}^{\dagger}(\mathbf{r}^{\prime} )\right]=\delta_{ij}\delta(\mathbf{r}-\mathbf{r}^{\prime})\,, \tag{10}\]
the Hamiltonian term describing inter-atomic interactions is
\[\hat{H}_{\mathrm{int}}=\sum_{i,j}\frac{U_{ij}}{2}\int\hat{\psi}_{i}^{\dagger} (\mathbf{r})\hat{\psi}_{j}^{\dagger}(\mathbf{r})\hat{\psi}_{i}(\mathbf{r}) \hat{\psi}_{j}(\mathbf{r})\,d^{3}\mathbf{r}\,. \tag{11}\]
Making a single-mode approximation as in [54], \(\hat{\psi}_{i}(\mathbf{r},t)\approx\hat{a}_{i}u_{i}(\mathbf{r},t)\), and ignoring terms linear in \(\hat{J}_{z}\), we recover Eq. (9) with \(\chi(t)=4(\chi_{11}(t)+\chi_{22}(t)-2\chi_{12}(t))\), where
\[\chi_{ij}(t)=\frac{U_{ij}}{2\hbar}\int|u_{i}(\mathbf{r},t)|^{2}|u_{j}(\mathbf{ r},t)|^{2}\,d^{3}\mathbf{r}\,. \tag{12}\]
The total effective OAT interaction is then given by the unitary \(\hat{U}_{\mathrm{OAT}}=\exp\Bigl{(}-i\lambda_{\mathrm{OAT}}\hat{J}_{z}^{2} \Bigr{)}\), where \(\lambda_{\mathrm{OAT}}=\int_{0}^{T}\chi(t)dt\), and \(T\) is the duration of the state preparation time. In the recent proposal by Szigeti _et al._[42], it was shown that spin-squeezing could be created by inducing significant OAT dynamics by spatially separating the two clouds during the pre-expansion phase that usually precedes atom interferometry. This sets \(\chi_{12}\to 0\), significantly increasing \(\chi\). As the clouds then expand, the magnitude of \(\chi\) decreases to zero, causing \(\lambda_{\mathrm{OAT}}\) to plateau. Benchmark calculations place the maximum interaction strength currently achievable at \(\lambda_{1}=6.5\times 10^{-5}\) for a system of \(N=10^{5}\) atoms [42], which is considerably less than the optimum value for this number of atoms (\(\lambda_{\mathrm{opt}}\approx 53\times 10^{-5}\)). However, our recent calculations indicate that interaction strengths of up to \(\lambda_{2}=9.8\times 10^{-5}\) are possible by instantaneously applying an inwards-focusing potential immediately before expansion, in a similar method to that proposed by [56]. This value of \(\lambda_{\mathrm{OAT}}\) is based on modelling the delta-kick scheme proposed by [56] using the same simulation technique as [42]. These results are currently being prepared for publication. We use these values as references throughout our discussion.
## III Simulating the hybrid method
In order to simulate the combination of QND and OAT dynamics, we cannot rely on simple models that give the spin-squeezing parameter for QND dynamics. This is because we need to know the form of the full quantum state, in order to perform the subsequent OAT dynamics. In particular, the magnitude of the anti-squeezing
Figure 4: (a) Evolution of the Wigner quasi-probability distribution for an initial coherent spin state of 100 atoms under OAT dynamics, for (i) \(\lambda_{\mathrm{OAT}}=0\), (ii) \(\lambda_{\mathrm{OAT}}=0.025\), (iii) \(\lambda_{\mathrm{OAT}}=0.05\), (iv) \(\lambda_{\mathrm{OAT}}=0.05\), with an additional rotation around the \(\hat{J}_{x}\) axis applied to convert the squeezing into the \(\hat{J}_{z}\) direction. (b): Spin squeezing parameter as a function of \(\lambda_{\mathrm{OAT}}\).
in the conjugate (\(\hat{J}_{y}\)) axis will have a significant effect on the subsequent OAT evolution. The truncated Wigner (TW) method [57] has been successfully used to simulate BEC dynamics [58; 59; 60; 61]. Importantly, the TW method can be used to model the production of nonclassical correlations within the condensate [62-64], including those generated by OAT [53; 54; 42; 65] and atom-light interactions [66; 41]. The TW method also works well for large numbers of atoms, and can easily incorporate loss due to spontaneous emission. The derivation of the TW method has been described in detail elsewhere [67; 58; 68]. Briefly, the equation of motion for the Wigner function of the system is found from the von Neumann equation by using correspondences between differential operators on the Wigner function and the original quantum operators [69]. By truncating third- and higher-order derivatives (the TW approximation), a Fokker-Planck equation (FPE) is obtained. The FPE is then mapped to a set of stochastic differential equations for complex variables \(\{\alpha_{1}(t),\alpha_{2}(t),\beta(t)\}\), which loosely correspond to the annihilation operators of the system \(\{\hat{a}_{1}(t),\hat{a}_{2}(t),\hat{b}(t)\}\), with initial conditions stochastically sampled from the appropriate Wigner distribution [68; 70]. Moments of observables are then calculated via the mapping \(\langle\{f(\hat{a}_{1},\hat{a}_{1}^{\dagger},\hat{a}_{2},\hat{a}_{2}^{\dagger},\hat{b},\hat{b}^{\dagger})\}_{\text{sym}}\rangle\rightarrow\overline{f(\alpha_{1},\alpha_{1}^{*},\alpha_{2},\alpha_{2}^{*},\beta,\beta^{*})}\), where 'sym' denotes symmetric ordering and the overline denotes the mean over many stochastic trajectories.
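The sampling and moment-evaluation steps described above can be illustrated in a few lines of code; the sketch below reproduces the shot-noise-limited value \(\xi\simeq 1\) for a coherent spin state and is a simplified illustration rather than the simulation code used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
N, ntraj = 100, 20000

def complex_noise(size, var=0.5):
    # Complex Gaussian noise with <nu* nu> = var, split equally over both quadratures.
    s = np.sqrt(var / 2)
    return rng.normal(0.0, s, size) + 1j * rng.normal(0.0, s, size)

# Coherent spin state on the equator: alpha_j = sqrt(N/2) + nu_j.
a1 = np.sqrt(N / 2) + complex_noise(ntraj)
a2 = np.sqrt(N / 2) + complex_noise(ntraj)

# Symmetrically-ordered spin moments, averaged over trajectories.
Jz = 0.5 * (np.abs(a1) ** 2 - np.abs(a2) ** 2)
Jx = np.real(np.conj(a1) * a2)

xi = np.sqrt(N) * np.std(Jz) / abs(np.mean(Jx))
print(xi)   # ~1: the shot-noise limit of a coherent spin state
```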
### QND simulation
We begin by simulating the QND dynamics of a series of light pulses, each of duration \(t_{p}\), interacting with the BEC sequentially. In the absence of loss due to spontaneous emission, mapping Eq. (3) to the TW method, we find
\[i\frac{d}{dt}\alpha_{1} =\tfrac{\chi_{\text{QND}}}{2}{|\beta_{j}|}^{2}\alpha_{1} \tag{13a}\] \[i\frac{d}{dt}\alpha_{2} =\tfrac{\chi_{\text{QND}}}{2}{|\beta_{j}|}^{2}\alpha_{2}\] (13b) \[i\frac{d}{dt}\beta_{j} =\chi_{\text{QND}}\mathcal{J}_{z}\beta_{j} \tag{13c}\]
where \(\mathcal{J}_{z}=\frac{1}{2}({|\alpha_{1}|}^{2}-{|\alpha_{2}|}^{2})\), and \(\beta_{j}\) is the TW variable associated with the \(j\)th pulse. By assuming our pulses are short compared to the relevant atomic time-scales, we can take the continuum limit by introducing the parameter
\[\beta(t)=\frac{1}{\sqrt{t_{p}}}\sum_{j}\Pi_{j}(t)\beta_{j} \tag{14}\]
where
\[\Pi_{j}(t)=\begin{cases}1&\text{if }jt_{p}<t\leq(j+1)t_{p}\\ 0&\text{otherwise.}\end{cases} \tag{15}\]
By taking the limit \(t_{p}\to 0\), these equations can be solved analytically to give
\[\alpha_{1}(t) =\exp\left(-i\tfrac{\lambda_{\text{QND}}}{2}N_{p}(t)\right) \alpha_{1}(0) \tag{16a}\] \[\alpha_{2}(t) =\exp\left(i\tfrac{\lambda_{\text{QND}}}{2}N_{p}(t)\right)\alpha _{2}(0)\] (16b) \[\beta(t) =\exp\left(-i\lambda_{\text{QND}}\mathcal{J}_{z}\right)\beta(0) \tag{16c}\]
where \(N_{p}(t)=\int_{0}^{t}{|\beta(t^{\prime})|}^{2}dt^{\prime}\),
\[\lambda_{\text{QND}}=\chi_{\text{QND}}t_{p}=\frac{2\sigma_{0}\Gamma}{A\Delta}. \tag{17}\]
and the initial conditions are given by
\[\alpha_{1}(0) =\sqrt{\frac{N_{t}}{2}}+\nu_{1} \tag{18a}\] \[\alpha_{2}(0) =\sqrt{\frac{N_{t}}{2}}+\nu_{2}\] (18b) \[\beta_{in}(t) =\beta_{0}+w(t) \tag{18c}\]
where \(\nu_{j}\) is complex Gaussian noise satisfying \(\overline{\nu_{i}^{*}\nu_{j}}=\frac{\delta_{ij}}{2}\), and \(\overline{w^{*}(t)w(t^{\prime})}=\frac{1}{2}\delta(t-t^{\prime})\).
We introduce the effect of spontaneous emission by noting that the fraction of atoms lost from each component in the duration of one pulse is \(f=1-e^{-\eta(t)}\), where
\[\eta(t)=\frac{2\sigma_{0}}{A}\left(\frac{\Gamma}{\Delta}\right)^{2}|\beta_{0}| ^{2}t\,. \tag{19}\]
By treating loss from the atomic system as an introduction of vacuum noise in the standard way [41], we obtain
\[\alpha_{1}(t) =\exp\left(-i\frac{\lambda_{\text{QND}}}{2}N_{p}(t)\right)\alpha_ {1}(0)\sqrt{1-f(t)}\] \[+ \sqrt{f(t)}v_{1}(0) \tag{20a}\] \[\alpha_{2}(t) =\exp\left(i\frac{\lambda_{\text{QND}}}{2}N_{p}(t)\right)\alpha_ {2}(0)\sqrt{1-f(t)}\] \[+ \sqrt{f(t)}v_{2}(0)\] (20b) \[\beta_{\text{out}}(t) =\exp\left(-i\lambda_{\text{QND}}\mathcal{J}_{z}(t)\right)\beta_ {\text{in}}(t) \tag{20c}\]
where \(v_{j}(0)\) is complex Gaussian noise satisfying \(\overline{v_{i}^{*}v_{j}}=\frac{\delta_{ij}}{2}\). Here, we have defined \(\beta_{\text{out}}(t)\) as the light exiting the BEC after interacting with the atoms, and \(\beta_{\text{in}}(t)\) as the input light.
Equations 20 create correlations between the relative population imbalance of the atoms, and the phase of the light. In order to convert this into spin-squeezing, the population imbalance is inferred from the phase-quadrature of the light, which is obtained via homodyne measurement, and is represented by the quantity
\[Y=\frac{1}{\sqrt{T}}\int_{0}^{T}i\left(\beta_{\text{out}}(t)-\beta_{\text{out}}^ {*}(t)\right)dt\,. \tag{21}\]
This information is then used to implement the feedback step, by rotating the atomic state around the \(\hat{J}_{y}\) axis by an angle \(\theta_{y}\) proportional to the result of a measurement on the quadrature of the light. Specifically, we perform the transformation
\[\alpha_{1} \rightarrow\cos\frac{\theta_{y}}{2}\alpha_{1}+\sin\frac{\theta_{y}} {2}\alpha_{2} \tag{22a}\] \[\alpha_{2} \rightarrow\cos\frac{\theta_{y}}{2}\alpha_{2}-\sin\frac{\theta_{y}} {2}\alpha_{1} \tag{22b}\]
where
\[\theta_{y}=\sin^{-1}\left(\frac{Y}{\lambda_{\text{QND}}N_{t}\beta_{0}T}\right)\,. \tag{23}\]
The spin-squeezing parameter calculated via this method is shown in figure 3, and shows excellent agreement with the analytic solution.
### Combining OAT and QND
In order to simulate the hybrid scheme, we take the solution of Eqs. (20), perform a rotation by an amount \(\theta_{\text{QND}}\), and use this as the initial condition for the OAT dynamics. OAT dynamics is simulated in TW by mapping Eq. (9) to the set of ODEs for the TW variables
\[i\frac{d}{dt}\alpha_{1}=\frac{\chi(t)}{2}\left(\left|\alpha_{1} \right|^{2}-\left|\alpha_{2}\right|^{2}\right)\alpha_{1} \tag{24a}\] \[i\frac{d}{dt}\alpha_{2}=-\frac{\chi(t)}{2}\left(\left|\alpha_{1} \right|^{2}-\left|\alpha_{2}\right|^{2}\right)\alpha_{2} \tag{24b}\]
which has the simple analytic solution
\[\alpha_{1}(t) =\exp\left(-i\frac{\lambda_{\text{OAT}}(t)}{2}\left(\left|\alpha_ {1}\right|^{2}-\left|\alpha_{2}\right|^{2}\right)\right)\alpha_{1}(0) \tag{25a}\] \[\alpha_{2}(t) =\exp\left(i\frac{\lambda_{\text{OAT}}(t)}{2}\left(\left|\alpha_ {1}\right|^{2}-\left|\alpha_{2}\right|^{2}\right)\right)\alpha_{2}(0) \tag{25b}\]
After these dynamics, the state is rotated around the \(J_{x}\) axis by an angle \(\theta_{\text{OAT}}\) in order to minimise the variance in \(J_{z}\).
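For illustration, the OAT step and the subsequent rotation about \(\hat{J}_{x}\) can be sketched in the truncated Wigner picture as follows; the atom number, interaction strength, and the brute-force scan over \(\theta_{\text{OAT}}\) are illustrative choices (cf. Fig. 4), not the parameters of the full model, and loss is neglected.

```python
import numpy as np

rng = np.random.default_rng(1)
N, ntraj, lam_oat = 100, 20000, 0.025

def complex_noise(size, var=0.5):
    s = np.sqrt(var / 2)
    return rng.normal(0.0, s, size) + 1j * rng.normal(0.0, s, size)

# Initial coherent spin state on the equator.
a1 = np.sqrt(N / 2) + complex_noise(ntraj)
a2 = np.sqrt(N / 2) + complex_noise(ntraj)

# OAT shearing, Eqs. (25): each trajectory acquires a Jz-dependent phase.
phase = 0.5 * lam_oat * (np.abs(a1) ** 2 - np.abs(a2) ** 2)
a1, a2 = np.exp(-1j * phase) * a1, np.exp(1j * phase) * a2

def xi_after_rotation(theta):
    # Rotation about Jx by `theta`, applied to the mode amplitudes.
    b1 = np.cos(theta / 2) * a1 - 1j * np.sin(theta / 2) * a2
    b2 = -1j * np.sin(theta / 2) * a1 + np.cos(theta / 2) * a2
    Jz = 0.5 * (np.abs(b1) ** 2 - np.abs(b2) ** 2)
    Jx = np.real(np.conj(b1) * b2)
    return np.sqrt(N) * np.std(Jz) / abs(np.mean(Jx))

# Scan the final rotation angle and report the best squeezing.
thetas = np.linspace(0.0, np.pi, 400)
print(min(xi_after_rotation(t) for t in thetas))   # < 1: spin-squeezed
```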
To gain some intuition about the hybrid dynamics, we first simulate the system in the absence of spontaneous emission. Figure 5 shows how the spin-squeezing parameter evolves under OAT dynamics from an initially spin-squeezed state, compared to an initial CSS. Importantly, the role of the pre-OAT rotation by \(\theta_{\text{QND}}\) is clearly illustrated: for angles that increase the initial value of \(\text{Var}(\hat{J}_{z})\) to above that of a CSS, the OAT dynamics occurs much faster. When a smaller rotation angle is used, slightly better spin-squeezing is achieved, at the expense of much slower evolution.
When loss is included in the QND model, the final state after the QND interaction is no longer a minimum-uncertainty state. The extra anti-squeezing in the \(J_{y}\) direction will negatively affect the efficacy of the subsequent OAT dynamics. Figure 6 shows the spin-squeezing parameter including loss in the QND calculation, optimised over \(\eta\) and \(\theta_{\text{QND}}\), compared to purely OAT dynamics, for realistic values of \(\lambda_{\text{OAT}}\), _i.e._, \(\lambda_{\text{OAT}}\leq 2\times 10^{-4}\). We considered three optical depths corresponding to different exemplary QND laser configurations: \(d=50\), a poorly focused laser in free-space (\(A=(1.5\text{mm})^{2}\)); \(d=387\), an optimally focused laser in free-space (\(A=(0.53\text{mm})^{2}\)); and \(d=3500\), an optimally focused laser in a high-finesse optical cavity with finesse \(\mathcal{F}=10^{4}\). The hybrid method outperformed OAT over all interaction strengths for each optical depth. For experimentally realisable interaction strengths, \(\lambda_{\text{OAT}}\leq\lambda_{1}\), the high-finesse cavity achieved the most spin-squeezing by a significant margin. As the interaction strength increased, however, this advantage diminished. Specifically, at \(\lambda_{1}\), the amount of spin-squeezing afforded by the optimal laser in free-space was on par with the high-finesse cavity, being roughly 4 times the spin-squeezing of OAT performed in isolation. Furthermore, at \(\lambda_{2}\), all three optical depths were comparable, providing roughly 2.5 times the spin-squeezing of OAT.
Figure 7 shows the scaling of optimal QND spin-squeezing against different optical depths. For both interaction strengths, the hybrid model significantly outperformed QND. Specifically, at \(d=387\), the hybrid method spin-squeezed 5 times (5.8 times) more than optimal QND for total interaction \(\lambda_{1}\) (\(\lambda_{2}\)). These results
Figure 5: Comparison of OAT and the hybrid scheme in the absence of loss from QND, for 100 atoms. After QND, the state has a spin-squeezing parameter of \(\xi=0.5\). a) \(\xi\) vs \(\lambda_{\text{OAT}}\) for three different values of \(\theta_{\text{QND}}\). Large angles increase the rate of spin-squeezing, but lead to a worse optimum. b) Optimal value of \(\theta_{\text{QND}}\) as a function of \(\lambda_{\text{OAT}}\). Once \(\lambda_{\text{OAT}}\gg\lambda_{\text{opt}}\), the optimum angle reduces to zero to capitalise on the increased spin-squeezing of states with small initial variance in \(J_{z}\). c) Spin-squeezing parameter optimised over \(\theta_{\text{QND}}\). The hybrid model outperforms OAT over all interaction strengths.
are unsurprising, as we expect OAT to enhance the spin-squeezing of a QND SSS. However, the hybrid method is also seen to outperform QND in an ultra-high-finesse cavity (\(d=10^{4}\)), the current leader in spin-squeezing demonstrated in proof-of-principle experiments [39]. Specifically, the hybrid method outperformed in-cavity QND for optical depths above \(d\approx 10\), meaning there is a large tolerance for imperfect laser focus. In addition, at the highest free-space optical depth, \(d=387\), the hybrid method provides 2.5 times the amount of spin-squeezing of in-cavity QND. We therefore conclude that an experimental implementation of the hybrid method with modest size, weight, and power requirements could significantly increase spin-squeezing over QND.
## IV Effect of experimental imperfections
While the laser power may be stable over a single QND measurement, it may vary over multiple experiments. We incorporate these effects as fluctuations in the total photon number during a QND measurement, while fixing the beam-splitters before and after OAT is performed. Figure 8a shows how the optimised hybrid method is affected by these changes for OAT interaction \(\lambda_{1}=6.5\times 10^{-5}\). For fluctuations up to 30% of the total photon number, there is virtually no change in spin-squeezing. Thus, the hybrid model is robust to fluctuations in the power of the laser used to perform QND.
We also investigate the effect of fluctuations in the power of the beam-splitter pulses. These will manifest as stochastic noise in the rotation angle of the state. Assuming only low-frequency noise, the angle will vary from shot to shot, rather than within a single run, so we set the fluctuation in both beam-splitters to be the same. The
Figure 8: a) Squeezing parameter scaling against an introduced deviation in photon number (which controls the quantum non-demolition measurement strength) for the hybrid squeezing method. The parameter changes minimally for a deviation up to 30% of the photon number. b) Squeezing parameter scaling against an introduced variance in rotation angle.
Figure 6: a) Squeezing parameter as a function of \(\lambda_{\mathrm{OAT}}\) for the hybrid scheme compared to OAT, for three optical depths \(d=50\) (red), 387 (blue) and 3500 (green). The hybrid method is optimised over \(\eta\) and \(\theta\). The dashed lines represent the OAT interaction strengths achievable via the scheme presented in [42], (\(\lambda_{\mathrm{OAT}}=\lambda_{1}\)), and using a delta-kick scheme (\(\lambda_{\mathrm{OAT}}=\lambda_{2}\)). The hybrid method significantly outperforms the OAT scheme at both interaction strengths. b) Optimal measurement strength \(\eta\) vs. \(\lambda_{\mathrm{OAT}}\). \(\eta_{\mathrm{opt}}\) decreases as \(\lambda_{\mathrm{OAT}}\) increases as a consequence of OAT being more efficient for higher purity states. c) Optimal rotation angle vs. \(\lambda_{\mathrm{OAT}}\). The behaviour of the rotation angle qualitatively mimics the lossless results.
Figure 7: Squeezing parameter scaling against resonant optical depth of the BEC for only QND (black), and the optimised hybrid method for a OAT interaction strength of \(\lambda_{1}\) (blue) and \(\lambda_{2}\) (red). Optical depths of \(d>387\) require a cavity. The dashed lines represent OAT spin-squeezing at \(\lambda_{1}\) and \(\lambda_{2}\) respectively. QND at \(d=10^{4}\) represents the best spin-squeezing available in experiments. The hybrid method exceeds this limit for \(d>10\) at both interaction strengths.
impact on the optimised hybrid model for OAT interaction \(\lambda_{1}\) is shown in Figure 8b. Fluctuations of 0.05 radians in both beam-splitters are shown to completely wash out the spin-squeezing of the hybrid model (blue points). However, this is not an issue with the hybrid model, but rather a consequence of the stringent requirements for manipulating entangled states. Indeed, even for OAT performed on a CSS, much of the benefit is lost with fluctuations in the second beam-splitter (black asterisks). Furthermore, the hybrid scheme is seen to be robust to fluctuations in only the first beam-splitter up to 0.05 radians (red dots), indicating that the second beam-splitter is responsible for the degradation in squeezing. The hybrid method therefore has more stringent stability requirements than OAT and QND, but only due to complications with manipulating entangled states.
## V Conclusion
Our investigation of a hybrid method of QND and OAT indicates that significant gains in precision can be made with current state-of-the-art experiments without imposing high size, weight, and power requirements. We demonstrated that by first performing QND followed by OAT, the amount of spin-squeezing that can be achieved is greater than with both OAT and in-cavity QND. As both of these techniques have been demonstrated in proof-of-principle experiments, the hybrid method can be incorporated with only small modifications to the experimental apparatus and procedure. Furthermore, we demonstrated that the hybrid method is robust to fluctuations in laser power when performing QND, but is still subject to stringent requirements on fluctuations in beam-splitter power for highly entangled states. The discussion in this paper has been limited to a single-mode model of OAT, which will need to be extended to a multimode model for more realistic predictions of spin-squeezing.
This work has focussed on atomic interactions to achieve OAT dynamics. OAT dynamics can also be achieved by atom-light coupling via cavity feedback [71], and has been used to create spin-squeezing [72; 73; 37]. Interaction-based readouts [74] have also been performed via this method [75]. As it is possible to smoothly transition between cavity-based OAT and QND dynamics [76], exploration of the hybrid method in these systems is an exciting direction for future research.
###### Acknowledgements.
We would like to acknowledge fruitful discussions had with Stuart Szigeti, Zain Mehdi, and Karandeep Gill. We would also like to thank John Close for his input about experimental considerations for the model. SAH acknowledges support through an Australian Research Council Future Fellowship Grant No. FT210100809
|
2304.06658
|
Pure Spectroscopic Constraints on UV Luminosity Functions and Cosmic
Star Formation History From 25 Galaxies at $z_\mathrm{spec}=8.61-13.20$
Confirmed with JWST/NIRSpec
|
We present pure spectroscopic constraints on the UV luminosity functions and
cosmic star formation rate (SFR) densities from 25 galaxies at
$z_\mathrm{spec}=8.61-13.20$. By reducing the JWST/NIRSpec spectra taken in
multiple programs of ERO, ERS, GO, and DDT with our analysis technique, we
independently confirm 16 galaxies at $z_\mathrm{spec}=8.61-11.40$ including new
redshift determinations, and a bright interloper at $z_\mathrm{spec}=4.91$ that
was claimed as a photometric candidate at z~16. In conjunction with nine
galaxies at redshifts up to $z_\mathrm{spec}=13.20$ in the literature, we make
a sample of 25 spectroscopically-confirmed galaxies in total and carefully
derive the best estimates and lower limits of the UV luminosity functions.
These UV luminosity function constraints are consistent with the previous
photometric estimates within the uncertainties and indicate mild redshift
evolution towards z~12 showing tensions with some theoretical models of rapid
evolution. With these spectroscopic constraints, we obtain firm lower limits of
the cosmic SFR densities and spectroscopically confirm a high SFR density at
z~12 beyond the constant star-formation efficiency models, which supports
earlier claims from the photometric studies. While there are no
spectroscopically-confirmed galaxies with very large stellar masses violating
the $\Lambda$CDM model due to the removal of the bright interloper, we confirm
star-forming galaxies at $z_\mathrm{spec}=11-13$ with stellar masses much
higher than model predictions. Our results indicate possibilities of high
star-formation efficiency (>5%), hidden AGN, top-heavy initial mass function
(possibly with Pop-III), and large scatter/variance. Having these successful
and unsuccessful spectroscopy results, we suggest observational strategies for
efficiently removing low redshift interlopers for future JWST programs.
|
Yuichi Harikane, Kimihiko Nakajima, Masami Ouchi, Hiroya Umeda, Yuki Isobe, Yoshiaki Ono, Yi Xu, Yechi Zhang
|
2023-04-13T16:45:41Z
|
http://arxiv.org/abs/2304.06658v4
|
Pure Spectroscopic Constraints on UV Luminosity Functions and Cosmic Star Formation History From 25 Galaxies at \(\mathbf{z_{\rm spec}=8.61-13.20}\) Confirmed with JWST/NIRSpec
###### Abstract
We present pure spectroscopic constraints on the UV luminosity functions and cosmic star formation rate (SFR) densities from 25 galaxies at \(z_{\rm spec}=8.61-13.20\). By reducing the JWST/NIRSpec spectra taken in multiple programs of ERO, ERS, GO, and DDT with our analysis technique, we independently confirm 16 galaxies at \(z_{\rm spec}=8.61-11.40\) including new redshift determinations, and a bright interloper at \(z_{\rm spec}=4.91\) that was claimed as a photometric candidate at \(z\sim 16\). In conjunction with nine galaxies at redshifts up to \(z_{\rm spec}=13.20\) in the literature, we make a sample of 25 spectroscopically-confirmed galaxies in total and carefully derive the best estimates and lower limits of the UV luminosity functions. These UV luminosity function constraints are consistent with the previous photometric estimates within the uncertainties and indicate mild redshift evolution towards \(z\sim 12\) showing tensions with some theoretical models of rapid evolution. With these spectroscopic constraints, we obtain firm lower limits of the cosmic SFR densities and spectroscopically confirm a high SFR density at \(z\sim 12\) beyond the constant star-formation efficiency models, which supports earlier claims from the photometric studies. While there are no spectroscopically-confirmed galaxies with very large stellar masses violating the \(\Lambda\)CDM model due to the removal of the bright interloper, we confirm star-forming galaxies at \(z_{\rm spec}=11-13\) with stellar masses much higher than model predictions. Our results indicate possibilities of high star-formation efficiency (\(>5\%\)), hidden AGN, top-heavy initial mass function (possibly with Pop-III), and large scatter/variance. Having these successful and unsuccessful spectroscopy results, we suggest observational strategies for efficiently removing low-redshift interlopers for future JWST programs.
galaxies: formation -- galaxies: evolution -- galaxies: high-redshift
## 1 Introduction
One of the most important goals in astronomy today is to understand galaxy formation, from the birth stage of galaxies to the present day (Stark, 2016; Dayal & Ferrara, 2018; Ouchi et al., 2020; Robertson, 2021). To accomplish this goal, observations ranging from present-day galaxies to the first galaxies are key to revealing the entire process of galaxy formation. Before the operation of the James Webb Space Telescope (JWST), large telescopes such as the Hubble Space Telescope (HST) drove observational studies of galaxy formation with millions of high redshift galaxies and revealed the evolution of the ultraviolet (UV) luminosity function and the cosmic star formation rate (SFR) density at \(2\lesssim z\lesssim 10\) (e.g., Madau & Dickinson, 2014; Bouwens et al., 2015, 2021; Finkelstein et al., 2015; Ishigaki et al., 2018; Ono et al., 2018; Harikane et al., 2022), possibly up to \(z\sim 11-13\) (e.g., Coe et al., 2013; Ellis et al., 2013; Harikane et al., 2022). Several studies discuss that the evolution of the cosmic SFR density at high redshifts is well reproduced by models assuming
constant star formation efficiencies (e.g., Bouche et al., 2010; Mason et al., 2015; Harikane et al., 2018, 2022b; Tacchella et al., 2018; Oesch et al., 2018; Bouwens et al., 2021), which is motivated by the clustering analysis of galaxies at \(z\sim 2-7\)(Harikane et al., 2016, 2018, 2022b) and by the abundance matching studies (e.g., Behroozi et al., 2013; Mason et al., 2015; Moster et al., 2018). Such models predict a rapid decline of the cosmic SFR density at \(z>10\) due to the decline of the halo number density (Harikane et al., 2018, 2022b; Oesch et al., 2018). However, some studies using photometric galaxy candidates at \(z\sim 10-12\) indicate that SFR densities at \(z>10\) are higher than these models predictions (Coe et al., 2013; Ellis et al., 2013; McLeod et al., 2016). Such high SFR densities at \(z>10\) are also suggested by mature stellar populations in galaxies at \(z\sim 6-9\)(Hashimoto et al., 2018; Mawatari et al., 2020).
JWST started its operation in early 2022 (Rigby et al., 2023), and the first datasets obtained with NIRCam (Rieke et al., 2003, 2005, 2023; Beichman et al., 2012) and NIRSpec (Jakobsen et al., 2022) were released in July 2022. The early JWST/NIRCam imaging datasets have allowed us to find a large number of galaxy candidates at \(z\sim 9-20\) (Naidu et al., 2022b; Castellano et al., 2022a, b; Adams et al., 2022; Atek et al., 2022; Finkelstein et al., 2022b; Yan et al., 2023; Donnan et al., 2023b; Harikane et al., 2023a), including bright galaxy candidates at \(z\sim 16\) (Donnan et al., 2023b; Harikane et al., 2023a). Subsequent studies have reported more candidates at \(z>10\) including sources found in newly obtained NIRCam images (Finkelstein et al., 2023; Bouwens et al., 2022b, a; Donnan et al., 2023a; Morishita & Stiavelli, 2022; Bradley et al., 2022; Perez-Gonzalez et al., 2023). These studies suggest that the UV luminosity function and cosmic SFR density at \(z>10\) do not show a rapid decline, in contrast to the predictions of the constant star formation efficiency models (e.g., Harikane et al., 2023a; Bouwens et al., 2022a, b). Several physical interpretations are discussed in Harikane et al. (2023a), including a high star formation efficiency, AGN activity, and a top-heavy IMF (see also e.g., Inayoshi et al., 2022). However, these discussions are based on the photometric galaxy candidates, and there are possibilities that these candidates are actually low-redshift interlopers (Zavala et al., 2023; Naidu et al., 2022a; Fujimoto et al., 2022). Since ALMA observations to date have not yielded conclusive redshifts for these high redshift galaxy candidates (Bakx et al., 2023; Popping, 2023; Yoon et al., 2022; Fujimoto et al., 2022), probably due to the low metallicity and/or high density (Harikane et al., 2020), JWST spectroscopy is crucial to obtain spectroscopic redshifts of these galaxy candidates at \(z\gtrsim 10\).
Recent JWST/NIRSpec spectroscopy has successfully confirmed the redshifts of galaxies at \(z>8\) (Figure 1). Early Director's Discretionary Time (DDT) observations obtained spectroscopic redshifts of two galaxies at \(z_{\rm spec}=9.51\) and 9.76 (Roberts-Borsani et al., 2022; Williams et al., 2022). The JWST Advanced Deep Extragalactic Survey (JADES) spectroscopically confirmed HST-selected galaxy candidates at \(z_{\rm spec}>10\)(Curtis-Lake et al., 2022), including a bright galaxy at \(z_{\rm spec}=10.60\), GN-z11 (Bunker et al., 2023, see also Oesch et al., 2016, Jiang et al., 2021). Recently, further NIRSpec spectroscopic observations have obtained spectroscopic redshifts of JWST-selected candidates at \(z_{\rm spec}>10\)(Curtis-Lake et al., 2022; Arrabal Haro et al., 2023a, b), including the highest-redshift galaxy confirmed at \(z_{\rm spec}=13.20\), GS-z13-0 (Curtis-Lake et al., 2022).
In this study, we present spectroscopic constraints on the UV luminosity functions and the cosmic SFR densities. Using the NIRSpec datasets obtained in multiple programs as well as spectroscopically-confirmed galaxies in the literature, we calculate the UV luminosity functions at \(z\sim 9-16\), and obtain the lower limit of the SFR densities at \(z\sim 9-12\). These spectroscopic constraints allow us to verify the earlier suggestions of the mild redshift evolution of the luminosity function and the high SFR density at \(z>10\) based on the photometric datasets. We also discuss the physical origin of the mild redshift evolution at \(z>10\) as well as strategies for future JWST surveys searching for galaxies at \(z>10\) based on the spectroscopic results.
This paper is organized as follows. Section 2 describes the JWST/NIRSpec observational data sets used in this study. In Section 3, we explain the calculation of the effective volume and present the results of the UV luminosity functions based on the spectroscopically-confirmed galaxies. Section 4 presents stellar masses of spectroscopically-confirmed galaxies, and Section 5 shows our spectroscopic constraints on the SFR densities. We discuss the physical interpretations of the obtained results and strategies to remove low redshift interlopers in future JWST observations in Section 6. Section 7 summarizes our findings. Throughout this paper, we use the Planck cosmological parameter sets of the TT, TE, EE+lowP+lensing+BAO result (Planck Collaboration et al., 2020): \(\Omega_{\rm m}=0.3111\), \(\Omega_{\Lambda}=0.6899\), \(\Omega_{\rm b}=0.0489\), \(h=0.6766\), and \(\sigma_{8}=0.8102\). We basically assume the Salpeter (1955) initial mass function (IMF). All magnitudes are in the AB system (Oke & Gunn, 1983).
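For reference, the adopted cosmology can be written out explicitly; the following is an illustrative sketch using astropy (not code from the analysis itself), where a flat \(\Lambda\)CDM approximation with \(\Omega_{\rm m}+\Omega_{\Lambda}\simeq 1\) is assumed.

```python
# Illustrative only: the Planck cosmology quoted above, expressed with astropy.
# A flat LambdaCDM approximation is assumed (Omega_m + Omega_Lambda ~ 1).
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.66, Om0=0.3111, Ob0=0.0489)
print(cosmo.age(0.0).to("Gyr"))    # age of the Universe today
print(cosmo.age(12.0).to("Myr"))   # age at z ~ 12, where the highest-z constraints lie
```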
## 2 Observational Dataset and Galaxy Sample
### ERO, ERS, GO, and DDT NIRSpec Observations
The data sets used in this study were obtained in the Early Release Observations (EROs; Pontoppidan et al., 2022) targeting the SMACS 0723 lensing cluster field (ERO-2736, PI: K. Pontoppidan), the Early Release Science (ERS) observations of GLASS (ERS-1324, PI: T. Treu; Treu et al., 2022) and the Cosmic Evolution Early Release Science (CEERS; ERS-1345, PI: S. Finkelstein; Finkelstein et al., 2023, Arrabal Haro et al., 2023), General Observer (GO) observations targeting a \(z\sim 11\) galaxy candidate (GO-1433, PI: D. Coe), and the Director's Discretionary Time (DDT) observations targeting \(z\sim 12-16\) galaxy candidates (DD-2750, PI: P. Arrabal Haro; Arrabal Haro et al., 2023). The ERO data were taken in the medium resolution (\(R\sim 1000\)) filter-grating pairs F170LP-G235M and F290LP-G395M covering the wavelength ranges of \(1.7-3.1\) and \(2.9-5.1\)\(\mu\)m, respectively. The total exposure time of the ERO data is 4.86 hours for each filter-grating pair. The GLASS data were taken with high resolution (\(R\sim 2700\)) filter-grating pairs of F100LP-G140H, F170LP-G235H, and F290LP-G395M covering the wavelength ranges of \(1.0-1.6\), \(1.7-3.1\) and \(2.9-5.1\)\(\mu\)m, respectively. The total exposure time of the CEERS data is 0.86 hours for each filter-grating pair. The GO-1433 and DDT data were obtained with the Prism and the total exposure times are 3.7 and 5.1 hours, respectively. These data were reduced with the JWST pipeline version 1.8.5 with the Calibration Reference Data System (CRDS) context file of jwst_1028.pmap or jwst_1027.pmap with additional processes improving the flux calibration, noise estimate, and the composition, in the same manner as Nakajima et al. (2023). Please see Nakajima et al. (2023) for details of the data reduction.
Figure 1: Absolute UV magnitude as a function of the redshift for galaxies at \(6<z<17\). The red diamonds are spectroscopically-confirmed galaxies at \(z_{\rm spec}>8.5\) summarized in Table 1. Galaxies at \(z_{\rm spec}>9.0\) are marked with their names. The red open symbols are galaxies with photometric redshifts selected with JWST/NIRCam in the literature (Naidu et al., 2022; Castellano et al., 2022; Adams et al., 2022; Atek et al., 2022; Donnan et al., 2023; Finkelstein et al., 2022; Harikane et al., 2023; Bouwens et al., 2022; Morishita and Stiavelli, 2022; Bradley et al., 2022; Pérez-González et al., 2023). If a photometric candidate is reported in more than one paper, we represent the candidate with a paper that reports for the first time. The gray circles denote dropout galaxies selected with deep HST images (Bouwens et al., 2015).
### Obtained Spectra of High Redshift Galaxies
We then investigated the reduced spectra to determine spectroscopic redshifts of galaxies at \(z\gtrsim 9\) by matching the coordinates of the spectroscopic targets with those of photometric galaxy candidates identified in the literature (Finkelstein et al., 2022, 2023; Naidu et al., 2022; Castellano et al., 2022, 2024; Adams et al., 2022; Atek et al., 2022; Donnan et al., 2023, 2023; Harikane et al., 2023, 2022; Bouwens et al., 2022, 2023; Morishita & Stiavelli, 2022), as well as by visually inspecting the spectra. We found that a total of 16 galaxies were spectroscopically confirmed at \(z_{\rm spec}>8.5\). Figures 2-8 show spectra of the galaxies, and Table 1 summarizes them. We describe the spectra of some spectroscopically-confirmed galaxies below.
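Before those descriptions, the coordinate cross-match step mentioned above can be illustrated with the following minimal sketch; the array names and the 0.5" matching radius are placeholders chosen for illustration, not the actual matching code used in this work.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_targets(spec_ra_deg, spec_dec_deg, phot_ra_deg, phot_dec_deg,
                  max_sep=0.5 * u.arcsec):
    """Match NIRSpec MSA targets to photometric high-z candidates by sky position."""
    spec = SkyCoord(ra=spec_ra_deg * u.deg, dec=spec_dec_deg * u.deg)
    phot = SkyCoord(ra=phot_ra_deg * u.deg, dec=phot_dec_deg * u.deg)
    idx, sep2d, _ = spec.match_to_catalog_sky(phot)  # nearest photometric candidate
    matched = sep2d < max_sep                        # boolean mask of secure matches
    return idx, matched
```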
Maisie's Galaxy was firstly photometrically identified in Finkelstein et al. (2022), and was reported as CR2-z12-1 in Harikane et al. (2023). As shown in the top panel of Figure 2, the spectrum of this galaxy shows the Lyman break, the [Oii]\(\lambda 3727\) line (\(5.5\sigma\)), and the possible [Neiii]\(\lambda 3869\) line (\(2.4\sigma\)). The redshift of this galaxy was reported to be \(z_{\rm spec}=11.44^{+0.09}_{-0.08}\) based on the Lyman break in Arrabal Haro et al. (2023). Arrabal Haro et al. (2023) discuss that the [Oii] line may not be real due to 1) the presence of image defects in the individual nodes, 2) no clear four negative patterns in the two-dimensional spectrum, and 3) a low [Neiii]/[Oii] ratio inconsistent with the low metallicity inferred from the spectral energy distribution (SED) fitting. However, 1) we do not find any obvious image defects in the spectra of the individual exposures (Figure 9), 2) it is not unusual that clear four negative patterns are not seen for the \(5.5\sigma\) line, and 3) the metallicity from the SED fitting is not robustly constrained and the low [Neiii]/[Oii] ratio ([Neiii]/[Oii]\(\sim 0.3\)) is seen in low-metallicity galaxies (Nakajima et al., 2022). Moreover, the wavelength of the possible [Neiii] line is consistent with that of [Oii]. Given these facts, we consider this [Oii] line is real. We determine the redshift to be \(z_{\rm spec}=11.40\) using the [Oii] line, which is consistent with the previous estimate based on the Lyman break but slightly lower than that (\(z_{\rm spec}=11.44^{+0.09}_{-0.08}\); Arrabal Haro et al., 2023).
The spectrum of CEERS2_588, which was photometrically identified in Finkelstein et al. (2023) and Donnan et al. (2023), shows the Lyman break, the [Oii]\(\lambda 3727\) line (\(5.7\sigma\)), and the possible [Neiii]\(\lambda 3869\) line (\(2.8\sigma\)), as shown in the middle panel of Figure 2. The [Oii] line feature is also seen in some of the individual frames (Figure 10), and these frames are not affected by an obvious image defect, indicating that this [Oii] line is real. The rest-frame equivalent width of the [Oii] line is \(\sim 100\) Å, comparable to that seen in galaxies at \(z\sim 2-3\) (Reddy et al., 2018). We newly determine the spectroscopic redshift of CEERS2_588 to be \(z_{\rm spec}=11.04\) based on the [Oii] and [Neiii] lines. The wavelength of the Lyman break is consistent with this redshift estimate. The [Neiii]/[Oii] ratio is low, [Neiii]/[Oii]\(\sim 0.6\), and is comparable to those seen in low-metallicity galaxies in Nakajima et al. (2022). Given its UV magnitude, \(M_{\rm UV}=-20.4\) mag, this galaxy is the most luminous galaxy spectroscopically confirmed at \(z_{\rm spec}>11.0\).
In the bottom panel of Figure 2, we present the spectrum of MACS0647-JD, which was firstly photometrically reported in Coe et al. (2013). This is a triply-lensed galaxy, and a recent study using NIRCam images suggests that MACS0647-JD is a merger. We show the NIRSpec spectrum of JD1 (MSA ID: 3593) in observation ID 23. The spectrum shows the Lyman break and the Ciii]\(\lambda 1909\), [Neiii]\(\lambda 3869\), and H\(\gamma\) emission lines, suggesting the spectroscopic redshift of \(z_{\rm spec}=10.17\). The data analysis by the PI team will be presented in Hsiao et al., in prep.
There are four galaxies whose spectra show only the Lyman break without clear emission lines, CEERS2_7929, CEERS_99715, CEERS_35590, and CEERS2_2324 (Figure 3 and the top panel in Figure 4). For these sources, we fit model spectra to the observed ones at the observed wavelength of \(0.6-3.0\)\(\mu\)m including the break using prospector(Johnson et al., 2021), and obtain the best-fit spectroscopic redshifts. Model spectra are derived from Flexible Stellar Population Synthesis (FSPS; Conroy et al., 2009; Conroy & Gunn, 2010) package with the modules for Experiments in Stellar Astrophysics Isochrones and Stellar Tracks (MIST; Choi et al., 2016). The boost of ionizing flux production of massive stars is included in the MIST isochrones (Choi et al., 2017). Here we assume the stellar IMF determined by Chabrier (2003), the Calzetti et al. (2000) dust extinction law, and the intergalactic medium (IGM) attenuation model by Madau (1995). The Ly\(\alpha\) emission line is also masked considering the high IGM neutral fraction at these redshifts. We adopt a flexible star formation history with five bins that are spaced equally in logarithmic times between 0 Myr and a lookback time that corresponds to \(z=30\), where the SFR within each bin is constant. The parameter settings for the stellar mass, dust extinction, and metallicity are the same as those in Harikane et al. (2023). We search for the best-fit model to the observed photometry with the MCMC method by using emcee(Foreman-Mackey et al., 2013).
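As a schematic illustration of a break-based redshift fit, the sketch below samples a toy step-continuum model with emcee. It is not the prospector/FSPS setup described above; the wavelength units (microns), the flat priors, and all numerical choices are assumptions of this sketch only.

```python
import numpy as np
import emcee

LYA_UM = 1215.67e-4  # rest-frame Lyman-alpha wavelength in microns

def model_flux(wave_um, z, amp):
    """Toy spectrum: flat continuum with zero flux blueward of the redshifted break."""
    flux = np.full_like(wave_um, amp)
    flux[wave_um < LYA_UM * (1.0 + z)] = 0.0
    return flux

def log_prob(theta, wave, flux, err):
    z, amp = theta
    if not (8.0 < z < 15.0 and amp > 0.0):  # flat priors (assumed for this sketch)
        return -np.inf
    resid = (flux - model_flux(wave, z, amp)) / err
    return -0.5 * np.sum(resid ** 2)

def fit_break_redshift(wave, flux, err, nwalkers=32, nsteps=2000):
    rng = np.random.default_rng(0)
    p0 = np.column_stack([rng.uniform(9.0, 11.0, nwalkers),
                          rng.uniform(0.5, 1.5, nwalkers)])
    sampler = emcee.EnsembleSampler(nwalkers, 2, log_prob, args=(wave, flux, err))
    sampler.run_mcmc(p0, nsteps, progress=False)
    chain = sampler.get_chain(discard=nsteps // 2, flat=True)
    return np.percentile(chain[:, 0], [16, 50, 84])  # z_spec and its 1-sigma range
```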
We obtain the best fit redshifts of \(z_{\rm spec}=10.10^{+0.07}_{-0.22}\) for CEERS2_7929, \(z_{\rm spec}=10.01^{+0.04}_{-0.10}\) for CEERS_99715, \(z_{\rm spec}=9.97^{+0.01}_{-0.16}\) for CEERS_35590, and \(z_{\rm spec}=9.74^{+0.03}_{-0.09}\) for CEERS2_2324. Our measurements agree with those in Arrabal Haro et al.
Figure 2: NIRSpec spectra of Maisie’s Galaxy (CR2-z12-1) at \(z_{\rm spec}=11.40\), CEERS2_588 at \(z_{\rm spec}=11.04\), and MACS0647-JD at \(z_{\rm spec}=10.17\). The top panel shows the two-dimensional spectrum (yellow is positive), and the bottom panel shows the one-dimensional spectrum. The red dashed line indicates the rest-frame 1215.67 Å corresponding to the Ly\(\alpha\)-break.
Figure 3: Same as Figure 2 but for CEERS2_7929 at \(z_{\rm spec}=10.10\), CEERS_99715 at \(z_{\rm spec}=10.01\), and CEERS_35590 at \(z_{\rm spec}=9.97\).
Figure 4: Same as Figure 2 but for CEERS2_2324 at \(z_{\rm spec}=9.74\), Gz9p3 at \(z_{\rm spec}=9.31\), and CEERS-24 at \(z_{\rm spec}=9.00\). CEERS2_2324 is not used in the constraints in this study because the observed break is not significant. The spectra of Gz9p3 and CEERS-24 are taken with the high and medium resolutions, respectively. We plot the smoothed one-dimensional spectra for them.
Figure 5: Same as Figure 2 but for CEERS-23 at \(z_{\rm spec}=8.88\), CEERS1_6059 at \(z_{\rm spec}=8.88\), and CEERS1_3858 at \(z_{\rm spec}=8.81\). The spectrum of CEERS-23 is taken with the medium resolution. We plot the smoothed one-dimensional spectrum for CEERS-23.
Figure 6: Same as Figure 2 but for CEERS_43833 at \(z_{\rm spec}=8.76\), CEERS-1025 at \(z_{\rm spec}=8.71\), and CEERS_1019 at \(z_{\rm spec}=8.68\). The spectrum of CEERS-1025 is taken with the medium resolution. We plot the smoothed one-dimensional spectrum for that source.
Figure 8: NIRSpec spectrum of 93316 (CR2-z16-1), a \(z\sim 16\) galaxy candidate that is found to be \(z_{\rm spec}=4.912\).
Figure 7: Same as Figure 2 but for CEERS_90671 at \(z_{\rm spec}=8.64\) and EGS_z910_44164 at \(z_{\rm spec}=8.61\).
Figure 10: Same as Figure 9 but for CEERS2_588 at \(z_{\rm spec}=11.04\). The [Oii]\(\lambda\)3727 line is clearly seen in the stacked two-dimensional and one-dimensional spectra and some of the individual spectra, and they are not contaminated by artifacts around the wavelength of [Oii], suggesting that this emission line feature is real. The red arrows indicate the positions of the [Oii] line.
Figure 9: Spectrum around [Oii]\(\lambda\)3727 and [Neiii]\(\lambda\)3869 emission lines for Maisie’s Galaxy (CR2-z12-1) at \(z_{\rm spec}=11.40\). The bottom two panels show the stacked two-dimensional and one-dimensional spectra. The other nine panels are spectra of nine individual exposures. The [Oii] line is clearly seen in both the stacked two-dimensional and one-dimensional spectra, and spectra of the individual exposures are not contaminated by artifacts around the wavelength of [Oii], suggesting that this emission line feature is real. The red arrow indicates the position of the [Oii] line.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{ Name} & R.A. & Decl. & \(z_{\rm spec}\) & \(M_{\rm UV}\) & Spec-z Ref. & Phot. Ref. \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline \multicolumn{6}{c}{High Redshift Galaxies at \(z_{\rm spec}>8.5\)} \\ GS-z13-0 & 03:32:35.97 & \(-\)27:46:35.4 & 13.20 (Break) & \(-\)18.5 & CL22 & Ro22 \\ GS-z12-0 & 03:32:39.92 & \(-\)27:49:17.6 & 12.63 (Break) & \(-\)18.8 & CL22 & Ro22 \\ GS-z11-0 & 03:32:39.54 & \(-\)27:46:28.6 & 11.58 (Break) & \(-\)19.3 & CL22 & Bow11,E13,Ro22 \\ Maisie’s Galaxy (CR2-i2-1) & 14:19:46.36 & \(+\)52:56:32.8 & 11.40 (Line) & \({}^{\star}\) & \(-\)20.1 & AH23a,This & Fi22b,Har23,Do23,Fi23,Bow22 \\ CEERS2\_588 & 14:19:37.59 & \(+\)52:56:43.8 & 11.04 (Line) & \(-\)20.4 & This & Fi23,Do23 \\ GN-z11 & 12:36:25.46 & \(+\)62:14:31.4 & 10.60 (Line) & \(-\)21.5 & Bu23 & Oe18,Tac23 \\ GS-z10-0 & 03:32:38.12 & \(-\)27:46:24.6 & 10.38 (Break) & \(-\)18.4 & CL22 & Oe18,Ro22 \\ MACS0647-JD\({}^{\dagger}\) & 06:47:55.73 & \(+\)70:14:35.8 & 10.17 (Line) & \(-\)20.3 & Hs-P,This & Co13,Hs22 \\ CEERS2\_7929 & 14:19:41.47 & \(+\)52:54:41.5 & 10.10 (Break) & \(-\)19.3 & AH23a,This & Fi23,Do23 \\ CEERS\_99715 & 14:19:14.84 & \(+\)52:44:13.6 & 10.01 (Break) & \({}^{\dagger}\) & \(-\)20.5 & AH23b,This & Fi23 \\ CEERS\_35590 & 14:18:55.81 & \(+\)52:45:29.1 & 9.7 (Break) & \({}^{\dagger}\) & \(-\)20.1 & AH23b,This & Fi23 \\ A2744-JD1 & 00:14:22.80 & \(-\)30:24:02.6 & 9.76 (Break) & \(-\)17.4 & RB22 & Zi14,Oe18 \\ CEERS2\_2324 & 14:19:26.78 & \(+\)52:54:16.6 & 9.74 (Break) & \({}^{\dagger}\) & \(-\)20.1 & AH23a,This & Fi23 \\
11027\({}^{\dagger}\) & 21:29:41.17 & \(+\)00:50:30.1 & 9.51 (Line) & \(-\)17.4 & Wi22 & Wi22 \\ GS+53.11243-27.77461 & 03:32:26.98 & \(-\)27:46:28.6 & 9.437 (Line) & \(-\)20.4 & Cam23 & \\ Gz9p3 & 00:14:28.14 & \(-\)30:25:32.0 & 9.313 (Line) & \(-\)21.6 & Boy23,This & Cas22 \\ MACS1149-JD1\({}^{\dagger}\) & 11:49:33.58 & \(+\)22:24:45.7 & 9.110 (Line) & \(-\)18.5 & Has18 & Zh12 \\ CEERS-24 & 14:19:35.34 & \(+\)52:50:37.9 & 8.998 (Line) & \(-\)19.4 & Tan23,Fu23,This & Wh23,Fi23 \\ CEERS-23 & 14:19:36.30 & \(+\)52:50:49.2 & 8.881 (Line) & \(-\)18.9 & Tan23,Fu23,This & Fi23 \\ CEERS1\_6059 & 14:20:02.81 & \(+\)52:59:17.9 & 8.876 (Line) & \(-\)20.8 & Fu23,Nak23,This & Wh23,Fi23 \\ CEERS1\_3858 & 14:19:58.66 & \(+\)52:59:21.8 & 8.807 (Line) & \(-\)20.4 & Fu23,This & Fi23 \\ CEERS\_43833 & 14:19:45.27 & \(+\)52:54:42.3 & 8.763 (Line) & \(-\)20.4 & AH23a,This & Fi23 \\ CEERS-1025 & 14:19:52.21 & \(+\)52:55:58.6 & 8.715 (Line) & \(-\)21.1 & Tan23,Nak23,Sn23,This & Fi22a \\ CEERS\_1019 & 14:20:08.49 & \(+\)52:53:26.4 & 8.679 (Line) & \(-\)22.1 & Zi15,Tan23,Nak23,Sa23,La23,This & Fi22a \\ CEERS\_90671 & 14:19:50.71 & \(+\)52:50:32.5 & 8.638 (Line) & \(-\)18.7 & AH23b,This & Fi23 \\ EGS\_z910\_44164 & 14:20:52.50 & \(+\)53:04:11.5 & 8.612 (Line) & \(-\)21.6 & La22,Tan23,Nak23,This & Fi22a \\ \hline \multicolumn{6}{c}{A Low Redshift Interloper} \\ \multicolumn{6}{c}{93316 (CR2-z16-1)} & 14:19:39.49 & \(+\)52:56:34.9 & 4.912 (Line) & \(\cdots\) & AH23a,This & Do23,Har23,Za22 \\ \multicolumn{6}{c}{NaNa122,Fi23,Bow22} \\ \hline \multicolumn{6}{c}{Note–(1) Name. Ones in the parentheses are names in Harikane et al. (2023a). (2) Right ascension. (3) Declination. (4) Spectroscopic redshift. The spectroscopic feature used to determine the redshift is noted (Break: Lyman break, Line: emission line). (5) Absolute UV magnitude. (6,7) References for spectroscopic redshifts and photometry (This: this work, AH23a: Arrahal Haro et al. (2023b), AH23b: Arrahal Haro et al. (2023a), Bow22: Bouwens et al. (2022a), Boy23: Boyct et al. (2023), Cam22: Cameron et al. (2023), Cas2: Castellano et al. (2022a), CL22: Curtis-Lake et al. (2022), Co13: Coe et al. (2013), El13: Ellis et al. (2013), Do23: Donnan et al. (2023b), Fi22b: Finkelstein et al. (2022b), Fl23: Finkelstein et al. (2023), Fu23: Fujimoto et al. (2023), Har22b: Harikane et al. (2022a), Har23: Harkane et al. (2023a), Hask-Hasli et al. (2018), H22: Hsiao et al. (2022), Hs-P: Hsiao et al. in prep., La22: Larson et al. (2022), La23: Larson et al. (2023), Na22: Naidu et al. (20222), Naak23: Nakajima et al. (2023), Oe18: Oesch et al. (2018), RB22: Roberts-Borsani et al. (2022), Ro22: Robertson et al. (2022), Sa23: Sanders et al. (2023), Tan23: Tang et al. (2023), Tac23: Tacchella et al. (2023), Wh23: Whitler et al. (2023), Wi22: Williams et al. (2022), Za2: Zavala et al. (2023), Zh12: Zheng et al. (2012), Zi14: Zitrin et al. (2014) Zi15: Zitrin et al. (2015)). \({}^{\dagger}\) The redshift of this object is highly uncertain because the observed break is not significant. We do not use these objects in the constraints on the UV luminosity function and the cosmic SFR densities. \({}^{*}\) The redshift of this object is reported to be \(z_{\rm spec}=11.44^{+0.09}_{-0.08}\) based on the Lyman break in Arrahal Haro et al. (2023b). We remeasure the redshift to be \(z_{\rm spec}=11.40\) based on the [Oii] emission line, consistent with their estimate within the error. 
\({}^{\ddagger}\) The spectroscopic redshifts presented here are based on our measurements using the Lyman break. Our redshifts, \(z_{\rm spec}=10.01^{+0.04}_{-0.10}\) and \(9.97^{+0.01}_{-0.16}\) for CEERS\_99715 and CEERS\_35590, respectively, are consistent with the measurements in Arra
(2023b,a), \(z_{\rm spec}=10.10^{+0.13}_{-0.26}\) for CEERS2_7929, \(z_{\rm spec}=9.77^{+0.37}_{-0.29}\) for CEERS_99715, and \(z_{\rm spec}=10.01^{+0.14}_{-0.19}\) for CEERS_35590. Arrabal Haro et al. (2023b) present the spectrum of CEERS2_2324 (Finkelstein et al., 2023) with a possible redshift of \(z_{\rm spec}=9.744\), but do not consider this galaxy confirmed due to the large uncertainty of the redshift estimate. Our spectrum also shows a continuum detection at \(\sim 1.3-1.9\)\(\mu\)m and a possible break around \(\sim 1.3\)\(\mu\)m, suggesting \(z_{\rm spec}=9.74^{+0.03}_{-0.09}\), consistent with the estimate in Arrabal Haro et al. (2023b). A possible emission line feature around \(\sim 5.15\)\(\mu\)m may not be real because an image defect is seen in one of the individual nods around this wavelength. Since the significance of the break is not high, we do not use this galaxy in our analysis.
CEERS_1019 was firstly spectroscopically confirmed with the Ly\(\alpha\) emission line in Zitrin et al. (2015). Recently, Larson et al. (2023) report a broad H\(\beta\) emission line in this galaxy, suggesting AGN activity (see also Harikane et al., 2023b). The NIRSpec spectrum (the bottom panel in Figure 6) shows multiple emission lines including Niv]\(\lambda\)1486, Ciii]\(\lambda\)1909, [Oii]\(\lambda\)3727, [Neiii]\(\lambda\)3869, H\(\delta\), H\(\gamma\), H\(\beta\), and [Oiii]\(\lambda\lambda\)4959,5007, indicating a spectroscopic redshift of \(z_{\rm spec}=8.679\). We do not include this galaxy in the calculation of the cosmic SFR density because of the possible AGN activity.
93316 was firstly photometrically reported in Donnan et al. (2023b) as a \(z\sim 16\) galaxy candidate (CR2-z16-1 in Harikane et al., 2023a), and many studies discuss the properties of this galaxy including the possible low-redshift solution based on the available photometric data (Harikane et al., 2023a; Zavala et al., 2023; Naidu et al., 2022a; Finkelstein et al., 2023; Bouwens et al., 2022a). As shown in Figure 8, the spectrum of 93316 obtained in Arrabal Haro et al. (2023b) shows two prominent emission lines at \(\sim 2.95\) and 3.89 \(\mu\)m. This indicates that 93316 is a galaxy at \(z_{\rm spec}=4.91\) whose strong emission lines mimic the photometric Lyman break at \(z\sim 16\), as discussed in Zavala et al. (2023) and Naidu et al. (2022a). This result indicates that galaxies at \(z=4.9\) with strong emission lines can be interlopers in a \(z\sim 16\) galaxy selection, and we need to carefully remove these interlopers from a sample of high redshift galaxy candidates. In Section 6.2, we discuss strategies to remove this kind of low redshift interlopers in the future JWST survey to search for galaxies at \(z\gtrsim 12\).
### Spectroscopically-Confirmed Galaxies in the Literature
In addition to the 16 confirmed galaxies at \(z_{\rm spec}>8.5\) described above, we compile spectroscopically-confirmed galaxies in the literature. The sample includes four galaxies at \(z_{\rm spec}=10.38-13.20\) reported and confirmed in Robertson et al. (2022) and Curtis-Lake et al. (2022), GN-z11 at \(z_{\rm spec}=10.60\) confirmed in Bunker et al. (2023), A2744-JD1 at \(z_{\rm spec}=9.76\) in Roberts-Borsani et al. (2022), 11027 at \(z_{\rm spec}=9.51\) in Williams et al. (2022), GS+53.11243-27.77461 at \(z_{\rm spec}=9.437\) in Cameron et al. (2023), and MACS1149-JD1 at \(z_{\rm spec}=9.110\) in Hashimoto et al. (2018). Including the galaxies confirmed in Section 2.2, our sample finally contains a total of 25 galaxies at \(z_{\rm spec}=8.610-13.20\), which is sufficiently large for the measurements of the UV luminosity functions and the cosmic SFR densities. Figure 1 shows the absolute UV magnitudes and spectroscopic redshifts of our final sample of spectroscopically confirmed galaxies at \(z_{\rm spec}>8.5\), and Table 1 summarizes their properties.
## 3 UV Luminosity Function
### Effective Volume Estimate
We calculate the UV luminosity functions at \(z\sim 9-12\) using the spectroscopically confirmed galaxies at \(z_{\rm spec}>8.5\). We divide our sample into three redshift subsamples at \(z_{\rm spec}=8.5-9.5\), \(9.5-11.0\), and \(11.0-13.5\), and calculate the number densities at \(z\sim 9\), 10, and 12. We also calculate an upper limit on the number density at \(z\sim 16\) based on the result of the \(z\sim 16\) candidate, 93316, which is found to be \(z_{\rm spec}=4.912\). Since the galaxies in our spectroscopic sample are confirmed with NIRSpec/MSA whose target selection and detection completenesses are not well-known, the estimate of the effective volume for the luminosity function is not straightforward. We use two methods to estimate the effective volume, as detailed below.
The first method uses the effective volume estimates published in the literature. Since sometimes all of the galaxies in a bin of a luminosity function are spectroscopically confirmed, we can use the effective volume and the number density estimated in the previous studies. For example, among the number density bins at \(z\sim 10\) in Oesch et al. (2018), the brightest bin is composed of GN-z11, and the two faintest bins of GS-z10-0 and A2744-JD1, respectively, all of which are spectroscopically confirmed. Thus we can use the effective volume estimates in the brightest and two faintest bins in Oesch et al. (2018) for spectroscopic constraints on the number densities. Similarly, the brightest bin in Castellano et al. (2022a) is composed of Gz9p3 at \(z_{\rm spec}=9.313\) (DHz1 in Castellano et al., 2022a). We also constrain the upper limit of the number density at \(z\sim 16\) using the effective volume estimate in Harikane et al. (2023a)
without the Stephan's Quintet field where one \(z\sim 16\) candidate (S5-z16-1) is identified.
The second method is using the field of view of the NIRSpec. Since the other galaxies in our sample are part of photometric samples that are not completely observed with spectroscopy, we can place a conservative lower limit on the number density. The faint galaxies at \(z_{\rm spec}=11.58-13.20\), GS-z11-0, GS-z12-0, and GS-z13-0, are confirmed in one pointing observation with NIRSpec in the JADES (Curtis-Lake et al., 2022). Thus we assume the effective survey area of 9 arcmin\({}^{2}\) (=1 field of view of NIRSpec) and calculate the survey volume at the redshift range of \(z=11.0-13.5\). Since we cannot put slits on all of the high redshift galaxy candidates with NIRSpec MSA due to the overlap of the spectra and the galaxy selection incompleteness, this survey volume is an upper limit. We also estimate the upper limit of the survey volume for GS+53.11243-27.77461 at \(z_{\rm spec}=9.437\) similarly assuming the survey area of 9 arcmin\({}^{2}\). For the sources selected with the CEERS NIRCam images (e.g., Finkelstein et al., 2022, 2023; Donnan et al., 2023; Harikane et al., 2023; Bouwens et al., 2022) that were confirmed in the CEERS and the DDT NIRSpec observations, we assume the survey area of 45 arcmin\({}^{2}\), which is an approximate estimate of the area of the regions where the NIRSpec pointings are overlapping with the NIRCam images. For the other sources selected with HST images in the CEERS field (Finkelstein et al., 2022), we use the survey area of the EGS field in Finkelstein et al. (2022), 205 arcmin\({}^{2}\). We do not use some sources in the lensing fields, MACS0647-JD, 11027, and MACS1149-JD1, because the survey area estimates for these sources are not straightforward due to the lensing magnification. We discuss that our constraints are consistent with these spectroscopically-confirmed galaxies in Section 5.
Based on the effective survey volumes estimated with these two methods, we obtain the constraints on the number density of galaxies at \(z\sim 9\), 10, 12, and 16. The \(1\sigma\) uncertainty of the number density is calculated by taking the Poisson confidence limit (Gehrels, 1986) and cosmic variance into account. We estimate the cosmic variance in the number densities following the procedures in Somerville et al. (2004). To obtain the large-scale bias parameter needed for the cosmic variance calculation, we estimate the dark matter halo mass of the galaxies using the simple abundance matching technique described in Harikane et al. (2016, Equation (66)) assuming a unity duty cycle and no satellite galaxies. We use the double-power law luminosity functions in Harikane et al. (2023) and the halo mass function in Behroozi et al. (2013), which is a modification of the Tinker et al. (2008) mass function. Then we calculate the bias parameters from the estimated halo masses using a redshift-dependent relation between the bias and the halo mass presented in Tinker et al. (2010).
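A minimal numerical sketch of the second (field-of-view) method and of the Poisson term of the error budget is given below; the astropy and scipy calls are standard, but the cosmic-variance term estimated from the large-scale bias is omitted here, and the example numbers are illustrative only.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM
from scipy.stats import chi2

cosmo = FlatLambdaCDM(H0=67.66, Om0=0.3111, Ob0=0.0489)
FULL_SKY_ARCMIN2 = 4.0 * np.pi * (180.0 * 60.0 / np.pi) ** 2  # ~1.48e8 arcmin^2

def effective_volume(area_arcmin2, z_lo, z_hi):
    """Comoving volume [Mpc^3] subtended by a survey area between two redshifts."""
    vol = cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)
    return (vol * area_arcmin2 / FULL_SKY_ARCMIN2).to_value(u.Mpc ** 3)

def poisson_1sigma(n):
    """Exact 68.27% Poisson confidence interval on n counts (cf. Gehrels 1986)."""
    lo = 0.5 * chi2.ppf(0.1587, 2 * n) if n > 0 else 0.0
    hi = 0.5 * chi2.ppf(0.8413, 2 * (n + 1))
    return lo, hi

# Example: three JADES galaxies at z = 11.0-13.5 over one NIRSpec pointing (9 arcmin^2);
# the volume is an upper limit, so the inferred number density is a lower limit.
vol = effective_volume(9.0, 11.0, 13.5)
n_lo, n_hi = poisson_1sigma(3)
print(vol, 3.0 / vol, n_lo / vol, n_hi / vol)
```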
### Results
Figure 11 shows our constraints on the number densities of galaxies at \(z\sim 9\), 10, 12, and 16, and Table 2 summarizes them. Our spectroscopic constraints are consistent with previous estimates of the number densities in the literature at \(z\sim 9-12\) based on the photometric samples, except for the brightest bin at \(z\sim 9\), which is higher than most of the estimates but consistent with that of Bagley et al. (2022). This bin includes Gz9p3 at \(z_{\rm spec}=9.313\) in the Abell 2744 field and CEERS_1019 and EGS_z910_44164 at \(z_{\rm spec}=8.679\) and 8.610, respectively, in the CEERS field. As discussed in Larson et al. (2022) and Castellano et al. (2022), this high number density may originate from possible galaxy overdensities at \(z\sim 9\) in these fields. Although further spectroscopic observations in other fields are needed to distinguish whether the high number density is real or is due to the cosmic variance, we do not use these galaxies, which are possibly in overdensities, for the cosmic SFR density estimate in Section 5.
In Figure 11, we also plot the double-power law functions,
\[\Phi(M_{\rm UV})=\frac{\ln 10}{2.5}\phi^{*}\times\left[10^{0.4(\alpha+1)(M_{\rm UV}-M_{\rm UV}^{*})}+10^{0.4(\beta+1)(M_{\rm UV}-M_{\rm UV}^{*})}\right]^{-1} \tag{1}\]
and the Schechter functions,
\[\Phi(M_{\rm UV})=\frac{\ln 10}{2.5}\phi^{*}\,10^{-0.4(M_{\rm UV}-M_{\rm UV}^{*})(\alpha+1)}\times\exp\left(-10^{-0.4(M_{\rm UV}-M_{\rm UV}^{*})}\right). \tag{2}\]
The parameters in each function are calculated from the interpolation and extrapolation of the results at \(z\sim 9\) and 12 in Harikane et al. (2023),
\[M_{\rm UV}^{*} = -0.09(z-9)-19.33 \tag{3}\]
\[\log\phi^{*} = -0.28(z-9)-3.50 \tag{4}\]
\[\alpha = -2.10 \tag{5}\]
\[\beta = 0.15(z-9)-3.27 \tag{6}\]
for the double-power law function, and
\[M_{\rm UV}^{*} = 0.32(z-9)-21.24 \tag{7}\]
\[\log\phi^{*} = -0.08(z-9)-4.83 \tag{8}\]
\[\alpha = -2.35 \tag{9}\]
for the Schechter function. As shown in Figure 11, these functions are consistent with our spectroscopic constraints on the number densities within \(1\sigma\) errors.
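For convenience, Equations (1)-(9) can be evaluated directly; the following sketch reproduces the plotted curves, with numerical values only as accurate as the interpolation formulae above.

```python
import numpy as np

def phi_dpl(M_uv, z):
    """Double power law, Equation (1), with parameters from Equations (3)-(6)."""
    Mstar = -0.09 * (z - 9.0) - 19.33
    phistar = 10.0 ** (-0.28 * (z - 9.0) - 3.50)
    alpha, beta = -2.10, 0.15 * (z - 9.0) - 3.27
    dm = np.asarray(M_uv) - Mstar
    denom = 10.0 ** (0.4 * (alpha + 1) * dm) + 10.0 ** (0.4 * (beta + 1) * dm)
    return np.log(10.0) / 2.5 * phistar / denom          # [Mpc^-3 mag^-1]

def phi_schechter(M_uv, z):
    """Schechter function, Equation (2), with parameters from Equations (7)-(9)."""
    Mstar = 0.32 * (z - 9.0) - 21.24
    phistar = 10.0 ** (-0.08 * (z - 9.0) - 4.83)
    alpha = -2.35
    dm = np.asarray(M_uv) - Mstar
    return (np.log(10.0) / 2.5 * phistar
            * 10.0 ** (-0.4 * dm * (alpha + 1)) * np.exp(-10.0 ** (-0.4 * dm)))

# e.g. the predicted number densities at M_UV = -20 and z ~ 12
print(phi_dpl(-20.0, 12.0), phi_schechter(-20.0, 12.0))
```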
### Comparison with Model Predictions
In Figure 12, we compare our constraints on the number densities at \(z\sim 9-16\) with theoretical model predictions in Dayal et al. (2014, 2019), Behroozi et al. (2019, 2020), Wilkins et al. (2023), Mason et al. (2023), and Yung et al. (2023). These model predictions agree with our spectroscopic constraints at \(z\sim 9-10\). However, at \(z\sim 12\), the number densities of Mason et al. (2023) and Yung et al. (2023) are lower than our lower limit around \(M_{\rm UV}=-20\) mag, implying mild redshift evolution compared to rapid evolution predicted by these models. Similarly, Figure 13 shows the predicted number of bright galaxies at \(z>11.0\) with \(M_{\rm UV}<-19.8\) mag. Figure 13 indicates that most of the models underpredict the number of galaxies compared to that of the spectroscopically-confirmed ones in this study, although
Figure 11: UV luminosity functions at \(z\sim 9\) (upper-left), \(z\sim 10\) (upper-right), \(z\sim 12\) (lower-left), and \(z\sim 16\) (lower-right). The red diamonds represent the number densities of galaxies with spectroscopic redshifts derived in this study. The errors include the cosmic variance (see text). The gray open symbols are estimates based on photometric samples by previous studies (Harikane et al., 2023; Pérez-González et al., 2023; Donnan et al., 2023; Bouwens et al., 2021, 2022a, b; Stefanon et al., 2019; Bowler et al., 2020; McLeod et al., 2016; Bagley et al., 2022; Castellano et al., 2022; Morishita et al., 2018; Morishita and Stiavelli, 2022; Oesch et al., 2018; Finkelstein et al., 2022a, b; Naidu et al., 2022b). The gray solid and dashed lines are double power-law and Schechter functions, respectively, interpolated and extrapolated using the results at \(z\sim 9-12\) in Harikane et al. (2023, Equations (3)-(9)). The brightest bin at \(z\sim 9\) (the open red diamond) could be affected by the overdensities (see text).
the significance is small, and more data are needed to reach a conclusion. This difference between the observations and the models would suggest that the feedback effects in the models may be too strong to produce abundant bright galaxies, that dust obscuration in these bright galaxies is lower than the models assume, and/or that there exist hidden AGNs whose radiation is comparable to or exceeds that of the stellar components of the galaxies, although this difference may also be caused by other physical processes, as discussed in Section 6.1. Further spectroscopic observations will improve the statistics and allow us to distinguish between these models, which is important for understanding star formation and feedback in these early galaxies.
We also compare our constraints with models assuming different star formation efficiencies, \(f_{\rm SF}\), which is defined as the ratio of the SFR to the baryon accretion rate (see also, e.g., Bouche et al., 2010; Behroozi et al., 2013, 2019; Mason et al., 2015; Harikane et al., 2018, 2022b; Tacchella et al., 2018; Moster et al., 2018; Inayoshi et al., 2022). In the models, the SFR is expressed as,
\[{\rm SFR}=f_{\rm SF}\times f_{\rm b}\times\frac{dM_{\rm h}}{dt}(M_{\rm h},z), \tag{10}\]
Figure 12: Comparison of the luminosity functions with theoretical predictions in the literature at \(z\sim 9\) (upper-left), \(z\sim 10\) (upper-right), \(z\sim 12\) (lower-left), and \(z\sim 16\) (lower-right). The red symbols show observational results based on the spectroscopically-confirmed galaxies obtained in this study. The blue lines show the theoretical and empirical models of Dayal et al. (2014, 2019, solid line), Behroozi et al. (2019, 2020, dotted-dashed line), Vijayan et al. (2021, double-dotted dashed line at \(z\sim 9\)), Wilkins et al. (2023, double-dotted dashed line at \(z\sim 10-16\)), Mason et al. (2015, 2023, dashed line; their model with dust extinction), and Yung et al. (2023, dotted line). At \(z\sim 12\), our spectroscopic constraints are higher than the number densities of some models predicting rapid redshift evolution.
where \(f_{\rm b}=\Omega_{\rm b}/\Omega_{\rm m}=0.157\) is the cosmic baryon fraction, and \(\frac{dM_{\rm h}}{dt}(M_{\rm h},z)\) is the matter accretion rate in Behroozi and Silk (2015). The SFR is converted to the UV luminosity using the following equation assuming the Salpeter (1955) IMF,
\[L_{\rm UV}({\rm erg\,s^{-1}\,Hz^{-1}})={\rm SFR}(M_{\odot}\,{\rm yr^{-1}})/(1.15\times 10^{-28}). \tag{11}\]
Then we calculate the UV luminosity function, \(\frac{dn}{dM_{\rm UV}}\), using the halo mass function, \(\frac{dn}{dM_{\rm h}}\),
\[\frac{dn}{dM_{\rm UV}}=\frac{dn}{dM_{\rm h}}\left|\frac{dM_{\rm h}}{dM_{\rm UV }}\right|, \tag{12}\]
assuming a 0.2 dex scatter in the halo mass, \(\sigma_{\log M_{\rm h}}=0.2\), following Harikane et al. (2018).
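The mapping from halo accretion to UV magnitude in Equations (10) and (11) is sketched below. The Behroozi & Silk (2015) accretion rate and the halo mass function entering Equation (12) are not reimplemented here, and the AB absolute-magnitude conversion is the standard one (an addition of this sketch, not written explicitly in the text).

```python
import numpy as np

F_B = 0.157  # cosmic baryon fraction, Omega_b / Omega_m

def sfr_from_halo(f_sf, dMh_dt):
    """Equation (10): SFR [Msun/yr] for efficiency f_SF and accretion rate dMh_dt [Msun/yr].

    dMh_dt should come from the Behroozi & Silk (2015) accretion rate, which is
    not reimplemented in this sketch.
    """
    return f_sf * F_B * dMh_dt

def muv_from_sfr(sfr):
    """Equation (11) followed by the standard AB absolute-magnitude conversion."""
    L_nu = sfr / 1.15e-28                     # erg s^-1 Hz^-1 (Salpeter 1955 IMF)
    d10pc_cm = 3.0857e19                      # 10 pc in cm
    return -2.5 * np.log10(L_nu / (4.0 * np.pi * d10pc_cm ** 2)) - 48.60

# e.g. a halo accreting 100 Msun/yr of total matter with f_SF = 5% gives M_UV ~ -18
print(muv_from_sfr(sfr_from_halo(0.05, 100.0)))
```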
We plot these models in Figure 14 with our spectroscopic constraints. At \(z\sim 9-10\), our constraints on galaxies with \(-21\lesssim M_{\rm UV}\lesssim-18\) mag are consistent with the models assuming \(f_{\rm SF}=2\%\), which is the maximum value of the star formation efficiency inferred from observations at \(z\sim 7\) (see Figure 19 in Harikane et al., 2022). At \(z\sim 12\), our spectroscopic constraints are consistent with a high star formation efficiency with \(f_{\rm SF}\gtrsim 5\%\) for galaxies with \(-20\lesssim M_{\rm UV}\lesssim-19\) mag, which is in contrast to lower redshift results. The physical origin of the high star formation efficiency is discussed in Section 6.1.
## 4 Properties of Galaxies at \(z_{\rm spec}>8.5\)
To understand the physical properties of spectroscopically-confirmed galaxies, we estimate the stellar mass and SFR of the galaxies. We conduct SED fittings using prospector(Johnson et al., 2021) for galaxies confirmed in the JADES (Curtis-Lake et al., 2022) and galaxies in the CEERS field, including the two brightest galaxies, CEERS2_588 and Maisie's Galaxy at \(z_{\rm spec}=11.04\) and \(z_{\rm spec}=11.40\), respectively. The photometric measurements are taken from Finkelstein et al. (2023) and Robertson et al. (2022). In the SED fitting, we change the optical depth in the \(V\)-band, metallicity, star formation history, and total stellar mass as free parameters while fixing the redshift to the spectroscopically-determined value. We assume a continuity prior for the star formation history, and flat priors for other parameters in the range of \(0<\tau_{\rm V}<2\), \(-2.0<\log(Z/Z_{\odot})<0.4\), and \(6<\log(M_{\star}/M_{\odot})<12\). For other parameters, we adopt the same assumptions
\begin{table}
\begin{tabular}{c c} \hline \hline \(M_{\rm UV}\) & \(\Phi\) \\ (ABmag) & Mpc\({}^{-3}\) mag\({}^{-1}\) \\ \hline \(z\sim 9\) (\(z=8.5-9.5,z_{\rm ave}=8.93\)) \\ \(-22.0\) & \(6.6^{+7.1}_{-4.7}\times 10^{-6}\) \\ \(-21.0\) & \(>5.1^{+7.0}_{-3.8}\times 10^{-6}\) \\ \(-20.0\) & \(>2.9^{+3.2}_{-2.2}\times 10^{-5}\) \\ \(-19.0\) & \(>3.5^{+3.7}_{-2.4}\times 10^{-5}\) \\ \hline \(z\sim 10\) (\(z=9.5-11.0,z_{\rm ave}=10.24\)) \\ \(-21.6\) & \(1.0^{+2.3}_{-0.9}\times 10^{-6}\) \\ \(-20.6\) & \(>8.7^{+20.5}_{-8.4}\times 10^{-6}\) \\ \(-19.6\) & \(>2.6^{+2.8}_{-1.8}\times 10^{-5}\) \\ \(-18.6\) & \(1.9^{+4.7}_{-1.9}\times 10^{-4}\) \\ \(-17.6\) & \(6.3^{+15.8}_{-6.3}\times 10^{-4}\) \\ \hline \(z\sim 12\) (\(z=11.0-13.5,z_{\rm ave}=11.98\)) \\ \(-20.1\) & \(>8.9^{+9.1}_{-5.5}\times 10^{-6}\) \\ \(-18.7\) & \(>6.6^{+6.0}_{-4.6}\times 10^{-5}\) \\ \hline \multicolumn{2}{c}{\(z\sim 16\)} \\ \(-21.9\) & \(<9.8\times 10^{-6}\) \\ \hline \end{tabular} Note. – Errors and upper limits are \(1\sigma\).
\end{table}
Table 2: Spectroscopic Constraints on the Luminosity Function at Each Redshift
Figure 13: Theoretical predictions for the number of bright galaxies at \(z\geq 11.0\) with \(M_{\rm UV}<-19.8\) mag. These numbers are based on the theoretical models of Dayal et al. (2014, 2019), Behroozi et al. (2020), Wilkins et al. (2023), Mason et al. (2023), and Yung et al. (2023). The red horizontal line with the shaded region indicates the number of the spectroscopically-confirmed galaxies at \(z_{\rm spec}\geq 11.0\) with \(M_{\rm UV}<-19.8\) mag (\(N_{\rm obs,spec}=2\)) and its uncertainty including both the Poisson error and the cosmic variance. Most of the models predict a lower number of bright (\(M_{\rm UV}<-19.8\) mag) galaxies at \(z\geq 11.0\) than the observation.
as those in the spectral fitting in Section 2.2. Table 3 summarizes the results of the SED fittings.
Figure 15 shows the stellar mass as a function of the redshift in the same manner as Harikane et al. (2023). The SED fittings suggest that the spectroscopically-confirmed galaxies are massive with stellar masses of \(10^{8}\lesssim M_{*}\lesssim 10^{9}\ M_{\odot}\). These stellar masses are well below the mass prohibited by the standard \(\Lambda\)CDM cosmology, calculated from the maximum halo mass that can be observed with the survey volume using the cosmic baryon fraction. CEERS2_588 has a large stellar mass of \(M_{*}\sim 10^{9}\ M_{\odot}\) at \(z_{\rm spec}=11.04\), which is higher than the expected value with the maximum stellar-to-halo mass ratio (\(M_{*}/M_{\rm h}\)) in Behroozi et al. (2020). The galaxies identified in the JADES also have large stellar masses of \(M_{*}\sim 10^{8}-10^{9}\ M_{\odot}\) at \(z_{\rm spec}=11.58-13.20\), higher than the expected values with the maximum \(M_{*}/M_{\rm h}\) ratio. These stellar mass estimates provide rough lower limits that miss the contribution from old stellar populations beyond the Balmer break, given high specific SFRs of these galaxy candidates, SFR/\(M_{*}\sim 10^{-8}\ \rm yr^{-1}\). These results indicate that the galaxies at \(z_{\rm spec}\gtrsim 11\) are brighter than the expectations of the model of Behroozi et al. (2020). Physical interpretations of the high stellar masses are discussed in Section 6.1.
## 5 Cosmic SFR Density
Using our spectroscopic galaxy sample, we calculate the lower limits of the cosmic SFR densities at \(z\sim 9\), 10, and 12, using the effective volume estimate in Sec
Figure 14: Comparison of the luminosity functions with models assuming various star formation efficiencies (the black curves), \(f_{\rm SF}=2\%\), \(5\%\), \(15\%\), \(40\%\), and \(100\%\), which are defined as the ratio of the SFR to the baryon accretion rate (see also Inayoshi et al., 2022). The red symbols show observational results based on the spectroscopically-confirmed galaxies. The number densities at \(z\sim 12\) suggest a high star formation efficiency of \(f_{\rm SF}\gtrsim 5\%\).
tion 3.1. We convert the observed UV luminosity to the SFR using Equation (11) assuming the Salpeter (1955) IMF. In this calculation, we do not use CEERS_1019 and GN-z11, because possible AGN activity is suggested in these galaxies (Larson et al., 2023; Bunker et al., 2023). We also exclude Gz9p3 and EGS_z910_44164, galaxies possibly in the overdense regions. Since we only use spectroscopically-confirmed galaxies and do not correct for dust extinction, these constraints are firm lower limits.
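Schematically, these lower limits follow from summing the uncorrected UV luminosities of the confirmed galaxies over their effective volumes and applying Equation (11). The sketch below collapses the per-galaxy volume estimates of Section 3.1 into a single assumed volume and uses illustrative numbers only.

```python
import numpy as np

SFR_PER_LUV = 1.15e-28  # Msun/yr per (erg/s/Hz), Salpeter IMF, Equation (11)

def luv_from_muv(M_uv):
    """UV luminosity L_nu [erg/s/Hz] from the absolute AB magnitude."""
    d10pc_cm = 3.0857e19
    return 10.0 ** (-0.4 * (M_uv + 48.60)) * 4.0 * np.pi * d10pc_cm ** 2

def sfr_density_lower_limit(M_uv_list, V_eff_mpc3):
    """Lower limit on rho_SFR,UV [Msun/yr/Mpc^3], without dust correction."""
    rho_uv = np.sum([luv_from_muv(m) for m in M_uv_list]) / V_eff_mpc3
    return SFR_PER_LUV * rho_uv

# Illustrative only: two confirmed z~12 galaxies in an assumed 1e5 Mpc^3 volume
print(np.log10(sfr_density_lower_limit([-20.1, -18.7], 1.0e5)))
```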
Figure 16 shows our spectroscopic lower limits based on galaxies brighter than \(M_{\rm UV}=-18.0\) mag, corresponding to the SFR of SFR\({}_{\rm UV}=0.8\)\(M_{\odot}\) yr\({}^{-1}\). We also plot estimates based on the photometric samples in the literature (Harikane et al., 2023; Donnan et al., 2023; Perez-Gonzalez et al., 2023; Bouwens et al., 2020, 2022a, 2022b; Finkelstein et al., 2015; Ellis et al., 2013; Coe et al., 2013). Since some of these studies calculate the SFR densities with different integration limits from \(M_{\rm UV}=-18.0\) mag, we have corrected their results based on the difference between the SFR density integrated down to their limit and that down to \(M_{\rm UV}=-18.0\) mag using their fiducial luminosity function, in the same manner as Bouwens et al. (2022b). Our lower limits are consistent with these photometric estimates at \(z\sim 9-12\), especially those in Ellis et al. (2013) and Coe et al. (2013), which are based on the photometric candidates at \(z\sim 11-12\) that are confirmed with JWST, GS-z11-0 and MACS0647-JD. Our
\begin{table}
\begin{tabular}{c c c} \hline \hline \multicolumn{1}{c}{ Redshift} & \multicolumn{1}{c}{log\(\rho_{\rm UV}\)} & \multicolumn{1}{c}{log\(\rho_{\rm SFR,UV}\)} \\ & (erg s\({}^{-1}\) Hz\({}^{-1}\) Mpc\({}^{-3}\)) & (M\({}_{\odot}\) yr\({}^{-1}\) Mpc\({}^{-3}\)) \\ \hline \(z_{\rm ave}=8.89\) & \(>24.81^{+0.28}_{-0.42}\) & \(>-3.13^{+0.28}_{-0.42}\) \\ \(z_{\rm ave}=10.04\) & \(>24.56^{+0.38}_{-0.39}\) & \(>-3.38^{+0.38}_{-0.39}\) \\ \(z_{\rm ave}=11.97\) & \(>24.35^{+0.23}_{-0.31}\) & \(>-3.59^{+0.23}_{-0.31}\) \\ \hline \end{tabular} Note. – Errors are \(1\sigma\). \(\rho_{\rm SFR,UV}\) is the SFR density based on the Salpeter (1955) IMF without dust extinction correction. Galaxies with possible AGN signatures (GN-z11 and CEERS_1019) and possibly in the overdensities (Gz9p3 and EGS_z910_44164) are excluded.
\end{table}
Table 4: Spectroscopic Constraints on Cosmic UV Luminosity Density and SFR Density
Figure 15: Stellar masses of spectroscopically-confirmed galaxies. The red-filled diamonds are stellar masses of bright galaxies at \(z_{\rm spec}>8.5\) identified in the CEERS field, including the brightest galaxy at \(z_{\rm spec}>11.0\), CEERS2_588 at \(z_{\rm spec}=11.04\). The orange-filled squares are stellar masses of the spectroscopically-confirmed galaxies at \(z_{\rm spec}>10.0\) in the JADES, GS-z13-0, GS-z12-0, GS-z11-0, and GS-z10-0. The blue open star denotes the stellar mass of GN-z11 taken from Tacchella et al. (2023). The red and orange shaded regions indicate the stellar masses that are prohibited by the standard \(\Lambda\)CDM cosmology for the CEERS and JADES galaxies, respectively, calculated from the maximum halo mass that can be observed with the survey volume in Harikane et al. (2023) including the CEERS field, and with the volume of the JADES NIRSpec observation estimated in this study, using the cosmic baryon fraction, \(\Omega_{\rm b}/\Omega_{\rm m}=0.157\). The red and orange solid curves are the stellar masses calculated from the maximum \(M_{*}/M_{\rm h}\) value in Behroozi et al. (2020, B20) with the maximum halo mass for the survey volumes of Harikane et al. (2023, including CEERS) and the JADES, respectively. While there are no spectroscopically-confirmed galaxies with very large stellar masses violating the \(\Lambda\)CDM model, some galaxies at \(z_{\rm spec}=11-13\) have higher stellar masses than the model predictions.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{1}{c}{ ID} & \(z_{\rm spec}\) & \(SFR\) & \(M_{*}\) \\ & & (M\({}_{\odot}\) yr\({}^{-1}\)) & (M\({}_{\odot}\)) \\ \hline GS-z13-0 & 13.20 & \(0.5^{+0.7}_{-0.1}\) & \((5.1^{+21.2}_{-1.2})\times 10^{7}\) \\ GS-z12-0 & 12.63 & \(0.5^{+1.3}_{-0.1}\) & \((4.3^{+1.8}_{-0.6})\times 10^{8}\) \\ GS-z11-0 & 11.58 & \(1.8^{+0.4}_{-0.5}\) & \((1.2^{+0.1}_{-0.3})\times 10^{9}\) \\ Maisie’s Galaxy & 11.40 & \(0.8^{+0.1}_{-0.1}\) & \((6.1^{+23.5}_{-1.4})\times 10^{7}\) \\ CEERS2\_588 & 11.04 & \(12.7^{+0.7}_{-4.9}\) & \((9.9^{+23.9}_{-4.0})\times 10^{8}\) \\ GS-z10-0 & 10.38 & \(1.6^{+0.6}_{-0.2}\) & \((2.0^{+0.6}_{-0.4})\times 10^{8}\) \\ CEERS2\_7929 & 10.10 & \(5.9^{+1.6}_{-2.6}\) & \((4.6^{+5.5}_{-1.7})\times 10^{8}\) \\ CEERS-24 & 8.998 & \(1.9^{+1.9}_{-0.4}\) & \((9.8^{+51.2}_{-0.1})\times 10^{7}\) \\ CEERS-23 & 8.881 & \(1.1^{+2.9}_{-0.3}\) & \((7.6^{+5.0}_{-1.5})\times 10^{7}\) \\ CEERS1\_6059 & 8.876 & \(8.2^{+2.4}_{-2.1}\) & \((9.6^{+1.7}_{-1.1})\times 10^{8}\) \\ CEERS1\_3858 & 8.807 & \(3.6^{+1.9}_{-0.4}\) & \((2.3^{+2.7}_{-0.3})\times 10^{8}\) \\ \hline \end{tabular} Note. – Errors are \(1\sigma\). Assuming the Chabrier (2003) IMF. The SFR is averaged over the past 50 Myr. See Section 4 for the details of the SED fitting.
\end{table}
Table 3: SFRs and Stellar Masses of Spectroscopically-Confirmed Galaxies
constraint at \(z\sim 12\) is \(\sim 5\) times higher than the model predictions assuming the constant star formation efficiency in Harikane et al. (2018, 2022b), Mason et al. (2015, 2023), and Sun & Furlanetto (2016) at \(\sim 2-3\sigma\) (see also Bouwens et al. 2022a), supporting earlier suggestions of the slow redshift evolution from \(z>10\) based on the photometric samples. This indicates a higher star formation efficiency in galaxies at \(z>12\) or other physical properties different from galaxies at \(z<10\), which will be discussed in Section 6.1.
## 6 Discussion
### High Cosmic SFR Density at \(z>10\)
As presented in Section 5, this study using spectroscopically-confirmed galaxies suggests that the cosmic SFR density at \(z\sim 12\) is \(\sim 5\) times higher
Figure 16: Cosmic SFR density evolution. The red diamonds represent the spectroscopic constraints on the cosmic SFR densities obtained in this study integrated down to \(M_{\rm UV}=-18.0\) mag (corresponding to SFR\({}_{\rm UV}=0.8\)\(M_{\odot}\) yr\({}^{-1}\), based on the Salpeter (1955) IMF with a conversion factor of SFR/\(L_{\rm UV}=1.15\times 10^{-28}\)\(M_{\odot}\) yr\({}^{-1}\)/(erg s\({}^{-1}\) Hz\({}^{-1}\))). These measurements are firm lower limits because 1) only spectroscopically-confirmed galaxies without AGN signatures are used, 2) galaxies possibly in the overdensities are excluded, and 3) the measurements are not corrected for dust extinction. The error includes both the 1\(\sigma\) Poisson error and the cosmic variance. The blue curves are predictions of the constant star formation (SF) efficiency models of Harikane et al. (2018, 2022b, solid), Mason et al. (2015, 2023, dashed), and Sun & Furlanetto (2016, dotted). The obtained lower limit of the SFR density at \(z\sim 12\) is higher than the model predictions. Note that the predictions of Harikane et al. (2018, 2022b) and Mason et al. (2015, 2023) are integrated down to \(M_{\rm UV}=-18.0\) mag, while that of Sun & Furlanetto (2016) is down to \(M_{\rm UV}=-17.7\) mag. The gray open symbols are estimates of previous studies using photometric samples: Harikane et al. (2023a, diamond), Donnan et al. (2023b, circle), Pérez-González et al. (2023, hexagon), Bouwens et al. (2020, left-pointing triangle), Bouwens et al. (2022b, cross), Bouwens et al. (2022a, square), Finkelstein et al. (2015, pentagon), Coe et al. (2013, plus), and Ellis et al. (2013, star). Our spectroscopic constraints are consistent with these photometric estimates, especially those in Ellis et al. (2013) and Coe et al. (2013), which are based on the photometric candidates at \(z\sim 11-12\) that are confirmed with JWST, GS-z11-0 and MACS0647-JD.
than the predictions of the constant star formation efficiency models, although the models can reproduce the observed SFR densities at \(z<10\). Similarly, the stellar masses of some spectroscopically-confirmed galaxies are also higher than model predictions (Section 4). Here we discuss the following five possibilities that explain the observed high SFR densities at \(z>10\).
1. _High star formation efficiency._ Since the constant star formation efficiency models predict lower SFR densities than that we have obtained at \(z\sim 12\), one of the natural interpretations is a high star formation efficiency in galaxies at \(z>10\). The comparisons of the UV luminosity functions also suggest a high star formation efficiency of \(f_{\rm SF}\gtrsim 5\%\) as shown in Figure 14. Efficient star formation in the early universe can be achieved with several physical mechanisms, such as no suppression of the UV background feedback (e.g., Barkana & Loeb 2000; Susa & Umemura 2004), compact star formation (e.g., Fukushima & Yajima 2021), and the feedback-free starbursts (Dekel et al. 2023). Regarding the UV background feedback, galaxies and AGN produce UV radiation by their star formation and nuclear activity and make strong UV background radiation at the epoch of reionization (EoR; \(z\sim 6-10\)) and the epoch of post-reionization (post-EoR; \(z\lesssim 6\), Ouchi et al. 2020; Robertson 2021). The UV background radiation heats up Hi gas in low-mass halos of \(M_{\rm h}\lesssim 10^{8}-10^{9}M_{\odot}\) with negligible Hi self-shielding, suppressing star-formation at the EoR and post-EoR (Barkana & Loeb 2000; Susa & Umemura 2004; Hoeft et al. 2006; Pawlik & Schaye 2009; Mesinger et al. 2009; Sawala et al. 2010; Bland-Hawthorn et al. 2015). Before the EoR (\(z\gtrsim 10\)), galaxies in the low-mass halos are not expected to be affected by the UV background feedback, resulting in a high star formation efficiency at \(z\gtrsim 10\) compared to one at \(z\lesssim 10\), as discussed in Harikane et al. (2023). Also, high redshift galaxies are expected to be compact and dense. Several simulations predict that such galaxies form stars efficiently, with star formation efficiencies sometimes higher than 10% (e.g., Fukushima & Yajima 2021). Dekel et al. (2023) also discuss that high densities and low metallicities in galaxies at \(z\gtrsim 10\) result in a high star formation efficiency with feedback-free starbursts.
2. _Presence of AGN activity._ Another possibility is that a part of the observed UV luminosity densities at \(z>10\) is produced by AGN, and there are no excessive SFR densities at \(z>10\) beyond the constant star-formation efficiency model. Although the quasar luminosity function shows a very rapid decline at \(z>4\) compared to that of galaxies (e.g., Harikane et al. 2022), recent spectroscopic studies report faint AGNs in galaxy samples at \(z>4\)(Kocevski et al. 2023; Ubler et al. 2023; Larson et al. 2023; Harikane et al. 2023). Indeed, two bright galaxies in our spectroscopic sample, CEERS_1019 and GN-z11, could have AGNs (Larson et al. 2023; Bunker et al. 2023), although they are removed from the sample used for the SFR density calculations. Harikane et al. (2023) discuss that the contribution of the AGN to the total UV light is \(\sim 50\%\) on average in these faint AGNs. Given an increasing AGN fraction from \(z\sim 0\) to \(z\sim 4\)(Harikane et al. 2023), the hidden AGN contribution may ease the tension in the observed vs. predicted SFR densities at \(z>10\).
3. _A top-heavy IMF._ In the early universe, the IMF is theoretically expected to be more top-heavy than that in the lower redshift universe. Low metallicity in the gas makes the Jeans mass higher, resulting in the formation of many massive stars, especially for Pop III stellar populations (e.g., Hirano et al. 2014, 2015). Even if the metallicity is moderate due to a possible short timescale of the metal enrichment, a high CMB temperature makes a top-heavy IMF whose slope is flatter than that of the Salpeter (1955) IMF (e.g., Omukai et al. 2005; Chon et al. 2022; Steinhardt et al. 2022). As discussed in Harikane et al. (2023), such a top-heavy IMF, especially with Pop-III, reduces the UV-to-SFR conversion factor by a factor of \(3-4\) (Figure 20 in Harikane et al. 2023), which will explain the observed high UV luminosity densities at \(z\sim 12\) compared to the constant efficiency models.
4. _A large scatter in the \(M_{\rm h}-SFR\) relation at \(z>10\)._ Some of the constant star formation efficiency models compared in Section 5 assume the redshift-independent scatter in the \(M_{\rm h}-SFR\) relation. Harikane et al. (2018, 2022) assume the 0.2 dex scatter in the halo mass, corresponding to the 0.4 dex scatter in the UV luminosity, while Sun & Furlanetto (2016) assume the 0.2 dex scatter in the UV luminosity. Mason et al. (2023) discuss that the majority of the JWST-observed galaxies at \(z\gtrsim 10\) lie above the median \(M_{\rm h}-M_{\rm UV}\) relation due to a large scatter in the relation. We recalculate the SFR densities using the constant efficiency model in Harikane et al. (2018, 2022) using a
larger scatter, and find that a 0.4 dex scatter in the halo mass (0.8 dex scatter in the UV luminosity or SFR) increases an SFR density by a factor of 3 at \(z\sim 12\), which makes the model prediction consistent with the observed lower limit. Thus a large scatter in the \(M_{\rm h}-{\rm SFR}\) relation at \(z>10\), probably due to bursty star formation histories in the early galaxies, may explain the observed high SFR density at \(z\sim 12\).
5. _Cosmic Variance._ Since the survey volume of the JWST dataset used in this study is still not large, the derived cosmic SFR densities suffer from the cosmic variance. We have evaluated the effects of the cosmic variance using the large-scale bias estimated with the abundance matching and included them in the errors of the number densities and the cosmic SFR densities. Thus, our results indicate that the SFR density at \(z\sim 12\) is higher than the model predictions beyond the uncertainty of the cosmic variance, although a large spectroscopic survey is needed to conclude this.
### Strategies for removing low-redshift interlopers in the future JWST surveys
As shown in Section 2.1, the previously-claimed \(z\sim 16\) galaxy candidate, 93316, first identified in Donnan et al. (2023b), is found to be a galaxy at \(z=4.912\) with strong [Oiii]\(\lambda\lambda\)4959,5007 and H\(\alpha\) emission lines. Many studies independently obtained the best-fit photometric redshift to be \(z_{\rm phot}\sim 16\) for 93316 based on the red color of \(\rm F200W-F277W>1.0\) and the flat continuum at \(\lambda_{\rm obs}>3\ \mu\)m (Donnan et al., 2023b; Harikane et al., 2023a; Zavala et al., 2023; Naidu et al., 2022a; Finkelstein et al., 2023; Bouwens et al., 2022a). However, as shown in Figure 17, the red color and flat continuum of 93316 are actually mimicked by a red continuum and the strong [Oiii]\(\lambda\lambda\)4959,5007 and H\(\alpha\) emission lines in the F277W, F356W, F410M, and F444W bands, as discussed in Zavala et al. (2023) and Naidu et al. (2022a). The F277W band flux is significantly contributed by the [Oiii]\(\lambda\lambda\)4959,5007 lines, while the fluxes in the F356W, F410M, and F444W bands are boosted by the H\(\alpha\) line, which happens in a very narrow redshift window of \(\Delta z\lesssim 0.1\)(Naidu et al., 2022a). Such a strong-line emitter with the tuned redshift is expected to be rare, but this result may indicate that a bright \(z\sim 16\) galaxy is comparably rare (see the bottom-right panel in Figure 11).
Imaging observations with multiple medium-band filters can remove such low-redshift interlopers with strong emission lines. Figure 18 shows SEDs of the F150W, F200W, and F277W-dropouts. For example, the F150W-dropout has a high redshift solution at \(z\sim 12\), where the observed red color of \(\rm F150W-F200W\) is due to the Lyman break, and a low redshift solution, where the red color and flat continuum are made by strong [Oiii]\(\lambda\lambda\)4959,5007 and H\(\alpha\) emission lines, which enter in the F200W and F277W bands, respectively. Such low redshift interlopers can be removed with the medium-band filter F182M or F250M images, free from strong emission lines. Similarly, the F250M and F335M band observations are useful to distinguish \(z\sim 16\) and \(z\sim 5\) solutions for the F200W-dropout that shows a flat continuum with the F277W, F356W, F410M, and F444W bands, like 93316 (Figure 17). For the F277W-dropout that has a possibility of a \(z\sim 22\) galaxy, the F335M and F410M images (or possibly F250M) can efficiently remove a low redshift interloper at \(z\sim 6\). Instead of the multiple medium-band images, short NIRCam Wide-Field Slitless Spectroscopic observations may also be useful to eliminate low redshift interlopers by detecting strong emission lines mimicking the Lyman break at high redshifts. Future JWST surveys aiming to identify high redshift galaxies, especially bright galaxies at \(z>10\) whose number density is low, should have good strategies such as using medium-band filters or slitless spectroscopy, to remove low redshift interlopers including low redshift galaxies with strong emission lines like 93316.
## 7 Summary
Figure 17: SED of 93316 (CR2-z16-1), a \(z\sim 16\) galaxy candidate that is found to be \(z_{\rm spec}=4.912\). The blue circles are magnitudes calculated from the NIRSpec spectrum multiplied by a factor of 1.4 (the thin blue line) to correct for the slit loss. The red open diamonds are magnitudes from the photometry in Harikane et al. (2023a), which are consistent with those from the spectrum. The upper limits are 3\(\sigma\). This agreement indicates that the strong [Oiii]\(\lambda\lambda\)4959,5007 and H\(\alpha\) lines boost the fluxes in the F277W, F356W, F410M, and F444W bands, and mimic the Lyman break at \(z\sim 16\), as discussed in Zavala et al. (2023) and Naidu et al. (2022a).
In this paper, we present the spectroscopic constraints on the UV luminosity functions and cosmic SFR densities at \(z\sim 9-12\). We have independently confirmed spectroscopic redshifts of 16 galaxies at \(z_{\rm spec}>8.5\) including new redshift determinations, and a bright interloper at \(z_{\rm spec}=4.91\) that was claimed as a photometric candidate at \(z\sim 16\). Based on the 25 galaxies at \(z_{\rm spec}=8.61-13.20\) (Figure 1, Table 1), we calculate the UV luminosity functions and the cosmic SFR densities, which are the firm lower limits based only on the spectroscopically-confirmed galaxies. Our major findings are summarized below:
1. With the conservative treatments of the effective volumes and completeness, we have obtained the constraints on the UV luminosity functions at \(z\sim 9-12\) (Figure 11). Our spectroscopic constraints are consistent with the previous estimates based on the photometric data. The observed luminosity functions agree with the theoretical model predictions at \(z\sim 9-10\) but are higher than some models at \(z\sim 12\), implying a mild redshift evolution (Figure 12). The lower limits of the number densities at \(z\sim 12\) suggest that the star formation efficiency of galaxies at \(z\sim 12\) is high, \(f_{\rm SF}\gtrsim 5\%\) (Figure 14).
2. We have estimated the stellar masses of the spectroscopically-confirmed galaxies (Figure 15). The estimated stellar masses are \(M_{*}\sim 10^{8}-10^{9}\ M_{\odot}\), and some of the galaxies have higher stellar masses than the model predictions. However, these high stellar masses do not violate the standard \(\Lambda\)CDM cosmology.
3. We have derived the lower limits of the cosmic SFR densities at \(z\sim 9-12\) considering only the spectroscopically-confirmed galaxies without AGN signatures (Figure 16). The obtained lower limit is higher than the model predictions assuming the constant star formation efficiencies, supporting earlier suggestions of the slow redshift evolution at \(z>10\) based on the photometric data. We discuss that the physical origin of the high SFR density at \(z>10\) is a high star formation efficiency, AGN activity, a top-heavy IMF, a large scatter in the \(M_{\rm h}-{\rm SFR}\) relation, and/or the cosmic variance, although the face value of the SFR density is larger than the models beyond the uncertainty from the cosmic variance.
4. The previous \(z\sim 16\) galaxy candidate, 93316, is found to be a galaxy at \(z_{\rm spec}=4.912\). The strong [Oiii]\(\lambda\lambda 4959\),5007 and H\(\alpha\) emission lines
Figure 18: Importance of the medium-band filter observations in the survey of \(z\gtrsim 12\) galaxies. The top, middle, and bottom panels show SEDs of the F150W-, F200W-, and F277W-dropouts, respectively. Each dropout has possibilities of a high redshift solution (\(z>10\)), in which the observed break is made by the high redshift Lyman break, and a low redshift solution (\(z<8\)), where the break is mimicked by strong [Oiii]\(\lambda\lambda 4959\),5007 and H\(\alpha\) emission lines. The red diamonds and blue circles are magnitudes for the high redshift and low redshift solutions, respectively, calculated from the spectra for the two cases (the red and blue curve), which are made from the \(z=16.3\) model SED of 93316 and its observed spectrum (Figure 17). We cannot distinguish the two solutions only with the broad-band magnitudes (the filled symbols) but can eliminate the low redshift solution with the medium-band observations with F182M, F250M, F335M, and/or F410M (the open symbols).
in 93316 boost the fluxes in the F277W band and the F356W, F410M, and F444W bands, respectively, mimicking the Lyman break at \(z\sim 16\) (Figure 17). Such a strong line emitter can be significant contamination in a galaxy selection at \(z\sim 16\). We discuss that short NIRCam spectroscopic observations or multiple medium-band imaging observations would be useful to remove such low-redshift interlopers with strong lines in the future JWST survey to search for galaxies at \(z\gtrsim 12\) (Figure 18).
This study demonstrates that the JWST/NIRSpec spectroscopic data are useful to obtain robust constraints on the UV luminosity function and the cosmic SFR densities at \(z\gtrsim 9\). Future large spectroscopic surveys will allow us to obtain more precise measurements and discover the first-generation galaxies at \(z>15\), crucial to understand the first galaxy formation.
## Acknowledgments
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with programs ERS-1324 (GLASS), ERS-1345 (CEERS), GO-1433, ERO-2736, and DDT-2750. The authors acknowledge the ERO, GLASS, CEERS, GO-1433, and DDT-2750 teams led by Klaus M. Pontoppidan, Tommaso Treu, Steven L. Finkelstein, Dan Coe, and Pablo Arrabal Haro, respectively, for developing their observing programs with a zero-exclusive-access period. This publication is based upon work supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, KAKENHI (20H00180, 21J20785, 21K13953, 21H04467) through Japan Society for the Promotion of Science, and JSPS Core-to-Core Program (grant number: JPJSCCA20210003). This work was supported by the joint research program of the Institute for Cosmic Ray Research (ICRR), University of Tokyo.
|
2302.13865
|
AI-Driven Container Security Approaches for 5G and Beyond: A Survey
|
The rising use of microservices based software deployment on the cloud
leverages containerized software extensively. The security of applications
running inside containers as well as the container environment itself are
critical infrastructure in the cloud setting and 5G. To address the security
concerns, research efforts have been focused on container security with
subfields such as intrusion detection, malware detection and container
placement strategies. These security efforts are roughly divided into two
categories: rule based approaches and machine learning that can respond to
novel threats. In this study, we have surveyed the container security
literature focusing on approaches that leverage machine learning to address
security challenges.
|
Ilter Taha Aktolga, Elif Sena Kuru, Yigit Sever, Pelin Angin
|
2023-02-27T15:05:53Z
|
http://arxiv.org/abs/2302.13865v2
|
# AI-Driven Container Security Approaches for 5G and Beyond: A Survey
###### Abstract
The rising use of microservices based software deployment on the cloud leverages containerized software extensively. The security of applications running inside containers as well as the container environment itself are critical infrastructure in the cloud setting and 5G. To address the security concerns, research efforts have been focused on container security with subfields such as intrusion detection, malware detection and container placement strategies. These security efforts are roughly divided into two categories: rule based approaches and machine learning that can respond to novel threats. In this study, we have surveyed the container security literature focusing on approaches that leverage machine learning to address security challenges.
- container, machine learning, survey, intrusion detection, anomaly detection
## 1 Introduction
Containers are lightweight and portable abstractions that contain the binary of an application as well as the necessary and sufficient minimal dependencies to run them. Using containers to deploy software on the cloud has replaced the bare metal installations as the industry standard [1] due to microservices based architecture's demand for scalable and lightweight computation environments. Companies such as Amazon, Netflix, Spotify and Twitter are using microservices architecture in their products [2]. Compared to the other contemporary isolation mechanism that is virtual machines, containers are faster to initialize and more lightweight since they do not need an extra virtualization layer to operate [3].
The widespread adoption of 5G networks has led to an increase in the use of container technology to support the deployment and management of applications. Containers are a straightforward answer for running the services required by 5G: they are portable and lean in terms of size requirements and lightweight in terms of preparation and startup times. The use of containers in 5G networks can provide a number of benefits for Virtual Network Functions (VNFs), including improved scalability, flexibility, and efficiency. Furthermore, the portability and flexibility of containers enable them to be deployed on-demand, making it easier to manage the lifecycle of VNFs and to adapt to changing network conditions. Additionally, the use of container orchestration platforms such as Kubernetes allows for automated scaling and management of containerized VNFs, further improving the agility and scalability of 5G networks. This makes it easy to deploy and run VNFs on any infrastructure, and to scale them up or down as needed. Furthering the security and reliability of containers will, in turn, allow their rapid adoption in 5G VNFs [4].
With the increasing adoption of container technology, there is a growing concern about the security of containerized applications and networks. Containers are found to be less secure than virtual machines which is a detriment to their adoption [5]. The use of containers can introduce new vulnerabilities and risks that need to be addressed to ensure the security and integrity of 5G networks. In this study, we conducted a survey of container security literature and focused our attention specifically on machine learning approaches on container security.
Machine learning (ML) techniques for container security are important and developing. They have been investigated in various fields of research and implementation. For example, Nassif et al. [6] conducted a systematic review that analyzes machine learning models for anomaly detection. They reviewed ML models from four perspectives: the application of anomaly detection, the type of ML technique, the ML model accuracy estimation, and the type of anomaly detection, whether they are supervised, semi-supervised or unsupervised. A review conducted by Mohan et al. [7] focused on the applications of various ML and deep learning methods in the implementation of defensive deception. They summarized the classification of several deception categories, new machine learning and deep learning techniques in defensive deception, including the models, common datasets, key contributions, and limitations. Moreover, Zhong et
al. [8] introduced a taxonomy of the most common machine learning algorithms used in the field of container orchestration. The authors presented ML-based container orchestration approaches, classified the orchestration methods, and demonstrated the evaluation of ML-based approaches used in recent years. Also, the authors discussed machine learning approaches for anomaly detection. Another survey conducted by Wong et al. [9] is a systematic review of containers, covering vulnerabilities, threats, and existing mitigation strategies, to provide information on the landscape of containers. The authors also discussed some machine learning methods, and the papers that used ML techniques to improve container security. The current survey differs from those described above in several respects: artificial intelligence solutions are covered, including artificial neural networks, machine learning, and deep learning; supervised, semi-supervised, and unsupervised detection models and the datasets they use are included; and the focus is specifically on container security solutions, namely intrusion detection, malware detection, attack detection, anomaly detection, and inter-container security.
## 2 Preliminaries
We will base our discussion with the most prevalent container architecture for academia as well as commercial space; Linux containers. Containers leverage two important Linux kernel features: control groups (cgroups) and namespaces. A namespace is a layer of abstraction that covers the processes inside that namespace. Wrapped processes get a private and isolated view of system resources. The processes inside the namespace are also isolated from the changes that happen to global resources, allowing developers to prepare environments for binaries to run with defaults that the binaries expect and not disturb execution flow. There are different types of namespaces that correspond to different constrained views into system resources. There are a total of 8 different namespace types which constrain the view of either the cgroup root directory, message queues, network devices, mount points, process ID space, clocks, user and group IDs and hostnames.
As namespaces wrap processes with an isolated view of system resources, the level of allocation of said resources is controlled through cgroups. cgroups are another Linux kernel feature which limits and monitors the resource usage of processes. When a process is put into a group hierarchy, its access to system memory, CPU time, network communication priority, network bandwidth, etc. is controlled. Control groups are used to allocate system resources fairly between different containers in the same host system.
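To make these two features concrete, the short sketch below (illustrative only; it assumes root privileges on a cgroup-v2 host and relies on the util-linux `unshare` tool) launches a process inside a fresh PID namespace and caps a cgroup's memory by writing to its `memory.max` interface file.

```python
import os
import subprocess

# Illustrative only: run a command in a new PID namespace with a private
# /proc mount, so the command sees itself as PID 1 (requires root).
subprocess.run(
    ["unshare", "--fork", "--pid", "--mount-proc",
     "sh", "-c", "echo PID inside the new namespace: $$"],
    check=True,
)

# cgroup v2: create a child group and limit its memory to 256 MiB.
# /sys/fs/cgroup is the usual cgroup2 mount point on modern distributions.
group = "/sys/fs/cgroup/demo"
os.makedirs(group, exist_ok=True)
with open(os.path.join(group, "memory.max"), "w") as f:
    f.write(str(256 * 1024 * 1024))
# A process is placed under the limit by writing its PID to cgroup.procs.
```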
All in all, containers are Linux processes that are constrained and isolated through the aforementioned kernel features. Through constraining and isolating the process or a bundle of related processes with files relevant to their operation, we get lightweight containers that can be packed with their dependencies. Since setting up containers with cgroups and namespaces can get cumbersome, there are container management frameworks and container runtimes to assume these tasks. Well known examples of these container technologies are Docker, Podman, Linux Containers (LXC), RKT and CRI-O. A full-fledged cluster needs additional management and tooling as well. These include service discovery within the cluster, container orchestration and networking among others [10]. Docker is favored as the main container technology [11], thanks in no small part to Docker Hub, a public container library, which plays a major role in its popularity.
Although the Linux kernel provides an easy-to-use isolation framework as discussed, there is a drawback. Since every container shares the single kernel running in the host and there is no need for a separate hypervisor layer as in virtual machines, the isolation guarantees for containers are brittle. Vulnerabilities and mismanaged containers can cause this isolation to be broken. The result has been aptly titled "container escapes" [12].
Virtual machines are slower to start up and get running compared to containers. One canonical question arises at this point of the discussion: why do we use virtual machines if containers are more lightweight? The security of containerized applications has been challenging researchers, whereas virtual machines offer stronger isolation guarantees. Furthermore, the kernel features that enable containers as we know them today matured much later than the framework supporting virtual machines.
### Intrusion Detection Systems (IDS)
An intrusion in the context of computer security is attempted or successful access to confidential data or resources by unauthorized parties. Network engineers use intrusion detection systems (IDS) which monitor a system and its resources to detect and report intrusions [13]. Monitoring system resources involves either placing sensors on host systems that analyze machine behavior or placing sensors on the network to monitor traffic. IDSs are categorized according to these sen
Figure 1: An overview of Virtual Machines and Containers
sors: host-based IDS (HIDS) and network-based IDS (NIDS), respectively. Machine behavior that HIDS leverage can involve CPU, RAM and disk usage, while the network traffic that NIDS monitor involves individual packets that flow through the network and analytics derived from them [14].
Another categorization we can apply to IDSs is whether they detect anomalous behavior by comparing against a set of predefined malicious behavior signatures or learning benign and malicious behavior to detect new behavior online. The former is named signature-based IDS and the latter is named anomaly-based IDS. Since developing signature-based IDS involves collecting and collating a large dataset which is not readily transferable from system to system [15], academia often focuses on anomaly-based IDS research.
### System Calls
System calls are an interface between the hardware and the user space processes. Processes interact with the kernel and request privileged actions, such as interacting with hardware resources or performing network operations. These actions are restricted to certain processes and the kernel implements security policies to determine which processes can make certain system calls. Since system calls are always present whenever a process performs a worthwhile action, they offer a valuable source of information. Hence, system call monitoring is a common technique for detecting suspicious behavior in compromised applications because malicious code has to use system calls to perform malicious operations. Tools like strace and ftrace are used to show the sequence of system calls made by a particular command or process [16]. Monitoring system calls can help to identify and mitigate problems caused by compromised applications.
Bag of system calls (BoSC) [17] is a method for using system call data in machine learning applications. The method involves creating a frequency list \(S=\{s_{1},s_{2},\ldots,s_{n}\}\), where \(s_{i}\) is the number of times the \(i\)-th system call is observed during that time window [18]. The BoSC representation has seen frequent use in the container intrusion detection literature [19, 20], often paired with the Sysdig 1 tool [21, 22] to directly stream system calls from running containers with low overhead.
Footnote 1: [https://sysdig.com/](https://sysdig.com/)
Frequency lists are not the sole method for using system call traces in machine learning applications. For instance, Srinivasan et al. [16] used sequences of system calls with preserved order to create \(n\)-grams with a Maximum Likelihood Estimator for anomaly detection in containers. Karn et al. [23] also used an n-gram representation when detecting malicious processes inside containers. Iacovazzi and Raza [24], on the other hand, represented a sequence of system calls as a graph to preserve dependencies between system calls. In a similar vein, Chen et al. [25] represented remote procedure calls with a graph to monitor microservice behaviour.
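To make these representations concrete, the following sketch (toy trace and window size, not taken from any of the cited works) turns a raw system call trace into both per-window BoSC frequency vectors and order-preserving n-grams:

```python
from collections import Counter

# A toy system call trace, e.g. as streamed by Sysdig or strace.
trace = ["openat", "read", "read", "mmap", "write", "close", "read", "openat"]

# Fixed vocabulary of system calls that defines the vector positions.
vocab = sorted(set(trace))

def bosc_vector(window):
    """Bag of System Calls: frequency of each known call in the window."""
    counts = Counter(window)
    return [counts.get(call, 0) for call in vocab]

def ngrams(window, n=3):
    """Order-preserving n-grams over the same window."""
    return [tuple(window[i:i + n]) for i in range(len(window) - n + 1)]

window_size = 5
for start in range(len(trace) - window_size + 1):
    window = trace[start:start + window_size]
    print(bosc_vector(window), ngrams(window))
```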
## 3 Cyberattacks on Containers
As previously mentioned, security concerns regarding containers are the major drawback against their adoption. These security concerns have been categorized to lead the research community to study them on a common framework. Sultan et al. [5] investigated the threat model for containers across the literature and suggested 4 general use cases: (i) protecting containers from the applications inside, (ii) protecting containers from each other, (iii) protecting hosts from containers, and (iv) protecting containers from hosts. Tomar et al. [26] extended these use cases by including the Docker client as a potential target.
For our discussion, we will handle the container security challenges from two perspectives. First, the security of the application running in the container should be considered. If an application has vulnerabilities or bugs, running it in an isolated setting will not prevent the loss of availability we will experience upon those vulnerabilities getting exploited. Furthermore, we should mention the second area of interest before discussing the other consequences of application vulnerabilities. We also need to consider the security of the containerization mechanism itself. Securing the isolation and restriction of the runtime environment results in reliable systems. Failure to do so can result in loss of confidentiality through data leakages when containers operate in a multi-tenant environment. Another class of container vulnerability emerges when the isolation mechanism of the container is broken. These vulnerabilities have been aptly named "container escapes". Container escapes often abuse the interface offered to container development and runtime to access the host system. In turn, those interfaces can be made accessible through the vulnerabilities in the applications themselves, allowing for arbitrary command execution inside the container environment. Misconfiguration of containers or the container runtime, as well as the default privileges container runtimes grant, have led to privilege escalation in the host system and its eventual compromise. The namespace feature of the kernel can also be exploited through namespace injection [27], which allows a malicious container to piggyback on the host's isolation process and see the victim container's PID space just as the host can. In this section, we will delve into one case of container escape in detail and analyze some attacks targeting the containerization process. We will base our discussion around Common Vulnerabilities and Exposures (CVE), a public effort for collecting and publishing software vulnerabilities.
CVE-2018-15664 is a vulnerability which leads to a container escape where the attacker gains free read-write access in the host system with root privileges. The vulnerable API regarding this flaw in the docker engine is
the docker cp call, which leverages the FollowSymlinkInScope function that allows developers to resolve paths in containers. However, Docker versions from 17.06.0-ce through 18.06.1-ce-rc2 suffer from a time-of-check to time-of-use vulnerability in FollowSymlinkInScope. Since resolving the path and actually using the path are not performed atomically, there exists a time frame in which attackers can symlink a resolved path to an arbitrary place, including root-owned directories on the host machine [28].
CVE-2019-5736 is a vulnerability that stems from the runc binary up to version 1.0.0-rc6. runc is a container runtime that Docker as well as CRI-O, containerd and Kubernetes use. The flaw in affected runc binary versions allows an attacker to use a malicious container to overwrite the runc binary on the host system and gain root access and privileges [29]. The only prerequisite is that some command is run as root from the container, either by creating a new container from an attacker-controlled image or by running docker exec to get a shell in an already running container to which the attacker previously gained write access. Prior to proper patching, this vulnerability could be prevented by using user namespaces correctly and mapping the host's root user and the container's root user into different namespaces.
## 4 Machine Learning Approaches for Container Security
Container security is handled through rule-based matching utilities where known vulnerabilities and common misconfiguration errors are collated through human effort [30]. These utilities are adequate for catching known attacks and configuration mistakes developers make. However, they cannot detect attacks or vulnerabilities missing from their rule set. To tackle this issue, machine learning based container security solutions have been developed. In this section, we will survey the container security approaches that leverage machine learning.
### Intrusion Detection
Zhang et al. [31] proposed an intrusion detection system for the Digital Data Marketplace (DDM). The presented system utilizes the One-Class Support Vector Machine (OC-SVM) algorithm. The OC-SVM algorithm is an unsupervised learning method that finds a decision boundary with maximum distance from data points, making it suitable for anomaly detection where training data is unbalanced. Similar to SVM's hyperplane, it uses a spherical boundary to separate data. They capture system calls using a fixed-size window, apply preprocessing, and then feed the result into the ML model. Besides intrusion detection, they match the output of the detection module with an attack database to decide whether the anomaly is linked to other anomalies. Their dataset contains system call data from database applications and machine learning applications which are running in containers. For database containers, they have used Sysdig. They generated traffic with Apache JMeter2. In addition, for unusual traffic, Metasploit and Nmap are used. For machine learning containers, they have also used Sysdig to detect adversarial attacks during the training. The trained model successfully detected 100% of the arbitrary code executions and brute force attacks with a low false positive rate. They state that ROC curve values reach up to 0.995. For the machine learning containers, attacks on all the models except TPGD were detected with 100% accuracy. This work is limited to system calls and does not consider any other criterion.
Footnote 2: [https://jmeter.apache.org/](https://jmeter.apache.org/)
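A minimal sketch of such a one-class detector, assuming BoSC-style frequency vectors and synthetic data in place of the traces collected in [31]:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-window system call frequency vectors:
# benign windows follow one usage profile, attack windows deviate from it.
benign = rng.poisson(lam=[20, 5, 3, 1], size=(500, 4))
attack = rng.poisson(lam=[2, 30, 1, 15], size=(20, 4))

# Train on benign behaviour only; nu bounds the fraction of training
# outliers and gamma controls how tightly the boundary hugs the data.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(benign)

# predict() returns +1 for inliers (normal windows) and -1 for anomalies.
print("flagged benign windows:", int((detector.predict(benign) == -1).sum()))
print("flagged attack windows:", int((detector.predict(attack) == -1).sum()))
```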
El Khairi et al. [15] proposed a HIDS that relies on monitoring system calls. The authors used Sysdig to collect the system calls. The novelty of their work comes from their usage of context information alongside system calls to build a graph structure to train and test their IDS. Context information includes system call arguments and recently observed system calls. The authors report that they were motivated to use context information due to the shortcomings of existing HIDS approaches. They used LID-DS dataset [38] and extended the dataset using their contributed dataset: CB-DS, which consists of container escapes.
Here, we will explain their feature selection in detail. First, they built a graph representation of the system calls with argument information. This graph representation natively includes the recently seen system calls as well. A graph constructed for a timeframe \(t\) under benign conditions can be then used in the set of normal behavior expected during container's normal operation. When the training is over and testing begins, any unseen vector is classified against the previously constructed benign dataset of graphs. The authors evaluated their framework on different classes of vulnerabilities and compared their approach against CDL and STIDE-BoSC.
Sever et al. [22] tackled a research gap in container IDS literature. The authors realized that previous work focused solely on HIDS approaches that monitor system calls to train and evaluate anomaly-based IDS. This leaves anomaly-based NIDS which can leverage network traffic features such as network flow out of the picture. In order to answer whether this omission is justified or not, the authors set up an experiment environment with JMeter as the benign traffic source, a web application running in a container as the victim and the Metasploit tool as the malicious traffic source. The authors used Sysdig tool to gather system call traces and tcpdump to capture network traffic between the attacker machine and the victim container. They used system call data with BoSC as the feature and network flow data derived from network pcap captures. In order to
evaluate intrusion detection performance, the authors selected 4 machine learning algorithms found commonly in the container IDS literature: REPTree, random tree, random forest and SMO. After evaluating both BoSC and network flow based monitoring with those 4 algorithms, the authors found that network flow data yielded better performance than BoSC. However, the authors used only one victim application with only 3 different attacks for their dataset, which limits the generalizability of their study.
Flora et al. [32] evaluated intrusion detection performance by monitoring system calls on a containerized application by using attack injection. Their approach to this comparison is twofold: they evaluated the intrusion detection performance between Docker containers, LXC and the application running on bare metal while using three classifiers: BoSC, STIDE, and HMM. First, the authors decided on an application: MariaDB, running in a container for the Docker and LXC settings and standalone for the bare metal case. As is the case with attack injection approaches, they decided on the TPC-C workload for the benign traffic source. For malicious traffic, they picked 5 CVEs and used their implementations from exploit-db.com. The authors decided
\begin{table}
\begin{tabular}{p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt} p{56.9pt}} \hline \hline Work & ML Method & Feature Collection & Dataset & Victim Machine & Attack Type & Monitoring \\ \hline
[31] & OC-SVM & n-gram & custom & CouchDB, & Container Escalation, Brute Force, Execute Arbitrary Code, Adversarial ML attacks & JMeter, nmap, Sysdig \\
[15] & auto-encoder & system call sequence graph & LID-DS, CB-DS & Flask-python web app & Sprocket Information Leak, MySQL Auth Bypass, Release Agent Abuse, Dirty Pipe & Sysdig \\
[22] & REPTree, Random Forest, SMO & BoSC, network flow & custom & rConfig & OS Command Injection, SQL Command Injection & Sysdig, tcpdump \\
[32] & STIDE, BoSC, HMM classifiers & STIDE, BoSC & custom & MariaDB & Overflow, Bypass, Privilege Escalation, DoS & Sysdig \\
[33] & Decision Tree, Random Forest, Isolation & BoSC & custom & MySQL & Authentication Bypass, DoS, Privilege Escalation, Integer Overflow & Sysdig \\
[24] & Random Forest, Isolation Forest & anonymous walk embedding & custom & Hadoop cluster, NGINX, dataset and CUI-2020 & Cryptomining, backdoor & perf \\
[34] & semi-supervised learning & process graph, node2vec & auditd & _contribution_ & DoS, privilege escalation & auditd \\
[35] & variational autoencoder & time series performance & Container Performance Event Dataset & Container based big data platform & Spectre, Meltdown & ptrace, perf \\
[36] & auto-encoder, GAN & network traffic, system and network level performance & VM Migra-cloudSim & 2 host with 3 VM in total & Net Scan, DoS & Not mentioned \\
[37] & Random Forest & n-gram & ADFA-LD & Django, Httpd, MySQL, Tomcat & XSS Attack, SQL Injection, Security Policy Bypass, Remote Command Injection, Identity Bypass, Arbitrary File Read/Write & Sysdig \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of surveyed intrusion detection in container approaches
to capture every system call emitted by the containers using the Sysdig tool during the experiments, while they captured only MariaDB and its children's system calls for the bare metal case. Running the TPC-C workload for 24 hours with 30 minutes of malicious traffic during the benign traffic period yielded the data required for the analysis. The authors then used the classifiers to discern between malicious and benign traffic.
During their analysis, the authors found that intrusion detection by using the methods mentioned above yielded the best overall results for the application running in the Docker container. While detection on Docker gave the highest recall across all three algorithms, BoSC performed marginally better than STIDE and wholly better than HMM. The authors also concluded that using fewer epochs resulted in better detection performance and interpreted it as the models learning how to discern between the malicious and the benign traffic without learning unnecessary details. On the other hand, the authors' analysis is constrained to only one database application: MariaDB.
Cavalancati et al. [33] compared the performance of intrusion detection systems for containers. They framed their observations under two categories: the effect of the classifier architecture and the performance of different machine learning algorithms. The authors set up an attack injection scenario where they subjected a MySQL Docker image to TPC-C benchmark for benign traffic and 4 different attacks from exploit-db.com with CVEs for malicious traffic.
Overall, the authors used three classifier architectures: label encoding and one-hot encoding, sliding window with label encoding and one-hot encoding, and sliding window with BoSC. All in all, the authors used AdaBoost, Decision Tree, Gaussian Naive Bayes, K-Nearest Neighbors, Multi-layer Perceptron, Multinomial Naive Bayes, Random Forest and Support Vector Machine. Gaussian Naive Bayes performed the best in terms of recall while K-Nearest Neighbors had the best precision out of all machine learning algorithms for the first classifier architecture. In terms of F-Measure, Support Vector Machine had the highest performance with 83.2%. Due to the number of algorithms and the measures involved, we will continue our discussion constrained to F-Measure. The second classifier architecture achieved the highest F-Measure of 99.4% with the Random Forest algorithm when the window size was 30. Both decision tree and random forest had the highest F-Measure with 99.8% for the final classifier architecture when the sliding window size was set to 30 again, albeit not much higher than other algorithms. The important takeaway from the results obtained by the authors is that the context of which the system calls appear contributes more to the detection performance than the specific machine learning algorithm chosen.
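The pattern of such a comparison, holding the window encoding fixed and cross-validating several classifiers over it, can be sketched as follows (synthetic sliding-window features, not the TPC-C/MySQL data used in [33]):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic sliding-window BoSC features with binary labels
# (0 = benign window, 1 = window overlapping an attack).
X = np.vstack([rng.poisson(10, size=(400, 30)), rng.poisson(13, size=(100, 30))])
y = np.array([0] * 400 + [1] * 100)

classifiers = {
    "GaussianNB": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```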
Iacovazzi and Raza [24] present a machine learning-based solution for intrusion detection in cloud containers. The proposed solution combines supervised and unsupervised learning methods, and it is designed to work at the host operating system level, using data observable at the kernel level. The solution uses a mix of random forests and isolation forests to classify container workload behaviors and detect adverse behavior within the containers. Note that random forests are supervised learning methods while isolation forests are unsupervised. First, a graph representation of the sequence of system calls is collected at the host machine's kernel level. This graph is then processed using random walk and anonymous walk algorithms to extract the features. This representation is fed into a random forest classifier, which is trained on normal classes and outputs a set of probabilities for whether the input belongs to each class. The probabilities are passed to a third stage, where they are used for generating anomaly scores using an ensemble of isolation forest modules, one for each normal class. Isolation forest modules are trained on datasets containing samples from the respective normal class and contaminated with samples from other normal classes. The final decision about the class of the input sample is based on the anomaly scores: the input is classified as the class with the highest score, or as an anomaly if all scores are under the threshold. If more than one score is higher than the threshold, the input is also classified as an anomaly. In order to effectively capture dependencies among adjacent system calls in a sequence, which are not considered in the bag-of-system-calls approach, they use a graph-based representation. This graph representation and feature extraction process enables the effective classification of container workload behaviors and the detection of malicious behavior within the containers. Although the results show that their ensemble method outperforms SVM and LOF alternatives, there were some limitations to this approach, as it was not able to detect all attacks with a true positive rate above 0.7, namely Backdoor and SQL Injection. Moreover, the work has been done on Docker containers, and possible attacks during container migration have not been discussed.
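The last two stages of such a pipeline can be sketched with scikit-learn building blocks (synthetic features and an illustrative decision rule, not the exact procedure of [24]): a random forest produces class-probability vectors for the known workload classes, and one isolation forest per class scores those vectors.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(2)

# Stand-in features and labels for three normal workload classes.
X = rng.normal(size=(600, 8)) + np.repeat(np.arange(3)[:, None] * 3.0, 200, axis=0)
y = np.repeat(np.arange(3), 200)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
proba = rf.predict_proba(X)  # class-probability vectors per sample

# One isolation forest per normal class, trained on the probability
# vectors of the samples belonging to that class.
forests = {c: IsolationForest(random_state=0).fit(proba[y == c]) for c in range(3)}

def classify(p):
    # decision_function >= 0 means the sample looks normal for that class.
    scores = {c: f.decision_function(p.reshape(1, -1))[0] for c, f in forests.items()}
    normal = [c for c, s in scores.items() if s >= 0]
    # Illustrative rule: exactly one plausible class -> that class, else anomaly.
    return normal[0] if len(normal) == 1 else "anomaly"

print(classify(proba[0]), classify(rng.uniform(size=3)))
```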
In their work, Pope et al. [34] introduce a new dataset derived from the Linux Auditing System, which contains both malicious and benign examples of container activity. This dataset is the first of its kind to focus on kernel-based container escapes and includes attacks such as denial of service and privilege escalation. The data was generated using the autoCES framework and includes partial labels identifying benign and malicious system calls over specific time intervals. However, the dataset has some limitations, including incomplete annotations and a limited number of container escape scenarios. Additionally, the selection of benign background activity in the dataset may not be comprehensive. The goal of this dataset is to be used in a semi-supervised machine learning context. For the machine learning process, they began by converting the audit data into a process graph, which illustrated the relationships between processes. This graph was then transformed into
vectors using a node embedding technique. The resulting vectors were used to train a logistic regression classifier, which was able to accurately predict whether a process was benign or malicious with an F1 score of 97%. The authors also mention that the dataset could potentially be utilized for other applications, such as training an autoencoder for anomaly detection. These results demonstrate the effectiveness of the dataset in a semi-supervised learning context.
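A skeletal version of this pipeline is sketched below; the structural node features are a hypothetical stand-in for the node2vec-style embeddings used in the paper, and the toy process tree and labels are invented purely to show the shape of the computation.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy audit-style records: (parent_pid, child_pid) process creations.
edges = [(1, 100), (100, 101), (100, 102), (102, 200), (200, 201), (1, 300)]
G = nx.DiGraph(edges)

# The paper embeds nodes with node2vec; here a few simple structural
# features per process node act as a placeholder representation.
def node_features(g, n):
    return [g.in_degree(n), g.out_degree(n), len(nx.descendants(g, n))]

X = np.array([node_features(G, n) for n in G.nodes()])
# Hypothetical labels: mark the subtree rooted at PID 102 as malicious.
bad = {102} | nx.descendants(G, 102)
y = np.array([1 if n in bad else 0 for n in G.nodes()])

clf = LogisticRegression().fit(X, y)
print(dict(zip(G.nodes(), clf.predict(X))))
```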
Another work by Wang et al. [35] proposes a real-time intrusion detection system. They focus on detecting Meltdown and Spectre attacks in container environments. Spectre and Meltdown are vulnerabilities that can be exploited using cache-based side-channel attacks to access sensitive data. These vulnerabilities allow attackers to access data that is temporarily stored in the cache, which can then be extracted using cache-based side-channel attacks. In this work, to satisfy the conditions for Spectre and Meltdown attacks, the scenario is designed such that the containers are co-resident (i.e., sharing the same hardware). They designed the ContainerGuard service to watch the workflows. By monitoring, they capture hardware and software performance time-series data. After data collection, they distribute the data to corresponding variational autoencoders according to the performance data category, which are hardware CPU events, hardware cache events and software events. For the purpose of evaluating a method for detecting the Meltdown and Spectre attacks, a dataset called the container performance event dataset, which includes 400,000 benign and 60,000 malicious samples, was created. The method's AUC scores range from 0.90 to 0.99. In addition to the detection performance, there is no significant runtime overhead, which is measured as approximately 4.5%.
Chakravarthi et al. [36] focus on assessing the effectiveness of anomaly detection during service and virtual machine migrations in cloud environments.
The authors trained an autoencoder and an SVM on the generated dataset. The performance of the two classifiers was evaluated using ROC curves.
They state that the autoencoder performs well during VM migrations with a false positive rate below 15%. They used the reconstruction error of the AE model as the anomaly score. One limitation of their work is that there is no benchmark dataset available to test the resilience of cloud infrastructure. They generated data samples from a simulated network and balanced them using a GAN. These samples were classified as either anomalous or normal using the AE model. However, their trained model is only able to detect anomalous traffic in a cloud environment that is similar to the one simulated in their experiments.
Clustering algorithms aim to divide the provided unlabeled data into clusters that achieve high inner similarity and outer dissimilarity. They do not rely on signatures, a description of attack classes, or labeled data; therefore, unsupervised IDS and clustering approaches are used for detecting anomalies in unlabeled data.
To increase the effectiveness of anomaly detection in the edge computing environment, Shen et al. [37] suggested an anomaly detection framework combining clustering algorithms. The proposed framework initially identifies and classifies containers before building anomaly detectors for each group. Also, they use system calls to inspect containers' behavior and perform classification and intrusion detection. They looked into eight real-world vulnerabilities, and the experimental results show that the framework increased the True Positive Rate (TPR) from 90.3% to 96.2% and reduced the False Positive Rate (FPR) from 0.61% to 0.09% compared to the traditional method.
The framework utilizes Sysdig to collect system call data generated by containers, the DBSCAN clustering algorithm to classify containers in an unsupervised way, and a RandomForest classifier for each application category to detect anomalies. Also, they compare their approach against two different detection methods. The first uses one detector for all containers: this method collects system calls from all applications without distinguishing the application. The other method uses one detector for each container. Even though the second approach achieves better results, it incurs a considerable performance cost.
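The two-stage idea, first clustering containers by behaviour and then training one detector per cluster, can be sketched as follows (synthetic frequency profiles and invented labels; the parameters are illustrative rather than those reported in [37]):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Synthetic per-container system call frequency profiles: two application
# families (say, web servers vs. databases) with different usage patterns.
profiles = np.vstack([
    rng.poisson([30, 2, 5, 1], size=(50, 4)),
    rng.poisson([3, 25, 1, 10], size=(50, 4)),
]).astype(float)

# Stage 1: group containers by application category without labels.
clusters = DBSCAN(eps=12.0, min_samples=5).fit_predict(profiles)

# Stage 2: one intrusion detector per discovered application category.
detectors = {}
for c in set(clusters) - {-1}:  # -1 marks DBSCAN noise points
    members = profiles[clusters == c]
    # Invented labels: benign members (0) plus a few perturbed copies that
    # stand in for attack windows (1), just to make the classifier trainable.
    k = min(5, len(members))
    attacks = members[:k] + rng.normal(0, 15, size=(k, 4))
    X = np.vstack([members, attacks])
    y = np.array([0] * len(members) + [1] * k)
    detectors[c] = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

print("clusters found:", sorted(set(clusters)))
```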
### Malware Detection
Wang et al. [39] designed and implemented a malware detection framework for containerized applications. The novelty of their work comes from their approach to extract executables from containers with respect to the container's storage driver type. The authors decided to support overlay2 and aufs, the current and past recommended storage drivers respectively. With the executable in hand, the suggested framework first uses disassembled code and binary itself for fast path coarse detection using a multichannel CNN. The slow path detection is done using a LSTM-CNN with API-call sequences as the features.
The authors have evaluated their implementation on 3000 malware samples acquired from VirusShare and 300 container-specific attacks against 2000 benign binaries. Even though the authors compared their framework against previous work under metrics such as precision and recall, the previous work they opted to compare to is not from the container security domain but deals with general software security. Hence, the 300 container-specific attacks are mixed in with the rest of the 3000 malware samples, and there is no particular insight presented separately for regular software shipped in containers and for vulnerable containers.
Cryptomining malware has become a significant threat in Kubernetes, with hidden executables that uses server resources for mining. To detect and classify pods that
hold cryptomining processes, Karn et al. [23] proposed that machine learning can be used together with system calls. They used several types of cryptominer images, namely Bitcoin, Bytecoin, Vertcoin, Dashcoin, and Litecoin. Also, they included healthy pods, namely MySQL, Cassandra, Hadoop, Graph, Analytics and Deeplearning. They captured system calls with a period of 1 minute for each pod. Then they leveraged n-grams to extract features. After numerous experiments, they decided to set n to 35 due to its high recall rate. Following the feature extraction, four ML models, namely decision tree, ensemble learning, feed-forward vanilla artificial neural network and feedback recurrent neural network, were selected and trained with the collected data. The accuracy of the ensemble learning model from the Python XGBoost library was similar on the training and validation sets, 89.3% and 89.4% respectively. For the feed-forward vanilla ANN, they used a combination of Keras and Tensorflow, with the AutoKeras tool to tune hyperparameters. Overall performance was 81.1% on the training set and 79.7% on validation. Due to the nature of system calls, they are suitable to use as time-series data. Therefore, they implemented an LSTM RNN. Its accuracy was 79.99% on the training set and 78.90% on the validation set. The decision tree implementation with default parameter values using Python's scikit-learn library achieved 99.6% accuracy on training and 97.1% on validation, beating all other models. In addition, for better model explainability and visual representation, they used the SHAP and LIME tools.
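A much-simplified sketch of this kind of n-gram classification (toy traces with invented call sequences, and n = 3 instead of the n = 35 used in [23]):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

# Toy per-pod traces, written as space-separated system call names.
traces = [
    "read write read futex epoll_wait read write",    # healthy pod
    "read futex epoll_wait write read read futex",    # healthy pod
    "getrandom sendto write getrandom sendto write",  # miner-like pod
    "sendto getrandom write sendto getrandom write",  # miner-like pod
]
labels = [0, 0, 1, 1]  # 0 = healthy, 1 = cryptomining

# Order-preserving word n-grams over the call sequence.
vectorizer = CountVectorizer(analyzer="word", ngram_range=(3, 3),
                             token_pattern=r"\S+")
X = vectorizer.fit_transform(traces)

clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
test = vectorizer.transform(["read write read futex epoll_wait write"])
print("predicted label:", clf.predict(test)[0])
```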
### Attack Detection
Lin et al. [40] proposed an attack detection framework which consists of different layers in a pipeline in an attempt to increase detection rate while addressing false positive and lack of labelled training data issues. Their proposal has 3 different modules; first, they employ an unsupervised anomaly detection layer which uses an autoencoder neural network. The authors claim that the encoder and the subsequent decoder will generate results with a high reconstruction error for anomalous samples. The second layer in the pipeline uses random forest algorithm to cross validate edge cases and potentially eliminate false positives. On the final layer, the authors employ an isolation forest, a self-supervised model in order to detect outliers and generate training labels automatically. This pipeline is fed with system call frequency vectors, acquired using Sysdig with a sampling rate of 100 milliseconds.
In order to evaluate their proposed framework, the authors applied 7 minutes' worth of benign traffic to the containers using JMeter, where applicable. At the start of the 5th minute, the authors started the attack; some attacks caused the container to crash, which ended the experiment, but for the rest, the attack completed and the experiment ran until JMeter finished at the 7th minute. Lin et al. compared their proposed framework against CDL [41], self-patch, a supervised random forest approach and a supervised CNN. They used 41 real-world attacks with assigned CVEs, encompassing 28 applications. They used containerized applications with application vulnerabilities, not container-specific attacks.
Lin et al. [41] presented a classified distributed learning framework, namely CDL, to detect anomalies in containerized applications. The framework achieves anomaly detection in four major steps: System Call Feature Extraction, Application Classification, System Call Data Grouping, and Classified Learning and Detection. They process the raw system call trace into a stream of frequency vectors, and these extracted feature vectors are used to identify applications. For the identification of applications, they utilize a random forest learning scheme [42]. A random forest classifier uses numerous decision trees, then chooses the most voted result among the individual decision trees. Hence, the random forest model gives the predicted application classification result. When this process has identified the containers of the same application, the framework groups the system call data to append the frequency vector traces of different containers and uses them for model training and attack detection. Lastly, for anomaly detection, the unsupervised model uses autoencoder neural networks. The authors investigated 33 real-world vulnerabilities documented in the Common Vulnerabilities and Exposures (CVE) database, and the results show that CDL can detect 31 out of 33 attacks. Also, they inspected the system runtime, and the data indicates that CDL is lightweight and suitable for detecting attacks in real time under real-world circumstances.
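The unsupervised detection stage of such a framework can be sketched with a small autoencoder trained only on benign frequency vectors, flagging windows whose reconstruction error is unusually large (a minimal PyTorch sketch with synthetic data, not the CDL implementation itself):

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic benign system call frequency vectors (one row per window).
benign = torch.poisson(torch.full((512, 32), 8.0))

model = nn.Sequential(
    nn.Linear(32, 8), nn.ReLU(),  # encoder: compress the window
    nn.Linear(8, 32),             # decoder: reconstruct the window
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):              # train on benign behaviour only
    opt.zero_grad()
    loss = loss_fn(model(benign), benign)
    loss.backward()
    opt.step()

# Flag a window as anomalous when its reconstruction error is far above
# what was seen during training (the threshold choice is illustrative).
with torch.no_grad():
    errors = ((model(benign) - benign) ** 2).mean(dim=1)
    threshold = errors.mean() + 3 * errors.std()
    attack = torch.poisson(torch.full((1, 32), 30.0))
    score = ((model(attack) - attack) ** 2).mean()
    print("anomalous:", bool(score > threshold))
```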
### Anomaly Detection
Gantikow et al. [43] investigated the behavior of containers by using neural networks to detect anomalies. The authors present two approaches for anomaly detection based on system call traces. First, system call distributions are used to detect anomalies. A one-layer Long Short-Term Memory (LSTM) network is trained to predict the system call distribution at time \(t+1\) based on the distribution at time \(t\). The second approach is a neural network using file/directory paths for anomaly detection. Their method is based on training a neural network to predict the following file system path based on the most recent file system path used by a system call. The proposed neural network consists of a Word Embedding Layer, followed by LSTM layers which are designed to learn to predict the following file system path based on the vector representation of the current one. After a prediction was made by this neural network, the actual file path and predicted path were compared to detect anomalies. Wang et al. [44] proposed an unsupervised anomaly detection framework. The authors initially acquired system call sequences from the ptrace tool. Then, they used the word2vec technique to map each system call within
their context from the sequences into a fixed-size vector. These vectors are used sequentially for the rest of the authors' intrusion detection framework. A BiLSTM variational autoencoder is used. At the final layer of their framework, the authors detected anomalies through reconstruction error. For evaluation, the authors employed the UNM system call sequences dataset. They have also extended it with system call sequences gathered during a sqlmap attack on a container running MySQL as well as 3 different container escape attacks. The dataset they used for evaluation consisted of 0.63% anomalous traces with benign samples as the rest. Overall, their approach yielded 90% accuracy and an F1 score of 90.75%.
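The embedding step of such a pipeline can be sketched with gensim's word2vec implementation, treating each trace as a sentence whose words are system calls (toy traces and hyperparameters, not those of [44]):

```python
from gensim.models import Word2Vec

# Toy system call traces: each trace is a "sentence" of call names.
traces = [
    ["openat", "read", "mmap", "read", "close"],
    ["openat", "read", "write", "close"],
    ["socket", "connect", "sendto", "recvfrom", "close"],
] * 50  # repeated so the model has something to train on

# Skip-gram word2vec: every system call gets a fixed-size vector that
# reflects the contexts in which it appears.
model = Word2Vec(sentences=traces, vector_size=16, window=3,
                 min_count=1, sg=1, epochs=20, seed=0)

print(model.wv["read"].shape)                  # (16,)
print(model.wv.most_similar("openat", topn=2))

# A trace then becomes a sequence of vectors for a downstream sequence
# model such as the BiLSTM variational autoencoder described above.
sequence = [model.wv[call] for call in traces[0]]
```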
Castanhel et al. [45] present an approach for using system calls to detect anomalies in containerized systems. The authors focus on how the size of the window impacts the results through the implementation of a sliding window technique. In the paper, the authors first discuss the challenges of monitoring containers and the importance of detecting anomalies in order to ensure security and stability. In their implementation, they collected a dataset of system calls by running strace on the host machine, outside the container, for a variety of containerized applications, and used machine learning techniques to train a model to classify normal and anomalous system calls based on this dataset.
The dataset used in the study consisted of 50 traces of system calls, with half representing normal behavior and the other half representing anomalous behavior. The normal behavior traces consisted of five different types of expected interactions with the WordPress application, while the anomalous behavior traces consisted of five different types of attacks focused on cross-site scripting (XSS) and remote code execution (RCE).
The experiments in the study were conducted on a Linux host using Docker, and the application used for testing was WordPress, a popular open-source content management system served by the Apache web server. The collected system calls were divided into four groups, with the first group containing the most dangerous system calls that alter system behavior. The last group contained harmless system calls that primarily query system state rather than issuing commands.
A sliding window technique was used to analyze data from various sources and four algorithms (KNN, RF, MLP, and AB) were applied using seven different window sizes. The data was split into training and testing sets and ten executions were run for each classifier to evaluate the results and prevent overfitting. The random seed was changed during the split phase to generate different sets.
They tested both with all data and with the data excluding harmless system calls, and found that the model was able to accurately detect anomalies in the system calls of containerized applications, with an average accuracy of over 90%. Overall, the paper concludes that system calls can be an effective means of detecting anomalies in containerized systems, but also notes that their work does not cover all calls available in current systems. Also, as mentioned, had the tasks been completed using a variety of containers with different applications instead of just the WordPress application with additional plug-ins, the dataset would have been more diverse, allowing for a more comprehensive evaluation of the system.
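A minimal sketch of the sliding-window evaluation is given below: integer-encoded system-call sequences are cut into overlapping windows of several sizes and fed to the four classifier families mentioned above; the synthetic traces, the three window sizes, and the default hyperparameters are stand-ins for the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def windows(sequence, size):
    """Overlapping sliding windows over an integer-encoded system-call sequence."""
    return np.array([sequence[i:i + size] for i in range(len(sequence) - size + 1)])

rng = np.random.default_rng(1)
benign = rng.integers(0, 30, 3000)    # toy encoded traces; real ones would come from strace
attack = rng.integers(20, 50, 3000)

for size in (3, 5, 7):                # the paper sweeps seven window sizes
    X = np.vstack([windows(benign, size), windows(attack, size)])
    y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    for clf in (KNeighborsClassifier(), RandomForestClassifier(),
                MLPClassifier(max_iter=500), AdaBoostClassifier()):
        acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
        print(f"window={size} {type(clf).__name__}: {acc:.2f}")
```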
Cui and Umphress [46] came up with an open-source dataset for the observation of system calls. They used a sliding window with a fixed size and chose a classic Long Short-Term Memory (LSTM) autoencoder as the baseline classifier for the unsupervised classification task.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{Work} & ML & Data & Collecting \\ & Model & Used & Method \\ \hline
[43] & LSTM & system call, file/directory path & Sysdig \\
[44] & BiLSTM & system call & ptrace \\
[45] & KNN, RF, MLP, AB & system call & strace \\
[46] & LSTM auto-encoder & system call & Sysdig \\
[47] & KNN, k-means, SOM & system call & CoreOS Clair, Sysdig, JMeter \\
[48] & KNN, SVM, NB, RF & performance monitoring data & cAdvisor, Heapster \\
[25] & DCRNN & RPC traffic & RPC chain clustering \\
[49] & Restricted Boltzmann Machine & user and system defined security profile, automated NIST violations, run-time security profile & python script \\
[50] & fully connected neural network & system call, network, I/O activities & JMeter, Sysdig \\
[51] & LR, NB, SVM, RF, XGB & security related config documents & BeautifulSoup, NLTK \\
[52] & SARIMA, HMM, LSTM, auto-encoder & system metrics (streaming data) & Prometheus \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of surveyed anomaly detection in container approaches
Autoencoders are a type of neural network model that were first introduced by Rumelhart et al. [53] for unsupervised learning of compact representations, or encoding of input data. These models consist of two main parts: an encoder, which maps the input data to a lower-dimensional latent space, and a decoder, which maps the latent representation back to the original input space. The encoder and decoder are trained together by minimizing a reconstruction loss function that measures the difference between the original input and the reconstructed output.
The idea of using Long Short-Term Memory (LSTM) units in autoencoder architectures likely emerged after the introduction of LSTMs by Hochreiter and Schmidhuber for modeling sequential data, and the development of autoencoders for feature learning and dimensionality reduction.
The reason behind the LSTM selection is its ability to remember and use knowledge from previous batches, which makes it suitable for anomaly detection. In this experiment, a total of 42 models were trained using different combinations of configurations, including 7 different window sizes, 3 different feature sets, and 2 normalization methods. These models were then tested on 7 different attacks, with 6 different confidence levels applied, resulting in a total of 1764 entries. Overall, the model predicts with over 90% accuracy for brute force login, meterpreter, malicious script and remote shell attacks. However, its accuracy on Docker escape attacks was only 76.27%. Moreover, it was observed that the proposed framework was only evaluated using an offline dataset and a single application, and a comprehensive online evaluation is planned. Besides, while the current work has successfully demonstrated the potential for unsupervised introspection, it is necessary to expand the dataset to include multiple applications to see its potential in different contexts.
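The sketch below illustrates the general idea of an LSTM autoencoder used for unsupervised introspection: the model is trained to reconstruct benign windows only, and windows whose reconstruction error exceeds a threshold are flagged. The window length, layer sizes, synthetic data, and the simple threshold rule are assumptions, not the configurations evaluated by Cui and Umphress.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

window, n_features = 10, 1
model = tf.keras.Sequential([
    layers.LSTM(32, input_shape=(window, n_features)),   # encoder
    layers.RepeatVector(window),
    layers.LSTM(32, return_sequences=True),              # decoder
    layers.TimeDistributed(layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")

# train on benign windows only; here, toy normalized system-call counts
rng = np.random.default_rng(0)
benign = rng.normal(0.5, 0.05, size=(500, window, n_features))
model.fit(benign, benign, epochs=5, batch_size=32, verbose=0)

# flag windows whose reconstruction error exceeds a threshold set on benign data
recon = model.predict(benign, verbose=0)
threshold = 3 * np.mean((benign - recon) ** 2)
suspect = rng.normal(2.0, 0.05, size=(5, window, n_features))
err = np.mean((suspect - model.predict(suspect, verbose=0)) ** 2, axis=(1, 2))
print(err > threshold)
```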
Tunde et al. [47] presented a combination of static and dynamic anomaly detection schemes to detect security vulnerabilities for containers. They conducted a study on static and dynamic vulnerability detection strategies using 28 common real-world security vulnerabilities discovered in Docker Hub images. Firstly, they used CoreOS Clair, an open-source static analysis engine that scans containers layer-by-layer for known vulnerabilities using Common Vulnerabilities and Exposures (CVE) databases. Afterward, they investigate dynamic detection schemes using different unsupervised machine learning algorithms. These machine learning algorithms are selected to address the following unique challenges of container security:
1. Containers are short-lived, so the detection algorithms can not use large amounts of training data.
2. Containers are highly dynamic; thus, the detection algorithms cannot make any assumptions about the application or attack behavior in advance.
3. The detection algorithms should be able to detect vulnerabilities with low overhead.
These properties of container exploit detection lead to using lightweight unsupervised anomaly detection schemes such as the K-Nearest Neighbor (k-NN) algorithm, K-Means clustering, k-NN combined with Principal Component Analysis (PCA), and the Self-Organizing Map (SOM). Their comparison between the different exploit detection schemes was based on three metrics: detection coverage, false positive rate, and lead time. The metrics indicate whether each approach can detect vulnerabilities, how accurately it achieves detection, and how quickly it can detect attacks, respectively. The k-NN algorithm is used to perform outlier detection. Because the presence of noise in the feature data prevents the k-NN algorithm from achieving high accuracy, they also used k-NN with PCA. While the k-NN algorithm can detect 32.14% of the vulnerabilities, PCA + k-NN achieves slightly better detection with 35.71%. The k-means approach achieves a 67.86% detection coverage rate. The SOM approach over system call time vectors (SOM time) detects 75% of vulnerabilities, while the SOM approach over system call frequency vectors (SOM frequency) detects 79%. Therefore, the SOM approach accomplishes the highest detection coverage. In the false positive rate comparison, the SOM approach again achieves the lowest false positive rate, with 1.7% for SOM frequency and 1.9% for SOM time, followed by the K-means clustering approach with 7.67%; k-NN and k-NN with PCA obtain the highest false positive rates with 9.92% and 9.88%, respectively. Finally, the SOM approach attained the largest detection lead time, with 28.7 seconds for SOM frequency and 25.8 seconds for SOM time, whereas k-NN, k-NN with PCA, and K-means achieve 0.57, 1, and 0.36 seconds, respectively. Furthermore, the paper states that combining static and dynamic schemes can increase the detection coverage rate to 86%. In conclusion, the authors show that static analysis alone is insufficient for container security. In contrast, dynamic anomaly detection schemes using unsupervised machine learning algorithms can achieve a high detection coverage rate with a low false positive rate. The study demonstrates that the Self-Organizing Map algorithm is better than the other mentioned algorithms in terms of all three metrics.
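To make the distance-based schemes concrete, the sketch below scores windows by their average distance to the k nearest benign frequency vectors after a PCA projection, in the spirit of the PCA + k-NN variant above; the dimensions, thresholding rule, and synthetic data are illustrative only, and the SOM variant would replace the k-NN step with distances to the best-matching map unit.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train = rng.normal(0, 1, size=(500, 40))               # benign system-call frequency vectors (toy)
test = np.vstack([rng.normal(0, 1, size=(20, 40)),     # benign-looking windows
                  rng.normal(6, 1, size=(5, 40))])     # windows mimicking an exploit

pca = PCA(n_components=10).fit(train)                  # reduce noise before computing distances
knn = NearestNeighbors(n_neighbors=5).fit(pca.transform(train))

# threshold from the training data (the nearest neighbour of a training point is itself)
train_dist, _ = knn.kneighbors(pca.transform(train))
threshold = np.percentile(train_dist.mean(axis=1), 99)

test_dist, _ = knn.kneighbors(pca.transform(test))
print(test_dist.mean(axis=1) > threshold)              # True marks suspected exploit windows
```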
Du et al. [48] used different supervised machine-learning algorithms to detect and diagnose anomalies in container-based microservices. They proposed an anomaly detection system (ADS) by analyzing real-time performance data for anomaly detection and diagnosis. The proposed ADS consist of three modules: the monitoring module, the data processing module, and the fault injection module. First, the monitoring module is used to collect real-time performance monitoring data
from the target system. In this paper, the authors focused on container and microservice monitoring, and the term "container" was used to refer to a collection of containers constituting one complete microservice. Secondly, the data processing module is used to analyze this data and detect anomalies. They determine whether a container performs well by gathering and processing its performance data, just as they determine whether a microservice is abnormal by gathering and processing the performance data of all related containers. After classifying whether a microservice experiences an anomaly, the ADS finds the anomalous container. In order to detect anomalies, they use supervised machine-learning algorithms such as Support Vector Machines (SVM), Random Forest (RF), Naive Bayes (NB), and Nearest Neighbors (NN). Also, to find the container that caused an anomaly, they used time-series analysis. Lastly, the fault injection module simulates service faults (CPU consumption, memory leak, network package loss, and network latency increase) and collects datasets of performance monitoring data. These datasets are used to train machine learning models to validate the anomaly detection performance. In their experiments, three datasets are structured according to this module, and the three services are chosen using Clearwater, an open-source virtual IP Multimedia Subsystem.
The validation results show that Random Forest and Nearest Neighbors classifier gives satisfying results using each dataset. Furthermore, SVM performs the worst since it does not work well on datasets with multiple classes. All in all, if the dataset is created using only one service, the authors recommend the NN classifier.
Remote procedure calls (RPC) allow components in a distributed cluster of applications to invoke each other's functions (procedures) seamlessly, as if those functions are owned by the invoking application. The network layer between the components is abstracted away as a result. Chen et al. [25] suggested using RPCs as an alternative to monitoring system calls, since RPCs are required for meaningful interaction between a distributed cluster's components just like how system calls are required for worthwhile operations within an application. They handled RPCs as RPC chains, a sequence of RPCs that depend on each other and appear in order during common operations. The authors found that representing RPC chains as directed weighted graphs suits their use case well. They represented nodes as RPCs, edges and weights as the dependency between different RPCs, and labelled nodes with the number of times that particular RPC was invoked.
To learn regular RPC traffic and predict anomalous RPC traffic, the authors first trained the DBSCAN [54] clustering algorithm to acquire the RPC chains. The authors then trained a DCRNN model to predict the traffic from previously observed RPC traffic. Using the mean absolute error and its variants, they labelled traffic as anomalous when the observed RPC chains deviated from the expected traffic. Their case study was performed on a Kubernetes cluster with "billions of daily active users" [25] and RPC traffic spanning 2 weeks.
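A small sketch of the graph representation and the chain clustering step is shown below; the RPC names, the frequency-vector fingerprint used as the clustering feature, and the DBSCAN parameters are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import DBSCAN

# toy RPC chains: each chain is the ordered list of RPCs observed for one request
chains = [
    ["auth", "profile", "feed"],
    ["auth", "profile", "feed"],
    ["auth", "cart", "payment"],
]

# directed weighted graph: edge weights count how often one RPC depends on another
G = nx.DiGraph()
for chain in chains:
    for src, dst in zip(chain, chain[1:]):
        weight = G.get_edge_data(src, dst, default={"weight": 0})["weight"]
        G.add_edge(src, dst, weight=weight + 1)

# cluster chains by their RPC-frequency fingerprints to group recurring call patterns
rpcs = sorted({rpc for chain in chains for rpc in chain})
X = np.array([[chain.count(r) for r in rpcs] for chain in chains])
labels = DBSCAN(eps=0.5, min_samples=1).fit_predict(X)
print(list(G.edges(data=True)), labels)
```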
Kamthania [49] presents a deep learning-based algorithm for detecting malicious patterns in individual container instances. The algorithm is designed to be easily applied to any container platform that adheres to the Open Container Initiative (OCI) standard. The algorithm utilizes a Gaussian-Bernoulli restricted Boltzmann machine.
A restricted Boltzmann machine (RBM) is a type of neural network that is used for unsupervised learning. RBMs are composed of a visible layer, which encodes the input data, and a hidden layer, which learns features from the input data. RBMs are trained using an energy-based model, where the energy of a configuration of the visible and hidden units is minimized. RBMs are often used for tasks such as dimensionality reduction and collaborative filtering. Gaussian-Bernoulli RBMs are a variant of RBMs that can handle continuous-valued data, rather than just binary data, in the visible layer. By using the RBM, they create a container profile based on the configuration of the containers and extract behavioral statistics at runtime. The algorithm then uses automated NIST container security rules to identify any security violations for the container under test and applies a machine learning algorithm to build a complete security profile for the container. In their results, they mention the classification rate for some attack types: unbounded network access from containers, insecure runtime configurations, rogue containers, improper user access rights, and embedded clear texts. However, the details of these classification rates are not provided.
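The sketch below conveys the profiling idea with scikit-learn's BernoulliRBM; since scikit-learn does not ship a Gaussian-Bernoulli variant, the runtime statistics are simply scaled to [0, 1], and low pseudo-likelihood scores are treated as deviations from the learned profile. The feature dimensions and data are placeholders.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
profile = rng.normal(0.3, 0.1, size=(200, 64))   # toy runtime behaviour statistics per container

# scale features to [0, 1] since scikit-learn only provides a Bernoulli visible layer
X = MinMaxScaler().fit_transform(profile)
rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0).fit(X)

# low pseudo-likelihood scores indicate behaviour deviating from the learned profile
scores = rbm.score_samples(X)
print(scores.mean(), scores.min())
```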
KubAnomaly, a system that offers security monitoring capabilities for anomaly detection on the Kubernetes orchestration platform, was suggested by Tien et al. [50]. The aim of this system is to improve Docker security in a way that is compatible with Kubernetes. KubAnomaly provides a security-monitoring module with customized rules in Sysdig and observes the internal activities of containers, such as system calls, I/O activities, and network connections. Since monitoring too many events would result in large overhead, they selected four system call categories: file I/O, network I/O, scheduler, and memory. Furthermore, to identify hackers and insider intrusion events, it performs anomaly detection using machine learning classification. A neural network model was created to classify multiple types of anomalous behavior, such as injection attacks and denial-of-service (DoS) attacks. This machine learning model uses supervised learning, and three different datasets were used to train the model: private data, a public dataset called CERT, and real-world experimental data to evaluate the system accuracy and performance. Further explanation of the datasets is given in Section 4.6.
The proposed anomaly classification model is organized into four steps. They begin by monitoring log data from
their agent service, which collects monitor logs from Docker-based containers. After obtaining the raw monitor logs, they extract features to train their models. The next step is data normalization for fast convergence and improved accuracy; for this, they use the StandardScaler, MinMaxScaler, and Normalizer provided by the machine learning framework sklearn. Lastly, they construct the anomaly classification model using four fully connected layers, and for the backend, they use Keras and TensorFlow. To apply the classification model in the real world, they designed the KubAnomaly system. In addition, they developed an online web service with vulnerabilities and tested the system. The results show that KubAnomaly is able to detect many abnormal behaviors.
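A minimal sketch of the normalization and classification steps is given below, using a StandardScaler followed by four fully connected Keras layers; the feature dimensionality, layer widths, and the three example classes are assumptions for illustration, not the exact KubAnomaly architecture.

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))         # toy features extracted from container monitor logs
y = rng.integers(0, 3, size=1000)       # e.g. normal / injection attack / DoS

X = StandardScaler().fit_transform(X)   # the normalization step described by the authors

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # four fully connected layers in total
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```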
Mubin et al. [51] focus on configurations of container orchestrators. The container orchestrator itself should be correctly configured to provide security for all the managed containers. They introduce a new method that uses keywords and learning to capture knowledge about configurations, which had not been studied before. The created module, namely KGSecConfig, aims to create a Knowledge Graph for Configuration (KGCConfig) of the various platforms, cloud providers, and tools used in container orchestration to organize scattered data. They extracted information from documentation files and created entities; between these entities, several relationships such as "hasDefault", "hasArgument", "hasType", "hasOption", and "hasDescription" are defined. This representation is used to identify the configuration syntax and formulate keyword-based rules for estimating the relevancy of security documents to configuration.
In order to train a supervised learning model to extract configuration concepts from documents, a labeled dataset was needed. Since no labeled dataset existed, 3,300 sentences were labeled by two authors according to the four configuration concepts, and the 3,032 sentences that were agreed upon by both labellers were used for training the model to reduce labeling bias. Five machine learning classifiers, Logistic Regression (LR), Naive Bayes (NB), Support Vector Machines (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGB), were selected for the learning-based models, and various features were considered, such as TF-IDF-based word-level, character-level, and combined word- and character-level features, as well as NLP features. The optimal traditional ML models were selected using Bayesian optimization and the average Matthews Correlation Coefficient with early stopping criteria. A Breadth-First Search algorithm was used to identify the configuration arguments and update the KGCConfig. The accuracies of LR, NB, SVM, RF, and XGB were 0.94, 0.82, 0.88, 0.76, and 0.93, respectively. The results showed that KGSecConfig is effective in automating the mitigation of misconfigurations.
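The snippet below sketches one of the learning-based variants: word-level and character-level TF-IDF features feeding a logistic regression classifier, scored with the Matthews correlation coefficient; the two example sentences and hyperparameters are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline, make_union
from sklearn.metrics import matthews_corrcoef

sentences = [
    "set runAsNonRoot to true in the pod security context",
    "the release notes describe new dashboard features",
]
labels = [1, 0]   # 1 = security-configuration concept, 0 = irrelevant

features = make_union(
    TfidfVectorizer(analyzer="word"),
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
)
clf = make_pipeline(features, LogisticRegression(max_iter=1000)).fit(sentences, labels)
print(matthews_corrcoef(labels, clf.predict(sentences)))
```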
Kosinska and Tobiasz [52] proposed a system, namely the Kubernetes Anomaly Detector (KAD), for detecting anomalies in a Kubernetes cluster. KAD uses various machine learning models to achieve high accuracy. Their solution differs from other solutions in using different machine learning models that facilitate detecting different types of anomalies. The KAD system chooses the appropriate model for detection; thus, different models can be matched to different data types. These models are SARIMA, HMM, LSTM and an autoencoder. SARIMA and HMM are traditional time series and statistical models, while the LSTM and the autoencoder are deep learning models. In their experiment, they trained the models on the Numenta Anomaly Benchmark (NAB) dataset. They selected two types of data streams: the first stream is artificially generated, and the second contains data presenting CPU utilization collected from AWS Cloudwatch. The results show that the statistical models (SARIMA and HMM) achieve higher results on the artificial data, while the LSTM and autoencoder perform better on the AWS Cloudwatch data. Furthermore, the experiments demonstrate that the real-time anomaly detection capabilities of the KAD system can be successfully deployed in a Kubernetes cluster. However, the KAD system allows anomaly detection to be performed on only one metric at a time; hence, for more complex cases, multivariate models can be needed.
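As an example of one of the statistical models in KAD, the sketch below fits a seasonal ARIMA model to a single CPU-utilization stream and flags points whose residuals are unusually large; the synthetic series, seasonal period, and threshold are assumptions rather than the KAD configuration.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
t = np.arange(500)
cpu = 50 + 10 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 1, 500)   # toy CPU-utilization stream
cpu[400] += 25                                                        # injected anomaly

res = SARIMAX(cpu, order=(1, 0, 1), seasonal_order=(1, 0, 1, 48)).fit(disp=False)
residuals = cpu - res.fittedvalues
threshold = 4 * residuals.std()
print(np.where(np.abs(residuals) > threshold)[0])    # indices flagged as anomalous
```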
### Inter-container Security
Deng et al. [55] tackled the secure placement of containers in a cloud setting where multiple tenants share the same host. The motivation for their work comes from the authors' finding that container placement strategies do not consider the security of co-resident containers. As mentioned in Section 3, container vulnerabilities also pose risks to the containers running on the same host as well as to the host system. Deng et al. considered the whole placement challenge as a series of placement decisions, which allowed them to reframe the problem as an optimization task. The authors then used deep reinforcement learning (DRL) with the encoded placement policy as the input. DRL is a branch of machine learning that combines reinforcement learning and deep learning principles. Nguyen et al. [56] state that DRL is suited to solve complex, dynamic, and high-dimensional cyber-defense problems; hence, DRL is used for container security. Deng et al.'s model outputs the placement decision for the given container at every time step, and the reward function is a formula with a trade-off between security and workload balancing. In their evaluation, the authors find their model to perform better than existing strategies in terms of workload balance while keeping security in mind.
Li et al. [57] proposed a defensive deception framework for container-based clouds. Their approach generates an adversarial model, a decoy placement strategy, and decoy routing tables using a DRL algorithm. First, they developed an adversarial model, namely the System Risk Graph (SRG). The SRG extracts risks and threats in the container-based cloud and includes overall risks and vulnerabilities from the application to the virtualization layer. Secondly, \(SRG_{t}\), which is the system risk graph of the cloud at time slot \(t\), is sent to the input neurons of the DRL agent. The DRL algorithm generates an ideal decoy placement strategy to decide the optimal topological locations and types of digital decoy assets. Moreover, the performance of the placement strategy is used as the reward data to train and update the DRL agent. This feature enables the DRL agent to evolve with the dynamic cloud; therefore, their method is adaptive and fully interacts with the dynamic environment. Lastly, the determined placement strategy and the deceptive routing for the decoys are sent to the orchestration platform. As a result, the proposed framework increases the detection ratio on the random-walker attack by 30.69% and on the persistent attack by 51.10%.
Using a genetic algorithm, Kong et al. [58] suggested a Secure Container Deployment Strategy (SecCDS) to defend against co-resident attacks in container clouds. SecCDS reduces co-residency by 50% compared with existing strategies by coordinating the placement and migration of containers to separate attackers and victims on different physical machines (PMs). In their paper, they define two metrics to describe the deployment and co-residency of container clouds: the Deployment Matrix (DM) represents the correspondence between containers and PMs, and the Co-residency Matrix (CM) describes co-residency among different tenants in the cloud. They then develop a deployment strategy based on a genetic algorithm that can detect relational aggression among different tenants in real time and dynamically migrate containers, effectively preventing co-residency. In this implementation, a container must be deployed in only a unique position and belong to a real tenant. Thus, the authors offered a genetic mechanism that alters the crossover and mutation operations of the traditional genetic algorithm and proposed a new individual learning mechanism. In addition, they utilize Simulated Annealing (SA) to perform a neighborhood search for each individual in the GA, which helps the algorithm reach the global optimum.
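A compact sketch of a genetic-algorithm placement loop is given below: each individual is a container-to-PM assignment, fitness penalizes co-residency between containers of different tenants, and standard single-point crossover and mutation evolve the population. The encoding is an assumption, and the modified operators, the individual learning mechanism, and the SA neighborhood search of SecCDS are omitted for brevity.

```python
import numpy as np

def coresidency(placement, tenants, n_pm):
    """Count container pairs from different tenants that share a physical machine."""
    pairs = 0
    for pm in range(n_pm):
        idx = np.where(placement == pm)[0]
        for a in range(len(idx)):
            for b in range(a + 1, len(idx)):
                if tenants[idx[a]] != tenants[idx[b]]:
                    pairs += 1
    return pairs

def ga_placement(tenants, n_pm, pop=40, gens=200, mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = len(tenants)
    population = rng.integers(0, n_pm, size=(pop, n))      # each row: container -> PM assignment
    for _ in range(gens):
        fitness = np.array([-coresidency(ind, tenants, n_pm) for ind in population])
        parents = population[np.argsort(fitness)[::-1][: pop // 2]]   # selection
        children = []
        while len(children) < pop - len(parents):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([p1[:cut], p2[cut:]])               # single-point crossover
            mask = rng.random(n) < mut
            child[mask] = rng.integers(0, n_pm, size=mask.sum())       # mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    scores = [coresidency(ind, tenants, n_pm) for ind in population]
    return population[int(np.argmin(scores))]

tenants = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])   # toy tenants of nine containers
print(ga_placement(tenants, n_pm=3))               # ideally, each tenant isolated on its own PM
```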
### Dataset for Container Security
Chakravarthi et al. [36] used the CloudSim 5.0 environment and collected traffic data. They augmented the data using a generative adversarial network (GAN). The dataset contains TCP traffic. Since the work focuses on anomaly detection in scenarios where VM migration occurs, analyzing the traces of TCP streams is beneficial for detecting volume-based attacks in particular, since these types of attacks tend to consume an excessive amount of bandwidth compared to normal traffic. The collected data contains a wide variety of features, some of which are CPU, network and memory usage. In addition, they gathered information about the status of the network flow. To solve the data imbalance problem, the authors decided to augment the data based on the collected samples. For this task, they selected a GAN [36] rather than restricted Boltzmann machines or variational autoencoders. Creating new samples becomes a challenging task when the data has many variables or features. During the simulation, they introduced Net Scan and DoS attacks, but the proposed work does not include information about the statistical properties of the dataset, such as the percentage of data collected for each metric, which would be useful in understanding the characteristics of the data.
Cui and Umphress [46] came up with an open-source dataset for the observation of system calls. Scripts for data generation are available in their public repository. The work aims to improve upon previous datasets used for detecting anomalies in computer systems by addressing some of their limitations. These include focusing on network traces rather than internal system behaviors, limited scope and coverage in system call-based datasets, and a lack of clear descriptions of benign behaviors and indications of system activity. Additionally, the authors state that, at that time, none of the previous datasets were explicitly designed for containerized systems. In response, the current work has been created using Docker. Brute force login, simple remote shell, malicious Python script, SQL misbehavior, SQL injection, Docker escape and other selected malware types were included in the dataset. Besides, they captured 7,144,780 benign system calls, which constitute the majority of the dataset. As a limitation, the sample application used in this study was a MySQL database; therefore, the scenarios during the experiment may not be identical to all real-world environments.
Tien et al. [50] use three different datasets to train the supervised machine learning model for the security system called KubAnomaly. These datasets are private data, public data called CERT, and real-world experimental data used to evaluate the system's accuracy and performance. First, they used a simple dataset and a complex dataset. These datasets are split into two parts, 80% for training and 20% for testing, and both include normal and abnormal samples. Normal samples include several types of web services run in containers; they used JMeter to simulate user login behavior. The abnormal samples include two types of attacks aimed at compromising the web service: they used OWASP ZAP to simulate a hacker's attempt to attack the container and JMeter to simulate a DoS attack. The complex dataset also contains both normal and abnormal sample types, and additionally includes other hacker tools such as sqlmap. KubAnomaly achieves over 98% accuracy with the simple dataset and 96% accuracy with the complex dataset. CERT contains various types of log data, including email and device data, but it does not include system call log data. This dataset does not have any labeling; therefore, they used feature extraction and unsupervised learning to classify abnormal user behavior. Finally, in order to attract hackers, the authors developed an online web service with vulnerabilities and used KubAnomaly to identify abnormal behaviors and
record the attack events.
In their experiment for the selection of anomaly detection model, Kosinska and Tobiasz [52] used the Numenta Anomaly Benchmark (NAB) dataset to train the models. They chose two different sorts of data streams: the first is artificially generated, and the second contains data presenting CPU utilization collected from AWS Cloudwatch. The results show that statistical models (SARIMA and HMM) achieve higher results on the artificial data while the LSTM and autoencoder perform better on AWS Cloudwatch data.
## 5 Conclusion
In conclusion, this survey has provided a comprehensive overview of container security in 5G environments and the potential of AI-based methods to address the challenges posed by increased connectivity. The integration of AI into security systems has the potential to enhance intrusion and malware detection, anomaly detection, attack detection, and inter-container security within container clusters, making it a powerful tool in the fight against cyberattacks. It is important to note that challenges such as interpretability, explainability, and bias need to be addressed when integrating AI in container security. Nevertheless, the use of AI-based methods for container security in 5G environments has the potential to revolutionize the way we protect and secure our digital assets. Further research and development in this field is needed to fully realize the potential of AI-based approaches for container security in 5G environments. This survey aims to contribute to the growing body of knowledge in this field and provide a valuable resource for researchers, practitioners, and decision-makers working in container security and 5G networks.
## Acknowledgement
This research has been supported by the TUBITAK 3501 Career Development Program under grant number 120E537. However, the entire responsibility of the publication belongs to the owners of the research. The financial support received from TUBITAK does not mean that the content of the publication is approved in a scientific sense by TUBITAK.
|
2307.07016
|
Towards Energy Efficiency in RAN Network Slicing
|
Network slicing is one of the major catalysts to turn future
telecommunication networks into versatile service platforms. Along with its
benefits, network slicing is introducing new challenges in the development of
sustainable network operations. In fact, guaranteeing slices requirements comes
at the cost of additional energy consumption, in comparison to non-sliced
networks. Yet, one of the main goals of operators is to offer the diverse 5G
and beyond services, while ensuring energy efficiency. To this end, we study
the problem of slice activation/deactivation, with the objective of minimizing
energy consumption and maximizing the users quality of service (QoS). To solve
the problem, we rely on two Multi-Armed Bandit (MAB) agents to derive decisions
at individual base stations. Our evaluations are conducted using a real-world
traffic dataset collected over an operational network in a medium size French
city. Numerical results reveal that our proposed solutions provide
approximately 11-14\% energy efficiency improvement compared to a configuration
where all the slice instances are active, while maintaining the same level of
QoS. Moreover, our work explicitly shows the impact of prioritizing the energy
over QoS, and vice versa.
|
Hnin Pann Phyu, Diala Naboulsi, Razvan Stanica, Gwenael Poitau
|
2023-07-13T18:35:23Z
|
http://arxiv.org/abs/2307.07016v1
|
# Towards Energy Efficiency in RAN Network Slicing
###### Abstract
Network slicing is one of the major catalysts to turn future telecommunication networks into versatile service platforms. Along with its benefits, network slicing is introducing new challenges in the development of sustainable network operations. In fact, guaranteeing slices requirements comes at the cost of additional energy consumption, in comparison to non-sliced networks. Yet, one of the main goals of operators is to offer the diverse 5G and beyond services, while ensuring energy efficiency. To this end, we study the problem of slice activation/deactivation, with the objective of minimizing energy consumption and maximizing the users quality of service (QoS). To solve the problem, we rely on two Multi-Armed Bandit (MAB) agents to derive decisions at individual base stations. Our evaluations are conducted using a real-world traffic dataset collected over an operational network in a medium size French city. Numerical results reveal that our proposed solutions provide approximately 11-14% energy efficiency improvement compared to a configuration where all the slice instances are active, while maintaining the same level of QoS. Moreover, our work explicitly shows the impact of prioritizing the energy over QoS, and vice versa.
5G, Network Slicing, Energy Efficiency, QoS +
Footnote †: This work was supported by the National Natural Sciences and Engineering Research Council of Canada (NSERC) through research grant RGPIN-2020-06050 and by the CHIST-ERA ECOMOME project, through the Fonds de Recherche du Quebec – Nature et Technologies (FRQNT).
## I Introduction
The telecommunication industry accounts for approximately 2% of total global carbon emissions [1]. By 2030, 8% of the projected global electricity demand will come from the information and communications technology sector as a whole, even in the best case scenario [2]. Energy consumption will continue increasing in beyond 5G and 6G networks, where computationally intensive services will be largely deployed. Although 5G equipment is more energy efficient than 4G [3], with the data traffic volume increasing tremendously along with 5G services, overall energy consumption will increase too. In fact, the energy consumption of a 5G base station is three times higher than that of a 4G base station, when both are considered at a full load [4].
5G is envisioned to serve a wide variety of services, with heterogeneous traffic, through network slicing [5]. This is done by forming, on one physical network, multiple virtual networks on a per-service basis, i.e., slices. That said, slices requirements need to be met, including performance isolation. Guaranteeing these requirements and the additional virtualization layer come with some overhead, which produces higher energy consumption with respect to non-sliced network deployments [6]. One of the key objectives in the field is to offer this service differentiation, while reducing the associated CO\({}_{2}\) emissions. Indeed, energy efficiency in networks is no longer an option but a necessity. When delving into this topic, we observe that, today, the highest amount of energy is consumed in the radio access network (RAN), approximately 70% of the overall network energy utilization [7].
To deal with this, several research works consider base station sleep schemes to further optimize the energy consumption in 5G networks [8, 9]. While such techniques show effective results, they are more challenging to be applied directly in the case of multi-services network slicing environments. That is mainly because slice instances can exhibit quite different temporal traffic patterns. Completely shutting down or putting the entire base station into sleep mode could notoriously impact the quality of service (QoS) of users in specific slice instances. This motivates us to introduce a new approach, in which slice instances are dynamically activated and deactivated, according to their traffic patterns, thereby enhancing the overall base station energy efficiency. However, deactivating some slice instances to minimize the energy consumption can potentially degrade the QoS of users. On the other hand, activating all slices all the time, so as to maximize QoS, significantly increases energy consumption. Accordingly, the energy minimization objective shall be coupled with a QoS maximization objective [10].
To manage the trade-off between the two objectives, operators may consider using an EcoSlice, which is a slice instance with bare minimum resources and network functions. By that, it incurs much lower energy consumption than typical slice instances. The EcoSlice is up and running \(24/7\) to provide a bare-minimum service. In some conditions, e.g., low traffic demand, operators may switch the users of other slices to this
specific EcoSlice, without a significant QoS impact.
In this regard, we study the problem of slice activation/deactivation, with the objective of minimizing the energy consumption while satisfying the user QoS. The contributions of our work are twofold. First of all, we propose two different approaches for solving the problem, namely a Deep Contextual MAB (DCMAB) algorithm and a Thompson Sampling Contextual (Thompson-C) algorithm. These approaches allow to derive solutions dynamically over time, while considering traffic patterns of individual slice instances deployed at a base station. Moreover, our proposed agents enable operators to navigate users to and from an EcoSlice, if their requested slice instance is activated/deactivated. Second, we evaluate the performance of the proposed approaches and their computational cost using a real-world traffic dataset.
The rest of the paper is organized as follows. Section II discusses the related work of energy efficiency in network slicing. Then, the network model and problem statement are laid out in Section III. In Section IV, we present the detailed design of our proposed solutions. We articulate the results in Section V and conclude the paper in Section VI.
## II Related Work
With the aim of enabling energy efficiency, several research works consider optimizing the allocation of network slice resources (i.e., radio, CPU, transmission bandwidth and power) in the different domains (i.e., RAN, Edge Computing (EC), Core Network (CN) and end-to-end network). At the RAN slicing level, [11] combines deep learning (DL) and reinforcement learning (RL) on a distributed framework to efficiently allocate radio and transmission power resources over base stations. They use stacked and bidirectional Long-Short Term Memory (SBiLSTM) to predict the per slice resources demand on a large time scale and rely on asynchronous advantage actor-critic (A3C) to allocate resources to users on a small time scale. Their proposed framework achieves higher energy efficiency than baselines using static power allocation.
In [12], the authors optimize the energy consumption and computation cost in a network slicing based Cloud-RAN (C-RAN) setting, using a twin-delayed double-Q soft Actor-Critic (TDSAC) approach. Their agent performs the up/down scaling of computing and beamforming power resources. Their work outperforms other baseline RL models in terms of overall network energy and computing cost. Similarly, [6] designs a slice energy consumption model based on the C-RAN architecture. An optimisation problem is solved per-slice, with the objective of minimizing the overall network energy cost, jointly considering communication and computation resources. This approach improves energy efficiency over a baseline focusing only on radio resources.
Focusing on CN slicing, the authors in [13] formulate a security-aware network slicing optimization problem to enhance the energy efficiency of CN nodes. They limit themselves to static resource allocation. Their proposed solution provides more power savings than a greedy approach.
Considering an obvious trade-off, it is sensible to couple QoS maximization and energy consumption minimization. Therefore, focusing on end-to-end network slicing, the authors in [14] aim to maximize the energy efficiency while respecting service level agreement (SLA) constraints. To this end, they rely on statistical federated learning (stFL). Their federated local agents coordinate and predict per slice network metrics, without transferring datasets to a central unit, and largely outperform other federated learning and centralised solutions.
While prior works attempt to achieve energy efficiency as well as ensuring QoS, we believe there is still room to further optimize energy efficiency by switching off some of the underutilized slice instances, under some conditions. To the best of our knowledge, there is no contemporary work studying this problem. In this light, we introduce the RAN slice activation/deactivation problem with the aim of minimizing energy consumption and maximizing QoS. To this end, we rely on fully decentralized state-aware MAB approaches to enable decisions for slice instances at each base station, while considering the impact on energy and QoS factors.
## III System Model and Problem Statement
We study the problem of slice instance activation and deactivation at individual base stations with the aim of minimizing energy consumption and maximizing QoS. We thus explicitly lay out the system model, energy consumption and user QoS model, needed as part of our defined problem. We then formalise the main objective of our problem. We formulate the latter as a Markov decision process (MDP), and re-design it after that as a contextual MAB problem.
### _System Model_
We consider a time-slotted system. Accordingly, we define \(\tau\) as the slice activation/deactivation interval (SADI), where slices are active or inactive continuously over the period of that interval. Accordingly, activation/deactivation decisions are made at the end of every \(\tau\), for the upcoming \(\tau+1\). The period of the SADI is defined based on the operator policies. Besides, we define \(T\) as a time interval of interest, such that \(\tau\in T\). Figure 1 illustrates the time frame consideration in our proposed model. In this example figure, we consider four different types of slices: three of them, denoted as Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communication (URLLC) and Massive Machine-Type Communication (mMTC), provide services with different QoS levels and they can be activated/deactivated as needed, as well as the EcoSlice which is always up and running.
We define \(\mathcal{I}_{b}\) as a set of slice instances attached to base station \(b\in\mathcal{B}\). We define \(\mathcal{U}_{b}^{\tau}\) as the set of users served by base station \(b\) at \(\tau\). Accordingly, \(U_{i,b}^{\tau}\) is the set of users that can be served by slice instance \(i\) of base station \(b\) at \(\tau\). Thus, \(\mathcal{U}_{b}^{\tau}=\bigcup_{i\in\mathcal{I}_{b}}U_{i,b}^{\tau}\). Each slice instance \(i\in\mathcal{I}_{b}\) is characterized by a specific QoS class identifier (QCI) [15] and its energy consumption \(E_{i,b}^{\tau}\).
Without loss of generality, we consider user delay as an indicator of the QoS, but any other metric could be easily
integrated instead. In the optimal slice instance activation scheme, underutilized slice instances are switched off to save energy when certain conditions are met. Consequently, the user-perceived delay is prone to deteriorate. In this light, we deliberately study the impact of optimal slice instance activation on both energy and delay. Hence, we define \(\delta_{i}\) as a predefined achievable delay for a slice instance \(i\). We also consider an EcoSlice instance \(i_{e}\in\mathcal{I}_{b}\), to which users are switched when their requested slice instances are inactive. The EcoSlice instance \(i_{e}\) also has a predefined achievable delay \(\delta_{i_{e}}\) and its energy consumption over \(\tau\) is denoted as \(E^{\tau}_{i_{e},b}\).
At every \(\tau\), each user \(u\in\mathcal{U}^{\tau}_{b}\) makes a specific request, characterized by a delay requirement \(d^{\tau}_{u}\) and a traffic flow demand \(l^{\tau}_{u}\). Each base station \(b\) has a set of possible configurations \(\mathcal{K}_{b}\). Each configuration \(k\in\mathcal{K}_{b}\) implies activation/deactivation decisions for some slices. In detail, \(k\) contains \(\{c^{\tau}_{i}\,|\,i\in\mathcal{I}_{b}\}\). Here, \(c^{\tau}_{i}=1\) if slice instance \(i\) is active at \(b\) during \(\tau\), and 0 otherwise. Needless to say, \(c^{\tau}_{i_{e}}=1\) for any \(\tau\).
### _Energy Consumption Model_
We define the function \(f(.)\) to refer to the overall energy consumption of a base station. This energy model can be further fine-tuned based on specific operator resource management policies and activated RAN energy saving features. It is composed of the energy consumption resulting from its individual deployed slice instances \(E^{\tau}_{i,b}\) and the static energy consumption of the base station \(P^{static}_{b}\) (i.e cooling and circuit power):
\[f_{b}(c^{\tau}_{i},\mathcal{I}_{b})=\sum_{i\in\mathcal{I}_{b}}c^{\tau}_{i} \cdot E^{\tau}_{i,b}+P^{static}_{b} \tag{1}\]
with
\[E^{\tau}_{i,b}=\rho^{\tau}_{i,b}\psi_{i}P^{dynamic}_{b}+\psi_{i}P^{fixed}_{b} \tag{2}\]
As indicated in the above equation, the energy consumption of slice instances for a base station \(b\) encompasses a load-dependent power consumption component, \(P^{dynamic}_{b}\), and a load-independent power consumption component, \(P^{fixed}_{b}\). Specifically, as the name implies \(P^{dynamic}_{b}\) depends on the traffic load of the base station. It is worth stressing that a slice instance consumes some power even at zero load traffic, in order to run its corresponding network functions. Thereupon, \(P^{fixed}_{b}\) is independent of the traffic load, but related to the energy consumption of associated network functions of specific slice instances.
We note here that not all slice instances are designed equally [16]. Their required network functions and signalling traffic are different [17]. For instance, an URLLC service potentially consumes higher energy, because it requires specific network functions to offer high reliability and very low latency [18]. In short, the more stringent the latency requirements of the service, the higher its consumed energy [19]. Regardless of their requirements for energy, both mMTC and eMBB have flexibility in terms of latency. It is fair to say that, even under the same amount of traffic load, the energy consumption of each service is different. Meanwhile, the EcoSlice is deployed with relatively low energy consumption. It is sensible to conclude that different slice types consume different amounts of energy not only because of their traffic portion but also because of their service attributes [20].
In this vein, \(\psi_{i}\) denotes the power consumption impact factor of slice instance \(i\) on both \(P^{dynamic}_{b}\) and \(P^{fixed}_{b}\). We let the operators define the value of \(\psi_{i}\) based on their slice instances configurations and analytics. Having said that, based on the prior explanation, it is pragmatic to assume that URLLC has larger \(\psi\) value than eMBB and mMTC. Needless to say, \(\psi\) value of the EcoSlice is the lowest. Besides, \(\rho^{\tau}_{i,b}\) is the traffic load portion of associated slice instance \(i\) of base station \(b\) during \(\tau\). For this, we can simply obtain \(\rho^{\tau}_{i,b}\) by dividing the traffic load over slice \(l^{\tau}_{i,b}\) by the total base station traffic load, as below:
\[\rho^{\tau}_{i,b}=\frac{l^{\tau}_{i,b}}{\sum_{i\in I_{b}}l^{\tau}_{i,b}} \tag{3}\]
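For illustration, the short sketch below evaluates Equations (1)-(3) for a given activation vector; the numeric values in the example are placeholders in the spirit of Table I, with the URLLC-like slice given the largest \(\psi\) and the EcoSlice the smallest.

```python
def slice_energy(rho, psi, p_dynamic, p_fixed):
    """Eq. (2): energy of one slice instance over a SADI."""
    return rho * psi * p_dynamic + psi * p_fixed

def base_station_energy(active, loads, psi, p_dynamic, p_fixed, p_static):
    """Eqs. (1) and (3): total base-station energy for a given activation vector."""
    total_load = sum(loads.values())
    energy = p_static
    for i, c_i in active.items():
        rho = loads[i] / total_load if total_load > 0 else 0.0   # Eq. (3)
        energy += c_i * slice_energy(rho, psi[i], p_dynamic, p_fixed)
    return energy

# illustrative values only
psi = {"eMBB": 1.2, "URLLC": 1.6, "mMTC": 1.1, "Eco": 1.0}
loads = {"eMBB": 30.0, "URLLC": 5.0, "mMTC": 10.0, "Eco": 1.0}
active = {"eMBB": 1, "URLLC": 1, "mMTC": 0, "Eco": 1}
print(base_station_energy(active, loads, psi, p_dynamic=742, p_fixed=139, p_static=18))
```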
### _User QoS Model_
As explained, our objective is coupled with ensuring the user QoS. In this regard, we define the user satisfaction factor \(\eta^{\tau}_{u}\) for each user \(u\) at \(\tau\) associated to slice instance \(i\) as follows:
\[\eta^{\tau}_{u}=\begin{cases}1&\text{if }\delta_{i}\leq d^{\tau}_{u}\\ 0&\text{otherwise}\end{cases},u\in U^{\tau}_{i,b},i\in\mathcal{I}_{b},\tau\in T \tag{4}\]
where \(d^{\tau}_{u}\) is the delay requirement of user \(u\) at time \(\tau\). Consequently, the average per slice QoS at base station \(b\) is defined as:
\[\eta^{\tau}_{i,b}=\frac{\sum_{u\in U^{\tau}_{i,b}}\eta^{\tau}_{u}}{|U^{\tau}_ {i,b}|} \tag{5}\]
We then compute the average QoS of the base station \(b\) considering all the associated slice instances during \(\tau\):
\[\eta^{\tau}_{b}=\frac{\sum_{i\in\mathcal{I}_{b}}\eta^{\tau}_{i,b}}{|\mathcal{I }_{b}|} \tag{6}\]
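Similarly, the user satisfaction and per-base-station QoS of Equations (4)-(6) can be computed directly, as in the sketch below; the slice delays and user requirements in the example are arbitrary.

```python
def slice_qos(delta_i, user_delay_reqs):
    """Eqs. (4)-(5): fraction of a slice's users whose delay requirement is met."""
    if not user_delay_reqs:
        return 1.0
    return sum(1 for d in user_delay_reqs if delta_i <= d) / len(user_delay_reqs)

def base_station_qos(achievable_delay, delay_reqs):
    """Eq. (6): average QoS over all slice instances attached to the base station."""
    per_slice = [slice_qos(achievable_delay[i], delay_reqs[i]) for i in achievable_delay]
    return sum(per_slice) / len(per_slice)

# arbitrary example: the second slice misses one user's requirement
achievable_delay = {"slice_a": 10, "slice_b": 15}            # delta_i in ms
delay_reqs = {"slice_a": [12, 18, 11], "slice_b": [20, 14]}  # d_u^tau in ms
print(base_station_qos(achievable_delay, delay_reqs))        # 0.75
```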
### _Objective Function_
Given the system model and utility models mentioned in the preceding sections, the objective of our slice activation/deactivation problem can be expressed as:
\[max\ \sum_{\tau=1}^{T}\left[\frac{1}{f_{b}(c^{\tau}_{i},\mathcal{I}_{b})}+\eta^{ \tau}_{b}\right] \tag{7}\]
As one can see in Equation 7, the objective function is influenced by the time-varying user demand and the active slice instances. Thus, we believe that RL-based approaches are best-suited for this problem, as they enable complex decision-making without requiring an explicit modeling of the network environment [21]. In what follows, we formulate the problem as MDP and contextual MAB.

Fig. 1: Slice Activation-Deactivation Interval (SADI) illustration.
### _Markov Decision Process (MDP)_
An MDP is defined by a tuple \((\mathcal{S},\mathcal{A},P,\mathcal{R})\). \(\mathcal{S}\) denotes the set of states in the system. Representing states as feature vectors helps the RL agent converge to a near-optimal reward value, but one can also simplify a problem with a less complex state representation (which might converge to a similar reward, with less computation). In this light, we explore two definitions for a state \(s\) in this problem: _i)_ energy consumption and QoS of the base station in the previous SADI: \(s=\left\{f_{b}(c_{i}^{\tau-1},\mathcal{I}_{b}),\eta_{b}^{\tau-1}\right\}\) and _ii)_ simply the SADI identifier: \(s=\left\{\tau\right\}\). The reason for the latter definition is that, since the user demand depends on time and it shows a significant periodicity, a simple state representation based only on the SADI might already contain enough information for the RL agent. Each action \(a\in\mathcal{A}\) is a configuration \(k\), as defined in Section III-A, and \(\mathcal{A}\) is the set of possible configurations: \(\mathcal{A}=\mathcal{K}_{b}\).
In the MDP model, \(P:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) captures the stochastic transition probability function to transition to \(s^{\prime}\) from state \(s\) based on action \(a\), with \(\sum_{s^{\prime}\in\mathcal{S}}P(s^{\prime}|s,a)=1\) for all \(s\in\mathcal{S}\) and \(a\in\mathcal{A}\). \(P\) is unknown to an RL agent. However, it occurs that if an agent knows the current state and the reward obtained in each iteration, it can still converge to the optimal solutions through RL approaches [22]. Accordingly, the formulation of reward function is critical for the RL agent to be able to learn the optimal policy and it usually boils down to the main objectives of the problem. We therefore define reward \(r\in\mathcal{R}\) as:
\[r(s,a,s^{\prime})=\frac{1}{f_{b}(c_{i}^{r},\mathcal{I}_{b})}+\beta\cdot\eta_{ b}^{\tau} \tag{8}\]
As shown in Equation 8, our reward function is aligned with our objective function. Besides, we define \(\beta\) as a QoS impacting factor on the reward. In short, the larger the \(\beta\) value is, the more the QoS is emphasized with respect to the energy consumption of the base station. If \(\beta=1\), the objective is to find the trade-off between energy and QoS.
Due to the nature of our defined problem, we observe two key points in our MDP representation: _i)_ the transition probability of MDP can be simplified, such as \(P(s^{\prime}|s,a)\equiv P(s^{\prime}|a)\), where states are independent of each other, and _ii)_ unlike typical MDP [12], our reward function depends only on current state and action, but not on the successor state. With this, the reward function could be simplified as \(r(s,a,s^{\prime})\equiv r(s,a)\). Based on these observations, we present an equivalent formulation as a state-aware MAB in the following.
### _Multi-armed Bandit (MAB)_
In this section, we formulate our slice instance activation/deactivation problem as a state-aware MAB. Formally, state-aware MAB is tupled with \((\mathcal{S},\mathcal{A},\mathcal{R})\). Same as in our MDP model, we use the two different state definitions for \(s\in\mathcal{S}\): energy consumption and QoS observed over a SADI, or simply the SADI identifier. Similarly, the set of actions \(a\in\mathcal{A}\) is the set of available configurations.
We then define the associated reward set \(\mathcal{R}\). Since we consider the state-aware MAB (i.e., involving multiple states), our reward distribution is non-stationary, and changes based on the state \(s\) (also called context in the following). With this, the reward set can be defined as \(\mathcal{R}=\left\{r(s,a)|\ a\in\mathcal{A},s\in\mathcal{S}\right\}\). In this regard, we rely on the same reward calculation as Equation 8. Needless to say, the objective here is to maximize the expected reward \(E\left[\sum r(s,a)\right]\).
To evaluate our reward function, one standard approach is to compete with the best-action benchmark. On the other hand, we compute the regret resulting from not selecting the optimal action at each iteration. That said, one would define the cumulative regret incurred by an agent over a total of \(J\) time steps as [23]:
\[Regret(J)=\sum_{j=1}^{J}(r_{j}^{*}(s,a)-r_{j}(s,a_{j})) \tag{9}\]
where \(r_{j}^{*}(s,a)\) is the best-action benchmark at round \(j\) and can be obtained via \(r_{j}^{*}(s,a)=\underset{a}{max}\ r_{j}(s,a)\), and \(a_{j}\) is the action selected by the agent in round \(j\). It is worth mentioning that the regret function is non-negative, as it compares the optimal reward to the actual reward obtained by the agent.
## IV Proposed Solution
In this section, we lay out our two fully decentralized approaches, based on DCMAB and Thompson-C agents.
### _DCMAB Agent_
As explained, we use two different context/state definitions and thus we have two types of DCMAB agents: _i)_ DCMAB-EQ, where the state is the overall energy and QoS over the base station: \(s=\left\{f_{b}(c_{i}^{\tau-1},\mathcal{I}_{b}),\eta_{b}^{\tau-1}\right\}\), and _ii)_ DCMAB-SADI, where the state is the SADI: \(s=\left\{\tau\right\}\). Due to space limitation, we outline them together in Algorithm 1. However, we make sure the main differences (occurring at Line 3 and Line 11 of Algorithm 1) are clearly outlined.
The inputs of the DCMAB agent consist of the probability of selecting a random action \(\epsilon\), the learning rate \(\alpha\) for the deep neural network (DNN) model, and maximum time steps \(J\) to train the DCMAB agent. The output is a trained model, which can predict a reward distribution \(\widehat{R}(\tilde{w})\) for the available actions of the associated states. With this, the algorithm begins by the initialization of the weights \(\tilde{w}\) with arbitrary values and the variable \(j\) referring to an iteration and starting with zero (Line 1). At each step, the agent observes a context \(s\): for DCMAB-EQ, \(s\) is the overall energy consumption and QoS factor and for DCMAB-SADI, \(s\) is a SADI identifier (Line 3). Then, it predicts the reward distribution for all the actions (Line 4) for a given context. After that, an action is selected by considering the exploration and exploitation tradeoffs (Line 5 - Line 9). Precisely, the random action is selected with probability \(\epsilon\), and otherwise the action giving the maximum reward is selected. Next, the chosen action is
applied to the defined network environment, which, after the potential reconfiguration, returns an actual reward (calculated using Equation 8) (Line 10). Accordingly, the new state \(s^{\prime}\) is obtained based on the action \(a\) for DCMAB-EQ, or new state \(s^{\prime}\) is simply the next SADI (which is independent of the previous action \(a\)) for DCMAB-SADI (Line 11). Then, the loss between the predicted reward and the actual reward is computed for the purpose of model training (Line 12). We rely on a gradient descent method to update the weights matrix of the DNN with a learning rate \(\alpha\) (Line 13). Then, we go for another iteration (Line 14) and the above process is repeated for a maximum number of time steps \(J\) (Line 15).
```
Input: Probability of selecting a random action \(\epsilon\), learning rate \(\alpha\), maximum time steps \(J\)
Output: \(\widehat{R}(\tilde{w})\)
 1  Initialize \(\tilde{w}\) randomly and \(j=0\)
 2  repeat
 3      Observe context/state \(s\): \(s=\left\{f_{b}(c_{i}^{\tau-1},\mathcal{I}_{b}),\eta_{b}^{\tau-1}\right\}\) for DCMAB-EQ or \(s=\left\{\tau\right\}\) for DCMAB-SADI
 4      Predict the reward distribution for each action: \(\left[\widehat{R}_{\tilde{w}}(a|s)\right]_{a\in\mathcal{A}}\)
 5      if \(rand()<\epsilon\) then
 6          Select a random action \(a\)
 7      else
 8          Select \(a=\underset{a\in\mathcal{A}}{argmax}\left[\widehat{R}_{\tilde{w}}(a|s)\right]\)
 9      end if
10      Evaluate reward \(r(a|s)\)
11      Update new state \(s^{\prime}\)
12      Calculate the loss: \(\mathcal{L}(\tilde{w})\triangleq(r(a|s)-\left[\widehat{R}_{\tilde{w}}(a|s)\right]_{a})^{2}\)
13      Update the weights: \(\tilde{w}\leftarrow\tilde{w}-\alpha\nabla\mathcal{L}(\tilde{w})\)
14      \(j\leftarrow j+1\)
15  until \(j>J\)
```
**Algorithm 1** DCMAB-EQ and DCMAB-SADI
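As a complement to the pseudocode, a minimal PyTorch-style sketch of the \(\epsilon\)-greedy loop of Algorithm 1 is given below. Here `model`, `optimizer` and `loss_fn` stand for the reward-prediction DNN, its gradient-descent optimizer and the loss of Line 12, and `env` is a hypothetical environment whose `reset()`/`step()` interface applies an action and returns the next state together with the reward of Equation 8; these names and the environment API are illustrative assumptions, not part of Algorithm 1.

```python
import random
import torch

def train_dcmab(model, optimizer, loss_fn, env, epsilon=0.1, max_steps=1000):
    state = env.reset()
    for j in range(max_steps):
        s = torch.as_tensor(state, dtype=torch.float32)
        predicted = model(s)                          # reward distribution over actions
        if random.random() < epsilon:                 # exploration
            action = random.randrange(predicted.numel())
        else:                                         # exploitation
            action = int(torch.argmax(predicted))
        next_state, reward = env.step(action)         # apply (de)activation, observe reward
        loss = loss_fn(predicted[action], torch.tensor(float(reward)))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                              # gradient-descent weight update
        state = next_state
    return model
```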
### _Thompson-C Agent_
Unlike DCMAB, the Thompson-C agent adopts a statistical approach with the goal of achieving a proper estimation of the posterior distribution of the expected reward for each action. The Thompson-C agent (Algorithm 2) operates as follows. We note that we only consider the SADI identifier as context/state for the Thompson-C agent. Accordingly, the inputs of the algorithm are the context data for all the actions: \(\mathbb{C}=(d_{k})_{|T|\times|\mathcal{K}_{k}|}\), where \(d_{k}\) is a context vector for configuration \(k\), the number of dimensions of the context vector \(z=|T|\), the parameters \(\varphi\) and \(M\) (which can be tuned by the operator as needed) and the maximum number of time steps \(J\) to run Thompson-C. The output is a posterior distribution \(\mathcal{N}\left(\hat{\mu},\sigma^{2}D^{-1}\right)\) of having the optimal parameter \(\hat{\mu}\). For better understanding, \(\hat{\mu}\) can be regarded as a weight vector for a z-dimensional context/state. The parameter \(\sigma\) can be obtained via \(\sigma=M\sqrt{9z\ln(\frac{J}{\varphi})}\).
The algorithm begins by setting the parameter \(D\) to the z-dimensional identity matrix, initializing \(\hat{\mu}\) as a z-dimensional vector of zeros, and setting \(j=0\) (Line 1). In each time step \(j\) (Line 2), the Thompson-C agent samples a parameter \(\tilde{\mu}\) from the posterior distribution \(\mathcal{N}\left(\hat{\mu},\sigma^{2}D^{-1}\right)\) (Line 3). It then selects the action that yields the best sample (Line 4) and observes the associated reward (Line 5). Finally, the parameters \(D\) and \(\hat{\mu}\) are updated (Line 6 and Line 7). Then, we go for another iteration (Line 8). The above process is repeated for a maximum number of time steps \(J\) (Line 9).
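For concreteness, the following is a minimal NumPy sketch of a linear Thompson Sampling loop of the kind behind Thompson-C. The posterior update used here (\(D\leftarrow D+d_{k}d_{k}^{T}\), \(\hat{\mu}=D^{-1}f\)) is the standard one and is an assumption on our part, since the exact update rules of Algorithm 2 are not reproduced above; the `observe_reward` callback is a hypothetical stand-in for applying a configuration to the network and reading back the reward.

```python
import numpy as np

def thompson_c(contexts, observe_reward, J=1000, M=0.01, varphi=0.5):
    """contexts[k] is the z-dimensional context vector d_k of configuration k."""
    z = contexts.shape[1]
    sigma = M * np.sqrt(9 * z * np.log(J / varphi))
    D = np.eye(z)                     # posterior precision
    f = np.zeros(z)
    mu_hat = np.zeros(z)
    for j in range(J):
        cov = sigma**2 * np.linalg.inv(D)
        mu_tilde = np.random.multivariate_normal(mu_hat, cov)   # sample parameter
        k = int(np.argmax(contexts @ mu_tilde))                  # action with best sample
        r = observe_reward(k)                                    # reward from the network
        D += np.outer(contexts[k], contexts[k])                  # posterior update
        f += r * contexts[k]
        mu_hat = np.linalg.solve(D, f)
    return mu_hat
```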
## V Evaluation
In this section, we evaluate the performance of our proposed solutions. We start with a description of the dataset and simulation environment. Afterwards, we explain the benchmark approaches and implementations that we use. Finally, we discuss the overall results.
\begin{table}
\begin{tabular}{|l|c|} \hline Parameter & Value \\ \hline \(P_{b}^{static}\)[24] & 18 Watts \\ \hline \(P_{b}^{fixed}\)[25] & 139 Watts \\ \hline \(P_{b}^{dynamic}\)[25] & 742 Watts \\ \hline \(\psi_{i}\) [Facebook, YouTube, Google, EcoSlice] & [1.2, 1.6, 1.4, 1] \\ \hline \(\delta_{i}\) [Facebook, YouTube, Google, EcoSlice] & [10, 1, 15, 11] ms \\ \hline Number of users per slice & [11-30] \\ \hline \multirow{3}{*}{Users delay requirement \(d_{u}^{r}\)} & Facebook: [11-20]ms \\ & YouTube: [6-17]ms \\ & Google: [16-25]ms \\ \hline Loss function & MSE \\ \hline Learning rate \(\alpha\) & 0.001 \\ \hline \(\beta\) & [5,1,0.8] \\ \hline Maximum episodes \(J\) & 1000 \\ \hline \(\varphi\) & 0.5 \\ \hline \(M\) & 0.01 \\ \hline \end{tabular}
\end{table} TABLE I: LIST OF PARAMETERS
### _Dataset and Simulation Setup_
We evaluate the proposed solutions using a real-world dataset collected from the Orange 4G network, in Poitiers, France. The dataset includes mobile data traffic demand of different mobile applications at a base station level. We assume slice instances are deployed on an application-basis, i.e., one application maps to one slice instance. More precisely, Facebook, YouTube, and Google have been considered as three different types of slice instances attached to each base station. It is worth mentioning that those three different applications exhibit very different traffic demands, which we consider appropriate for the simulation of network slicing [26]. On the user side, we assume stochastic delay requirements of users for each application, as indicated in Table I.
The granularity of the dataset is 10 minutes, over 10 days in May 2019. Accordingly, for our simulation purposes, we consider \(T\) to be 10 days and the SADI \(\tau=10\) minutes, so that \(T\) includes 1440 SADIs. We analyse 10 base stations from the Poitiers city center, to which the different slice instances are associated. We then apply our proposed solutions using an action set where each action implies the activation/deactivation of one of the slices: [Facebook, YouTube, Google, EcoSlice].
### _Benchmarks and Implementation Setup_
We compare the performance of our proposed DCMAB and Thompson-C solutions with three counterparts: Thompson Sampling Non-Contextual (Thompson-NC), AllActive and Random. Unlike Thompson-C, no state/context information is considered in Thompson-NC [27]. For AllActive, as the name implies, all the slice instances are always active, while the Random approach selects a random action at each iteration.
For the implementation, we rely on the PyTorch framework for the DCMAB agents. All the models (i.e. DCMAB, Thompson-C, Thompson-NC, AllActive and Random) are implemented in a Python environment and trained on the high-performance Linux server provided by the Digital Research Alliance of Canada. The DNN of a DCMAB agent is composed of three fully-connected layers of 100 neurons, each followed by a ReLU activation function. The detailed parameters are summarized in Table I.
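A minimal PyTorch sketch of this reward-prediction DNN is shown below; the input and output dimensions (two state features for DCMAB-EQ and four slice actions) and the use of plain SGD for the gradient-descent update are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Three fully-connected layers of 100 neurons, each followed by ReLU."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 100), nn.ReLU(),
            nn.Linear(100, 100), nn.ReLU(),
            nn.Linear(100, 100), nn.ReLU(),
            nn.Linear(100, n_actions),          # one predicted reward per action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

model = RewardNet(state_dim=2, n_actions=4)     # e.g. DCMAB-EQ: (energy, QoS) state
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)   # learning rate from Table I
loss_fn = nn.MSELoss()                                      # loss function from Table I
```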
### _Results_
#### Iv-C1 Overall Agents Performances
First of all, to fully comprehend the performance of our agents, we study the trends of reward, regret, QoS and energy for \(\beta=[5,1,0.8]\). We note that agents focus on QoS when \(\beta=5\), search for a trade-off when \(\beta=1\), and emphasize energy when \(\beta=0.8\). With this, we compare the reward trends of our proposed solutions and their peers in Figure 2. The curves are smoothed by averaging within a rolling window of 50 iterations.
In Figure 1(a), the DCMAB-EQ and Thompson-C agents exhibit the best reward, significantly better than AllActive, followed by DCMAB-SADI and Thompson-NC. The Random approach trails our proposed solutions in every scenario. Notably, AllActive shows inferior performance to that of the other agents in general. We observe the same behavior in the regret trends of all the agents in Figure 1(b).
#### Iv-C2 Roles of Agents in Energy and QoS
We then explicitly verify whether our proposed solutions are suitable for energy optimization in RAN slicing by comparing them with the baselines. As depicted in Figure 3, compared with the AllActive strategy (currently the standard approach), the energy improvement of Thompson-C is approximately 24%, 18% and 14%, respectively, for \(\beta\) equal to 0.8, 1 and 5. As expected, the energy gain deteriorates as the \(\beta\) value increases; this trend holds for all the agents (except Random). We also observe substantial energy gains over AllActive for DCMAB-EQ, DCMAB-SADI, Thompson-NC and even Random. Overall, the Thompson-C agent is clearly superior to the others in terms of energy gain, while Thompson-NC has slightly lower energy gains than the DCMAB agents for all \(\beta\) values.
Optimizing energy consumption inevitably compromises QoS to some extent. In this light, we visualize the QoS of all the agents in Figure 4. Thompson-C exhibits a QoS comparable to its peers, but slightly lower when \(\beta=0.8\); this is linked to Thompson-C showing the highest energy gain at \(\beta=0.8\). It is observed in Figure 4 that our proposed solutions satisfy almost 100% QoS at \(\beta=5\) (slightly lower for DCMAB-SADI). It is thus at \(\beta=5\) that our agents stand out, as they deliver the same QoS as AllActive while providing a significant improvement in energy efficiency. Although it shows acceptable performance, the Thompson-NC agent performs below our proposed solutions, supporting our modelling choices. We stress that: _i)_ state-aware agents outperform the agent in which context/state is not considered, and _ii)_ the DNN approach is consistently surpassed by the statistical approach.
#### Iv-C3 Impact of EcoSlice
To grasp the benefits of EcoSlice, we examine in Figure 5 the performance of the Thompson-C agent with and without an EcoSlice, in terms of reward, regret, QoS and energy utilization. We select the Thompson-C algorithm here as it shows the best results in most of the scenarios compared to its peers. As we can observe, Thompson-C demonstrates better performance under the different metrics when compared to Thompson-C (w/o EcoSlice). Therefore, an EcoSlice significantly enhances the overall energy efficiency of the network by allowing operators to switch off underutilized slice instances while still ensuring QoS. Without the assistance of an EcoSlice, one cannot reach the level of energy efficiency that we have accomplished.
#### Iv-C4 Computing Time Comparisons
The detailed computing time comparisons conducted on the Digital Research Alliance of Canada servers are shown in Figure 6. Paying the price for its performance, Thompson-C requires a much longer computing time than all the other agents. Although Thompson-C is a better agent than DCMAB-EQ in most respects, it is not a good option for real-time decision making, at least not with the 10-minute SADI that we consider. On the other hand, the DCMAB agents, which also outperformed the baselines in terms of energy efficiency, are on par with Thompson-NC in terms of computing power. There is no single answer here, and Thompson-C can be a favourable solution for a system without computing time constraints. In any case, all the solutions we propose in this work offer avenues for operators to optimize energy efficiency through slice activation/deactivation while controlling the impact on QoS. Operators can also choose between different design choices by varying \(\beta\), based on their targets and limitations.
#### Iv-C5 MSE of DCMAB
Last but not least, we also compare the MSE of the two DNN-based solutions, DCMAB-EQ and DCMAB-SADI, in Figure 7. As we can see, DCMAB-EQ has a stable training process with lower MSE than DCMAB-SADI. This explains the superior performance of DCMAB-EQ in the prior results, as DCMAB-EQ predicts better reward distribution of associated actions. In Figure 7, MSE results are shown for \(\beta=5\) only, but we noticed similar behaviour for other tested values.
## VI Conclusion
In this paper, we focus on the slice activation/deactivation problem, to further enhance the energy efficiency in RAN slicing. To this end, we advocate the state-aware MAB approaches (i.e., DCMAB and Thompson-C), where an agent attempts to
Fig. 4: QoS based on different \(\beta\) values
Fig. 3: Energy improvement over AllActive
Fig. 2: Reward and regret obtained for different \(\beta\) values.
activate the optimal slice instances while providing guaranteed QoS. In particular, we investigate the important compromise between energy consumption and user QoS. The results are derived from a real-world dataset and demonstrate that the MAB approach in general, and DCMAB and Thompson-C in particular, are appropriate for the slice activation/deactivation problem. They significantly alleviate the energy consumption at the base station level while ensuring a satisfactory QoS level.
|
2301.04849
|
Exit options sustain altruistic punishment and decrease the second-order
free-riders, but it is not a panacea
|
Altruistic punishment, where individuals incur personal costs to punish
others who have harmed third parties, presents an evolutionary conundrum as it
undermines individual fitness. Resolving this puzzle is crucial for
understanding the emergence and maintenance of human cooperation. This study
investigates the role of an alternative strategy, the exit option, in
explaining altruistic punishment. We analyze a two-stage prisoner's dilemma
game in well-mixed and networked populations, considering both finite and
infinite scenarios. Our findings reveal that the exit option does not
significantly enhance altruistic punishment in well-mixed populations. However,
in networked populations, the exit option enables the existence of altruistic
punishment and gives rise to complex dynamics, including cyclic dominance and
bi-stable states. This research contributes to our understanding of costly
punishment and sheds light on the effectiveness of different voluntary
participation strategies in addressing the conundrum of punishment.
|
Chen Shen, Zhao Song, Lei Shi, Jun Tanimoto, Zhen Wang
|
2023-01-12T07:24:05Z
|
http://arxiv.org/abs/2301.04849v2
|
Exit options sustain altruistic punishment and decrease the second-order free-riders, but it is not a panacea
###### Abstract
The emergence and maintenance of altruistic punishment remains an open question, and this conundrum is shared across diverse fields. In this study, we evaluated the evolution of altruistic punishment in a two-stage prisoner's dilemma game in which cooperators and defectors interact with two other actors called altruistic punishers and exiters. Traditional cooperators and defectors choose, in the first stage, to cooperate and defect with their opponent, respectively, but neither punishes in the second stage; altruistic punishers cooperate in the first stage and punish defectors in the second stage; and exiters simply exit the game in favor of a small payoff. We found that exiters did not provide any substantial assistance to altruistic punishment in well-mixed populations: they destabilize defection and eventually replace the defectors. In finite populations, although the exit option enables the coexistence of altruistic punishers, defectors, and exiters through cyclic dominance, altruistic punishers never dominate the population, and the exit option instead provides an alternative cyclic-dominance route for the emergence of non-punishing cooperators. In networked populations, however, adding the exit option allows for the establishment of altruistic punishment and enables the coexistence of altruistic punishers, defectors, and exiters through cyclic dominance. This type of cyclic dominance is not always stable: with adjustments to the exit payoff, it is replaced by the cyclic dominance of non-punishing cooperators, defectors, and exiters, or by a bi-stable state between these two types of cyclic dominance. Our results indicate that although the exit option can help explain altruistic punishment, it is certainly not a panacea.
Evolutionary game theory; Cooperation; Coexistence; Cyclic dominance; Bi-stable
## I Introduction
Costly punishment is ubiquitous in many animal species including humans [1; 2; 3]. Unlike other animals, humans often show altruistic traits, i.e., humans punish other individuals who have harmed others even at the expense of their own interest [3; 4]; however, the emergence and maintenance of altruistic punishment is an evolutionary conundrum, as costly punishment is unlikely to evolve according to natural selection. Costly punishment reduces the payoff for both the punisher and the punished. If it is the fittest who survive, the second-order free riders that cooperate but do not punish are better off than punishers, and defectors should eventually take over the whole population. Therefore, the understanding of whether and how costly punishment can evolve is a crucial issue in the study of human cooperation. Fehr and Gachter pointed out that the evolutionary study of human cooperation in large groups of unrelated individuals should include a focus on explaining altruistic punishment [4]. In addition, they argued that negative emotions may be a potential explanation for the emergence of costly punishment.
To resolve this evolutionary puzzle, many scholars have explored how and why costly punishment can emerge in humans both from a theoretical and experimental perspective. Egas Martijn and Riedl Arno experimentally explored the boundary conditions that altruistic punishment can promote cooperation. They found that the maintenance of cooperation is subject to the cost-to-effect ratio of altruistic punishment, and cooperation is maintained if the conditions for altruistic punishment are relatively favorable [5]. It has been well established that voluntary participation plays a vital role in sustaining the prevalence of costly punishment both in finite and infinite populations [6; 7; 8; 9; 10; 11]. The main idea behind established altruistic punishment is that a loner itself is sufficient to maintain cooperation through cyclic dominance even in a one-shot game. Other reciprocity mechanisms including indirect reciprocity [12; 13; 14; 15; 16], group selection [17; 18; 19], spatial interaction [20; 21; 22; 23], prior commitment [24; 25; 26; 27], and so on [28], that can explain the emergence of cooperation have been applied to explain costly punishment, and its effect on costly punishment has previously been widely explored.
To avoid exploitation by defectors, exiters simply exit the game in favor of a small-but-positive payoff and generate nothing for their opponent, whereas loners receive a small-but-positive payoff by opting out but generate the same payoff for their opponent. Although these
|
2304.00229
|
Identifying the Gamma-ray Emission of the Nearby Galaxy M83
|
We report on the detection of a gamma-ray source at the position of the
nearby star-forming galaxy (SFG) M83, which is found from our analysis of 14
years of the data obtained with the Large Area Telescope (LAT) on-board {\it
Fermi Gamma-ray Space Telescope (Fermi)}. The source is weakly detected, with a
significance of $\sim 5\sigma$, and its emission can be described with an
exponentially cutoff power law. At a distance of 4.61\,Mpc, the source's
gamma-ray luminosity is $\sim 1.4\times 10^{39}$\,erg\,s$^{-1}$, roughly along
the correlation line between the gamma-ray and IR luminosities determined for nearby
SFGs. Because of the weak detection, the source spectrum can not be used for
checking its similarity with those of other SFGs. Given the positional matches
and the empirical expectation for gamma-ray emission from M83 due to the
galaxy's star-forming activity, we conclude that the gamma-ray source is the
likely counterpart to M83. The detection thus adds another member to the group
of approximately a dozen SFGs, whose gamma-ray emissions mostly have a cosmic-ray
origin.
|
Yi Xing, Zhongxiang Wang
|
2023-04-01T05:14:53Z
|
http://arxiv.org/abs/2304.00229v2
|
# Identifying the Gamma-ray Emission of the Nearby Galaxy M83
###### Abstract
We report on the detection of a \(\gamma\)-ray source at the position of the nearby star-forming galaxy (SFG) M83, which is found from our analysis of 14 years of the data obtained with the Large Area Telescope (LAT) on-board _Fermi Gamma-ray Space Telescope (Fermi)_. The source is weakly detected, with a significance of \(\sim 5\sigma\), and its emission can be described with an exponentially cutoff power law. At a distance of 4.61 Mpc, the source's \(\gamma\)-ray luminosity is \(\sim 1.4\times 10^{39}\) erg s\({}^{-1}\), roughly along the correlation line between the \(\gamma\)-ray and IR luminosities determined for nearby SFGs. Because of the weak detection, the source spectrum cannot be used for checking its similarity with those of other SFGs. Given the positional matches and the empirical expectation for \(\gamma\)-ray emission from M83 due to the galaxy's star-forming activity, we conclude that the \(\gamma\)-ray source is the likely counterpart to M83. The detection thus adds another member to the group of approximately a dozen SFGs, whose \(\gamma\)-ray emissions mostly have a cosmic-ray origin.
Gamma-ray sources (633); Starburst galaxies (1570)
## 1 Introduction
Among more than 6000 \(\gamma\)-ray sources detected with the Large Area Telescope (LAT) on board _the Fermi Gamma-ray Space Telescope (Fermi)_ over the whole sky, the dominant class is active galactic nuclei (AGN; Abdollahi et al., 2022), whose high-energy emission is mostly radiated from their jets. Non-active galaxies thus do not show such emission. However, approximately a dozen galaxies, either within the Local Group or nearby, have been detected at \(\gamma\)-rays (Ajello et al., 2020; Xi et al., 2020). While there are complications in the production of the \(\gamma\)-ray emissions observed from these galaxies, for example AGN possibly hiding in some of them (Peng et al., 2019) and the \(\gamma\)-ray emission of the local-group galaxy M31 being considered to consist of different components (Li et al., 2016; Pshirkov et al., 2016; Ackermann et al., 2017; Karwin et al., 2019; Zimmer et al., 2022; Xing et al., 2023), the \(\gamma\)-ray luminosities of most of them correlate well with infrared (IR) or radio 1.4 GHz luminosities (Abdo et al., 2010; Ackermann et al., 2012; Ajello et al., 2020; Xi et al., 2020). This correlation is considered an indicator of the cosmic-ray (CR) origin of the \(\gamma\)-ray emissions. Supernova remnants (SNRs) produce CRs as their shock fronts serve as particle accelerators (e.g., Bykov et al., 2018), and the accelerated particles emit at radio frequencies through the synchrotron process and at high energies through proton-proton collisions and/or leptonic processes (i.e., bremsstrahlung or inverse Compton scattering; e.g., Dermer, 1986). On the other hand, SNRs are the result of massive stars (\(M\gtrsim 8\,M_{\odot}\)) evolving to the end of their lives on relatively short, \(\sim 10^{7}\) yr timescales, and their densities are thus closely related to the star formation of a galaxy. Given that the IR luminosities are an indicator of the star-formation rates of a galaxy, a correlation between them and the corresponding \(\gamma\)-ray luminosities is naturally expected for star-forming galaxies (see, e.g., Domingo-Santamaria and Torres, 2005; Lacki et al., 2010; Ackermann et al., 2012, and references therein).
Along this expected correlation, efforts have been made to detect nearby star-forming galaxies at \(\gamma\)-rays. Thus far, approximately a dozen of them have been reported with the detection (see, e.g., Ajello et al., 2020; Xi et al., 2020, and references therein). Here we report on the likely detection of another one, the nearby galaxy M83 (also known as NGC 5236 or the Southern Pinwheel).
M83 is often referred to as a grand-design spiral galaxy. It is nearly face-on to us (\(i\sim 25^{\circ}\), Sofue et al., 1999), at a distance of 4.61 Mpc (Saha et al., 2006). There have been extensive studies of the galaxy
over the whole wavelength range. Its star formation is relatively active, having a total star-formation rate of \(5\,M_{\odot}\,\mathrm{yr}^{-1}\)(Kennicutt, 1998). Thus it was listed in the star-forming galaxy sample selected by Ackermann et al. (2012), to be searched for CR-induced \(\gamma\)-ray emission. A flux upper limit was reported in Ackermann et al. (2012), while only 3 years of the _Fermi_-LAT data were used.
Now with 14 years of the data having been collected with _Fermi_-LAT, we conducted a search for M83's \(\gamma\)-ray emission. We found a likely counterpart and report the results. Below we describe the details of our analysis and provide the corresponding results in Section 2. The results are discussed in Section 3.
## 2 Analysis and Results
### Fermi-LAT Data and Source Model
We selected 0.1-500 GeV LAT events from the updated _Fermi_ Pass 8 database in a region of interest (RoI) that has a size of 20\({}^{\circ}\)\(\times\) 20\({}^{\circ}\) and the center at the central position of M83. Since the galaxy appears to have a size \(\sim 12^{\prime}\) in the sky, which is not resolvable in the LAT data due to its large point spread function (PSF), we treat M83 as a point source through this paper. The time period of the LAT data was from 2008-08-04 15:43:39 (UTC) to 2022-09-26 23:16:35 (UTC), slightly more than 14 yrs. The _CLEAN_ event class was used. We included the events with zenith angles less than 90 deg and excluded the events with quality flags of 'bad'. Both these are recommended by the LAT team1.
Footnote 1: [http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/)
We constructed the source model by including all sources within 20 deg from M83. The positions and the spectral parameters of these sources are provided in the _Fermi_ LAT 12-year source catalog (4FGL-DR3; Abdollahi et al., 2022). We set the spectral parameters of the sources within 5 deg from M83 free, and froze the other parameters at their catalog values. The spectral model gll_iem_v07.fit was used for the Galactic diffuse emission, and the spectral file iso_P8R3_CLEAN_V3_v1.txt for the extragalactic diffuse emission. The normalizations of these two diffuse components were set free in the following analyses.
### Likelihood Analysis
We performed the standard binned likelihood analysis to the whole data in 0.1-500 GeV and updated the parameter values for the sources within 5 deg from M83. With the obtained results, we calculated a 0.1-500 GeV
\begin{table}
\begin{tabular}{l c c c} \hline Model & Best-fit parameters & log(\(L\)) & TS \\ \hline PL & \(\Gamma=2.0\pm 0.2\) & 265522.5 & 14 \\ PLEC & \(\Gamma=0.00\pm 0.03\) & 265526.9 & 22 \\ & \(E_{c}=2.8\pm 0.8\) & & \\ LP & \(\alpha=0.00\pm 0.06\) & 265526.1 & 20 \\ & \(\beta=0.67\pm 0.14\) & & \\ \hline \end{tabular}
\end{table}
Table 1: Likelihood analysis results with the PL, PLEC, and LP models
Figure 1: TS maps of a \(3^{\circ}\times 3^{\circ}\) region centered at M83 in the energy ranges of 0.1–500 GeV (_left_ panel) and 0.5–500 GeV (_middle_ panel). The only 4FGL-DR3 catalog source in the region, a blazar candidate (marked by green pluses), is removed in the maps. The green crosses mark the position of M83, which is within the 2\(\sigma\) error circle (marked by the green circles) determined for the residual excess emission at the location. There is also weak excess emission (whose 2\(\sigma\) error circle is marked by green dashed circles) NE to the center. _Right_ panel: the same as the middle panel, but the NE source is removed as a point source. For each TS map, the image scale is \(0\fdg 05\,\mathrm{pixel}^{-1}\), and the color bar indicates the TS value range.
residual Test Statistic (TS) map of a 3\({}^{\rm o}\times 3^{\rm o}\) region centered at M83. The field is rather clean, with only one catalog source (4FGL J1335.3\(-\)2949; see Section 2.3) in the region. This source was included in the source model and removed in the TS map, which is shown in the left panel of Figure 1. As can be seen, there is weak excess emission at the position of M83. The maximum TS value is approximately 14, corresponding to a \(\sim\)3.7\(\sigma\) detection significance. Because in the analysis below (Section 2.4 and Table 2) we have found that the low energy range for significant detection is \(\gtrsim\)0.5 GeV, we also calculated the 0.5-500 GeV residual TS map (shown in the middle panel of Figure 1). The maximum TS value of the excess emission at M83 is improved to be \(\sim\)25, now at a \(\sim\)5\(\sigma\) detection significance. We ran _gtfindsrc_ in the Fermitools to the 0.5-500 GeV data to determine the position, and obtained R.A.=20\(\fdg\)23, Decl.=\(-\)29\(\fdg\)83 (equinox J2000.0), with a 1\(\sigma\) nominal uncertainty of 0\(\fdg\)04. M83 is 0\(\fdg\)04 away from this position and thus within the 1\(\sigma\) error circle.
We then added this new source, the possible counterpart to M83, in the source model as a point source and repeated the likelihood analysis in 0.1-500 GeV. Given the low TS value of the source, we considered three models to fit its emission. One is a simple power law (PL), \(dN/dE=N_{0}E^{-\Gamma}\), where \(\Gamma\) is the photon index, and the other two are a PL with an exponential cutoff (PLEC), \(dN/dE=N_{0}E^{-\Gamma}\exp(-E/E_{c})\), where \(E_{c}\) is the cutoff energy, and a Log-Parabola (LP) function, \(dN/dE=N_{0}(E/E_{b})^{-[\alpha+\beta\log(E/E_{b})]}\), where \(E_{b}\) is a scale parameter and was fixed at 1 GeV in our analysis. The results obtained with the three models are given in Table 1. The PLEC model provided the largest TS value, \(\simeq\)22. By comparing the log-likelihood values, that is \(\sqrt{-2log(L_{i}/L_{j})}\), where \(L_{i/j}\) are the maximum likelihood values from model \(i\) and \(j\), the PLEC and LP models are found to be more favored than the PL at \(\sim\)3.0\(\sigma\) and \(\sim\)2.7\(\sigma\) significances respectively. Although \(\gamma\)-ray emissions of most star-forming galaxies can be well described with a PL model (or a LP model; e.g., Ajello et al., 2020), below we adopted the PLEC model because of the largest TS value resulting from it. The corresponding 0.1-500 GeV photon flux for the source was \(F_{0.1-500}\sim 1.2\pm 0.4\times 10^{-10}\,{\rm photon\,cm^{-2}\,s^{-1}}\).
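For reference, the three spectral shapes compared in Table 1 can be written as simple Python functions of energy; the base of the logarithm in the LP model is taken here as the natural logarithm, an assumption following the usual convention of the fitting tools, and the parameter names mirror those in the text.

```python
import numpy as np

def pl(E, N0, Gamma):
    """Simple power law dN/dE = N0 * E**(-Gamma)."""
    return N0 * E**(-Gamma)

def plec(E, N0, Gamma, Ec):
    """Power law with an exponential cutoff at energy Ec."""
    return N0 * E**(-Gamma) * np.exp(-E / Ec)

def lp(E, N0, alpha, beta, Eb=1.0):
    """Log-parabola with the scale parameter Eb fixed at 1 GeV."""
    return N0 * (E / Eb)**(-(alpha + beta * np.log(E / Eb)))
```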
### Analysis for Nearby Sources
Since the emission at the position of M83 was weak, we conducted extra checks to ensure the detection. First, as shown in the TS maps (Figure 1), another excess emission is present, which is north-east (NE) to M83. Although it does not appear like a point source, we ran _gtfindsrc_ to the 0.5-500 GeV data, and obtained a position of R.A.=205\(\fdg\)3, Decl.=\(-\)28\(\fdg\)9 (equinox J2000.0), with a 1\(\sigma\) nominal uncertainty of 0\(\fdg\)2. This position is 1.3 deg away from M83, nearly outside of the 68% containment angle of the LAT PSF in the \(>\)0.5 GeV band2. We added this source in the source model and performed the likelihood analysis. The source could be totally removed in the TS map (shown in the right panel of Figure 1), and the results for the emission at M83 were nearly the same as above. These suggest that the NE excess emission
Figure 2: Same as the middle panel of Figure 1, but the blazar candidate is kept in the maps. The _left_ panel TS map is calculated from the whole LAT data, and the _right_ panel one from the time period of MJD 55042–59000 (see Section 2.3 and Figure 3).
did not affect our analysis results for the source at M83.
Second, the catalog source, 4FGL J1335.3\(-\)2949, is located very close to M83 (see Figure 1). It has an angular separation of 0\(\fdg\)35 from M83 and its 1\(\sigma\) positional uncertainty is 0\(\fdg\)02, given in 4FGL-DR3 (Abdollahi et al., 2022). Thus this source and M83's source are outside of the error circle of each other. Because it is relatively bright, it could contaminate the detection of the source at M83. We calculated the 0.5-500 GeV TS map with the sources (including the NE one) in the \(3^{\rm o}\times 3^{\rm o}\) region kept. 4FGL J1335.3\(-\)2949 is clearly seen as the brightest one (the TS value was \(\sim\)206) in the field (left panel of Figure 2).
This source is a blazar candidate but did not show significant \(\gamma\)-ray variations in 12 years of the _Fermi_-LAT data (Abdollahi et al., 2022). We extracted its 0.1-500 GeV light curve by setting 60-day time bins and performing the likelihood analysis to each time-bin data. In the extraction, only the normalization parameters of the sources within 5 deg from M83 were set free. As a check, we also extracted a light curve of the source at M83 with the same setup. The resulting light curves and TS curves for the two sources are shown in Figure 3, for which only flux data points with TS\(\geq\)4 were kept in the light curves. As can be seen, this blazar candidate had a high TS value in the beginning of the data at \(\sim\)MJD 55000 and likely showed a flaring event after MJD 59000 (note that the latter part was not covered in the LAT 12-yr source catalog). The source at M83 did not show any obvious variations.
To reduce any contamination possibly caused by the variations of 4FGL J1335.3\(-\)2949, we selected the LAT data during the time period of MJD 55042-59000 (marked as dotted lines in Figure 3). The 0.5-500 GeV TS map of this time period was calculated and is shown in the right panel of Figure 2. The sources in the TS-map region were kept. The TS value for the blazar candidate is reduced to \(\sim\)100, and the source at M83, appearing as an extension of the former to the east direction, is revealed. Thus the flaring activity of the blazar candidate could affect our analysis and weaken the visibility of the source at M83, but when the flares were filtered out, the detection of this source was proved to be true.
Figure 3: 60-day binned light curves (_upper_) and TS curves (_bottom_) of the blazar candidate (red) and the source at M83 (gray) in 0.1–500 GeV. For the latter, 2-yr binned light curve and TS curve (black data points) are also shown. Fluxes with TS\(\geq\)4 are kept in the light curves. Two dotted lines mark the time period of MJD 55042–59000, during which no obvious flares are seen from the blazar candidate.
### Spectral Analysis
We extracted the \(\gamma\)-ray spectrum of the source at M83 by performing maximum likelihood analysis to the LAT data in 10 evenly divided energy bands in logarithm from 0.1-500 GeV. In the extraction, the spectral normalizations of the sources within 5 deg from M83 were set as free parameters, while all the other parameters of the sources were fixed at the values obtained from the above maximum likelihood analysis. The emission was set to be a PL with \(\Gamma\) fixed to 2. For the result, we kept only spectral data points when TS\(\geq\) 4 (\(\geq\)2\(\sigma\) significance) and derived 95% flux upper limits otherwise, where for the latter a Bayesian approach implemented in the Python tool IntegralUpperLimit (provided in the _Fermi_ Science tools) was used. The obtained spectrum is plotted as black points in Figure 4, and the flux and TS values of the spectral data points are provided in Table 2. In the energy band of 0.2-0.5 GeV, the TS value is 5, but the flux has a large uncertainty, 0.42\(\pm\)0.62\(\times\)10\({}^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\), likely caused by the contamination of the nearby blazar candidate (because of large containment angles of the LAT PSF at the low energies; see also Figure 4). Thus for this data point, we report an upper limit instead. In addition, the spectrum of the nearby blazar candidate was obtained with the same setup, and is also shown in Figure 4 for comparison.
### Variability Analysis
We checked the source at M83 for any long-term variability in 0.1-500 GeV by calculating the variability index TS\({}_{var}\)(Nolan et al., 2012). We set 87 time bins with each bin consisting of 60-day data and derived the fluxes or flux upper limits for the source (i.e., shown in Figure 3). If the emission is constant, TS\({}_{var}\) would be distributed as a \(\chi^{2}\) distribution with 86 degrees of freedom. A variable source would be identified if TS\({}_{var}\) is larger than 119.4 (at a 99% confidence level). The computed TS\({}_{var}\) for the source is 74.8, lower than the threshold value. Since this source is faint, we also constructed its 2-yr binned light curve (see Figure 3) and checked for variability. For 7 time bins (i.e., 6 degrees of freedom), TS\({}_{var}>16.8\) is required for a variable source. We obtained TS\({}_{var}\simeq 11.3\). Thus there were no significant long-term variations found for this source.
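The 99% confidence thresholds quoted above follow directly from the \(\chi^{2}\) distribution with \(N_{\rm bin}-1\) degrees of freedom; a two-line check (a sketch, not part of the analysis pipeline) is:

```python
from scipy.stats import chi2

print(chi2.ppf(0.99, 86))   # ~119.4 for the 87-bin (60-day) light curve
print(chi2.ppf(0.99, 6))    # ~16.8 for the 7-bin (2-yr) light curve
```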
## 3 Discussion
By analyzing the _Fermi_-LAT data for the M83 region, we have found a faint \(\gamma\)-ray source at the position of M83. The source's emission is preferably described with a curved-function model (a PLEC or a LP). Although the detection significance for the source is low, only \(\sim 4.7\sigma\), and a nearby blazar candidate complicates the detection (particularly at the \(\lesssim\)0.5 GeV low energies), our detailed analysis has confirmed the existence of the source. Given the high positional coincidence between the \(\gamma\)-ray source and M83 and the expectation for \(\gamma\)-ray emission from this star-forming galaxy, we conclude
\begin{table}
\begin{tabular}{l r r} \hline Band & \(G_{M83}/10^{-12}\) & TS \\ (GeV) & (erg cm\({}^{-2}\) s\({}^{-1}\)) & \\ \hline
0.15 (0.1–0.2) & 0.61 & 0 \\
0.36 (0.2–0.5) & 0.74 & 5 \\
0.84 (0.5–1.3) & 0.34\(\pm\)0.19 & 7 \\
1.97 (1.3–3.0) & 0.20\(\pm\)0.10 & 6 \\
4.62 (3.0–7.1) & 0.17\(\pm\)0.09 & 5 \\
10.83 (7.1–16.6) & 0.31\(\pm\)0.14 & 13 \\
25.37 (16.6–38.8) & 0.31 & 0 \\
59.46 (38.8–91.0) & 0.89 & 0 \\
139.36 (91.0–213.3) & 1.58 & 0 \\
326.60 (213.3–500.0) & 3.69 & 0 \\ \hline \end{tabular} Note: Fluxes without uncertainties are the 95% upper limits.
\end{table}
Table 2: Flux Measurements
Figure 4: \(\gamma\)-ray spectra obtained for M83’s source (black circles) and the blazar candidate (red diamonds), for which the model fits from the likelihood analysis are also shown for comparison, where the black dashed and solid curves are the PLEC and PL model fits (cf., Section 2.2), respectively, for the former source and the red line is the PL model fit for the latter one. The model fit to the normalized spectra of 9 star-forming galaxies determined by Ajello et al. (2020) is shown as the blue curve, while its 1\(\sigma\) uncertainty range is marked by the gray region (here this model-fit curve is simply aligned with the first flux measurement of M83’s source).
that this \(\gamma\)-ray source is the likely counterpart to M83.
At a distance of 4.61 Mpc (Saha et al., 2006), the \(\gamma\)-ray luminosity (in 0.1-500 GeV) of the source is \(\sim 1.4\pm 0.5\times 10^{39}\) erg s\({}^{-1}\), higher than that of the Milky way (Ackermann et al., 2012) and lower than that of NGC 253 (a spiral galaxy with a central starburst region; Abdo et al., 2010; Ajello et al., 2020). The 2-1000 \(\mu\)m IR luminosity of M83 is approximately \(8.7\times 10^{43}\) erg s\({}^{-1}\)(Ackermann et al., 2012; note that a source distance of 3.7 Mpc was used in Ackermann et al., 2012). If we use the parameters obtained in Ajello et al. (2020) for the \(\gamma\)-ray-IR luminosity correlation, the predicted \(\gamma\)-ray luminosity for M83 would be \(\sim 7.9\times 10^{39}\) erg s\({}^{-1}\). Further considering a dispersion of 0.3 (\(\gamma\)-ray-luminosity residuals with respect to the correlation line in log space; Ackermann et al., 2012; Ajello et al., 2020), the luminosity range would be \(\sim 4.0\)-\(16\times 10^{39}\) erg s\({}^{-1}\). Thus the observed luminosity is slightly below the correlation line but can be considered to be consistent with it, given significant uncertainties such as on the distance and IR properties (Ajello et al., 2020 and references therein).
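As a back-of-the-envelope check of the quoted value, the luminosity follows from \(L_{\gamma}=4\pi d^{2}F\); in the sketch below the 0.1-500 GeV energy flux is an assumed round number consistent with the spectrum, not a measured quantity from this work.

```python
import numpy as np

MPC_IN_CM = 3.086e24
d_cm = 4.61 * MPC_IN_CM                 # distance to M83 in cm
energy_flux = 5.5e-13                   # erg cm^-2 s^-1 (assumed, 0.1-500 GeV)
L_gamma = 4 * np.pi * d_cm**2 * energy_flux
print(f"L_gamma ~ {L_gamma:.1e} erg/s") # ~1.4e39 erg/s
```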
Because of the weak detection of M83, its \(\gamma\)-ray spectrum only contains 4 data points (Figure 4). The model fits we obtained in Section 2.2, either the PLEC or the LP, appear to be highly curved (for example, \(\beta\approx 0.67\) in the LP model), which are different from those of the other star-forming galaxies. The latter are mostly described with a PL without significant curvature required (e.g., Ajello et al., 2020). The difference could be due to the weak detection, which leaves the emission properties poorly determined, and remains to be resolved when more LAT data for M83 are collected. Ajello et al. (2020) have normalized the \(\gamma\)-ray spectra of 9 star-forming galaxies by simply scaling the spectra to a common value at 1 GeV, and obtained a best-fit model that is in the form of a smoothly broken power law. In Figure 4, we show this best fit by aligning it with the first flux measurement of M83's spectrum, since the energy range of the data point is 0.5-1.3 GeV, approximately at 1 GeV. As can be seen, the spectrum of M83 is approximately consistent with the best fit. Given this and the \(\gamma\)-ray luminosity indicated above being approximately in the right range, the \(\gamma\)-ray emission of M83 likely has the same origin as in the other nearby galaxies, arising from CRs and related to the star-formation activity (Ackermann et al., 2012). Thus M83 is likely another member of this group of \(\gamma\)-ray-emitting, star-forming galaxies. Hopefully, with more LAT data collected in the future, both the detection significance and the quality of the \(\gamma\)-ray spectrum will be improved, helping provide more information on the high-energy properties of M83.
This research is supported by the Original Innovation Program of the Chinese Academy of Sciences (E085021002), the Basic Research Program of Yunnan Province No. 202201AS070005, and the National Natural Science Foundation of China No. 12273033.
|
2306.16232
|
Non-local transport signatures of topological superconductivity in a
phase-biased planar Josephson junction
|
Hybrid Josephson junctions realized on a two-dimensional electron gas are
considered promising candidates for developing topological elements that are
easily controllable and scalable. Here, we theoretically study the possibility
of the detection of topological superconductivity via the non-local
spectroscopy technique. We show that the non-local conductance is related to
the system band structure, allowing probe of the gap closing and reopening
related to the topological transition. We demonstrate that the topological
transition induces a change in the sign of the non-local conductance at zero
energy due to the change in the quasiparticle character of the dispersion at
zero momentum. Importantly, we find that the tunability of the superconducting
phase difference via flux in hybrid Josephson junctions systems is strongly
influenced by the strength of the Zeeman interaction, which leads to
considerable modifications in the complete phase diagram that can be measured
under realistic experimental conditions.
|
D. Kuiri, M. P. Nowak
|
2023-06-28T13:53:50Z
|
http://arxiv.org/abs/2306.16232v2
|
# Non-local transport signatures of topological superconductivity
###### Abstract
Hybrid Josephson junctions realized on a two-dimensional electron gas are considered promising candidates for developing topological elements that are easily controllable and scalable. Here, we theoretically study the possibility of the detection of topological superconductivity via the non-local spectroscopy technique. We show that the non-local conductance is related to the system's band structure, allowing probing of the gap closing and reopening related to the topological transition. We demonstrate that the topological transition induces a change in the sign of the non-local conductance at zero energy due to the change in the quasiparticle character of the dispersion at zero momentum. Importantly, we find that the tunability of the superconducting phase difference via flux in hybrid Josephson junction systems is strongly influenced by the strength of the Zeeman interaction, which leads to considerable modifications in the complete phase diagram that can be measured under realistic experimental conditions.
## I Introduction
Planar superconductor-normal-superconductor (SNS) Josephson junctions have been proposed as a promising platform for engineering and exploiting Majorana bound states due to the tunability of the topological transition by the superconducting phase difference [1] and the scalability of two-dimensional heterostructure systems [2]. For the realization of topological SNS devices, typically two separate superconducting electrodes proximitize the two-dimensional electron gas (2DEG), creating an SNS junction where the good quality of the normal-superconducting interfaces results in an induced gap close to that of the parent superconductor [3]. Upon application of an in-plane magnetic field, the Zeeman interaction leads to the splitting of Andreev bound states (ABS) in phase, resulting in the opening of the topological regime whenever the Fermion parity is odd [1]. The topological regime is already obtained for vanishingly small Zeeman interaction energies at a phase difference of \(\pi\). This becomes an important factor in achieving topological superconductivity in SNS junctions realized on normal-superconductor hybrids, as a strong Zeeman interaction can lead to the appearance of an abundance of trivial in-gap states [4; 5] that decrease the induced gap and can obscure Majorana zero-energy modes.
Typically, in normal-superconductor nanostructures, such as proximitized nanowires [6], Majorana bound states were sought by tunneling spectroscopy, where the presence of a zero-bias peak was assigned to the appearance of topological zero-energy states [7; 8; 9]. However, zero-bias peaks can also result from disorder-induced trivial ABS [10; 11; 12] or be due to the specifics of the tunneling barriers used [13].
Tunneling measurements were made at a planar Josephson junction in the trivial regime, revealing the edge-dependent evolution of ABS in the perpendicular field [14; 15; 16]. Furthermore, zero-bias peaks were observed in planar SNS junctions [17; 18], but, as in the case of nanowire systems, single-edge conductance cannot be considered as a conclusive determinant of the topological character of a zero-energy state [19]. A promising alternative is a non-local measurement [20; 21], which has recently been the subject of intense research effort [22]. Local and non-local spectroscopy was recently performed on planar SNS junctions in the tunneling regime [23], but without a clear signature of the topological transition. In addition, alternative methods for the detection of Majorana bound states, such as scanning tunneling microscopy [24] or its spin-polarized variant [25] were also proposed.
In this work, we focus on the feasibility of the non-local measurements of the topological transition in phase-biased SNS junctions. We theoretically study the spectroscopic features of the junction both in the spectroscopy limit, i.e., tunneling measurements that are sensitive to the density of states in the junction and in an open regime, i.e., without tunneling barriers, where the transport features correspond rather to a band structure of the junction. The latter can elucidate the closing and reopening of the gap associated with the topological transition [26]. We find that the non-local conductance sign represents the electron- or hole-like character of the bands in the junction. The closing and reopening of the gap at \(k=0\) is associated with the meeting of the electron and hole bands at zero energy and, correspondingly, with the change in the sign of the non-local conductance. Furthermore, we discuss a serious caveat in the realization of the topological phase in SNS junctions. Namely, we show that in a realistic situation, where phase biasing is done by running a flux through a superconducting loop embedding the SNS junction, the phase slips result in skipping a large region of phase space close to \(\pi\)
prohibiting the creation and probing of Majorana bound states at a small field. We discuss the factors that allow to limit this obstacle.
The paper is structured as follows. In Sect. II we introduce the numerical model. In Sect. III A we discuss the non-local spectroscopy results in relation to the effective charge polarization of the bands. In Sect. III B we show how experimentally performed phase biasing limits the magnetic fields in which the topological phase can be observed. We discuss our results in Sect. IV and summarize them in Sect. V.
## II Model
We consider a planar SNS junction constituted by a semiconducting strip of length \(L_{j}\) connected to superconducting electrodes of width \(W_{j}\). The scheme of the considered structure is depicted in Fig. 1.
The Hamiltonian of the system written in the basis \(\Psi=(\psi_{e\uparrow},\psi_{h\downarrow},\psi_{e\downarrow},-\psi_{h\uparrow})^ {T}\) (where \(e\) and \(h\) correspond to electron and hole components with spin up \(\uparrow\) or down \(\downarrow\) respectively) is:
\[H =\left(\frac{\hbar^{2}{k_{x}}^{2}}{2m^{*}}+\frac{\hbar^{2}{k_{y}} ^{2}}{2m^{*}}-\mu\right)\sigma_{0}\otimes\tau_{z}+\frac{1}{2}g(x)\mu_{B}B \sigma_{y}\otimes\tau_{0}\] \[\quad+\alpha(x)(\sigma_{x}k_{y}-\sigma_{y}k_{x})\otimes\tau_{z}+ \Delta(x)\tau_{+}+\Delta^{*}(x)\tau_{-}. \tag{1}\]
where \(k_{x(y)}=-\iota\partial/\partial x(y)\), \(\sigma_{i}\) and \(\tau_{i}\) with \((i=x,y,z)\) are the Pauli matrices that act on the spin and electron-hole degrees of freedom, respectively, with \(\tau_{\pm}=\sigma_{0}\otimes(\tau_{x}\pm\iota\tau_{y})/2\), where \(\sigma_{0}\) is the \((2\times 2)\) identity matrix.
We consider the non-zero pairing potential in superconducting contacts, which is modeled by spatial dependence of the gap parameter \(\Delta(x)\),
\[\Delta(x)=\begin{cases}\Delta_{0}&\text{if }x<-L_{j}/2\\ 0&\text{if }-L_{j}/2\leq x\leq L_{j}/2\\ \Delta_{0}e^{\iota\phi}&\text{if }x>L_{j}/2,\end{cases}\]
with \(\phi\) the superconducting phase difference. Accordingly, we neglect the Zeeman splitting and spin-orbit effects in the superconductor setting \(g(x)=\alpha(x)=0\) in them. The in-plane magnetic field is applied along the \(y\) direction. For concreteness, we adopt the material parameters corresponding to the InSb semiconductor and the Al superconductor, that is, \(m^{*}=0.014m_{e}\), \(\mu=5\) meV, \(\Delta_{0}=0.2\) meV, \(\alpha=50\) meVnm. We also assume typical dimensions for this type of structure, that is, \(L_{j}=80\) nm, \(W_{j}=2000\) nm [23; 27; 15].
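For illustration, a minimal NumPy sketch of the bulk Bogoliubov-de Gennes matrix of Eq. (1) at a given wave vector is shown below, with energies in meV and lengths in nm. All terms are kept simultaneously purely to expose the \(\sigma\otimes\tau\) structure (in the actual device \(g\), \(\alpha\) and \(\Delta\) are position dependent as described above), and the \(g\)-factor value is an assumption since it is not quoted in the text.

```python
import numpy as np
from scipy import constants as c

s0, sx, sy, sz = [np.array(m, dtype=complex) for m in (
    [[1, 0], [0, 1]], [[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
t0, tx, ty, tz = s0, sx, sy, sz                    # same Pauli matrices, particle-hole space
tp, tm = (tx + 1j * ty) / 2, (tx - 1j * ty) / 2    # tau_+ and tau_-

# parameters quoted in the text (InSb / Al); the g-factor is an assumed value
m_eff = 0.014 * c.m_e
hbar2_2m = c.hbar**2 / (2 * m_eff) / c.e * 1e3 * 1e18        # hbar^2/2m* in meV nm^2
mu, alpha, Delta0, g = 5.0, 50.0, 0.2, -50.0                 # meV, meV nm, meV, dimensionless
muB = c.physical_constants['Bohr magneton'][0] / c.e * 1e3   # Bohr magneton in meV/T

def H_bdg(kx, ky, B=0.5, phi=0.0):
    """4x4 BdG matrix of Eq. (1) at (kx, ky) [nm^-1], in-plane field B [T], phase phi."""
    kin = hbar2_2m * (kx**2 + ky**2) - mu
    D = Delta0 * np.exp(1j * phi)
    return (kin * np.kron(s0, tz)
            + 0.5 * g * muB * B * np.kron(sy, t0)
            + alpha * (ky * np.kron(sx, tz) - kx * np.kron(sy, tz))
            + D * np.kron(s0, tp) + np.conj(D) * np.kron(s0, tm))

print(np.linalg.eigvalsh(H_bdg(0.0, 0.05)))   # four BdG energies in meV
```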
For numerical simulations, we discretize the Hamiltonian on a square lattice with the lattice constant \(a=10\) nm. Since we use a uniform chemical potential in our calculations, for a proper description of Andreev scattering at the NS interface [28], we introduce an anisotropic mass in the superconducting leads with the effective mass in the direction parallel to the interface \(m_{\parallel}^{*}=10m^{*}\)[29]. The code used for the calculations presented is available online [30].
In this study, we consider three variants of the SNS system. The first is the _open_ system as shown in Fig. 1 used to study the transport properties. Here, the normal regions extend beyond the width of the superconducting contacts by length 100 nm, which includes 10 nm tunneling barriers of height 50 meV. They are connected to semi-infinite leads that allow the transport of in-gap electrons/holes into/from the junction. The normal region is connected to semi-infinite superconducting leads. Such geometries have recently been experimentally realized in InSbAs [15] or InAs [23] 2DEGs proximitized by Al. We calculate the non-local conductance considering the scattering properties of the quasiparticles injected and scattered back to normal leads using the scattering matrix approach implemented in the Kwant package [31] with the formula:
\[G_{ij}(E)=\frac{\partial I_{i}}{\partial V_{j}}=\frac{e^{2}}{h}(T_{ij}^{ee}- T_{ij}^{he}-\delta_{ij}N_{i}^{e}). \tag{2}\]
\(I_{i}\) is the current entering terminal \(i\) from the scattering region, and \(V_{j}\) the voltage at terminal \(j\) and \(N_{i}^{e}\) is the number of electron modes at energy \(E\) in terminal \(i\). \(T_{ij}^{ee}\) and \(T_{ij}^{he}\) are electron-to-electron and electron-to-hole transmission amplitudes (with \(j\) being the source and \(i\) the drain) calculated at energy \(E\) that represents the applied bias voltage \(V_{j}\) at zero temperature [21]. Part of the conductance maps was obtained with the help of Adaptive library [32].
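A minimal sketch of how Eq. (2) can be evaluated for \(i\neq j\) with Kwant's scattering-matrix interface is given below; it assumes a finalized system `fsyst` whose two normal leads were declared with a particle-hole conservation law, so that block 0 labels electron modes and block 1 labels hole modes (an assumption about the setup, which is not spelled out here).

```python
import kwant

def G_nonlocal(fsyst, energy, i=1, j=0):
    """dI_i/dV_j in units of e^2/h for i != j (the delta_ij term then vanishes)."""
    smat = kwant.smatrix(fsyst, energy)
    T_ee = smat.transmission((i, 0), (j, 0))   # electron in lead j -> electron in lead i
    T_he = smat.transmission((i, 1), (j, 0))   # electron in lead j -> hole in lead i (CAR)
    return T_ee - T_he
```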
To investigate the properties of the bound states that form in the junction, we consider a finite _isolated_ system by disconnecting the protruding normal segments and leads [33]. Finally, to study the properties of the band structure, we introduce the _translation-invariant_ system
Figure 1: A schematic diagram of the considered system. A semiconductor strip (yellow-green) is sandwiched between two superconducting electrodes (orange), whose pairing potential has phase difference \(\phi\). The gray regions denote the potential barriers, placed just above and below the SC region. The green segments in the semiconductor denote the top (1) and bottom (2) normal leads.
constructed by removing the normal leads and making the junction invariant in the \(y\) direction, where \(k_{y}\) is a good quantum number.
## III Results
### Non-local conductance as a measure of topological transition
The \(2\pi\)-periodic spectrum of isolated junction in a non-zero in-plane field is shown in Fig. 2(a). The evolution of the ABS of a single-mode spinful junction in the presence of the magnetic field is captured by the formula:
\[E_{\sigma}(\phi)=\Delta\sqrt{1-\tau\sin^{2}\left(\frac{\phi+\varphi_{\sigma}}{ 2}\right)}, \tag{3}\]
where \(\varphi_{\sigma}=2\sigma E_{z}L_{j}/\hbar v_{F}\), \(\tau\) is the junction transmission coefficient, \(E_{z}=g\mu_{B}B/2\) is the Zeeman energy, \(\sigma=\pm 1\) corresponds to positive and negative spin components and \(v_{F}=\sqrt{2\mu/m^{*}}\) is the Fermi velocity. Overlaying the numerically calculated spectrum with the analytical one in Fig. 2(a) we see that the cones made up of ABS are split in phase by the Zeeman interaction. Inspecting the mean spin polarization of ABS calculated as the expectation value of the operator \(\sigma_{y}\tau_{0}\), we observe that the edge of each cone is made of ABS with positive and negative spin polarization along the \(y\) direction. Upon increasing the magnetic field, for the negative \(g\) factor considered here, the positively (negatively) spin-polarized states move down (up) in energy. This in turn results in an increase in the distance between the positive-energy cones. The bottom tip of each cone sets out a phase point when the Fermion parity changes and the system undergoes a phase transition--with the topological phase being present in each \(\phi=[0,2\pi]\) (mod \(2\pi\)) segment only between the cones. Since the plot shows the spin polarization, the spinless Majorana bound states are not visible.
Calculating the non-local tunneling spectroscopy we obtain the map shown in Fig. 2(b) where the gap closing and reopening upon the increase of the superconducting phase is visible. An analogous result is obtained when the tunneling barriers are removed (Fig. 2(c)). Most importantly, we observe that in both plots the topological transition manifests itself as the sign change of the non-local signal at zero energy leading to the rectification of the current, similar to the case of an NS junction [21].
#### iii.1.1 Charge polarization of the bands
As we will show, the sign of the non-local conductance outlines the leading transport phenomenon in the junction. According to the formula in Eq. (2), a positive conductance signal is obtained when the dominant transport process involves electron transport through the proximitized region, while a negative signal is obtained when the electron is converted into a hole in a crossed Andreev reflection process.
To elucidate the change of the non-local conductance, we consider an invariant system. In Fig. 3(a) we plot the dispersion relation obtained for the phase difference set in the vicinity of the left cone, that is, \(\phi=0.84\pi\). We see the gap closing at \(k_{y}=0\) as the bands cross zero energy, causing the Fermion parity change, which in turn leads to the topological transition.
We introduce the quasiparticle polarization factor of the bands (\(P\)), calculated as \(P=vk_{y}\), where \(v=\frac{1}{\hbar}\frac{\partial E}{\partial k_{y}}\), and color the bands in the dispersion relation
Figure 2: (a) ABS energy spectrum of the isolated SNS junction. The colors denote the average spin polarization of ABS along the \(y\) direction. The analytical ABS spectrum is shown with black dashed lines. The non-local conductance with (b) and without tunneling barriers (c) versus the superconducting phase difference. The results are obtained for the in-plane field \(B=0.5\) T.
in Fig. 3(a) with it. We observe that the bands at positive energy mostly have an electron-like character, i.e. the sign of the Fermi velocity matches the sign of the wave vector \(k_{y}\). The situation for negative energy is the opposite, and the bands there are mostly of a hole-like character.
Positive polarization allows electrons with positive energies to flow between the top and bottom contacts with little Andreev reflection [see Fig. 3(b)]. On the other hand, the mostly hole character of the negative energy bands results in a blockade of the electron transport, and instead crossed Andreev reflection occurs [see Fig. 3(c)], which in turn results in a negative non-local conductance that is related to the Cooper pair splitting.
In Fig. 4(a) we show the \(P\) factor for the invariant system calculated by projecting the \(P\) values obtained in the range \(k_{y}\in[-a^{-1},a^{-1}]\) for each phase difference value for the increased in-plane field \(B=1\) T. We indeed see that in the outermost cones in each \(2\pi\) segment of the spectrum the particle polarization of the bands is positive at positive energy and vice versa. The opposite polarizations result in the change of the sign of the non-local conductance at zero energy, which marks the topological transition.
In the map of Fig. 4(a) there is also a clear signature of the appearance of bands with positive charge polarization at both positive and negative energies in the topological region--between the two outermost cones. If we look at the exemplary dispersion relation, obtained for the phase where the positive- and negative-energy middle cones meet [Fig. 4(b)], we observe that the gap closing occurs at non-zero \(k_{y}\). Therefore, this gap closing does not result in a phase transition. The modes in those bands have a high Fermi velocity at zero energy and therefore a small Zeeman phase shift \(\varphi_{\sigma}\), which results in a weak dependence of the position of this cone on the strength of the in-plane field. Finally, since those bands always have a considerable electron polarization, electrons can be transmitted through the system for both positive and negative energies. This is clearly visible in the map of Fig. 4(c), where we show the electron transmission coefficient. This effect in turn results in the lack of a sign change
Figure 3: (a) Dispersion relation for \(B=0.5\) T and \(\phi=0.84\pi\). The colors denote the average charge polarization of the bands. Electron (left) and hole (right) components of probability currents obtained for \(E=0.169\) meV (b) and \(E=-0.169\) meV (c).
Figure 4: (a) The charge polarization obtained for \(k_{y}\in[-a^{-1},a^{-1}]\) versus energy and phase difference. (b) The band structure obtained for \(\phi=1.15\pi\). (c) \(T_{12}^{ee}\) non-local conductance component. The results are obtained for \(B=1\) T.
of the non-local conductance at zero energy, in contrast to the cones that mark the topological/trivial transition.
### Phase biasing by a perpendicular magnetic field
#### iii.2.1 Analytical model
Phase biasing of the junction is achieved by placing the junction in a superconducting loop and threading the loop with a perpendicular magnetic field \(B_{\perp}\) resulting in the magnetic flux \(\Phi=B_{\perp}\pi R^{2}\), with \(R\) being the radius of the loop. Typically, those loops have significant inductance \(L\)[15, 27] leading to non-linear magnetic field to phase conversion governed by the equation
\[\phi=\frac{2\pi}{\Phi_{0}}\left(\Phi-L\sum_{\sigma=\pm 1}I_{\sigma}(\phi)\right). \tag{4}\]
The magnitude of the perpendicular field \(B_{\perp}\) is a few orders of magnitude lower than that of the in-plane field; therefore, its contribution to the Zeeman spin splitting can be considered negligible. The Zeeman interaction due to the in-plane field nevertheless leads to the evolution of the ABSs through Eq. (3) and modifies the supercurrent, whose phase dependence at zero temperature for a junction embedding \(M\) spinful modes can be approximated as
\[I_{\sigma}(\phi)=\frac{e\Delta^{2}\tau M}{4\hbar}\frac{\sin(\phi+\varphi_{ \sigma})}{E_{\sigma}(\phi)}. \tag{5}\]
The \(B_{\perp}\)-to-phase conversion obtained in the absence of the in-plane magnetic field \(B\), where \(\varphi_{\sigma}=0\), is plotted with a thick curve in Fig. 5(a). The dependence obtained is strongly nonlinear due to the \(LI(\phi)\) term in Eq. (4). Assuming a quasi-static approximation, for each value of \(B_{\perp}\) we minimize \(\varepsilon(\phi)=L(\sum_{\sigma=\pm 1}I_{\sigma}(\phi))^{2}/2-M\sum_{\sigma=\pm 1}E_{\sigma}(\phi)\) to obtain the phase difference that guarantees the ground state of our system. The result is a single-valued \(B_{\perp}\)-to-phase conversion curve, presented with a thin black line in Fig. 5(a). Here we take the parameters corresponding to the recent experiment [15], i.e., \(M=30\), \(L=321\) pH, \(\tau=0.99\) and \(R=4207\) nm.
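The sketch below illustrates how the thick curve of Fig. 5(a) can be generated: Eq. (4) is rearranged to give the flux, and hence \(B_{\perp}\), required for each phase difference, with the supercurrent taken from Eq. (5). The short-junction form \(E_{\sigma}(\phi)=\Delta\sqrt{1-\tau\sin^{2}((\phi+\varphi_{\sigma})/2)}\) and the value of \(\Delta\) are assumptions made for illustration; \(M\), \(L\), \(\tau\) and \(R\) are the values quoted above.

```python
import numpy as np

e = 1.602176634e-19       # C
hbar = 1.054571817e-34    # J s
Phi0 = 2.067833848e-15    # Wb, superconducting flux quantum h/(2e)

Delta = 1.2e-3 * e        # gap energy in J (assumed, illustrative value)
tau, M = 0.99, 30         # junction transparency and number of spinful modes
L, R = 321e-12, 4207e-9   # loop inductance (H) and loop radius (m)
phi_plus = 0.0            # Zeeman phase shift; 0 reproduces the B = 0 curve

def E_abs(phi, sigma):
    # Assumed short-junction ABS energy consistent with Eq. (5)
    return Delta * np.sqrt(1.0 - tau * np.sin((phi + sigma * phi_plus) / 2.0) ** 2)

def I_sigma(phi, sigma):
    # Supercurrent of one spin branch, Eq. (5)
    return (e * Delta**2 * tau * M / (4.0 * hbar)
            * np.sin(phi + sigma * phi_plus) / E_abs(phi, sigma))

# Rearranged Eq. (4): flux, and hence perpendicular field, needed for a given phase
phi = np.linspace(-2.0 * np.pi, 2.0 * np.pi, 2001)
Phi = Phi0 * phi / (2.0 * np.pi) + L * (I_sigma(phi, +1) + I_sigma(phi, -1))
B_perp = Phi / (np.pi * R**2)
# Non-monotonic stretches of B_perp(phi) signal the phase slips discussed below.
```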
Following the curve from the negative values of \(B_{\perp}\) we observe phase slips close to the values \(-\pi\) and \(\pi\). As a result, regulating the phase difference by the perpendicular field allows one to obtain phase values only from certain regions [15], which actually omits the most desired values close to \(\pi\). Importantly, the Zeeman interaction leads to a splitting in the ABS structure, as seen in Fig. 2(a). As a result, the current jumps are less pronounced and no longer occur at \(\pm\pi\)--see the red dots in Fig. 5(a).
In Fig. 5(b) with blue dots, we show the possible phase difference values versus the in-plane magnetic field. We indeed see that only at considerable Zeeman splitting energies does it become possible to induce the \(\pi\) phase difference. In the same plot, we denote the analytical estimate of the phase values that guarantee the topological regime obtained from the analytical ABS spectrum as presented in Fig. 2(a). It is clear that, despite the topological gap opening at \(\phi=\pi\) already for a small parallel magnetic field, it is not possible to set the necessary phase bias to actually induce the topological phase. We observe that only a strong Zeeman interaction unveils the phase-difference region close to \(\pm\pi\)--where the topological superconductivity is present. This shows that the Zeeman interaction not only drives the topological transition in the junction through the splitting of the ABS but also significantly modifies the flux-phase conversion that is necessary to bias the junction into the topological regime.
#### iii.2.2 Numerical results
Finally, we study the case where the flux-phase conversion is calculated from a numerical spectrum of the junction instead of a simple approximation of Eq. (5). For each value of the in-plane field, we calculate the spectrum of an isolated junction and then obtain the supercurrent \(I(\varphi)=-\frac{e}{\hbar}\sum_{E_{n}>0}\frac{\partial E_{n}}{\partial\varphi}\). We then follow the same procedure of flux-to-phase conversion as described above. The
Figure 5: (a) Plot of phase difference versus perpendicular field obtained without (black) and with (red) the in-plane field. The thick curves show the results without energy minimization, and the thin lines correspond to the case of the included energy minimization. (b) Map of available phase points versus the in-plane magnetic field. The inset shows with blue the parameter range in which the \(\phi=\pi\) phase bias is available.
results for three values of the in-plane field are shown in Figs. 6(a), (b) and (c). We see that only for a strong in-plane field is the linear flux-to-phase conversion restored, with the possibility of biasing the junction with the \(\pi\) phase difference.
In Figs. 6(d), (e) and (f) we show the non-local spectroscopy results versus the perpendicular field. We see that, despite considering a transparent junction, the probed ABS do not touch zero energy due to phase slips. Hence, the topological region, although present in the spectra plotted against the phase difference in Fig. 2(a), is not present when we consider the realistic situation of flux-induced phase biasing. Only at considerable Zeeman interaction strength (here 1 T) is the transition between the trivial and topological regimes visible, as signified by the gap closing and reopening associated with the change of the sign of the non-local conductance at zero energy (see Fig. 6(f)).
## IV Discussion
We showed that, although the phase bias can lower the critical Zeeman energy required for the topological transition [1], phase biasing, and specifically tuning the junction to the \(\phi=\pi\) configuration, turns out to be difficult to perform in practice, and in fact depends on the microscopic parameters of the SNS junction (such as the supercurrent) and on the device geometry itself (e.g. the superconducting loop inductance). Phase jumps as shown in Fig. 6(d), (e) are actually visible in virtually every spectroscopic measurement of the ABS structure in planar junctions [14; 15; 16; 17; 18; 23], making them a realistic obstacle in probing the topological superconductivity.
Let us discuss the conditions that will make it more favorable to bias the junction with phase \(\pi\). Since it is the \(LI(\phi)\) term that induces the non-linearity of flux to phase conversion, one should consider limiting it to restore the possibility of realizing the \(\pi\) phase difference at small Zeeman fields. Limiting the current is less favorable because it requires either limiting the transparency of the junction by a decrease in the mean free path or making the width of the junction (\(W_{j}\)) smaller, thus decreasing the number of ABS. The latter is again unfavorable because it induces overlap between Majorana modes, lifting their degeneracy and would require the usage of extended geometries [34].
We approximate the condition under which increasing \(B_{\perp}\) results in linear growth of the phase in the vicinity of \(\pi\) (mod \(2\pi\)) as the requirement that the second local maximum of \(B_{\perp}(\phi)\) exceed the first in each repeating \(2\pi\) segment. For \(\tau\to 1\) we can analytically estimate the values of these two extrema, which leads to the condition
\[\varphi_{+}+\frac{\pi}{\Phi_{0}}LI(\pi+2\varphi_{+})<0 \tag{6}\]
Solving it for \(L\) and \(B\) yields the critical magnetic field at which \(\pi\) phase biasing becomes possible. We plot the resulting diagram in the inset of Fig. 5(b) where blue denotes the parameter range that allows one to obtain the phase bias \(\pi\). We observe a rapid growth of the critical field with the increase in \(L\).
The inductance of the superconducting loop is typically dominated by the kinetic inductance \(L_{k}=lhR_{0}/w\pi\Delta\)[35] where \(l\) is the length, \(w\) is the width of the arm of the superconducting loop and \(R_{0}\) is normal state sheet resistance [18]. Therefore, smaller loops with wide arms could in principle be used to decrease the critical Zeeman field, which is necessary to phase bias the junction into the topological regime.
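As a rough order-of-magnitude check, the snippet below evaluates the kinetic-inductance formula \(L_{k}=lhR_{0}/w\pi\Delta\) for an assumed set of loop dimensions and film parameters; none of the numerical values are taken from the devices discussed above.

```python
import numpy as np

h = 6.62607015e-34     # J s
e = 1.602176634e-19    # C

# Assumed loop/film parameters (placeholders for illustration only):
l = 10e-6              # arm length (m)
w = 2e-6               # arm width (m)
R0 = 5.0               # normal-state sheet resistance (ohm per square)
Delta = 0.2e-3 * e     # gap energy (J), an Al-like value

L_k = l * h * R0 / (w * np.pi * Delta)
print(f"kinetic inductance ~ {L_k * 1e12:.0f} pH")  # ~165 pH for these numbers
```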
Figure 6: (a), (b) and (c): perpendicular-field-to-phase conversion obtained from the numerical ABS spectrum for (a) no parallel field, (b) \(B=0.5\) T and (c) \(B=1\) T. (d), (e) and (f): non-local tunneling spectroscopy results as a function of the perpendicular field used for phase biasing.
## V Summary and conclusions
In this theoretical study, we investigated the possibility of detecting the topological transition in a planar Josephson junction via the non-local spectroscopy technique. We showed that the topological transition, which is controlled by the in-plane magnetic field and the phase difference across the junction and which is associated with the Fermion parity change, results in a change of the sign of the non-local conductance at zero energy. We showed that this phenomenon is directly related to the change in the quasiparticle character of the bands and can be used to determine the topological transition in transport measurements. As we showed, in a realistic situation the control of the phase bias in the junction depends strongly on the strength of the in-plane magnetic field, as the Zeeman interaction controls the current-phase relation. This makes it impossible to scan the entire phase space, and specifically to reach the \(\pi\) bias required for the topological transition at small Zeeman energies, unless the inductance of the superconducting loop embedding the junction is considerably reduced.
## VI Acknowledgement
We acknowledge the stimulating discussions with S. Goswami, C. M. Moehle, P. K. Rout and N. A. Jainandunsing. This work was supported by the National Science Center (NCN) agreement number UMO-2020/38/E/ST3/00418.
|
2307.01467
|
Technical Report for Ego4D Long Term Action Anticipation Challenge 2023
|
In this report, we describe the technical details of our approach for the
Ego4D Long-Term Action Anticipation Challenge 2023. The aim of this task is to
predict a sequence of future actions that will take place at an arbitrary time
or later, given an input video. To accomplish this task, we introduce three
improvements to the baseline model, which consists of an encoder that generates
clip-level features from the video, an aggregator that integrates multiple
clip-level features, and a decoder that outputs Z future actions. 1) Model
ensemble of SlowFast and SlowFast-CLIP; 2) Label smoothing to relax order
constraints for future actions; 3) Constraining the prediction of the action
class (verb, noun) based on word co-occurrence. Our method outperformed the
baseline performance and recorded as second place solution on the public
leaderboard.
|
Tatsuya Ishibashi, Kosuke Ono, Noriyuki Kugo, Yuji Sato
|
2023-07-04T04:12:49Z
|
http://arxiv.org/abs/2307.01467v1
|
# Technical Report for Ego4D Long Term Action Anticipation Challenge 2023
###### Abstract
In this report, we describe the technical details of our approach for the Ego4D Long-Term Action Anticipation Challenge 2023. The aim of this task is to predict a sequence of future actions that will take place at an arbitrary time or later, given an input video. To accomplish this task, we introduce three improvements to the baseline model, which consists of an encoder that generates clip-level features from the video, an aggregator that integrates multiple clip-level features, and a decoder that outputs \(Z\) future actions. 1) Model ensemble of SlowFast and SlowFast-CLIP; 2) Label smoothing to relax order constraints for future actions; 3) Constraining the prediction of the action class (verb, noun) based on word co-occurrence. Our method outperformed the baseline performance and recorded as second place solution on the public leaderboard.
## 1 Introduction
Ego4D [1] is a diverse and large first-person video dataset, and long-term action anticipation is one of the key tasks in Ego4D. The aim of this task is to realize a model that predicts \(Z\) future actions for an input video of arbitrary length.
Our contributions are summarized below:
1. We introduce three improvements to the baseline model, which consists of an encoder that generates clip-level features from the video, an aggregator that integrates multiple clip-level features, and a decoder that outputs \(Z\) future actions. 1) Model ensemble of SlowFast and SlowFast-CLIP; 2) Label smoothing to relax order constraints for future actions; 3) Constraining the prediction of the action class (verb, noun) based on word co-occurrence.
2. On the public leaderboard, our proposed model improves by 0.0331, 0.0574, and 0.0320 points over the baseline model prediction for verb, noun, and action, respectively.
## 2 Our Approach
### 2.1 Overall Architecture
Our network architecture is shown in Figure 1. In this architecture, we first input the video clips to the Video Encoder, which extracts features from each clip. Then, the Feature Aggregator merges the extracted features. Next, the Multi-Head Decoder takes these features as input and generates output logits for each of the \(Z\) heads associated with nouns and verbs. After that, we perform an ensemble by combining the decoder outputs using a weighted sum. Moreover, we refine the output results using statistical measures regarding the co-occurrence of verb and noun labels, calculated from the training and validation data, to obtain the final prediction results. The following sections explain the Encoder-Decoder Architecture, Model Ensemble, and Refinement Module.
### 2.2 Encoder-Decoder Architecture
#### Video Encoder
In our approach, we used two types of video encoders: SlowFast [2], which extracts temporal features from \(I\) video clips, and CLIP [3], which extracts features related to relationships between objects and actions. In addition, it is reported in the baseline paper [1] that increasing the number of input clips to be encoded, i.e. observing more of the past video, allows predictions to take more past context into account, thereby increasing accuracy. In our approach, we therefore also added a model with an increased number of input clips.
#### Feature Aggregator
The clip-level features generated by the encoders are then integrated by the subsequent aggregator. In our approach, we used Concat and Transformer, two methods that are introduced in the baseline method.
#### Multi-Head Decoder
Finally, the features of the entire observed video generated by the aggregator are passed to the decoder, which outputs
sequences of actions at future time steps. Again, we used the Multi-Head technique introduced in the baseline method, which predicts \(Z\) actions by independent heads.
### 2.3 Weighted Ensemble
The two types of encoders used in Video Encoder (SlowFast and SlowFast-CLIP) have different characteristics: SlowFast is more accurate in predicting verbs by capturing temporal features, while CLIP is more accurate in predicting nouns by focusing on objects. Given this difference in characteristics, our approach employs an ensemble approach in which the two Encoders complement each other's inference of verbs and nouns, with the expectation that this will improve the prediction accuracy of the action. Specifically, the logits output from each model are combined by a weighted sum.
\[\mathbf{Logits}=\alpha\,\mathbf{Logits_{SF}}+\beta\,\mathbf{Logits_{SF-CLIP}} \tag{1}\]
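For concreteness, a minimal sketch of this weighted logit ensemble is shown below; the array names and shapes are illustrative assumptions rather than the actual implementation, and the default weights correspond to the optimal values reported in the experiments below.

```python
import numpy as np

def ensemble_logits(logits_sf, logits_sf_clip, alpha=0.6, beta=1.4):
    """Weighted sum of per-head logits from the two models, Eq. (1).

    Both inputs hold the logits of the Z decoder heads for either verbs or
    nouns, as arrays of shape (Z, num_classes).
    """
    return alpha * logits_sf + beta * logits_sf_clip

# Illustrative usage with random logits for Z = 20 future steps
# (the class count is arbitrary and chosen only for the example):
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(20, 100))
logits_b = rng.normal(size=(20, 100))
combined = ensemble_logits(logits_a, logits_b)
```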
### 2.4 Refinement Module
This study proposes a method for output refinement to consider contextual relationships and improve output consistency. The baseline method predicts verbs and nouns separately, resulting in a lack of consistency between them. Since the predicted classes are randomly selected based on the prediction probability distribution at each time step, there is also no consistency within the sequential prediction patterns. As mentioned in [1], the co-occurrence of words, such as normalized pointwise mutual information (NPMI) [4], is considered important for long-term predictions. We have rearranged the equation presented in [5] and introduced an indicator that considers the relationships between consecutive verbs and nouns. Given a time step \(z\) with class label \(x_{z}\), the formula is shown below:
\[f(x_{z-1},x_{z})=\ln\left(\frac{p(x_{z}|x_{z-1})}{p(x_{z-1})p(x_{z})}\right)\Big/\left(-\ln p(x_{z}|x_{z-1})\right) \tag{2}\]
Furthermore, we compute the probability \(g\) of a verb \(V\) occurring simultaneously with a given noun \(N\) as follows:
\[g(V_{z},N_{z})=p(V_{z}|N_{z}) \tag{3}\]
We computed these statistics based on the training and validation sets. Finally, the predicted probabilities of verb \(v\) and noun \(n\), \(P_{v}^{z}\) and \(P_{n}^{z}\), at time step \(z\) are refined as shown in Equations (4) and (5) for each sequential prediction pattern.
\[\hat{P}_{n}^{z}=P_{n}^{z}\circ ReLU\!\left(f_{noun}(N_{z-1},n)\right) \tag{4}\] \[\hat{P}_{v}^{z}=P_{v}^{z}\circ ReLU\!\left(f_{verb}(V_{z-1},v)\right)\circ g(v,N_{z}) \tag{5}\]
Moreover, we adopted a strategic approach to maintain consistency within the predicted patterns. For one of the predicted patterns, we selected the class with the highest prediction probability without performing output refinement; for another pattern, we selected the class with the highest prediction probability after output refinement. For the remaining patterns, we sampled randomly from the refined prediction probability distribution.
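A simplified sketch of one refinement step is given below. It assumes that the transition statistic \(f\) and the conditional probability \(g\) have already been tabulated from the training and validation annotations; the array names, shapes and the greedy choice of \(N_{z}\) are illustrative assumptions.

```python
import numpy as np

def refine_step(p_noun, p_verb, prev_noun, prev_verb, f_noun, f_verb, g_verb_given_noun):
    """Refine the verb/noun probabilities of one future time step, Eqs. (4)-(5).

    p_noun, p_verb        : predicted probabilities at step z, shapes (N,) and (V,)
    prev_noun, prev_verb  : class indices selected at step z-1
    f_noun, f_verb        : pre-computed transition statistics f, shapes (N, N) and (V, V)
    g_verb_given_noun     : pre-computed conditional probabilities p(V | N), shape (V, N)
    """
    relu = lambda x: np.maximum(x, 0.0)
    p_noun_hat = p_noun * relu(f_noun[prev_noun])                 # Eq. (4)
    noun_z = int(np.argmax(p_noun_hat))                           # e.g. greedy noun choice
    p_verb_hat = (p_verb * relu(f_verb[prev_verb])
                  * g_verb_given_noun[:, noun_z])                 # Eq. (5)
    return p_noun_hat, p_verb_hat, noun_z
```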
Figure 1: Overall architecture of our approach.
### 2.5 Label Smoothing
In the baseline method, the loss for each predicted time step was calculated as the cross-entropy between the one-hot ground truth labels and the predicted probabilities for verbs/nouns. In this case, a huge penalty is applied even if the prediction is off by a single step. However, in the long-term action anticipation task it is difficult to predict the order accurately. Therefore, we adopt a less stringent treatment of order errors by using smoothed labels instead of one-hot ground truth values. Given the one-hot ground truth label at step \(z\), denoted \(y_{z}\), the smoothed label \(y^{\prime}_{z}\) is expressed as follows:
\[y^{\prime}_{z}=\frac{y_{z}+\frac{1}{Z}\sum_{t=1}^{Z}y_{t}}{2}\]
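A small sketch of this smoothing is given below, under the assumption that \(y_{t}\) denotes the one-hot label of step \(t\) and the average is taken over all \(Z\) future steps of the sequence.

```python
import numpy as np

def smooth_labels(one_hot):
    """Smooth per-step one-hot labels as y'_z = (y_z + (1/Z) * sum_t y_t) / 2.

    one_hot : array of shape (Z, num_classes) with the one-hot ground truth.
    """
    mean_label = one_hot.mean(axis=0, keepdims=True)  # (1/Z) * sum_t y_t
    return (one_hot + mean_label) / 2.0

# Example with Z = 4 steps and 3 classes (ground-truth sequence 0, 1, 0, 2):
y = np.eye(3)[[0, 1, 0, 2]]
y_smooth = smooth_labels(y)   # each row still sums to 1
```

Each smoothed row keeps half of its mass on the original class and spreads the rest according to how often each class appears anywhere in the future sequence, which is what relaxes the order constraint.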
## 3 Experiments
### 3.1 Implementation Details
For settings not explicitly mentioned, we follow the approach outlined in the baseline method [1]. We trained two models as the foundation for our ensemble approach, with their respective training configurations detailed in Table 1. We used a pretrained checkpoint of the SlowFast encoder model provided for the long-term action anticipation task. In Model A, we employed the SlowFast encoder and a Concat aggregator, with the number of input clips set to 8. In Model B, we used two encoders, SlowFast and CLIP, along with a Transformer aggregator, setting the number of input clips to 4. Additionally, during the training process, we incorporated label smoothing as described in Section 2.5. For each training process, we used 4 NVIDIA Tesla V100 GPUs, with a batch size of 32, a learning rate of 0.0001, and 50 epochs.
### 3.2 Main Results
In Table 2, we show a comparison between the baseline and our proposed approach on the validation and test set. According to the official guidelines, the number of actions to predict \(Z\) was set at 20, while the number of output patterns \(K\) was set at 5. We conducted a thorough search for the parameters of the ensemble weights, aiming to improve the accuracy on the validation data. As a result, we determined the optimal parameters to be \(\alpha=0.6\) and \(\beta=1.4\), respectively.
To investigate the effect of the ensemble approach, we performed a comparative analysis between the individual models and the ensemble model, shown in Table 3. In this evaluation, the ensemble weights were set to \(\alpha=0.5\) and \(\beta=0.5\). By ensembling the two models, improvements in edit distance values were observed: approximately 0.010 for verbs, 0.015 for nouns, and 0.006 for actions. In the case of actions, the edit distance is calculated considering the predictions of verb and noun pairs. The smaller improvement for actions compared to the individual results for verbs and nouns can be attributed to the potential loss of consistency resulting from the ensemble of multiple models. Moreover, output refinement improved the action score by 0.032 points, representing the most substantial improvement in our method. This suggests that considering the contextual relationship with past actions has a positive impact on the performance. The improvement for actions was comparable to those for verb and noun performance, indicating the effectiveness of considering the co-occurrence of verbs and nouns.
Table 4 shows the effect of label smoothing, which was applied only during the training of Model B. The results for both verbs and nouns are improved, with a 0.05 point enhancement in action prediction. This can be attributed to
\begin{table}
\begin{tabular}{l l l l l} \hline \multicolumn{2}{c}{**Encoder**} & \multicolumn{1}{c}{**Aggregator**} & \multicolumn{1}{c}{_I_} & \multicolumn{1}{c}{**Label smoothing**} \\ \hline
**A** & SlowFast & Concat & 8 & \\ \hline
**B** & SlowFast+CLIP & Transformer & 4 & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Parameter configurations used in the training of individual models. Baseline settings were adopted for all other parameters.
\begin{table}
\begin{tabular}{l c c c} \hline
**Method** & **Verb** & **Noun** & **Action** \\ \hline
**model A** & 0.7053 & 0.7058 & 0.9232 \\ \hline
**model B** & 0.7046 & 0.6717 & 0.9139 \\ \hline
**A+B** & 0.6948 & 0.6563 & 0.9079 \\ \hline
**A+B+refinement** & **0.6618** & **0.6266** & **0.8762** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of the baseline and proposed approach for validation and test data. The baseline consists of SlowFast encoder and Transformer aggregator. The scores on the test data are cited from the leaderboard.
\begin{table}
\begin{tabular}{l c c c} \hline
**Method** & **Verb** & **Noun** & **Action** \\ \hline
**model A** & 0.7053 & 0.7058 & 0.9232 \\ \hline
**model B** & 0.7046 & 0.6717 & 0.9139 \\ \hline
**A+B** & 0.6948 & 0.6563 & 0.9079 \\ \hline
**A+B+refinement** & **0.6618** & **0.6266** & **0.8762** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of individual models and the ensemble model, as well as the results obtained when incorporating the refinement module.
the suppression of penalties related to sequence order misalignments, similar to the evaluation metric of edit distance.
### 3.3 Examples of Positive and Negative Results
Figure 2 shows examples that are correctly and incorrectly predicted by our approach. In the accurate cases, fewer prominent objects are present in the image, and the number of action patterns is limited. In this case, there are only two types of actions: "sand wood" and "wipe wood." Furthermore, the proposed approach is more likely to make consecutive predictions within the same pattern because the output refinement module takes temporal relationships into account.
On the other hand, common characteristics of the failed cases include many objects with which people can easily interact, significant changes in the field of view due to the movement of subjects, and high action complexity, meaning that there are many possible actions that can be taken towards a single object.
## 4 Conclusion
In this report, we introduce three improvements over the baseline in the long-term action anticipation task for first-person videos. Results on the validation and test set show that the proposed method can achieve excellent performance.
|
2302.02968
|
Quantifying tissue growth, shape and collision via continuum models and
Bayesian inference
|
Although tissues are usually studied in isolation, this situation rarely
occurs in biology, as cells, tissues, and organs, coexist and interact across
scales to determine both shape and function. Here, we take a quantitative
approach combining data from recent experiments, mathematical modelling, and
Bayesian parameter inference, to describe the self-assembly of multiple
epithelial sheets by growth and collision. We use two simple and well-studied
continuum models, where cells move either randomly or following population
pressure gradients. After suitable calibration, both models prove to be
practically identifiable, and can reproduce the main features of single tissue
expansions. However, our findings reveal that whenever tissue-tissue
interactions become relevant, the random motion assumption can lead to
unrealistic behaviour. Under this setting, a model accounting for population
pressure from different cell populations is more appropriate and shows a better
agreement with experimental measurements. Finally, we discuss how tissue shape
and pressure affect multi-tissue collisions. Our work thus provides a
systematic approach to quantify and predict complex tissue configurations with
applications in the design of tissue composites and more generally in tissue
engineering.
|
Carles Falcó, Daniel J. Cohen, José A. Carrillo, Ruth E. Baker
|
2023-02-06T17:58:18Z
|
http://arxiv.org/abs/2302.02968v1
|
# Quantifying tissue growth, shape and collision via continuum models and Bayesian inference
###### Abstract
Although tissues are usually studied in isolation, this situation rarely occurs in biology, as cells, tissues, and organs, coexist and interact across scales to determine both shape and function. Here, we take a quantitative approach combining data from recent experiments, mathematical modelling, and Bayesian parameter inference, to describe the self-assembly of multiple epithelial sheets by growth and collision. We use two simple and well-studied continuum models, where cells move either randomly or following population pressure gradients. After suitable calibration, both models prove to be practically identifiable, and can reproduce the main features of single tissue expansions. However, our findings reveal that whenever tissue-tissue interactions become relevant, the random motion assumption can lead to unrealistic behaviour. Under this setting, a model accounting for population pressure from different cell populations is more appropriate and shows a better agreement with experimental measurements. Finally, we discuss how tissue shape and pressure affect multi-tissue collisions. Our work thus provides a systematic approach to quantify and predict complex tissue configurations with applications in the design of tissue composites and more generally in tissue engineering.
## 1 Introduction
Cells do not live in isolation; instead they coexist and organize to form tissues and organs. In particular, during tissue growth, cells do not behave as isolated individuals, but sense their environment and direct their motion according to the information they receive. The sum of all individual cells, behaving in a coordinated manner and interacting with each other, can give rise to collective cell migration, which is essential for many different phenomena in biology, from wound healing and tumour invasion, to the formation of complex structures during development [1, 51]. Being such a fundamental process, much effort has been devoted to decipher the basic physical principles behind collective cell migration, both experimentally and from a modelling perspective [14, 30]. Being able to connect models and experimental data is thus essential in order to confirm the validity of mathematical models, as well as to gain further mechanistic insights.
At the tissue scale, mathematical models are usually based on a continuum description, where the cell density evolves according to a partial differential equation (PDE). Arguably the most famous continuum model of tissue spreading is the reaction-diffusion Fisher-KPP equation [45], which is based on the assumption that cell movement is essentially random, and that cells proliferate according to a logistic growth law. This model and variants of it have been used to describe a variety of tissue formation experiments [41, 53, 54].
From a biological perspective, however, the random motion assumption is not very realistic, as cells are able to sense the pressure exerted by neighbouring cells and direct their movement according to this information [31]. When population pressure is taken into account in continuum models one obtains the Porous-Fisher equation, which replaces the constant diffusion coefficient in the Fisher-KPP equation by a density-dependent function that increases as a power-law of the density. One of the most interesting features about this model is the appearance of compactly supported solutions, which give rise to the sharp invasion fronts observed in tissue formation experiments [8, 13, 22, 23]. Of course, there are additional effects which can play an important role in collective cell motility and have been modelled using extensions of the mentioned equations, such as cell-cell adhesion [2, 19, 25, 44], viscoelastic forces [1, 6, 32], interactions with the extracellular matrix [7, 17, 29], heterogeneity in cell size [39, 40], and cell-cycle dynamics [56].
Mathematical models can thus be more or less complex depending on the available data and the required level of biological detail, and they are a powerful tool to explore the impact of different biological mechanisms on collective cell movement. So-called _identifiability analysis_ methods [15, 16] provide a systematic approach to relate model complexity to the type
and amount of experimental data, and are a first step towards the estimation of model parameters. We say that a model is _structurally identifiable_ if different parameter values yield different model predictions. Hence, this is an intrinsic property of the model which depends on whether, given infinite ideal data, one can identify single values for the model parameters. Such formal structural identifiability analysis is possible for systems of ordinary differential equations [36, 49], and for certain families of PDEs (e.g. age-structured [50]), but is more challenging for reaction-diffusion equations. Added to this, biological data is never infinite nor ideal which limits how much insight we can gain from structural identifiability.
As a result, here we explore the question of _practical identifiability_[15] of two simple reaction-diffusion continuum models -- namely the Fisher-KPP and Porous-Fisher equations -- using data from recent tissue formation experiments [32, 34]. Practical identifiability deals with finite and possibly noisy data, and depends on the inference method, but at its core is motivated by the same question: can we confidently identify estimates for the different model parameters? Here we follow the ideas in [35, 56] and use a Bayesian approach in order to obtain posterior distributions for the different model parameters. Poor identifiability in a Bayesian context is thus associated with very broad posterior distributions indicating high uncertainty for the associated parameters [55].
Our work reveals that both models can be suitably calibrated to reproduce the dynamics of freely expanding epithelia, with the different model parameters being practically identifiable in all considered settings. However, when tissues are not isolated from each other and are allowed to collide as a result of motility and proliferation during tissue growth, only the Porous-Fisher model, which considers interactions between cells, is able to describe the experimental data. This model, while being relatively simple and having only three parameters, also proves to be useful for understanding the dynamics of multi-tissue collisions and, hence, for predicting steady state tissue configurations with applications in tissue engineering.
We structure the paper as follows; first, we describe the two continuum models and the inference approach taken. Then, we estimate the different model parameters using comprehensive experimental data of the growth of large, circular epithelia (Heinrich et al. [32]). After confirming that the two employed models are practically identifiable and that they can reproduce data collected from these experiments, we validate our models on more complex experimental datasets detailing how multiple epithelia interact with each other during collision and healing experiments (Heinrich et al. [34]). Using the obtained parameter estimates we explore whether the two models can reproduce several tissue collisions experiments with very different initial tissue geometries. Finally, we use the Porous-Fisher model to quantify and characterize the dynamics of multi-tissue collisions.
## 2 Simple models of tissue growth
We start by looking at simple models describing the growth of a single epithelial monolayer tissue. We denote cell density in the tissue by a continuous variable \(\rho(\mathbf{x},t)\) which depends on space \(\mathbf{x}\) and time \(t\). Cell density is assumed to change due to cell movement and local proliferation. Mass conservation then implies that the density \(\rho\) satisfies the continuity equation
\[\partial_{t}\rho+\nabla\cdot\mathbf{j}=r\rho f(\rho), \tag{1}\]
where the flux \(\mathbf{j}\) determines how cells move, \(r\) is the proliferation rate, and \(f(\rho)\) is a crowding function which regulates how density-dependent effects reduce net growth. For simplicity we consider logistic growth given by \(f(\rho)=1-\rho/K\), with \(K\) a saturation density or carrying capacity. Note that epithelial tissues are well characterized to undergo contact inhibition of proliferation, where cell cycling decreases as cell density increases [32, 48] and hence the logistic growth assumption is reasonable -- see also [57] for other possibilities.
A very simple model can be motivated by assuming that cells move randomly following Brownian motion, which corresponds to the well-known Fick's law of diffusion, \(\mathbf{j}=-D\nabla\rho\). In this case, we obtain the Fisher-KPP equation
\[\partial_{t}\rho=D\Delta\rho+r\rho\left(1-\frac{\rho}{K}\right). \tag{2}\]
This model and related ones are particularly relevant to describe tissue growth due to the presence of travelling wave solutions, which are characterized by an invasion front of fixed shape that propagates at a constant speed [45].
However, a more realistic model should account for the fact that cell movement is not completely random and can be influenced by the local cell density. A standard approach in order to incorporate crowding effects into Eq. (1) results from the assumption that the velocity is proportional to the gradient of the density, so that cells move down population density gradients. In other words, we write the flux as \(\mathbf{j}=\rho\mathbf{v}\), where \(\mathbf{v}\) represents the cell velocity and now assume that \(\mathbf{v}=-D\nabla\rho\). This gives the following Porous-Fisher equation
\[\partial_{t}\rho=D\,\nabla\cdot(\rho\nabla\rho)+r\rho\left(1-\frac{\rho}{K} \right). \tag{3}\]
When there is no proliferation (\(r=0\)), Eq. (3) corresponds to a specific case of the well-known porous-medium equation [60]. This equation is also related to Darcy's law which links the velocity with the population pressure: \(\mathbf{v}=-\nabla P(\rho)\). For the general porous
medium equation, pressure and density are related via the power-law function \(P(\rho)\sim\rho^{m-1}\), depending on the exponent \(m\). During this work and unless stated otherwise, we will assume \(m=2\). Note that in the limit \(m\to 1\), one obtains the linear diffusion case with \(P(\rho)\sim\log\rho\).
From a microscopic point of view, where one focuses on individual cell trajectories, Eq. (2) corresponds to the continuum limit of a system of non-interacting agents which move randomly and can proliferate with a density-dependent probability. The porous-medium equation with \(m=2\) can also be derived from microscopic movement rules when one takes into account volume exclusion [10, 27, 47], starting from on-lattice [4, 24] and also from off-lattice agent-based models [13, 20, 21, 46]. Further, the case with \(m=3\) can be identified as the mean-field limit of a system of interacting agents with a particular diffusive scaling [62] and has also been suggested as _the simplest model_ to relate the dispersal velocity to both the density and its gradient [59].
In the following, we connect Eqs. (2) and (3) with data from recent experiments studying the dynamics of expanding and colliding epithelial monolayer tissues. The two suggested models are solved numerically in two spatial dimensions with the finite-volume numerical scheme described in [3, 11].
## 3 Single tissue expansions and parameter estimation
In order to calibrate the two suggested models, we focus on the experiments by [32]. In these, Heinrich et al. characterized the expansion dynamics and growth of single circular epithelial tissues using an MDCK cell line. Initially, cells are cultured in a silicone stencil for 18 hours and, after the stencil removal, tissues are allowed to freely expand for 46 hours, which enables each cell to undergo 2-3 cell divisions given that the cell cycle duration is around 16 hours. Local densities are then quantified by counting the number of nucleus centroids -- for more details we refer to [32]. For our analysis, we only consider the measured cell densities after the first six hours of the experiment so that effects caused by the stencil removal are negligible. In Figure 1A we show snapshots from one such experiment using a circular tissue with an initial diameter of 3.4 mm -- see Figure 1B for the quantified densities. The radial density profile resulting from averaging 11 experimental replicates is shown in Figure 1C. Datasets used to reproduce these figures were taken from [33]. See Figure S3 for individual density profiles at specific time points.
Our experimental data then consists of many individual measurements of the cell densities \(\rho(\mathbf{x},t)\), giving rise to the dataset \(\mathcal{D}=\{\rho^{\mathcal{D}}(\mathbf{x}_{i},t_{j})\}_{i,j}\). Here, the different measurements are
Figure 1: Expansions of single tissues and model predictions. (A) Microscopy images in phase-contrast at different times for the expansion of a circular tissue with initial diameter of 3.4 mm — taken from [33]. (B) Quantified experimental cell densities for the same expansion — data from [33]. (C) Experimental radial density profile obtained after averaging the expansions of 11 tissues with the same initial condition — from [32]. (D) Radial density profile from the Fisher-KPP model given by Eq. (2). (E) Radial density profile from the Porous-Fisher model given by Eq. (3). Model parameters correspond to the maximum posterior estimates. All densities thresholded at 10 cells/mm\({}^{2}\). See Figure S3 for individual density profiles at specific time points
recorded every 20 minutes, while the positions \(\{\mathbf{x}_{i}\}_{i}\) correspond to the centers of small voxels of \(115\times 115\)\(\mu\)m\({}^{2}\). In practice, and in order to keep the dimensionality of the data sufficiently low, we will only use the densities corresponding to the time points \(t_{j}=16,26,36,46\) h. In order to connect experimental data and models, we assume that the observations \(\rho^{\mathcal{D}}\) are noisy versions of the model predicted density \(\rho\). A common approach in mathematical biology [35, 56] is to impose that the observation errors are additive, independent and normally distributed with variance \(\sigma^{2}\). In other words, we assume the following error model
\[\rho^{\mathcal{D}}(\mathbf{x}_{i},t_{j})=\rho(\mathbf{x}_{i},t_{j})+\varepsilon,\quad\varepsilon\sim\mathcal{N}(0,\sigma^{2}). \tag{4}\]
### 3.1 Parameter estimation via maximum likelihood
Both models -- Eqs. (2) and (3) -- have three parameters \(D,r,K\) to be estimated. Considering the noise parameter \(\sigma\) of the observation error as an extra parameter, we can write them as a vector \(\theta=(D,r,K,\sigma)\). With the error model given by Eq. (4) we can explicitly write the log-likelihood of observing the measured data
\[\ell(\theta)=-\frac{1}{2}\sum_{i,j}\left(\log\left(2\pi\sigma^{2}\right)+\left(\frac{\rho(\mathbf{x_{i}},t_{j})-\rho^{\mathcal{D}}(\mathbf{x_{i}},t_{j})}{\sigma}\right)^{2}\right). \tag{5}\]
A direct approach to estimating the parameters in the two models consists of maximizing this log-likelihood as a function of the parameter vector \(\theta\), which gives a maximum likelihood estimator of the model parameters: \(\theta_{\mathrm{ML}}=\mathrm{argmax}_{\theta}\,\ell(\theta)\). In the case of a fixed noise parameter \(\sigma\), this is equivalent to minimizing the 2-norm of the difference between model and data. Note, however, that whenever the models are non-identifiable, maximizing the likelihood might lead to misleading results [55]. This is thus only a first step in our parameter inference analysis.
We perform the likelihood optimization using the parameter inference toolbox pyPESTO [52]. This toolbox allows for local optimization of the likelihood starting from an initial guess of \(\theta_{0}\). By randomly sampling a large number of initial vectors \(\theta_{0}\) we find the same local maximum in most of the optimization runs. Additionally, this local maximum also maximizes the likelihood among all the found local maxima. In order to generate initial guesses of \(\theta_{0}\) we sampled uniformly on log-scale using the parameter bounds \(10^{-2.5}<r<10^{1}\) h\({}^{-1}\), \(10^{3}<K<10^{3.5}\) cells/mm\({}^{2}\), \(10^{1}<\sigma<10^{3.5}\) cells/mm\({}^{2}\), for both models; and \(10^{2.5}<D<10^{4.5}\)\(\mu\)m\({}^{2}\)/h for the Fisher-KPP model, and \(10^{-1.5}<D<10^{-2.5}\)\(\mu\)m\({}^{2}\)/h, for the
Porous-Fisher model. The maximum likelihood estimators are indicated using dashed lines in Figures 2 and 3 for the Fisher-KPP and Porous-Fisher models respectively.
As stated earlier, only the experimental cell densities corresponding to the time points \(t_{j}=16,26,36,46\) h were used for the likelihood calculation in Eq. (5). This was done to keep the computational costs of computing the maximum likelihood estimate at reasonable levels. Different choices of these time points yielded similar results for the maximum likelihood estimate \(\theta_{\text{ML}}\).
### 3.2 Bayesian inference
Here we explore the question of _practical identifiability_ of the Fisher-KPP and Porous-Fisher models using a Bayesian approach. In order to capture uncertainty in the model parameters, we are interested in estimating the posterior distribution \(P(\theta\,|\,\rho^{\mathcal{D}})\), which can be calculated from Bayes' theorem
\[P(\theta\,|\,\rho^{\mathcal{D}})\propto P(\rho^{\mathcal{D}}\,|\,\theta)\pi( \theta),\]
where \(P(\rho^{\mathcal{D}}\,|\,\theta)\) is the likelihood of observing the measured data, and \(\pi(\theta)\) is the prior distribution of the parameter vector \(\theta\). We assume the error model given by Eq. (4), and hence the log-likelihood is given by Eq. (5). The priors for the two considered models are assumed to be uniform on log-scale using the bounds given in the previous section.
In order to infer the posterior distribution, we use a Metropolis-Hastings MCMC (Markov chain Monte Carlo) sampler with adaptive proposal covariance, which is also implemented in pyPESTO [52]. The Metropolis-Hastings MCMC algorithm is a simple and popular choice for exploring the parameter space [35, 56], in which a Markov chain starts at position \(\theta\), and accepts a potential move to \(\theta^{*}\) with probability \(q=\min\{1,P(\theta^{*}\,|\,\rho^{\mathcal{D}})/P(\theta\,|\,\rho^{\mathcal{D}})\}\). In this way, the Markov chain tends to move towards high values of the posterior distribution, while still allowing for transitions to regions of lower probability in order to move away from local maxima. In this context, poor identifiability of the parameters can be detected by Markov chains that fail to converge towards a unimodal peaked posterior distribution.
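A stripped-down sketch of such a sampler is given below, assuming a user-supplied forward model that solves the PDE for given parameters and returns the densities at the measurement points. It uses a random-walk proposal of fixed width on the log-parameters rather than the adaptive proposal covariance of pyPESTO, and all names are illustrative.

```python
import numpy as np

def log_likelihood(theta, rho_data, forward_model):
    """Gaussian log-likelihood of Eq. (5); theta = (log D, log r, log K, log sigma)."""
    D, r, K, sigma = np.exp(theta)
    rho_model = forward_model(D, r, K)          # model densities at the data points
    resid = (rho_model - rho_data) / sigma
    return -0.5 * np.sum(np.log(2 * np.pi * sigma**2) + resid**2)

def metropolis_hastings(theta0, log_posterior, n_iter=12000, step=0.02, seed=0):
    """Random-walk Metropolis sampler on the log-parameters."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_posterior(proposal)
        # accept with probability min{1, P(proposal | data) / P(current | data)}
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain
```

Here the log-posterior is the log-likelihood plus a flat prior on the log-parameters inside the bounds stated above, i.e. it returns \(-\infty\) whenever a proposal falls outside those bounds.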
We run the MCMC algorithm starting from three different initial guesses of \(\theta\) for both models. In all cases, the Markov chains converge rapidly to narrow and well-defined stationary distributions -- see Figures S1 and S2 for plots of the chains and the univariate Gelman-Rubin convergence diagnostics. In particular, our Markov chains are typically of length \(12000\). Taking the last \(5000\) iterations of the three chains for each model we obtain the posterior distributions \(P(\theta\,|\,\rho^{\mathcal{D}})\). In Figures 2 (Fisher-KPP) and 3 (Porous-Fisher)
we show a plot matrix representation of the univariate and bivariate marginal distributions, with unimodal and approximately symmetric univariate densities. We also observe an excellent agreement between the marginal univariate modes and the maximum likelihood estimates found in the last section. Note that for the two models, different combinations of the parameters \(D,r,K\) can result in the same invasion front speed, which explains the observed correlation between these parameters in the bivariate densities in Figures 2 and 3. However, we observe that there is only one set of parameters maximising the likelihood, and that these parameters can be confidently identified given the small variance of the posterior distribution.
All identified parameters lie within the biologically feasible bounds. In the linear diffusion case (Fisher-KPP), the univariate modes are given by \((D,r,K,\sigma)=(1073~{}\mu\)m\({}^{2}\)/h, 0.29 h\({}^{-1}\), 5113 cells/mm\({}^{2}\), 492 cells/mm\({}^{2}\)). Using an average density of \(\sim 3000\) cells/mm\({}^{2}\) the estimated proliferation rate is around \(\sim 0.1\) h\({}^{-1}\), which yields an estimated division time around 10 hours. This is consistent with the characteristic division time for MDCK cells of 16-18 hours given that this timescale can vary significantly with cell size [58]. The carrying capacity can
Figure 2: Results of the MCMC algorithm for the Fisher-KPP model given by Eq. (2). The diagonal plots represent the univariate marginal posterior distributions for each parameter. Below the diagonal we show the bivariate densities for every combination of parameters. Univariate posterior modes correspond to \((D,r,K,\sigma)=(1073\pm 13~{}\mu\)m\({}^{2}\)/h, \(0.289\pm 0.002\) h\({}^{-1}\), \(5113\pm 6\) cells/mm\({}^{2}\), \(492\pm 2\) cells/mm\({}^{2}\)), where the errors are given by one standard deviation, calculated from the posterior distributions. Black dashed lines indicate the maximum likelihood estimates for each parameter.
also be related to the typical cell radius for MDCK cells. Although notable variability has been reported [63], the MDCK cell radius \(a\) is estimated to range between \(a\sim 6\ \mu\)m and \(a\sim 18\ \mu\)m [28]. Assuming that maximum densities in the monolayer are associated with hexagonal close packing of cells, the maximum theoretical density is given by \(K=1/(2\sqrt{3}a^{2})\)[61]. With our estimated carrying capacity this yields an estimate of \(a\sim 8\ \mu\)m, which again is consistent with previous measurements, at least for cells in the bulk of the tissue.
In the case of the Porous-Fisher model, we obtain the univariate modes \((D,r,K,\sigma)=(1.18\ \mu\)m\({}^{2}\)/h, \(0.21\ \mathrm{h}^{-1}\), \(5319\ \mathrm{cells/mm}^{2}\), \(427\ \mathrm{cells/mm}^{2}\)). Note that the proliferation related parameters \(r,K\) are very similar to the ones we estimated for the Fisher-KPP model. In this case, we estimate a cell division time of around \(\sim 11\) hours, and a typical cell radius of \(a\sim 7\ \mu\)m, again within the known ranges. Note that for the Porous-Fisher model, the diffusion coefficient is density-dependent -- \(D(\rho)=D\rho\). Using an average density of \(\sim 3000\) cells/mm\({}^{2}\), we also estimate an _average diffusion coefficient_ which is three times larger than in the linear case, but still of the same order. We also observe that the estimated noise related parameter \(\sigma\) is smaller in the Porous-Fisher case.
Figure 3: Results of the MCMC algorithm for the Porous-Fisher model given by Eq. (3). The diagonal plots represent the univariate marginal posterior distributions for each parameter. Below the diagonal we show the bivariate densities for every combination of parameters. Univariate posterior modes correspond to \((D,r,K,\sigma)=(1.18\pm 0.01\ \mu\)m\({}^{2}\)/h, \(0.214\pm 0.001\ \mathrm{h}^{-1}\), \(5319\pm 7\) cells/mm\({}^{2}\), \(427\pm 2\ \mathrm{cells/mm}^{2}\)), where the errors are given by one standard deviation, calculated from the posterior distributions. Black dashed lines indicate the maximum likelihood estimates for each parameter.
In summary, both models present well-defined and narrow posterior distributions for all the model parameters, with the parameter estimates being consistent with previous experimental measurements. Thus, we have shown via a Bayesian approach that all the model parameters appear to be identifiable. A more sophisticated approach aiming to use all the available data -- instead of measurements every 10 hours -- could include for instance a mini-batch algorithm [42]. However, taking a subset of the data highlights that the models are practically identifiable, suggesting such approaches are not necessary in this case.
### 3.3 Almost identical predictions from different continuum models
Next, we explore to what extent the two considered models are able to reproduce the observed data. To do so, we solve numerically Eqs. (2) and (3) using the parameter values that we estimated in the previous section. In order to minimise the possible impact of the stencil removal on cell motility [32], we use as initial condition the experimental density profile at time \(t=6\) h. The resulting radial density profiles are shown in Figure 1D-E -- see also Figure S3.
First, we observe that both models yield very similar predictions with minor differences that are only noticeable near the expansion front. This is basically due to the fact that the solution of the Porous-Fisher model (3) presents a sharp front, in contrast with the exponential decay in space of the Fisher-KPP equation (2). Note that the Fisher-KPP model fails to accurately capture the behaviour of cell densities near the monolayer boundary, but the Porous-Fisher model, which accounts for population pressure, gives a more accurate description.
Secondly, we see that both models capture qualitatively the dynamics and growth of the expansions, but fail to capture the non-monotonic behaviour of the radial density profile for intermediate timescales. The experiments of Heinrich et al. [32] observed that this phenomenon is accentuated for smaller tissues. Moreover, for later times, the experiments report cell densities that are higher than the estimated carrying capacity, which might be due to the fact that the considered models neglect variability in cell area as cells progress through their cell cycle [39, 43].
All in all, these results show that both models, after being suitably calibrated, can explain equally well the data. As we will see next, it is only under more complex experimental conditions, when one needs to account for a more detailed level of physical description, that we can distinguish between the models.
## 4 Quantifying tissue-tissue collisions
Having seen that the two proposed models are practically identifiable, we now analyze how much mechanistic insight we can gain from more complex experiments. We consider a second set of experiments also performed by Heinrich et al. [34], where tissues are not isolated as in the previous experiments, but are allowed to interact with other tissues. In particular, Heinrich et al. study the dynamics of multi-tissue collisions, varying the shape and the number of colliding tissues, and find very complex patterns resulting from basic cell-cell interactions and mechanical properties. One of the most interesting observed features is the formation of sharp boundaries at the collision location, avoiding thus mixing of cells from different tissues, which is also characteristic of models that account for population pressure [5, 12]. Next, we follow closely these experiments and attempt to use both the Fisher-KPP and the Porous-Fisher models to reproduce different types of collisions.
Although we will always work with homotypic tissues (i.e. of the same cell type), it is particularly useful to identify a system consisting of multiple homotypic tissues with a model that accounts for several interacting cell populations. In our case, the tissues are composed of the same cell populations initially seeded at distinct spatial locations. Note, however, that the models presented below can account also for heterotypic tissue experiments. We denote the different species or tissues by \(\rho_{i}\) for \(i=1,\ldots,n\) with \(n\) being the total number of species. In the linear diffusion Fisher-KPP model we assume that each species follows random motion and hence the diffusive part in the PDE remains unaffected. Taking into account that proliferation is limited by the total population density, we may write for \(n=2\)
\[\begin{cases}\partial_{t}\rho_{1}=D\Delta\rho_{1}+r\rho_{1}\left(1-\frac{\rho _{1}+\rho_{2}}{K}\right),\\ \partial_{t}\rho_{2}=D\Delta\rho_{2}+r\rho_{2}\left(1-\frac{\rho_{1}+\rho_{2} }{K}\right).\end{cases} \tag{6}\]
For the nonlinear diffusion Porous-Fisher model, we can write the total population pressure as \(P(\rho_{1},\rho_{2})=D\left(\rho_{1}+\rho_{2}\right)\). With this, the two-species model becomes
\[\begin{cases}\partial_{t}\rho_{1}=D\,\nabla\cdot\left(\rho_{1}\nabla\left( \rho_{1}+\rho_{2}\right)\right)+r\rho_{1}\left(1-\frac{\rho_{1}+\rho_{2}}{K} \right),\\ \partial_{t}\rho_{2}=D\,\nabla\cdot\left(\rho_{2}\nabla\left(\rho_{1}+\rho_{2 }\right)\right)+r\rho_{2}\left(1-\frac{\rho_{1}+\rho_{2}}{K}\right).\end{cases} \tag{7}\]
Extensions of these models to an arbitrary number of species, \(n>2\), are straightforward. The existence theory for cross-diffusion systems of the type of (7) is studied in [9, 18]. Note
also that as a result of the population pressure term, system (7) gives sharp boundaries separating both species for initially segregated data [5, 12] which, again, motivates its use to reproduce the experiments in [34].
### 4.1 Reproducing experimental tissue collisions
In the next sections, we explore numerically the two proposed models under different initial conditions. We start with a qualitative study of some of the experiments performed by Heinrich et al. [34] and follow with a more quantitative analysis of collisions between rectangular tissues.
#### 4.1.1 Simple binary tissue-tissue collisions
We first test the two proposed models in binary tissue-tissue collisions. In order to do so, here we choose different initial shapes for the two colliding tissues, namely we study circle-circle, rectangle-rectangle and circle-rectangle collisions. We also analyze the case of two colliding circles with different initial radii. See Figure 4A for the experimental initial and final configurations. We emphasize that, in contrast with all the shown numerical simulations of our models, the colours in the experimental snapshots are only used to label each different tissue and do not quantify cell densities.
We numerically solve Eqs. (7) for the four mentioned initial conditions and with the parameters that we estimated from the previous experiments [32]. The numerical scheme is identical to the one-species case [3, 11]. As expected, in all four studied configurations the Porous-Fisher model shows sharp boundaries separating the two tissues after collision, and the observed patterns are nearly identical to the experimental final configurations after a simulation time equivalent to around 60 hours (Figure 4B). Note that in contrast with the experimental snapshots, Figure 4B shows quantitative cell densities.
When, instead of the Porous-Fisher model accounting for population pressure (7), we use the Fisher-KPP model (6), we still observe patterns that resemble the experimental configurations. However, recall that in this case cells do not sense local pressure and are free to move in all directions, which results in a region where cells from both tissues can mix. Note that in this case, no sharp boundary between tissues is observed either -- Figure 4C. Even though the Fisher-KPP model fails to reproduce density profiles near the collision boundary, it still can capture qualitatively the density profiles in the bulk of the tissue, where the population density gradient becomes more uniform. Hence, after suitable calibration,
Figure 4: Reproducing tissue-tissue collisions with different geometries — animated movies available at [26]. Accounting for population pressure correctly predicts the sharp boundaries observed in experiments. (A) Experimental results for initial conditions with different tissue geometries. Figures adapted from [34] (Creative Commons License). Note that colours are only used to label each tissue and do not quantify cell densities. (B) Initial conditions and numerical simulations for the Porous-Fisher model (Eqs. (7)) at \(t=57\) h. Colours in the numerical simulations indicate cell densities according to the shown colorbars. (C) Comparison of the Fisher-KPP model (Eq. (6)) and the Porous-Fisher model (Eq. (7)). Solutions corresponding to the black dashed lines in (B). Parameter estimates given in the previous section: \((D,r,K)=(1073~{}\mu\)m\({}^{2}\)/h, \(0.29\) h\({}^{-1}\), \(5113\) cells/mm\({}^{2}\)) for the Fisher-KPP model and \((D,r,K)=(1.18~{}\mu\)m\({}^{2}\)/h, \(0.21\) h\({}^{-1}\), \(5319\) cells/mm\({}^{2}\)) for the Porous-Fisher model.
both the Fisher-KPP and the Porous-Fisher models show similar behaviour in this region far from the collision boundary and the propagating front.
Observe also that collisions shown in Figure 4B that occur between tissues with the same shape (rectangle-rectangle and circle-circle collisions) were initialised with tissues of the same density. As reported experimentally in [34], these initial conditions result in the formation of a fixed sharp boundary that does not move in time. However, when collisions between tissues with different densities occur, then the denser tissue pushes the less dense tissue resulting in a boundary displacement which can be measured experimentally. For collisions between tissues with different shapes the collision boundary can also show a similar behaviour, as shown in Figure 4. In the next sections we study this phenomenon quantitatively using the Porous-Fisher model (7). Of course, given that linear diffusion fails to predict a sharp boundary between colliding tissues, this boundary displacement cannot be estimated from the Fisher-KPP model (6). Before moving to the study of collision boundary dynamics, we analyze a further set of more complex tissue collision experiments, which make evident the limitations of the simple Fisher-KPP model.
#### 4.1.2 Multi-tissue complex collisions
In the previous sections, we have shown that, after suitable calibration, both the Fisher-KPP and the Porous-Fisher models show similar behaviour in regions of tissue that are far from boundaries. However, under more complex experimental conditions where tissue boundary dynamics become important, the predictive power of the Fisher-KPP model becomes more limited.
These differences between the Fisher-KPP (6) and the Porous-Fisher model (7) become more evident when multiple tissues collide simultaneously. Here we focus on the experiments performed by Heinrich et al. [34] shown in Figure 5, where eight homotypic circular tissues are initially set apart on a hexagonal lattice. The initial configuration is also represented in Figure 5 alongside the solutions predicted by the two proposed models after 57 hours. From these results, it becomes evident that the Fisher-KPP model is not suitable to describe complex interactions between tissues. In contrast, accounting for population pressure does yield the predicted behaviour with a final pattern nearly identical to that observed experimentally.
The Porous-Fisher model (7) can thus predict the behaviour observed in complex experimental settings with multiple tissues colliding. A numerical simulation of an extension of (7) to three species is also depicted in Figure 5. This last experiment mimics the self-assembly of a tri-tissue composite designed in [34].
### Quantifying collisions between rectangular tissues
As mentioned earlier, collisions between two rectangular tissues result in the formation of a sharp boundary. Whenever the two rectangles are identical -- i.e. have the same shape and density -- the tissue boundary does not move and coincides with the centroid of the combined tissue. However, Heinrich et al. observed that using larger or denser tissues results in a boundary displacement in the direction of the smaller or less dense tissue -- see Figure 6A for their experimental data. As shown in the Supplementary Information Section S2, the
Figure 5: Reproducing complex tissue collisions observed in Heinrich et al. experiments — animated movies available at [26]. The Fisher-KPP model cannot reproduce complex multi-tissue collisions. (A) Experimental multi-tissue collisions, adapted from [34] (Creative Commons License). (B) Multi-tissue collision between eight homotypic circles for both the Fisher-KPP (6) and the Porous-Fisher (7) models. Density profiles are taken along the black dashed lines. Note that numerical simulations use parameter estimates obtained from different experiments — Figures 2 and 3. (C) Tri-tissue tessellation inspired by Escher’s artwork and also reproduced experimentally by Heinrich et al. [34]. Here we show numerical simulations of the Porous-Fisher model. The rightmost panel zooms in on the region indicated in the middle panel and shows sharp boundaries.
Porous-Fisher model also predicts this boundary displacement when there is a width/density mismatch between the initial tissues.
Here, we focus on the Porous-Fisher model and explore to what extent it can reproduce the observed experimental data. To perform a quantitative comparison between model and experiments, we recalibrate Eqs. (7) using the data corresponding to a collision between identical rectangular tissues (control case in Figure 6A). After carrying out parameter estimation, we explore how the model performs in collisions of rectangular tissues with relative mismatches in either the width or the number of cells (density and width mismatch in Figure 6A). For simplicity, and after having determined that our model is practically identifiable, we estimate the parameters using a maximum likelihood approach, as explained in previous sections, by comparing experimental and simulated cell densities. The initial densities are taken from experimental data, which in the control case (rectangles with equal density and equal width) are identical to those in Figure 4A.
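For concreteness, a minimal sketch of this maximum likelihood step is given below, assuming an additive Gaussian noise model with constant standard deviation and a hypothetical `simulate_density(D, r, K, x, times)` wrapper around the numerical PDE solver; it is illustrative only and not the exact pipeline used to produce the estimates reported next.

```python
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(params, x, times, rho_obs, simulate_density):
    """Gaussian negative log-likelihood of the observed densities.

    rho_obs[i, j] is the experimental density at time times[i] and position x[j];
    simulate_density is a (hypothetical) wrapper returning model densities of
    the same shape for given (D, r, K).
    """
    D, r, K, sigma = params
    if min(D, r, K, sigma) <= 0:
        return np.inf
    resid = rho_obs - simulate_density(D, r, K, x, times)
    n = resid.size
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + 0.5 * np.sum(resid**2) / sigma**2

def fit(x, times, rho_obs, simulate_density, start=(1.0, 0.2, 4000.0, 100.0)):
    # Nelder-Mead avoids the need for gradients of the PDE solver.
    return minimize(negative_log_likelihood, x0=np.array(start),
                    args=(x, times, rho_obs, simulate_density), method="Nelder-Mead")
```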
For this set of experiments, the maximum likelihood estimate yields \((D,r,K)=(3.26\)\(\mu\)m\({}^{2}\)/h, \(0.11\) h\({}^{-1}\), \(4077\) cells/mm\({}^{2}\)), which gives an approximate cell radius of \(\sim 8\)\(\mu\)m. Observe that the diffusion parameter \(D\) and the proliferation rate \(r\) show notable differences with respect to the previous set of experiments. In particular, these parameters suggest faster migration and slower proliferation, while the front speed remains more or less constant with respect to the case of a single tissue expansion. Note, however, that as we showed in the previous sections, the Porous-Fisher model is practically identifiable and hence, although different parameter combinations result in the same invasion speed, we can confidently identify a set of parameters which maximises the likelihood of observing our data.
In fact, the differences in the parameters estimated from the two experiments [32, 34] could be explained by accounting for the transient regime that occurs immediately after the stencil removal. This short timescale is estimated to last around 6-8 hours, which we exclude from the data when calibrating the model. However, if we only take into account the first 20 hours of the experiment, the maximum likelihood procedure yields very different estimates for the model parameters, which suggests that the experimental collision time could be smaller than this transient timescale.
After the model is calibrated using the control case data, we can simulate Eqs. (7) under different settings by varying the initial conditions. We study collisions between two rectangular tissues with an initial density (2600 vs 1800 cells/mm\({}^{2}\)) or width (1000 vs 500 \(\mu\)m) mismatch. In Figure 6B we plot the density profiles obtained from the numerical simulations, which show an excellent agreement with the experimental data once both tissues
have collided. At early times, however, and in line with our discussion above, the model cannot reproduce the observed experimental dynamics. In particular, tissue-tissue collisions occur around eight hours before they are observed in the experiments. The agreement between model and data becomes more evident upon visualising individual snapshots from these density profiles (see Figure 6C). Note that here, in the numerical simulation of both the density and width mismatch cases, we use the parameters estimated from a collision between identical rectangles.
### Population pressure gradients drive boundary displacement
As discussed earlier, the Porous-Fisher model produces a sharp boundary separating the two colliding tissues. When the two tissues are not identical, there is a population pressure gradient at this boundary, which yields a net displacement with velocity \(\mathbf{v}=-\nabla P(\rho)\). The nonlinear diffusion model assumes \(P(\rho)\sim\rho\) and thus the boundary will move in the direction of the less crowded tissue. This translates, of course, into a wider tissue pushing a narrower one, or a denser tissue pushing a less dense one.
Our numerical simulations also reveal this behaviour (Figure 6D), giving a larger boundary displacement the larger the width/density mismatch. When using parameters inferred from the control case, the total boundary displacement that the model predicts falls short of the experimental measurements of Heinrich et al. [34] by around 60-100 \(\mu\)m, which corresponds to less than 5% of the final tissue width after collision. We believe that uncertainty associated with the experimental measurements might have a minor impact on these results, as the boundary location can be determined experimentally up to subcellular accuracy and is then averaged over the collision axis -- given that different parts of the tissue might not collide at the same time. However, the transient behaviour that cells exhibit after stencil removal can have a more significant effect on the later dynamics [37, 38], especially if this timescale is of the order of the collision time.
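To make the boundary measurements in Figure 6D concrete, the helper below sketches how the collision boundary can be read off from two simulated density profiles on a one-dimensional grid; the threshold and function names are placeholders rather than part of the actual analysis code.

```python
import numpy as np

def boundary_position(x, rho1, rho2, threshold=1.0):
    """Position of the tissue-tissue interface on a 1D grid (placeholder threshold).

    For segregated solutions of (7), rho1 - rho2 changes sign at the interface;
    we look for that sign change only where an appreciable total density is present.
    """
    occupied = (rho1 + rho2) > threshold
    xs, diff = x[occupied], (rho1 - rho2)[occupied]
    crossings = np.where(np.sign(diff[:-1]) * np.sign(diff[1:]) < 0)[0]
    if crossings.size == 0:
        return None                      # the tissues have not collided yet
    i = crossings[0]
    # linear interpolation of the zero crossing between grid points i and i+1
    return xs[i] - diff[i] * (xs[i + 1] - xs[i]) / (diff[i + 1] - diff[i])

# boundary displacement at time t:  boundary_position(x, rho1_t, rho2_t) - b0,
# with b0 the interface position right after collision.
```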
Another aspect which could have a more important influence on tissue boundary dynamics from the modelling perspective is the choice of the pressure function \(P(\rho)\). In the Porous-Fisher model, cells move following population pressure gradients, moving away from crowded regions with a pressure function that is assumed to depend linearly on the density. However, using a more general pressure function would also give similar qualitative results but with possibly different dynamics. Note that a logarithmic dependence \(P(\rho)\sim\log\rho\)[34] is not suitable for this problem as it corresponds to the case of random cell movement in which there is no sharp boundary separating the tissues.
More generally, one could consider pressure functions that grow as a power-law function of the density, \(P(\rho)\sim\rho^{m-1}\) for \(m>1\). For large values of the exponent \(m\), cells only move when the density gradient is large, while in the limit \(m\to 1\) we recover the linear diffusion case. Considering this pressure-density relationship yields a porous-medium equation with proliferation for the evolution of the density, which also produces sharp boundaries between colliding tissues for \(m>1\). Hence one could ask how the boundary displacement depends on the relationship between pressure and density -- i.e. on the exponent \(m\). For small or no proliferation, this dependence can be analytically examined in the long-time regime. For instance, for one-dimensional tissues with an initial mass mismatch, a power-law pressure function yields a boundary displacement that grows in time as \(\sim t^{1/(m+1)}\), thus giving faster boundary motion for smaller values of \(m\) -- see Supplementary Information Section S2. We hence believe that it would be interesting to explore the practical identifiability of the exponent \(m\), and whether considering a more general pressure-density relationship could give more accurate tissue boundary dynamics.
Figure 6: Quantifying rectangle-rectangle collisions. (A) Experimental density and velocity profiles resulting from rectangle-rectangle collisions — data provided by Heinrich et al. [34]. (B) Numerical simulations of the Porous-Fisher model (7) with parameters \((D,r,K)=(3.26\ \mu\)m\({}^{2}\)/h, 0.11 h\({}^{-1}\), 4077 cells/mm\({}^{2}\)). Initial conditions taken from the experimental data. (C) Comparing experimental density (dashed) with model prediction (solid) for the width mismatch case. Note that although parameters are estimated from the control case, there is an excellent agreement between model and data when the initial conditions are modified. (D) Boundary displacement predicted by the model, for different density mismatches, labelled in different colours.
## Conclusion and outlook
In this work, we have focused on two main aspects of tissue formation modelling: the practical identifiability of the Fisher-KPP and Porous-Fisher models using a Bayesian approach, and the applicability of the two models to describe tissue collision experiments. Using data from recent experiments studying the growth and expansion of single epithelial sheets [32], we were able to obtain well-defined posterior distributions for each of the model parameters with relatively narrow confidence intervals. Our work thus adds to a growing literature assessing the practical identifiability of similar models under a variety of different experimental conditions [8, 56].
In contrast with previous studies, and for the sake of conciseness, here we opted for using only a Bayesian MCMC approach. Another commonly used option is the profile likelihood method [56, 57], which requires the solution of an optimization problem. This method, however, can yield similar results to the MCMC algorithm and significantly reduce computational time. Although the Bayesian method can be very helpful in performing uncertainty quantification, we believe that studies comparing a larger number of models may benefit from a likelihood-based approach.
From a modelling perspective, we have proposed a systematic way to quantify cell densities and boundary locations in tissue collision experiments. This extends the model by Heinrich et al. [34], which was able to predict the boundary location for simple geometries and for tissues of the same initial density, but not to quantify tissue density. In contrast, our approach allows for more predictive power under a wide range of experimental conditions. As discussed, being able to quantify and reproduce these tissue collision experiments is a first step towards the design and assembly of tissue composites.
This work could be extended by including other biological mechanisms in the models, such as more general pressure functions, cell-cell adhesion [13, 25], cell-cycle dynamics [32] or heterogeneity in cell size [39], all of which could improve our understanding of how cell interactions impact tissue collision dynamics. Although accounting for these different effects should be straightforward, whether the different model extensions are structurally or even practically identifiable is not evident. Even simple models, very similar to the ones we used here, can lead to non-identifiability issues [56]. Combining more detailed models with appropriate model selection and identifiability analysis thus seems challenging but also necessary in order to obtain better insights from the experimental work.
**Data access:** Experimental data used to calibrate our models (Figure 1) is available on [33]. Experimental data corresponding to tissue-tissue collisions (Figure 6) was provided by Heinrich et al. [34]. Code used to perform the parameter estimation and to solve numerically the models is available on Github at: [https://github.com/carlesfalco/BInference-TissueCollisions](https://github.com/carlesfalco/BInference-TissueCollisions). Animated movies corresponding to the numerical simulations in the manuscript can be found on Figshare [26] at: [https://figshare.com/projects/Quantifying_tissue_shape_growth_and_collision/157068](https://figshare.com/projects/Quantifying_tissue_shape_growth_and_collision/157068). Code used to create the animations is also available on Github.
**Acknowledgments:** The authors thank M. Schmidtchen, M. A. Heinrich, A. E. Wolf for helpful discussions, and the members of the Hasenauer lab for providing assistance with the parameter estimation toolbox pyPESTO.
**Funding:** JAC was supported by the Advanced Grant Nonlocal-CPD (Nonlocal PDEs for Complex Particle Dynamics: Phase Transitions, Patterns and Synchronization) of the European Research Council Executive Agency (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 883363). JAC was also partially supported by EPSRC grants EP/T022132/1 and EP/V051121/1. CF acknowledges support of a fellowship from "la Caixa" Foundation (ID 100010434) with code LCF/BQ/EU21/11890128.
## References
* [1] R. Alert and X. Trepat. Physical models of collective cell migration. _Annual Review of Condensed Matter Physics_, 11(1):77-101, 2020.
* [2] N. J. Armstrong, K. J. Painter, and J. A. Sherratt. A continuum approach to modelling cell-cell adhesion. _Journal of Theoretical Biology_, 243(1):98-113, 2006.
* [3] R. Bailo, J. A. Carrillo, H. Murakawa, and M. Schmidtchen. Convergence of a fully discrete and energy-dissipating finite-volume scheme for aggregation-diffusion equations. _Mathematical Models and Methods in Applied Sciences_, 30(13):2487-2522, 2020.
* [4] R. E. Baker and M. J. Simpson. Models of collective cell motion for cell populations with different aspect ratio: Diffusion, proliferation and travelling waves. _Physica A: Statistical Mechanics and its Applications_, 391(14):3729-3750, 2012.
* [5] M. Bertsch, R. Dal Passo, and M. Mimura. A free boundary problem arising in a simplified tumour growth model of contact inhibition. _Interfaces and Free Boundaries_, 12(2):235-250, 2010.
* [6] C. Blanch-Mercader, R. Vincent, E. Bazellieres, X. Serra-Picamal, X. Trepat, and J. Casademunt. Effective viscosity and dynamics of spreading epithelia: a solvable model. _Soft Matter_, 13(6):1235-1243, 2017.
* [7] A. P. Browning, P. Haridas, and M. J. Simpson. A Bayesian sequential learning framework to parameterise continuum models of melanoma invasion into human skin. _Bulletin of Mathematical Biology_, 81(3):676-698, 2019.
* [8] A. P. Browning, O. J. Maclaren, P. R. Buenzli, M. Lanaro, M. C. Allenby, M. A. Woodruff, and M. J. Simpson. Model-based data analysis of tissue growth in thin 3D printed scaffolds. _Journal of Theoretical Biology_, 528:110852, 2021.
* [9] M. Burger and A. Esposito. Porous medium equation as limit of nonlocal interaction. _arXiv preprint arXiv:2202.05030_, 2022.
* [10] V. Calvez and J. A. Carrillo. Volume effects in the Keller-Segel model: energy estimates preventing blow-up. _Journal de Mathematiques Pures et Appliquees_, 86(2):155-175, 2006.
* [11] J. A. Carrillo, A. Chertock, and Y. Huang. A finite-volume method for nonlinear nonlocal equations with a gradient flow structure. _Communications in Computational Physics_, 17(1):233-258, 2014.
* [12] J. A. Carrillo, S. Fagioli, F. Santambrogio, and M. Schmidtchen. Splitting schemes and segregation in reaction cross-diffusion systems. _SIAM Journal on Mathematical Analysis_, 50(5):5695-5718, 2018.
* [13] J. A. Carrillo, H. Murakawa, M. Sato, H. Togashi, and O. Trush. A population dynamics model of cell-cell adhesion incorporating population pressure and density saturation. _Journal of Theoretical Biology_, 474:14-24, 2019.
* [14] L. Chen, K. Painter, C. Surulescu, and A. Zhigun. Mathematical models for cell migration: a non-local perspective. _Philosophical Transactions of the Royal Society B_, 375(1807):20190379, 2020.
* [15] O.-T. Chis, J. R. Banga, and E. Balsa-Canto. Structural identifiability of systems biology models: a critical comparison of methods. _PloS one_, 6(11):e27755, 2011.
* [16] C. Cobelli and J. J. Distefano III. Parameter and structural identifiability concepts and ambiguities: a critical review and analysis. _American Journal of Physiology-Regulatory, Integrative and Comparative Physiology_, 239(1):R7-R24, 1980.
* [17] C. Colson, F. Sanchez-Garduno, H. M. Byrne, P. K. Maini, and T. Lorenzi. Travelling-wave analysis of a model of tumour invasion with degenerate, cross-dependent diffusion. _Proceedings of the Royal Society A_, 477(2256):20210593, 2021.
* [18] M. Di Francesco, A. Esposito, and S. Fagioli. Nonlinear degenerate cross-diffusion systems with nonlocal interaction. _Nonlinear Analysis_, 169:94-117, 2018.
* [19] P. Domschke, D. Trucu, A. Gerisch, and M. A. J. Chaplain. Mathematical modelling of cancer invasion: Implications of cell adhesion variability for tumour infiltrative growth patterns. _Journal of Theoretical Biology_, 361:41-60, 2014.
* [20] L. Dyson and R. E. Baker. The importance of volume exclusion in modelling cellular migration. _Journal of Mathematical Biology_, 71:691-711, 2014.
* [21] L. Dyson, P. K. Maini, and R. E. Baker. Macroscopic limits of individual-based models for motile cell populations with volume exclusion. _Phys. Rev. E_, 86(3):031903, 2012.
* [22] M. El-Hachem, S. W. McCue, and M. J. Simpson. Travelling wave analysis of cellular invasion into surrounding tissues. _Physica D: Nonlinear Phenomena_, 428:133026, 2021.
* [23] M. El-Hachem, S. W. McCue, and M. J. Simpson. A continuum mathematical model of substrate-mediated tissue growth. _Bulletin of Mathematical Biology_, 84(4):1-27, 2022.
* [24] C. Falco. From random walks on networks to nonlinear diffusion. _Phys. Rev. E_, 106(5):054103, 2022.
* [25] C. Falco, R. E. Baker, and J. A. Carrillo. A local continuum model of cell-cell adhesion. _arXiv preprint arXiv:2206.14461_, 2022. To appear in SIAM Journal on Applied Mathematics.
* [26] C. Falco, D. J. Cohen, J. A. Carrillo, and R. E. Baker. Quantifying tissue shape, growth and collision: Figshare media, 2023. [https://figshare.com/projects/Quantifying_tissue_shape_growth_and_collision/157068](https://figshare.com/projects/Quantifying_tissue_shape_growth_and_collision/157068).
* [27] A. Gamba, D. Ambrosi, A. Coniglio, A. de Candia, S. Di Talia, E. Giraudo, G. Serini, L. Preziosi, and F. Bussolino. Percolation, Morphogenesis, and Burgers Dynamics in Blood Vessels Formation. _Phys. Rev. Lett._, 90(11):118101, 2003.
* [28] E. Gauquelin, S. Tlili, C. Gay, G. Peyret, R.-M. Mege, M.-A. Fardin, and B. Ladoux. Influence of proliferation on the motions of epithelial monolayers invading adherent strips. _Soft Matter_, 15(13):2798-2810, 2019.
* [29] A. Gerisch and M. Chaplain. Mathematical modelling of cancer cell invasion of tissue: Local and non-local models and the effect of adhesion. _Journal of Theoretical Biology_, 250(4):684-704, 2008.
* [30] R. Giniunaite, R. E. Baker, P. M. Kulesa, and P. K. Maini. Modelling collective cell migration: neural crest as a model paradigm. _Journal of Mathematical Biology_, 80(1):481-504, 2020.
* [31] M. E. Gurtin and R. C. MacCamy. On the diffusion of biological populations. _Mathematical Biosciences_, 33(1-2):35-49, 1977.
* [32] M. A. Heinrich, R. Alert, J. M. LaChance, T. J. Zajdel, A. Kosmrlj, and D. J. Cohen. Size-dependent patterns of cell proliferation and migration in freely-expanding epithelia. _eLife_, 9:e58945, 2020.
* [33] M. A. Heinrich, R. Alert, J. M. LaChance, T. J. Zajdel, A. Kosmrlj, and D. J. Cohen. Size-dependent patterns of cell proliferation and migration in freely-expanding epithelia (Version 1) [Data set]. Zenodo, 2020. [https://doi.org/10.5281/zenodo.3858845](https://doi.org/10.5281/zenodo.3858845).
* [34] M. A. Heinrich, R. Alert, A. E. Wolf, A. Kosmrlj, and D. J. Cohen. Self-assembly of tessellated tissue sheets by expansion and collision. _Nature Communications_, 13(1):1-10, 2022.
* [35] K. E. Hines, T. R. Middendorf, and R. W. Aldrich. Determination of parameter identifiability in nonlinear biophysical models: A Bayesian approach. _The Journal of General Physiology_, 143(3):401-416, 2014.
* [36] D. L. Janzen, L. Bergenholm, M. Jirstrand, J. Parkinson, J. Yates, N. D. Evans, and M. J. Chappell. Parameter identifiability of fundamental pharmacodynamic models. _Frontiers in Physiology_, 7:590, 2016.
* [37] W. Jin, E. T. Shah, C. J. Penington, S. W. McCue, L. K. Chopin, and M. J. Simpson. Reproducibility of scratch assays is affected by the initial degree of confluence: Experiments, modelling and model selection. _Journal of Theoretical Biology_, 390:136-145, 2016.
* [38] W. Jin, E. T. Shah, C. J. Penington, S. W. McCue, P. K. Maini, and M. J. Simpson. Logistic proliferation of cells in scratch assays is delayed. _Bulletin of Mathematical Biology_, 79(5):1028-1050, 2017.
* [39] E. Khain and J. Straetmans. Dynamics of an expanding cell monolayer. _Journal of Statistical Physics_, 184(2):1-13, 2021.
* [40] E. Khain and L. S. Tsimring. Effective pressure and cell area distribution in a confined monolayer. _Fluid Dynamics Research_, 50(5):051413, 2018.
* [41] P. K. Maini, D. S. McElwain, and D. I. Leavesley. Traveling wave model to interpret a wound-healing cell migration assay for human peritoneal mesothelial cells. _Tissue Engineering_, 10(3-4):475-482, 2004.
* [42] S. Martina Perez, H. Sailem, and R. E. Baker. Efficient Bayesian inference for mechanistic modelling with high-throughput data. _PLOS Computational Biology_, 18(6):e1010191, 2022.
* [43] O. M. Matsiaka, C. J. Penington, R. E. Baker, and M. J. Simpson. Discrete and continuum approximations for collective cell migration in a scratch assay with cell size dynamics. _Bulletin of Mathematical Biology_, 80(4):738-757, 2018.
* [44] H. Murakawa and H. Togashi. Continuous models for cell-cell adhesion. _Journal of Theoretical Biology_, 374:1-12, 2015.
* [45] J. D. Murray. _Mathematical Biology II: Spatial Models and Biomedical Applications_, volume 3. Springer New York, 2001.
* [46] K. Oelschlager. Large systems of interacting particles and the porous medium equation. _Journal of Differential Equations_, 88(2):294-346, 1990.
* [47] K. Painter and T. Hillen. Volume-filling and quorum-sensing in models for chemosensitive movement. _Canadian Applied Mathematics Quarterly_, 10(4):501-544, 2002.
* [48] A. Puliafito, L. Hufnagel, P. Neveu, S. Streichan, A. Sigal, D. K. Fygenson, and B. I. Shraiman. Collective and single cell behavior in epithelial contact inhibition. _Proceedings of the National Academy of Sciences_, 109(3):739-744, 2012.
* [49] A. Raue, J. Karlsson, M. P. Saccomani, M. Jirstrand, and J. Timmer. Comparison of approaches for parameter identifiability analysis of biological systems. _Bioinformatics_, 30(10):1440-1448, 2014.
* [50] M. Renardy, D. Kirschner, and M. Eisenberg. Structural identifiability analysis of age-structured PDE epidemic models. _Journal of Mathematical Biology_, 84(1):1-30, 2022.
* [51] L. Schumacher. Collective cell migration in development. In C. La Porta and S. Zapperi, editors, _Cell Migrations: Causes and Functions_, volume 1146, pages 105-116. Springer, 2019.
* [52] pyPESTO - Parameter EStimation TOolbox for python, 2021. [https://github.com/ICB-DCM/pyPESTO](https://github.com/ICB-DCM/pyPESTO).
* [53] B. G. Sengers, C. P. Please, and R. O. Oreffo. Experimental characterization and computational modelling of two-dimensional cell spreading for skeletal regeneration. _Journal of the Royal Society Interface_, 4(17):1107-1117, 2007.
* [54] J. A. Sherratt and J. D. Murray. Models of epidermal wound healing. _Proceedings of the Royal Society of London. Series B: Biological Sciences_, 241(1300):29-36, 1990.
* [55] I. Siekmann, J. Sneyd, and E. J. Crampin. MCMC can detect nonidentifiable models. _Biophysical Journal_, 103(11):2275-2286, 2012.
* [56] M. J. Simpson, R. E. Baker, S. T. Vittadello, and O. J. Maclaren. Practical parameter identifiability for spatio-temporal models of cell invasion. _Journal of the Royal Society Interface_, 17(164):20200055, 2020.
* [57] M. J. Simpson, A. P. Browning, D. J. Warne, O. J. Maclaren, and R. E. Baker. Parameter identifiability and model selection for sigmoid population growth models. _Journal of Theoretical Biology_, 535:110998, 2022.
* [58] S. J. Streichan, C. R. Hoerner, T. Schneidt, D. Holzer, and L. Hufnagel. Spatial constraints control cell proliferation in tissues. _Proceedings of the National Academy of Sciences_, 111(15):5586-5591, 2014.
* [59] C. Topaz, A. Bertozzi, and M. Lewis. A nonlocal continuum model for biological aggregation. _Bulletin of Mathematical Biology_, 68:1601-1623, 2006.
* [60] J. L. Vazquez. _The Porous Medium Equation: Mathematical Theory._ Oxford University Press, 2007.
* [61] S. Vittadello, S. McCue, G. Gunasingh, N. Haass, and M. Simpson. Mathematical models for cell migration with real-time cell cycle dynamics. _Biophysical Journal_, 114(5):1241-1253, 2018.
* [62] J. Worsfold, T. Rogers, and P. Milewski. Density fluctuations in stochastic kinematic flows. _arXiv preprint arXiv:2204.02926_, 2022.
* [63] S. M. Zehnder, M. Suaris, M. M. Bellaire, and T. E. Angelini. Cell volume fluctuations in MDCK monolayers. _Biophysical Journal_, 108(2):247-250, 2015.
**Supplementary Information**
**Figure S2:** Typical Markov chain iterations of length 12000 for the nonlinear diffusion Porous-Fisher model and the parameters \(D,r,K,\sigma\). The three shown chains are initiated with \((D,r,K,\sigma)=(10^{-1},0.1,3200,160),(10,0.1,1600,100),(10^{-2},0.02,8000,120)\) respectively. The maximum univariate Gelman-Rubin diagnostic among the four parameters satisfies \(\hat{R}<1.03\) — using the last 5000 chain iterations.
**Figure S3:** Comparing data and model prediction for tissue growth experiments [5] — individual snapshots corresponding to Figure 1 in the main text. Top row shows prediction from the Fisher-KPP (linear diffusion) model; bottom row shows results from the Porous-Fisher (nonlinear diffusion) model. Blue lines represent numerical simulations using the maximum likelihood estimate for the model parameters.
## S2 Boundary displacement for cross-diffusion systems
Here, we extend the Porous-Fisher model for two species to study analytically tissue boundary dynamics after a collision.
### Total population density and the porous medium equation
We start by assuming that there is no proliferation, and hence the equations governing the dynamics of the two tissues read
\[\begin{cases}\partial_{t}\rho_{1}=D\,\nabla\cdot\left(\rho_{1}\nabla\left(\rho_{ 1}+\rho_{2}\right)\right),\\ \partial_{t}\rho_{2}=D\,\nabla\cdot\left(\rho_{2}\nabla\left(\rho_{1}+\rho_{2 }\right)\right).\end{cases}\]
The total population density, \(\rho=\rho_{1}+\rho_{2}\), then satisfies a porous medium equation with exponent two
\[\partial_{t}\rho=D\,\nabla\cdot\left(\rho\nabla\rho\right)=\frac{D}{2}\,\Delta \rho^{2}.\]
Note that this argument also works for more general versions of the population pressure, which in this case was assumed to grow linearly with the total population density: \(P(\rho)\sim\rho\). More generally, if we assume that the pressure inside each tissue increases as a power-law function of the total population density, \(P(\rho)\sim\rho^{m-1}\), we obtain the cross-diffusion system
\[\begin{cases}\partial_{t}\rho_{1}=D\,\nabla\cdot\left(\rho_{1}\left(\rho_{1}+ \rho_{2}\right)^{m-2}\nabla\left(\rho_{1}+\rho_{2}\right)\right),\\ \partial_{t}\rho_{2}=D\,\nabla\cdot\left(\rho_{2}\left(\rho_{1}+\rho_{2} \right)^{m-2}\nabla\left(\rho_{1}+\rho_{2}\right)\right).\end{cases}\]
The resulting equation for the total population density \(\rho=\rho_{1}+\rho_{2}\) is now a porous-medium equation with exponent \(m\)
\[\partial_{t}\rho=D\,\nabla\cdot\left(\rho^{m-1}\nabla\rho\right)=\frac{D}{m}\, \Delta\rho^{m}\,.\]
Solutions to the porous-medium equation in the whole space tend to the self-similar solution given by the Barenblatt profile [3], with known explicit expressions [6]. For simplicity we deal with the one-dimensional case, although the argument works similarly in higher
spatial dimensions. In that case, the solution\({}^{1}\) \(\rho(x,t)\) tends, as \(t\to\infty\), to
Footnote 1: Assuming a proper rescaling of time and space to fix the constant in the porous medium equation
\[\rho(x,t)=t^{-\alpha}\psi\left(\left|x\right|t^{-\alpha}\right),\]
where \(\alpha=1/(m+1)\) and
\[\psi(y)=\left(C-\kappa y^{2}\right)^{1/(m-1)},\quad\left|y\right|<\sqrt{\frac{C}{\kappa}},\]
with \(\kappa=\alpha(m-1)/2D\) and the constant \(C\) determined by conservation of mass. Note that \(\rho\) propagates with a free boundary whose radius is given by \(r(t)=\sqrt{C/\kappa}\,t^{\alpha}\). Imposing conservation of mass \(M=\int_{-r(t)}^{r(t)}\rho(x,t)\,\mathrm{d}x\) yields an expression for the constant \(C\)
\[C^{1/(m-1)}\sqrt{\frac{C}{\kappa}}=\frac{\Gamma\left(\frac{1}{m-1}+\frac{3}{2} \right)}{\Gamma\left(\frac{m}{m-1}\right)}\frac{M}{\sqrt{\pi}}.\]
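This constraint can be inverted for \(C\) in closed form; a short numerical check with placeholder values for \(M\), \(m\) and \(D\):

```python
import numpy as np
from scipy.special import gamma

def barenblatt_constant(M, m, D):
    """Constant C in the Barenblatt profile, from conservation of mass (placeholder inputs)."""
    alpha = 1.0 / (m + 1)
    kappa = alpha * (m - 1) / (2 * D)
    rhs = gamma(1 / (m - 1) + 1.5) / gamma(m / (m - 1)) * M / np.sqrt(np.pi)
    # C^{1/(m-1)} * sqrt(C/kappa) = rhs  =>  C = (rhs * sqrt(kappa))^{2(m-1)/(m+1)}
    return (rhs * np.sqrt(kappa)) ** (2 * (m - 1) / (m + 1))

print(barenblatt_constant(M=1.0, m=2.0, D=1.0))
```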
### Connecting pressure and boundary displacement
We have seen that when the pressure is given by a power-law function of the density -- \(P(\rho)\sim\rho^{m-1}\) -- the total population density \(\rho\) is described by a porous-medium equation with exponent \(m\), whose asymptotic solution is known. Further, we also know that due to the population pressure term, the two species do not mix but stay segregated [1, 2]. This means that there is an interface separating them whose position is given by \(b(t)\). In order to find the position of this interface we just need to impose mass conservation for one of the tissues: \(M_{2}=\int_{b(t)}^{r(t)}\rho(x,t)\,\mathrm{d}x\) -- where the masses of the two colliding tissues are given by \(M_{1}\) and \(M_{2}\), and where, by segregation, the second species coincides with the total density \(\rho\) on \((b(t),r(t))\). By doing so, we obtain
\[M_{2}=\int_{b(t)}^{r(t)}\rho(x,t)\,\mathrm{d}x =t^{-\alpha}C^{1/(m-1)}\int_{b(t)}^{r(t)}\left(1-\frac{\kappa x^{2}}{Ct^{2\alpha}}\right)^{1/(m-1)}\mathrm{d}x\] \[=\frac{\Gamma\left(\frac{1}{m-1}+\frac{3}{2}\right)}{\Gamma\left(\frac{m}{m-1}\right)}\frac{M}{\sqrt{\pi}}\int_{b(t)/r(t)}^{1}\left(1-y^{2}\right)^{1/(m-1)}\,\mathrm{d}y\,. \tag{1}\]
Eq. (1) predicts the behaviour of \(b(t)\). By noting that the integral cannot depend on time, we obtain that \(b(t)\) is a fraction of the expansion radius, \(b(t)/r(t)=\ell<1\), and hence \(b(t)\)
grows at the same rate as \(r(t)\)
\[b(t)\sim t^{\alpha},\quad\alpha=\frac{1}{m+1}\,.\]
Note that this equation relates the form of the equation of state -- \(P(\rho)\sim\rho^{m-1}\) -- with the boundary displacement, which can be measured experimentally.
The ratio \(\ell\) can be found by solving the relation
\[\frac{2M_{2}}{M}=\frac{\int_{\ell}^{1}\left(1-y^{2}\right)^{1/(m-1)}\,\mathrm{ d}y}{\int_{0}^{1}\left(1-y^{2}\right)^{1/(m-1)}\,\mathrm{d}y}\,,\]
or in terms of the Beta function
\[\frac{M_{1}-M_{2}}{M}=\frac{\mathrm{B}\left(\ell^{2};1/2,\frac{m}{m-1}\right) }{\mathrm{B}\left(1/2,\frac{m}{m-1}\right)},\]
where \(B(\ell^{2}\,;a,b)\) is the incomplete Beta function evaluated at \(\ell^{2}\). It is clear then that if \(M_{2}=M_{1}=M/2\), then \(\ell=0\) and that \(M_{2}<M_{1}\) (density or width mismatch) allows for boundary displacement with \(0<\ell<1\).
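In practice this relation is easy to invert numerically for \(\ell\), which then gives the interface position through \(b(t)=\ell\,r(t)\); a minimal sketch using the regularized incomplete Beta function:

```python
from scipy.special import betainc
from scipy.optimize import brentq

def interface_fraction(M1, M2, m):
    """Solve (M1 - M2)/M = B(l^2; 1/2, m/(m-1)) / B(1/2, m/(m-1)) for l in [0, 1)."""
    target = (M1 - M2) / (M1 + M2)
    if target <= 0:
        return 0.0                      # equal masses: the boundary does not move
    # betainc(a, b, x) is the regularized incomplete Beta function I_x(a, b)
    f = lambda l: betainc(0.5, m / (m - 1), l**2) - target
    return brentq(f, 0.0, 1.0 - 1e-12)

# e.g. a 2:1 mass mismatch with linear pressure (m = 2)
print(interface_fraction(M1=2.0, M2=1.0, m=2.0))
```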
### Boundary displacement in the presence of proliferation
In a setting where the cell division time is smaller than the timescale of the experiment, one can consider a system equivalent to the ones in the previous sections but with a linear proliferation term
\[\begin{cases}\partial_{t}\rho_{1}=D\,\nabla\cdot\left(\rho_{1}\left(\rho_{1}+ \rho_{2}\right)^{m-2}\nabla\left(\rho_{1}+\rho_{2}\right)\right)+r\rho_{1}\,, \\ \partial_{t}\rho_{2}=D\,\nabla\cdot\left(\rho_{2}\left(\rho_{1}+\rho_{2} \right)^{m-2}\nabla\left(\rho_{1}+\rho_{2}\right)\right)+r\rho_{2}.\end{cases}\]
Now the equation for the total population density \(\rho=\rho_{1}+\rho_{2}\) reads
\[\partial_{t}\rho=\frac{D}{m}\,\Delta\rho^{m}+r\rho.\]
Under the change of variables \(\rho=\tilde{\rho}e^{rt}\) and \(\tau=(e^{r(m-1)t}-1)/(r(m-1))\) [4], the equation for the total population density reduces to
\[\partial_{\tau}\tilde{\rho}=\frac{D}{m}\,\Delta\tilde{\rho}^{m}.\]
Now the boundary satisfies
\[b(t)\sim\tau^{\alpha}\sim\left(e^{r(m-1)t}-1\right)^{\alpha},\]
which shows the same power-law behaviour for \(t\ll 1/(r(m-1))\), but grows exponentially for \(t>1/(r(m-1))\).
Note, however, that the exponential growth approximation eventually becomes biologically unrealistic, as at some point proliferation becomes density-limited. At this point, the densities in both tissues are uniform, driving the boundary to slow down until it finally stops moving.
|
2306.02247
|
Sen2Pro: A Probabilistic Perspective to Sentence Embedding from
Pre-trained Language Model
|
Sentence embedding is one of the most fundamental tasks in Natural Language
Processing and plays an important role in various tasks. The recent
breakthrough in sentence embedding is achieved by pre-trained language models
(PLMs). Despite its success, an embedded vector (Sen2Vec) representing a point
estimate does not naturally express uncertainty in a task-agnostic way. This
paper thereby proposes an efficient framework on probabilistic sentence
embedding (Sen2Pro) from PLMs, and it represents a sentence as a probability
density distribution in an embedding space to reflect both model uncertainty
and data uncertainty (i.e., many-to-one nature) in the sentence representation.
The proposed framework performs in a plug-and-play way without retraining PLMs
anymore, and it is easy to implement and generally applied on top of any PLM.
The superiority of Sen2Pro over Sen2Vec has been theoretically verified and
practically illustrated on different NLP tasks.
|
Lingfeng Shen, Haiyun Jiang, Lemao Liu, Shuming Shi
|
2023-06-04T03:26:43Z
|
http://arxiv.org/abs/2306.02247v1
|
# Sen2Pro: A Probabilistic Perspective to Sentence Embedding
###### Abstract
Sentence embedding is one of the most fundamental tasks in Natural Language Processing and plays an important role in various tasks. The recent breakthrough in sentence embedding is achieved by pre-trained language models (PLMs). Despite its success, an embedded vector (Sen2Vec) representing a point estimate does not naturally express uncertainty in a task-agnostic way. This paper thereby proposes an efficient framework on probabilistic sentence embedding (Sen2Pro) from PLMs, and it represents a sentence as a probability density distribution in an embedding space to reflect both model uncertainty and data uncertainty (i.e., many-to-one nature) in the sentence representation. The proposed framework performs in a plug-and-play way without retraining PLMs anymore, and it is easy to implement and generally applied on top of any PLM. The superiority of Sen2Pro over Sen2Vec has been theoretically verified and practically illustrated on different NLP tasks.
## 1 Introduction
Sentence embedding, which maps an input sentence to a point (i.e., a vector) in an embedded space, is one of the most fundamental tasks in Natural Language Processing (NLP), and it plays an important role in various downstream tasks such as sentiment analysis, text classification, and natural language inference Howard and Ruder (2018); Reimers and Gurevych (2019); Gao et al. (2021). There is a surge of interest in learning sentence embedding. The early work resorts to word embedding Bengio et al. (2003); Mikolov et al. (2013); Pennington et al. (2014) and represents an input sentence by a pooling vector (mean or weighted mean) over all embeddings of its words Kiros et al. (2015); Wieting et al. (2015). More recently, sentence embedding obtained from pre-trained language models (PLMs) made a breakthrough thanks to PLM's powerful ability in modeling global context Peters et al. (2018); Devlin et al. (2019); Liu et al. (2019), and it quickly became the standard practice for sentence embedding.
Despite the success of sentence embedding from PLMs, an embedded vector (Sen2Vec) representing a point estimate does not naturally express uncertainty about the target concepts associated with the input sentence Vilnis and McCallum (2015). In essence, this uncertainty originates from the many-to-one nature of language representation: (1) **model uncertainty**: randomness caused by inherently random effects within the model, i.e., one sentence may have many representations under different stochastic realizations of the same model (e.g., dropout); since these representations come from the same sentence, they should remain close to each other; (2) **data uncertainty**: many sentences with different linguistic structures (e.g., paraphrases) may have the same meaning; given their identical semantics, their representations should also be close to each other.
When quantifying uncertainty, we assume that the representations of semantically close sentences follow the same probability distribution. Given a sentence, since the model only observes one sample, it is natural to ask how well a language model can capture such a rich distribution.
A natural solution to this issue is to merge such a probabilistic perspective into sentence embedding, which represents a sentence as a distribution \(P(\mu,\Sigma)\), where \(\mu\) is the mean and the covariance \(\Sigma\) intuitively portrays the uncertainty of the distribution \(P\). Unfortunately, there is a critical challenge to putting this idea into practice: previous works Bamler and Mandt (2017); Camacho-Collados and Pilehvar (2018); Zhou et al. (2019) are only applicable to word embedding and require retraining the word embeddings with probabilistic embedding on large-scale data to advance SOTA. It is costly even to train a PLM without probabilistic embedding Devlin et al. (2019); Radford et al. (2019); He et al. (2020); Clark et al. (2019); Raffel et al. (2020), which typically consumes considerable GPU computation for weeks.
In this paper, we propose an efficient framework for probabilistic sentence embedding (Sen2Pro) from PLMs that represents a sentence as a probability density in an embedding space to reflect the uncertainty in the sentence representation.
Concerning **model uncertainty** and **data uncertainty**, we propose two simple methods to quantify both on top of a PLM. Specifically, to measure model uncertainty, we assume a sentence vector is drawn from a distribution \(P(\mu^{m},\Sigma^{m})\), which can be estimated by many representations of the targeted sentence using a set of stochastic PLMs obtained from Monte Carlo dropout Gal and Ghahramani (2016).
Similarly, to measure data uncertainty, we apply data augmentation to generate many sentences that are semantically equivalent to the targeted sentence. Then we assume a sentence vector is drawn from a distribution \(P(\mu^{d},\Sigma^{d})\), which can be estimated from the PLM representations of the augmented sentences. In addition, we also introduce some ways to utilize both \(\mu^{m}\) (\(\mu^{d}\)) and \(\Sigma^{m}\) (\(\Sigma^{d}\)) as the final sentence representation for different downstream tasks.
Moreover, drawing from previous works Chen et al. (2016); Li and Chen (2019); Gao et al. (2019); Tschannen et al. (2019); Grohs et al. (2021) that explored the relationships between deep learning representation and relative entropy, we present theoretical explanations of why our probabilistic sentence embedding (Sen2Pro) is superior to point vector-based sentence embedding (Sen2Vec). Meanwhile, extensive experiments demonstrate the practical effectiveness of Sen2Pro on text classification, semantic similarity match, dialogue generation evaluation, and machine translation evaluation. Besides, Sen2Pro demonstrates its superiority in capturing sentence-level linguistic analogy over Sen2Vec.
## 2 Related Work
### Sentence Embedding
Methods for sentence embedding learning have been extensively explored, and all these methods represent a sentence as a point embedding. Early works use the weighted sum of word embeddings to represent a sentence. Later, methods based on the distributional hypothesis were developed: Hill et al. (2016) learned sentence representations with the internal structure of each sentence, and Kiros et al. (2015) followed the idea of Word2Vec Mikolov et al. (2013) to represent a sentence by predicting its surrounding sentences. In recent years, the pre-trained language model Devlin et al. (2019) has become the standard paradigm for sentence embedding because of its strong ability to capture syntactic and semantic features of sentences by learning from large-scale corpora. Furthermore, several researchers used contrastive learning to augment sentence representations Zhang et al. (2020); Yan et al. (2021); Meng et al. (2021); Gao et al. (2021); Wang et al. (2021), based on the assumption that a high-quality representation method should bring similar sentences closer while pushing away dissimilar ones. All of these methods belong to Sen2Vec and thus fail to model uncertainty. This paper goes beyond point sentence embedding and explores probabilistic sentence embedding.
### Probabilistic Word Embedding
In NLP, probabilistic embedding originated from word embedding Bengio et al. (2003); Mikolov et al. (2013), and existing probabilistic embedding methods work only on words, where a word from the vocabulary is represented as a density distribution in an embedding space. Although variants of probabilistic word embedding have been developed Vilnis and McCallum (2015); Bamler and Mandt (2017); Camacho-Collados and Pilehvar (2018); Zhou et al. (2019), they used a similar paradigm, which adapts the Skip-gram model Mikolov et al. (2013) with a non-parametric approach. Specifically, a Skip-gram model is retrained as the density distribution with a specific word sampling (e.g., synonym) method and a specific loss function (e.g., a margin loss). Therefore, existing probabilistic embedding needs an extremely time-consuming retraining stage and can not be applied to PLMs (e.g., BERT). Different from them, this paper contributes to the literature by developing probabilistic embedding for sentences that serves as a plug-and-play method on pre-trained language models Devlin et al. (2019) without any time-consuming retraining stage.
## 3 Methodology: Sen2Pro
Because of the many-to-one nature of sentence embedding, we model the uncertainty from two perspectives, i.e., model uncertainty and data uncertainty. Accordingly, we assume that the representation of one sentence follows a distribution \(P(\mu,\Sigma)\), which measures either model uncertainty or data uncertainty. The goal of our Sen2Pro framework is to estimate the two distributions \(P(\mu,\Sigma)\) for a sentence embedding based on a pre-trained language model \(f_{\theta}\) (e.g., BERT), where \(\theta\) denotes its parameters. In general, there are two steps in Sen2Pro: the sampling stage and the estimation stage. The sampling stage generates embedding instances to capture two kinds of uncertainties: model uncertainty and data uncertainty (§3.1). The estimation stage aims to estimate the parameters of the density distributions (i.e., mean vector and covariance matrix) based on the embedding instances (§3.2). After both distributions are estimated, the general idea of applying them to specific tasks is presented (§3.3).
### Sampling Stage
**Model Uncertainty.** Model uncertainty originates from the fact that one sentence may have different representations due to inherent randomness within models. In this paper, we use a pre-trained language model \(f_{\theta}\) and try to quantify _model uncertainty_. Considering that the key ingredient of model uncertainty is to vary the model while keeping the sentence \(s\) unchanged, we utilize MC dropout to create embedding instances for quantifying model uncertainty. Specifically, for each sentence \(s\), we utilize MC Dropout Gal and Ghahramani (2016); Lakshminarayanan et al. (2017) over the parameters \(\theta\) as sampling, and repeat sampling \(N\) times to obtain different subsets of the parameters \(\theta\): \(\{\widehat{\theta}_{i}\mid i=1,\ldots,N\}\). In this way, we generate a set of embeddings as follows:
\[\mathcal{S}^{m}=\left\{x_{i}=f_{\widehat{\theta}_{i}}\left(s\right)\mid i=1, \ldots,N\right\} \tag{1}\]
As shown in Eq. 1, each subset of the model's parameters represents a sub-structure of the model, which naturally matches the definition of model uncertainty.
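As an illustration, the sketch below draws the set \(\mathcal{S}^{m}\) with MC dropout on top of a HuggingFace BERT encoder, using 'first-last-avg' pooling; the model name and pooling details are assumptions for the example rather than a description of the exact implementation used in our experiments.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.train()  # keep dropout layers active at inference time (MC dropout)

def model_uncertainty_samples(sentence, n_samples=15):
    """Draw the set S^m of Eq. (1): N stochastic embeddings of one sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            hidden = model(**inputs, output_hidden_states=True).hidden_states
            # 'first-last-avg' pooling: average the first and last encoder layers,
            # then mean-pool over tokens (no padding for a single sentence)
            pooled = 0.5 * (hidden[1] + hidden[-1]).mean(dim=1).squeeze(0)
            samples.append(pooled)
    return torch.stack(samples)  # shape (n_samples, hidden_size)
```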
**Data Uncertainty.** Data uncertainty corresponds to the many-to-one nature of sentence embedding. In other words, sentences that are semantically similar but slightly different should have close representations. Data uncertainty exists in many real-world scenarios: since the model requires a lot of training data to perform well, it is common to augment high-quality labeled sentences with lower-quality web-crawled data to save time and effort. To naturally imitate such uncertainty, in this paper a simple data augmentation method, word-level operation, is applied to the input sentence, which adds proper noise to it. After repeating data augmentation \(N\) times for the input sentence \(s\) (i.e., randomly dropping a word in \(s\), swapping two words, or replacing or inserting a word in \(s\) with any word from the vocabulary), a set of new sentences \(s_{1},s_{2},\ldots,s_{N}\) is obtained. Then, the sentences are fed to the pre-trained model \(f_{\theta}\) to get \(N\) embeddings. In this way, we can obtain a set of embeddings as follows:
\[\mathcal{S}^{d}=\left\{x_{i}=f_{\theta}\left(s_{i}\right)\mid i=1,\ldots,N\right\} \tag{2}\]
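A minimal sketch of the word-level augmentation behind \(\mathcal{S}^{d}\) is shown below; here `embed` stands for any deterministic sentence encoder (e.g., the PLM in evaluation mode) and `vocabulary` is a placeholder word list.

```python
import random

def augment(sentence, vocabulary):
    """Apply one random word-level edit (drop / swap / replace / insert)."""
    words = sentence.split()
    op = random.choice(["drop", "swap", "replace", "insert"])
    i = random.randrange(len(words))
    if op == "drop" and len(words) > 1:
        del words[i]
    elif op == "swap" and len(words) > 1:
        j = random.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    elif op == "replace":
        words[i] = random.choice(vocabulary)
    elif op == "insert":
        words.insert(i, random.choice(vocabulary))
    return " ".join(words)

def data_uncertainty_samples(sentence, embed, vocabulary, n_samples=15):
    """Build the set S^d of Eq. (2): embeddings of N perturbed copies of the sentence."""
    return [embed(augment(sentence, vocabulary)) for _ in range(n_samples)]
```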
### Estimation Stage
After obtaining the required embedding instances for model and data uncertainty, we can estimate the respective probability distributions from them. Similar to Kendall and Gal (2017); Maddox et al. (2019); Ovadia et al. (2019); Abdar et al. (2021); Hullermeier and Waegeman (2021), we estimate the two uncertainties individually rather than jointly, a choice that is also empirically verified in Appendix D. Supposing \(\mathcal{S}\) denotes either \(\mathcal{S}^{m}\) or \(\mathcal{S}^{d}\), it is natural to estimate its mean and covariance as follows:
\[\mu=\frac{1}{|\mathcal{S}|}\sum_{x\in\mathcal{S}}x \tag{3}\]
\[\Sigma=\frac{1}{|\mathcal{S}|}\sum_{x\in\mathcal{S}}(x-\mu)(x-\mu)^{\top} \tag{4}\]
where \(|\mathcal{S}|\) denotes the size of the set \(\mathcal{S}\) and \((.)^{\top}\) denotes the transpose operation. We use \(\mu^{m}\) and \(\Sigma^{m}\) to denote the statistics estimated from model uncertainty (i.e., \(\mathcal{S}=\mathcal{S}^{m}\)), and \(\mu^{d}\) and \(\Sigma^{d}\) to denote those estimated from data uncertainty (i.e., \(\mathcal{S}=\mathcal{S}^{d}\)).
However, such a simple covariance matrix estimator (SCE) suffers from severe problems on both the theoretical Xiao and Wu (2012) and the practical side: it is known to degrade rapidly as the number of variables increases, thus performing badly in the high-dimensional case (e.g., 768 dimensions in BERT). To address this issue, inspired by Bien et al. (2016), we instead employ the banding estimator, which removes the off-diagonal entries of the covariance matrix.
Specifically, for a covariance matrix \(\Sigma=\left(\Sigma_{ij}\right)_{k\times k}\) where \(k\) is the dimension of \(\Sigma\), we use
\(B(\Sigma)\) as the estimation of \(\Sigma\) as follows:
\[\hat{\Sigma}=B(\Sigma)=\mathrm{Diag}(\Sigma) \tag{5}\]
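Concretely, the estimation stage reduces to a few lines; the sketch below implements Eqs. (3)-(5), keeping only the diagonal of the covariance.

```python
import numpy as np

def estimate_distribution(samples):
    """Mean vector (Eq. 3) and banded covariance (Eqs. 4-5) from a set of embeddings.

    samples: array-like of shape (N, k), one embedding instance per row.
    Returns (mu, sigma_diag), where sigma_diag holds the diagonal of B(Sigma).
    """
    samples = np.asarray(samples, dtype=float)
    mu = samples.mean(axis=0)                    # Eq. (3)
    centred = samples - mu
    sigma_diag = (centred ** 2).mean(axis=0)     # diagonal entries of Eq. (4)
    return mu, sigma_diag                        # Eq. (5): off-diagonal entries dropped
```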
Besides, Theorem 1 provides an estimation error bound for our banding estimator, whose proof is presented in Appendix A.
**Theorem 1**.: _Suppose \(\Sigma\) is the covariance matrix of the ground truth distribution \(P(\mu,\Sigma)\), and \(\hat{\Sigma}\) denote \(B(\Sigma)\), then we have_
\[\left\|\hat{\Sigma}-\Sigma\right\|_{2}=O_{p}\left(\left(\frac{\log k}{n} \right)^{M}\right) \tag{6}\]
_where \(k\) is the dimension of \(\Sigma\), \(n\) is the number of samples, and \(M\) is a positive constant satisfying \(M<\frac{1}{2}\). Here \(O_{p}\) means that the left-hand side is stochastically bounded by a constant multiple of \(\left(\frac{\log k}{n}\right)^{M}\) as \(n\rightarrow\infty\)._
Besides, SCE also has practical problems compared to the banding estimator: it exhibits a significantly worse performance-efficiency trade-off than our banding estimator, which will be empirically verified in Sec 5. Moreover, theoretical analyses comparing Sen2Vec and Sen2Pro are deferred to Appendix B.
### Usage of Sen2Pro
Unlike previous works on probabilistic embedding that drop \(\hat{\Sigma}\) in downstream tasks, in Sen2Pro both the mean vector \(\mu\) (i.e., \(\mu^{m}\) and \(\mu^{d}\)) and the covariance \(\hat{\Sigma}\) (i.e., \(\hat{\Sigma}^{m}\) and \(\hat{\Sigma}^{d}\)) are used for sentence embedding. In the next section, we illustrate our strategies for using \(\mu\) and \(\hat{\Sigma}\); more details are presented in Sec 4.1.
## 4 Experiment
This section comprehensively evaluates the effectiveness of our Sen2Pro framework on the following various tasks. The results consistently show that Sen2Pro outperforms Sen2Vec. Specifically, we choose two sets of tasks for evaluation: Fine-tune-need Task and Fine-tune-free Task.
### Basic Setting
Recall that \(\mu^{m}\) and \(\hat{\Sigma}^{m}\) (\(\mu^{d}\) and \(\hat{\Sigma}^{d}\)) denote the estimated mean and covariance from model (data) uncertainty. Then we present how to use them for different tasks.
**Fine-tune-need Task: Text Classification.** After the estimation stage for data and model uncertainty, each sentence is represented as the concatenation of \(\frac{\mu^{m}+\mu^{d}}{2}\) and (the diagonal entries of) \(\frac{\hat{\Sigma}^{m}+\hat{\Sigma}^{d}}{2}\). Specifically, we use reparameterization to handle the non-differentiability of the sampling process.
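A sketch of this feature construction (with the reparameterisation used during fine-tuning omitted) could look as follows, where the mean vectors and diagonal covariance vectors come from the estimation stage above:

```python
import numpy as np

def classification_features(mu_m, sigma_m_diag, mu_d, sigma_d_diag):
    """Concatenate (mu^m + mu^d)/2 with the diagonal entries of (Sigma^m + Sigma^d)/2."""
    mean_part = (np.asarray(mu_m) + np.asarray(mu_d)) / 2
    var_part = (np.asarray(sigma_m_diag) + np.asarray(sigma_d_diag)) / 2
    return np.concatenate([mean_part, var_part])  # length-2k vector fed to the classifier
```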
**Fine-tune-free Task: Sentence Similarity, Dialogue Evaluation, Neural Machine Translation Evaluation.** For such NLP tasks, the distance between two representations is needed. Although KL divergence is a natural way to measure the distance between two distributions, it has practical drawbacks. For example, in the BERT-base case where \(\mu\in\mathcal{R}^{768\times 1}\), \(\hat{\Sigma}\in\mathcal{R}^{768\times 768}\), KL divergence is not only time-consuming but also prone to numerical errors, since it involves the operations \(\det(\hat{\Sigma})\) and \(\hat{\Sigma}^{-1}\). In practice, most entries of \(\hat{\Sigma}\) are between 0 and 0.5, so the determinant becomes extremely small, which leads to numerical errors. Moreover, the computation of \(\hat{\Sigma}^{-1}\) becomes unstable when the dimension is high.
Therefore, a simple function is taken to measure the distance between two probabilistic sentence embeddings \((\mu_{a},\Sigma_{a})\) and \((\mu_{b},\Sigma_{b})\) as follows:
\[d(\mathcal{N}(\mu_{a},\Sigma_{a}),\mathcal{N}(\mu_{b},\Sigma_{b })) \tag{7}\] \[=(1-\alpha)l_{1}(\mu_{a}-\mu_{b})+\alpha l_{1}(\Sigma_{a}-\Sigma_ {b})\]
where \(l_{1}\) represents the \(l_{1}\)-norm and \(\alpha\) is to balance the two terms. For \(\alpha\), we consider it as a balance factor for different magnitudes of \(\mu\) and \(\Sigma\), defined as follows:
\[\alpha=\frac{l_{1}(\mu_{a}-\mu_{b})}{l_{1}(\Sigma_{a}-\Sigma_{b})} \tag{8}\]
In most cases, \(\alpha\) ranges from 0.01 to 0.05. Besides, we will try to apply Sen2Pro on other evaluation tasks (Shen et al., 2022a,b) like paraphrase, data-to-text, and summarization.
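For the fine-tune-free tasks, the distance of Eqs. (7)-(8) is straightforward to compute from the diagonal covariance entries; a minimal sketch:

```python
import numpy as np

def sen2pro_distance(mu_a, sigma_a_diag, mu_b, sigma_b_diag):
    """Distance between two probabilistic embeddings, following Eqs. (7)-(8)."""
    l1_mu = np.abs(np.asarray(mu_a) - np.asarray(mu_b)).sum()
    l1_sigma = np.abs(np.asarray(sigma_a_diag) - np.asarray(sigma_b_diag)).sum()
    alpha = l1_mu / l1_sigma          # balance factor of Eq. (8)
    return (1 - alpha) * l1_mu + alpha * l1_sigma
```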
### Text Classification
**Benchmarks and Setting.** We choose four widely-used text classification benchmarks: AG News (Zhang et al., 2015), DBpedia (Maas et al., 2011), Yahoo! Answers (Chang et al., 2008) and IMDB (Maas et al., 2011). The evaluation metric is test accuracy, and the best performance is selected based on validation set accuracy. The baseline is Sen2Vec, with 15 data augmentation samples per sentence. In Sen2Pro, we choose the BERT-base model as the PLM and use 'first-last-avg' pooling (averaging the first- and last-layer representations). The sampling number is 15 for both model uncertainty and data uncertainty. Moreover, we use two settings for model evaluation: few-shot and full-dataset. In the few-shot setting, models are trained with randomly selected 10 and 200 labeled sentences per class. In the full-dataset setting, models are trained with the whole training set.
PerformancesThe results are listed in Table 1. Our Sen2Pro consistently performs better than Sen2Vec under few-shot settings because Sen2Pro captures more semantic information about a sentence due to its probabilistic nature. Moreover, Sen2Pro achieves comparable or better performance than Sen2Vec in the full-dataset setting.
### Sentence Similarity
Benchmarks and SettingWe use seven commonly-used STS datasets Agirre et al. (2012, 2013, 2014, 2015, 2016) for evaluation. Besides, considering the limitation of traditional intrinsic evaluation Wang et al. (2022), we choose EvalRank Wang et al. (2022) for linguistic analogy evaluation, which overcomes the previous limitation. In Sen2Pro, we use several state-of-the-art pre-trained language models, including BERT-base Devlin et al. (2019), BERT-whitening Su et al. (2021); Huang et al. (2021), Sentence-BERT Reimers and Gurevych (2019), and SimCSE Gao et al. (2021). Specifically, EvalRank uses the mean reciprocal rank (MRR) and Hits@k scores for evaluation, and a higher score indicates a better embedding model. The sampling number for each uncertainty is set as 15.
PerformancesThe results of Sen2Pro are reported in Tables 2 and 3, which illustrate that Sen2Pro outperforms Sen2Vec by a substantial margin under all settings. Moreover, Sen2Pro using the 'base' PLMs can achieve better or comparable performance than Sen2Vec using 'large' PLMs.
### Dialogue Evaluation
Benchmark and SettingWe choose three widely-used dialogue benchmarks: Daily(H) Lowe et al. (2017), Convai2 Dinan et al. (2020), and Empathetic Rashkin et al. (2019). Each benchmark consists of dialogue queries, the corresponding responses, and human-annotated responses. For baseline metrics, we choose BLEU Papineni et al. (2002), ROUGE Lin (2004), METEOR Denkowski and Lavie (2014), Greedy Matching Rus and Lintean (2012), Embedding Average Wieting et al. (2015), Vector Extrema Forgues et al. (2014) and BERTScore Zhang et al. (2019). For
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \hline Dataset & Model & 10 & 200 & Full & Dataset & Model & 10 & 200 & Full \\ \hline \multirow{2}{*}{AGNews} & BERT-base & 69.5 & 87.5 & 95.2 & \multirow{2}{*}{DBPedia} & BERT-base & 95.2 & 98.5 & 99.3 \\ & BERT-base-G & 74.4(1.1) & 90.2(0.3) & 95.6(0.1) & & & BERT-base-G & 96.5(0.2) & 99.1(0.1) & 99.3(*) \\ \hline \multirow{2}{*}{Yahoo} & BERT-base & 56.2 & 69.3 & 77.6 & \multirow{2}{*}{IMDB} & BERT-base & 67.5 & 86.9 & 95.6 \\ & BERT-base-G & 60.5(1.8) & 72.9(0.6) & 78.2(0.2) & & & BERT-base-G & 70.4(0.6) & 88.5(0.3) & 95.7(*) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test accuracy (%) comparison between Sen2Pro and Sen2Vec for text classification. The results on each dataset are the mean of three runs; the standard deviation (i.e., the values in brackets) is given for PLM-G, where * means the deviation is smaller than 0.1%.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Sentence Embedding & MRR & Hits@1 & Hits@3 \\ \hline BERT & 68.01 & 51.70 & 81.91 \\ BERT+Ours & 68.69 & 52.24 & 82.63 \\ \hline BERT-whitening & 66.58 & 46.54 & 84.22 \\ BERT-whitening+Ours & 67.49 & 48.23 & 84.56 \\ \hline Sentence-BERT & 64.12 & 47.07 & 79.05 \\ Sentence-BERT+Ours & 66.10 & 48.55 & 80.34 \\ \hline SimCSE & 69.50 & 52.34 & 84.43 \\ SimCSE+Ours & 70.01 & 52.68 & 84.69 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The experimental results of adding our Sen2Pro on widely used sentence embedding methods. Specifically, BERT\({}_{l}\) means BERT\({}_{large}\).
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Baseline & STS-12 & STS-13 & STS-14 & STS-15 & STS-16 & Avg \\ \hline BERT & 57.86\(\rightarrow\)59.55 & 61.97\(\rightarrow\)66.20 & 62.49\(\rightarrow\)65.19 & 70.96\(\rightarrow\)73.50 & 69.76\(\rightarrow\)72.10 & 63.69\(\rightarrow\)66.70(+3.01) \\ BERT\({}_{l}\) & 57.74\(\rightarrow\)59.90 & 61.16\(\rightarrow\)66.20 & 61.18\(\rightarrow\)65.62 & 68.06\(\rightarrow\)73.01 & 70.30\(\rightarrow\)74.72 & 62.62\(\rightarrow\)67.47(+4.85) \\ \hline W-BERT & 63.62\(\rightarrow\)64.50 & 73.02\(\rightarrow\)73.69 & 69.23\(\rightarrow\)69.69 & 74.52\(\rightarrow\)74.69 & 72.15\(\rightarrow\)76.11 & 69.21\(\rightarrow\)70.39 (+1.18) \\ W-BERT & 64.02\(\rightarrow\)64.90 & 73.27\(\rightarrow\)73.94 & 69.58\(\rightarrow\)70.04 & 74.77\(\rightarrow\)74.94 & 72.50\(\rightarrow\)76.44 & 69.58\(\rightarrow\)70.69 (+1.26) \\ C-BERT & 64.09\(\rightarrow\)65.01 & 78.21\(\rightarrow\)78.54 & 68.68\(\rightarrow\)69.04 & 79.56\(\rightarrow\)79.90 & 75.41\(\rightarrow\)75.74 & 72.27\(\rightarrow\)72.69 (+0.42) \\ C-BERT\({}_{l}\) & 70.23\(\rightarrow\)70.70 & 82.13\(\rightarrow\)82.54 & 73.60\(\rightarrow\)74.12 & 81.72\(\rightarrow\)82.01 & 77.01\(\rightarrow\)77.58 & 76.03\(\rightarrow\)76.48 (+0.45) \\ \hline Sim-BERT & 68.93\(\rightarrow\)69.33 & 78.68\(\rightarrow\)78.93 & 73.57\(\rightarrow\)73.95 & 79.68\(\rightarrow\)80.01 & 79.11\(\rightarrow\)79.29 & 75.11\(\rightarrow\)75.44 (+0.33) \\ Sim-BERT\({}_{l}\) & 69.25\(\rightarrow\)69.60 & 78.96\(\rightarrow\)79.30 & 73.64\(\rightarrow\)73.92 & 80.06\(\rightarrow\)80.31 & 79.08\(\rightarrow\)79.42 & 75.31\(\rightarrow\)75.61 (+0.30) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performances on EvalRank Wang et al. (2022) using Sen2Vec and Sen2Pro. ‘+Ours’ means Sen2Pro, and the results are the mean of five runs to show the statistical significance with \(p\) value <0.05.
Sen2Pro, we use the BERT-base model and the 'first-last-avg' representation and set the sampling number of uncertainty to 15.
PerformancesThe performances of various evaluation metrics are reported in Table 4. Sen2Pro performs best in most cases, demonstrating its good generalization ability and robustness for the dialogue evaluation task.
### NMT Evaluation
Benchmark and SettingWe work on the WMT-17 machine translation benchmark Bojar et al. (2017). Moreover, BLEU Papineni et al. (2002), CDER Leusch et al. (2006), BLEND Ma et al. (2017), Sen2Vec, and BERTScore Zhang et al. (2019) are chosen as baseline metrics. The setting of Sen2Pro is the same as the one of dialogue evaluation.
PerformancesThe results are shown in Table 5. Our Sen2Pro achieves comparable performance to a top metric (i.e., BERTScore) and significantly outperforms Sen2Vec, which demonstrates the effectiveness of Sen2Pro in the NMT evaluation task. Segment-level results are shown in Appendix H. The results indicate that our Sen2Pro can yield performance competitive with the state of the art as an automatic metric for evaluating machine translation.
## 5 Analysis and Discussion
This section presents analyses of Sen2Pro, using the BERT model as the PLM throughout. Specifically, the representation uncertainty analysis is performed on the STS task, and detailed results are deferred to Appendices E and F.
Feature with higher model uncertainty is more importantWe investigate the relation between the model uncertainty and the **feature importance** on the STS task. In Sen2Pro, a sentence is represented as \(\mu^{m}\in\mathcal{R}^{768\times 1}\) and diagonal entries of \(\hat{\Sigma}^{m}\in\mathcal{R}^{768\times 768}\). From another perspective, a sentence is represented by 768 features in \(\mu^{m}\), and \(\hat{\Sigma}^{m}\) reflects the corresponding model uncertainties of such features. Let \(T\) be the feature set, and \(t\in T\) is a feature subset. We define the **feature importance** as follows:
\[\mathit{score}(t)=|\rho(T)-\rho(T/t)| \tag{9}\]
where \(\rho\) represents Spearman's correlation in STS evaluation, and \(\mathit{score}(t)\) describes the performance change after removing the feature subset \(t\) from \(T\). Then we separate the 768 features into five groups according to their uncertainty \(\hat{\Sigma}^{m}\), and name them as 'I' to 'V', as shown in Figure 1. When a group of features is removed from \(T\), these features are set to 0 in \(\mu^{m}\) and \(\hat{\Sigma}^{m}\), respectively. Figure 1 lists the results on STS-12. In Figure 1, as the feature's uncertainty decreases, the importance score drops, indicating that features with higher uncertainty are more important in the STS task.
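The grouping experiment behind Figure 1 can be sketched as follows; `eval_fn` stands in for the STS evaluation returning Spearman's correlation and is an assumed helper rather than the exact code used in the paper.

```python
import numpy as np

def group_importance(mu, sigma, eval_fn, n_groups=5):
    """Importance of uncertainty-ranked feature groups, Eq. (9).

    mu, sigma : (n_sentences, d) arrays of means and covariance diagonals.
    eval_fn   : callable returning Spearman's rho for given (mu, sigma).
    """
    base = eval_fn(mu, sigma)                    # rho(T) with all features
    order = np.argsort(-sigma.mean(axis=0))      # features sorted by mean uncertainty
    scores = []
    for group in np.array_split(order, n_groups):  # groups 'I' (highest) ... 'V'
        mu_t, sigma_t = mu.copy(), sigma.copy()
        mu_t[:, group] = 0.0                     # "remove" the group by zeroing it
        sigma_t[:, group] = 0.0
        scores.append(abs(base - eval_fn(mu_t, sigma_t)))  # |rho(T) - rho(T/t)|
    return scores
```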
Sen2Pro's improvement grows with model uncertaintyTo quantify the model uncertainty of a PLM \(f\) on a benchmark \(D\), we define the **Fluctuation Rate** \(Q\) as follows:
\[Q(f,D)=\frac{\Sigma_{i=1}^{|D|}(\Sigma_{j=1}^{k}\sigma_{ij})}{k\times|D|} \tag{10}\]
where\({}^{1}\) \(f\) and \(D\) represent the PLM and benchmark, \(\sigma_{ij}\) is the \(j\)-th element of the covariance matrix for the \(i\)-th sentence in \(D\), and \(k\) denotes the dimension of model \(f\). Such a metric broadly reflects the uncertainty of a specific model on a specific benchmark. Based on \(Q(f,D)\), we define the improvement score \(I\), reflecting the improvement of Sen2Pro over Sen2Vec, as follows:
Footnote 1: Note that \(\sigma\) corresponds to \(\hat{\Sigma}^{m}\), we change the notation here to avoid repeating with the sum operation \(\Sigma\).
\[I=P_{\textit{Sen2Pro}}(f,D)-P_{\textit{Sen2Vec}}(f,D) \tag{11}\]
where \(P_{\textit{Sen2Pro}}(f,D)\) and \(P_{\textit{Sen2Vec}}(f,D)\) represent the performance of model \(f\) on \(D\) under Sen2Pro and Sen2Vec, respectively. Besides, we include the [CLS] representation in the experiments since it exhibits a significantly higher fluctuation rate than the 'first-last-avg' representation. The results on STS-12 are illustrated in Figure 2. As the fluctuation rate increases, the improvement score becomes more significant. These empirical results demonstrate that the improvement of Sen2Pro over Sen2Vec is positively correlated with model uncertainty.
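For completeness, the two quantities of Eqs. (10)–(11) amount to the following small helpers (array layout assumed):

```python
import numpy as np

def fluctuation_rate(sigma):
    """Q(f, D) of Eq. (10); sigma is a (|D|, k) array of covariance diagonals."""
    n, k = sigma.shape
    return sigma.sum() / (k * n)

def improvement_score(perf_sen2pro, perf_sen2vec):
    """I of Eq. (11): performance gain of Sen2Pro over Sen2Vec on the same (f, D)."""
    return perf_sen2pro - perf_sen2vec
```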
Effect of Model and Data UncertaintyWe ablate the two sources of uncertainty to examine their individual contributions to Sen2Pro's performances. Specifically, Sen2Pro is evaluated on one intrinsic evaluation (STS) and one downstream task (text classification), and the results are demonstrated in Table 6 and Table 7, respectively. As shown in Table 6, the performance decreases when the data uncertainty is applied alone for the STS task since sentences' semantics may be changed by the data augmentation. In contrast, the model uncertainty consistently brings benefits to the sentence representation. For text classification, both uncertainties improve the representation's generalization. Specifically, the contribution of the data uncertainty is higher than that of the model uncertainty. These empirical results also illustrate the usage of Sen2Pro (§3.3).
Effect of the Banding EstimatorAs mentioned in Sec 3.2, we use the banding estimator for covariance estimation. This part compares SCE (the usual sample covariance estimator) and the banding estimator in two aspects: performance and efficiency. We choose BERT-base as the PLM; the performance results are presented in Table 8 and the efficiency in Figure 3. As Figure 3 shows, our estimator achieves a significantly better performance-efficiency trade-off than SCE, demonstrating the banding estimator's effectiveness.
## 6 Linguistic Case Study
We conduct case studies following the famous analogy test from Word2Vec [11]. The widely used analogy takes the form: \(A\) is to \(B\) as \(C\) is to \(D\). Given \(l_{2}\)-normalized embeddings \(\vec{v}_{A},\vec{v}_{B},\vec{v}_{C},\vec{v}_{D}\) for sentences \(A,B,C,D\) in an analogy of this form, the task compares the embedding distances \(dis(A,B)\) and \(dis(C,D)\) via the gap \(x\), defined as follows:
\[x=|\vec{v}_{A}-\vec{v}_{B}|_{2}-|\vec{v}_{C}-\vec{v}_{D}|_{2} \tag{12}\]
Then we use the sentence analogy set created from [11], which is specifically for this test. Here is a quadruple example:
_A: A man is not singing._
_B: A man is singing._
_C: A girl is not playing the piano._
_D: A girl is playing the piano._
We can see that the relation between A(C) and B(D) is negation; ideally, \(x\) in this quadruple should be small. We list the performance of the sentence embedding method w/ and w/o Sen2Pro in Table 9.
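A possible implementation of the analogy statistic of Eq. (12) is sketched below (function name ours):

```python
import numpy as np

def analogy_gap(v_a, v_b, v_c, v_d):
    """x = ||v_A - v_B||_2 - ||v_C - v_D||_2 for an analogy A:B :: C:D, Eq. (12)."""
    v_a, v_b, v_c, v_d = (v / np.linalg.norm(v) for v in (v_a, v_b, v_c, v_d))
    return np.linalg.norm(v_a - v_b) - np.linalg.norm(v_c - v_d)
```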
## 7 Conclusion and Future Work
This paper investigates the probabilistic representation for sentences and proposes Sen2Pro, which portrays the representation uncertainty from model and data uncertainty. The effectiveness of Sen2Pro is theoretically explained and empirically verified through extensive experiments, which show the great potential of probabilistic sentence embedding. In the future, we will investigate several aspects of Sen2Pro, like how to pre-train language models from an uncertainty perspective since existing pre-training models are based on Sen2Vec. Also, we expect to design more natural schemes that utilize Sen2Pro instead of concatenating the mean and variance vectors. Such directions can further enhance Sen2Pro's performance and efficiency.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Method & STS & TC & Dialog & NMT \\ \hline SCE & 64.98 & 75.11 & 87.5 & 97.0 \\ Ours & 65.23 & 75.45 & 89.4 & 97.8 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparisons between SCE and banding estimator. The number is the average performance.
Figure 3: The performance-efficiency trade-off between SCE and our banding estimator, where ‘1’ represents our estimation and ‘2’ represents ‘SCE’.
\begin{table}
\begin{tabular}{c c|c c} \hline \hline Embedding & x & Embedding & x \\ \hline BERT & 21.3 & SBERT & 15.6 \\ BERT+Ours & 14.7 & SBERT+Ours & 11.0 \\ \hline whitening & 18.8 & SimCSE & 12.0 \\ whitening+Ours & 12.9 & SimCSE+Ours & 10.1 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Results on the analogy task. Smaller \(x\) means better. We can observe that our methods bring improvements to baselines.
### Limitation
The major limitation of Sen2Pro lies in the computational cost of generating several samples per sentence; improving the efficiency of Sen2Pro is therefore a future direction. Besides, since we concatenate the representations derived from different samples, there may be more natural ways to merge the information from the samples.
|
2308.05084
|
On the Thermodynamics of Gravitational Radiation
|
This article deals with the thermodynamics of gravitational radiation arising
from the Bondi-Sachs space-time. The equation of state found allows us to
conclude that the dependence of the energy density on the temperature is a
quadratic power of the latter. Such a conclusion is possible once the
consequences of the first law of thermodynamics are analyzed. Then, in analogy
to electromagnetic radiation, the same approach as used by Planck to obtain the
quantum of energy of the gravitational radiation is proposed. An energy for the
graviton proportional to the cubic frequency is found. The graviton is here
understood as the quantum of gravitational energy.
|
S. C. Ulhoa, F. L. Carneiro, J. W. Maluf
|
2023-08-09T17:27:07Z
|
http://arxiv.org/abs/2308.05084v1
|
# On the Thermodynamics of Gravitational Radiation
###### Abstract
This article deals with the thermodynamics of gravitational radiation arising from the Bondi-Sachs space-time. The equation of state found allows us to conclude that the dependence of the energy density on the temperature is a quadratic power of the latter. Such a conclusion is possible once the consequences of the first law of thermodynamics are analyzed. Then, in analogy to electromagnetic radiation, the same approach as used by Planck to obtain the quantum of energy of the gravitational radiation is proposed. An energy for the graviton proportional to the cubic frequency is found. The graviton is here understood as the quantum of gravitational energy.
## I Introduction
In 1884 Boltzmann derived Stefan's law, in what would come to be known as the Stefan-Boltzmann law [1]. This law stated that the energy density of electromagnetic radiation was proportional to the fourth power of the temperature. Maxwell's equations alone do not allow one to conclude that electromagnetic radiation is thermal, since such equations do not depend on temperature. They do, however, establish an equation of state for the field, namely that the pressure is equal to one third of the energy density. Boltzmann only considered the thermodynamic consequences of the electromagnetic field equation of state to demonstrate the dependence of energy density on temperature. In his seminal paper published in 1901 [2], Planck both showed how to obtain the constant of proportionality of the Stefan-Boltzmann law and introduced the theory of quanta. He considered that blackbody radiation could be described by a collection of harmonic oscillators whose energy was discrete. Each quantum of energy was proportional to the frequency of the radiation. The quantum of energy of electromagnetic radiation became known as the photon and the constant of proportionality as Planck's constant, \(h\). That constant has proved fundamental to every quantum process since then.
The gravitational field has several similarities with the electromagnetic field, for instance the Einstein equations also do not explicitly depend on the temperature. So there would be no reason to expect any thermal radiation in the framework of such a description of the gravitational field. On the other hand, the very definition of gravitational energy, in the metric formulation of the field, is quite controversial. In this context, no definition achieves the properties expected of such a quantity. Namely, the energy must be independent of the coordinate system and it must be sensitive to the choice of the reference frame. There is, on the other hand, a dynamically equivalent description of general relativity, known as Teleparallelism Equivalent to General Relativity (TEGR), in which quantities such as energy, momentum, and angular momentum are well defined [3]. Armed with such a description of the gravitational field, one can consider the thermodynamic consequences of a specific equation of state. In particular, the gravitational radiation is of special interest because, like the electromagnetic field, it can reveal quantum aspects of the physical system.
In the same way that Maxwell's theory allows us to establish an explicit relationship between the energy density \(\epsilon\) and the electromagnetic radiation pressure \(p\), and consequently, derive a relation between energy density and temperature based on fundamental thermodynamics, our aim is to establish a thermodynamic description of gravitational radiation. In order to describe the gravitational radiation, we consider the Bondi-Sachs spacetime, which represents a source emitting gravitational waves. The energy of this spacetime consists of a rapidly decaying mass aspect and another term that decays more slowly. We focus on the metric in the limit of being far from the source, but not at infinity. In this limit, the contribution from the source (or mass aspect) vanishes, while the other term remains finite. Consequently, we identify this remaining term as the energy of the gravitational radiation. By evaluating the gravitational pressure in the same limit, we can compare the two expressions and obtain an equation of state for gravitational radiation. From this point, we pursue Planck's idea and, through the application of the first law of thermodynamics, establish a relationship between the energy density of gravitational radiation and its temperature. Thus, we obtain a purely macroscopic and classical equation of state for gravitational radiation. Based on this equation of state, we calculate the Gibbs energy, similar to what is observed for electromagnetic radiation. Consequently, we adopt a statistical model for the radiation and arrive at an intriguing result: a cubic dependence between a hypothetical gravitational quantum (graviton) and the frequency of the radiation. The article is divided as follows. In section II, the Bondi-Sachs spacetime is described and the results of the gravitational energy and pressure of the radiation are given. In section III, Boltzmann and Planck procedures are used for gravitational radiation. With this, the energy of the graviton is established. Finally,
in the last section, the final considerations are presented.
## II Bondi-Sachs space-time
The purpose of this section is to briefly present the Bondi-Sachs space-time, together with the gravitational energy-momentum calculated within the framework of TEGR in ref. [4]. This space-time describes gravitational radiation at spacelike and null infinities. The line element, in the system of natural units, has the form
\[ds^{2} = g_{00}\,du^{2}+g_{22}\,d\theta^{2}+g_{33}\,d\phi^{2}+2g_{01}\,du\,dr +2g_{02}\,du\,d\theta+2g_{03}\,du\,d\phi+2g_{23}d\theta\,d\phi\,, \tag{1}\]
where \(u=t-r\) is the retarded time and the asymptotic behavior at \(r\rightarrow\infty\) of the metric tensor is given by
\[g_{00} \simeq -1+\frac{2M}{r}\,,\] \[g_{01} \simeq -1+\frac{c^{2}+d^{2}}{2r^{2}}\,,\] \[g_{02} \simeq l+\frac{1}{r}(2cl+2d\bar{l}-p)\,,\] \[g_{03} \simeq \bar{l}\sin\theta+\frac{1}{r}(-2c\bar{l}+2dl-\bar{p})\sin\theta\,,\] \[g_{22} \simeq r^{2}+2cr+2(c^{2}+d^{2})\,,\] \[g_{33} \simeq [r^{2}-2cr+2(c^{2}+d^{2})]\sin^{2}\theta\,,\] \[g_{23} \simeq 2dr\sin\theta+\frac{4d^{3}}{3r}\sin\theta\,, \tag{2}\]
with
\[l=\partial_{2}c+2c\,\cot\theta+\partial_{3}d\,\csc\theta\,,\]
and
\[\bar{l}=\partial_{2}d+2d\,\cot\theta-\partial_{3}c\csc\theta\,.\]
Here \(M(u,\theta)\), \(\partial_{0}c(u,\theta)=\partial c/\partial u\) and \(\partial_{0}d(u,\theta)\) are the mass aspect and the first and second news functions, respectively. The functions \(p\) and \(\bar{p}\) are defined in the references of [4]. They do not contribute to the energy-momentum and therefore will not be detailed here. The time derivative of the mass aspect yields the loss of mass, while the news functions are interpreted as degrees of freedom of
gravitational radiation. In the TEGR the energy-momentum depends on the choice of the tetrad field, which represents the choice of the observer. For that, a set of tetrads was chosen that satisfies the condition \(e_{(0)}^{\ \ i}=0\). With the help of the tetrad field, the Weitzenbock connection can be defined, i.e.
\[\Gamma_{\mu\lambda\nu}=e^{a}\,_{\mu}\partial_{\lambda}e_{a\nu}\,,\]
whose antisymmetric part determines the following torsion tensor
\[T^{a}\,_{\lambda\nu}=\partial_{\lambda}e^{a}\,_{\nu}-\partial_{\nu}e^{a}\,_{ \lambda}\,. \tag{3}\]
It is worth noting that the Weitzenbock connection relates to the Christoffel symbols by the following mathematical identity
\[\Gamma_{\mu\lambda\nu}={}^{0}\Gamma_{\mu\lambda\nu}+K_{\mu\lambda\nu}\,, \tag{4}\]
where
\[K_{\mu\lambda\nu} = \frac{1}{2}(T_{\lambda\mu\nu}+T_{\nu\lambda\mu}+T_{\mu\lambda\nu })\,, \tag{5}\]
is the contorsion tensor. Such an identity allows writing the curvature scalar in terms of torsions, i.e.
\[eR(e)\equiv-e(\frac{1}{4}T^{abc}T_{abc}+\frac{1}{2}T^{abc}T_{ bac}-T^{a}T_{a})+2\partial_{\mu}(eT^{\mu})\,, \tag{6}\]
where \(e=det(e^{a}\,_{\mu})\). It should be noted that the left-hand side of the above expression is the Hilbert-Einstein Lagrangian density, thus the TEGR Lagrangian density takes the following form
\[{\mathfrak{L}}(e_{a\mu}) = -\kappa\,e\,(\frac{1}{4}T^{abc}T_{abc}+\frac{1}{2}T^{abc}T_{bac}-T^{a}T_{a})-{\mathfrak{L}}_{M} \tag{7}\] \[\equiv -\kappa\,e\,\Sigma^{abc}T_{abc}-{\mathfrak{L}}_{M}\;,\]
where \({\mathfrak{L}}_{M}\) is the Lagrangian density of matter fields, \(\kappa=\frac{1}{16\,\pi}\) is the coupling constant in natural units, and \(\Sigma^{abc}\) is defined by
\[\Sigma^{abc}=\frac{1}{4}(T^{abc}+T^{bac}-T^{cab})+\frac{1}{2}(\eta^{ac}T^{b}-\eta^{ ab}T^{c})\;. \tag{8}\]
The derivative of this Lagrangian density with respect to the tetrad leads to the following field equation
\[\partial_{\nu}\left(e\Sigma^{a\lambda\nu}\right)=\frac{1}{4\kappa}e\,e^{a}\,_{ \mu}(t^{\lambda\mu}+T^{\lambda\mu})\;, \tag{9}\]
where
\[t^{\lambda\mu}=\kappa\left[4\,\Sigma^{bc\lambda}T_{bc}\,^{\mu}-g^{\lambda\mu} \,\Sigma^{abc}T_{abc}\right]\,, \tag{10}\]
which is interpreted as the energy-momentum of the gravitational field. Such a definition has been shown to be quite consistent over the years. Hence the energy-momentum vector is defined by
\[P^{a}=\int_{V}d^{3}x\,e\,e^{a}\,_{\mu}(t^{0\mu}+T^{0\mu})\,. \tag{11}\]
It is important to realize that the energy-momentum vector is invariant under coordinate transformations and is covariant under Lorentz transformations. This means that the zero component, identified with the energy, has all the qualities expected for a consistent definition of gravitational energy.
Thus, for the Bondi-Sachs metric, the energy reads [4]
\[P^{(0)}=4\kappa\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta\sin\theta\bigg{[}M+ \partial_{0}F\bigg{]}\,, \tag{12}\]
where
\[F=-\frac{1}{4}\bigg{(}l^{2}+\bar{l}^{2}\bigg{)}+\frac{1}{2}c^{2}+d^{2} \tag{13}\]
It is worth noting that the term \(\partial_{0}F\) generalises the standard Bondi-Sachs energy. The momentum is given by
\[P^{(i)} = 4\kappa\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta\sin\theta\bigg{[} (M+\partial_{0}F)\hat{r}^{i} \tag{14}\] \[+\frac{1}{4}(l\partial_{0}M)\hat{\theta}^{i}+\frac{1}{4}(\bar{l} \partial_{0}M)\hat{\phi}^{i}\bigg{]}\,.\]
It should be noted that the energy-momentum is presented in terms of its components. The energy can also be expressed as
\[P^{(0)}=\int d^{3}x\,\epsilon\,, \tag{15}\]
where \(\epsilon=4\kappa\,\partial_{r}\bigg{[}M+\partial_{0}F\bigg{]}\), assuming that a realistic physical system, like a radiating star, does not have a singularity. That means \(\epsilon\) is the volumetric energy density. The derivative of the momentum with respect to time is precisely the force, thus
\[\frac{dP^{(i)}}{dt}=-\int dS_{j}\,\phi^{(i)j}\,, \tag{16}\]
where \(\phi^{(1)1}=p_{r}=-4\kappa\,\frac{\partial}{\partial t}\bigg{[}M+\partial_{0} F\bigg{]}\) is the radial pressure [3]. There is, therefore, a well-defined equation of state for this gravitational radiation
\[\epsilon=p_{r}\,, \tag{17}\]
that is, the energy density \(\epsilon\) is equal to the radial pressure \(p_{r}\). Next, the consequences of this equation of state are analyzed.
## III Thermodynamics of gravitational radiation
The first law of thermodynamics is essentially a conservation law in which the heat supplied to a system minus the work done by the system equals the change in its internal energy. Although historically established in the context of heat engines, thermodynamics has a more fundamental character. It was this fundamental feature of thermodynamics that led Boltzmann to propose an ingenious interpretation of the entropy thermodynamic potential. Until then, entropy accounted for the reversibility of thermodynamic phenomena. Boltzmann proposed that it should be proportional to the logarithm of the number of possible states of the system. The first law formulated in terms of entropy is then
\[TdS=dU+pdV\,, \tag{18}\]
which in terms of the free energy \(F=U-TS\) is
\[dF=-SdT-pdV\,. \tag{19}\]
It should be noted that the free energy is a thermodynamic potential which depends on the variables temperature and volume \(F\equiv F(T,V)\), thus \(S=-\left(\frac{\partial F}{\partial T}\right)_{V}\) and \(p=-\left(\frac{\partial F}{\partial V}\right)_{T}\). Hence \(\left(\frac{\partial S}{\partial V}\right)_{T}=\left(\frac{\partial p}{ \partial T}\right)_{V}\). Then the first law of thermodynamics reads
\[T\frac{dp}{dT}=\epsilon+p\,, \tag{20}\]
where \(\epsilon=\frac{\partial U}{\partial V}\) and \(p\) is the pressure; this \(p\) should not be confused with the function of the same name in the Bondi-Sachs space-time. It is worth noting that this equation yields the Stefan-Boltzmann law when combined with the electromagnetic radiation equation of state \(p=\epsilon/3\).
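As a quick symbolic check (ours, not part of the original text), Eq. (20) can be integrated for both equations of state with SymPy; the gravitational case \(p=\epsilon\) anticipates the result derived in the next paragraphs.

```python
import sympy as sp

T = sp.symbols('T', positive=True)
p = sp.Function('p')

# Electromagnetic radiation, p = eps/3  =>  T dp/dT = 4 p  =>  p ~ T^4 (Stefan-Boltzmann)
print(sp.dsolve(sp.Eq(T * p(T).diff(T), 4 * p(T)), p(T)))  # p(T) = C1*T**4

# Gravitational radiation, p = eps (Eq. 17)  =>  T dp/dT = 2 p  =>  p ~ T^2
print(sp.dsolve(sp.Eq(T * p(T).diff(T), 2 * p(T)), p(T)))  # p(T) = C1*T**2
```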
Let us now consider the equation of state for gravitational radiation obtained in the previous section, \(\epsilon=p_{r}\). Identifying the thermodynamic energy \(U\) with the gravitational energy \(P^{(0)}\), equation (20) reads
\[T\frac{dp_{r}}{dT}=2p_{r}\,, \tag{21}\]
which yields the following dependence of pressure on the temperature
\[p_{r}=\sigma\,T^{2} \tag{22}\]
where \(\sigma\) is the integration constant. The energy has the same dependence, \(\epsilon=\sigma\,T^{2}\), while the entropy is \(S=2\sigma TV\). It should be noted that entropy is calculated from the relation \(\left(\frac{\partial S}{\partial V}\right)_{T}=\left(\frac{\partial p}{ \partial T}\right)_{V}\). As an immediate consequence one can calculate the Gibbs free energy
\[G = U-TS+PV \tag{23}\] \[= 0\,.\]
This result is very interesting, as it also occurs for the electromagnetic radiation. In the latter case, the vanishing of \(G\) implies the vanishing of the chemical potential \(\mu\), since \(G=\mu N\), where \(N\) is the number of particles. Hence the number of particles must be non-zero (or there would be no physical system), which gave rise to the idea of photons for the electromagnetic field. This suggests that the gravitational radiation may have a quantum of energy \(U_{0}\). Planck used Boltzmann's concept of entropy to derive the photon energy. Here we're going to take a slightly different route to obtain
the "graviton" energy. Thus, it is considered that gravitational radiation can be understood as a collection of discrete oscillators, each one with the energy \(U_{n}=nU_{0}\). Then the average value of the energy is
\[\bar{U}=\frac{\sum_{n}U_{n}\exp\left(-\frac{U_{n}}{kT}\right)}{\sum_{n}\exp\left(-\frac{U_{n}}{kT}\right)}\,, \tag{24}\]
where \(k\) is the Boltzmann constant. That yields
\[\bar{U}=\frac{U_{0}}{\exp\left(\frac{U_{0}}{kT}\right)-1}\,. \tag{25}\]
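A short numerical sanity check (ours, not from the original derivation) confirms that the Boltzmann-weighted sum of Eq. (24) reproduces the closed form of Eq. (25):

```python
import numpy as np

def mean_energy_sum(u0, kT, n_max=10_000):
    """Average oscillator energy from the weighted sum of Eq. (24)."""
    n = np.arange(n_max)
    w = np.exp(-n * u0 / kT)          # Boltzmann weights exp(-U_n / kT)
    return (n * u0 * w).sum() / w.sum()

def mean_energy_closed(u0, kT):
    """Closed-form result of Eq. (25)."""
    return u0 / (np.exp(u0 / kT) - 1.0)

print(mean_energy_sum(1.0, 2.0), mean_energy_closed(1.0, 2.0))  # both ~1.5415
```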
The number of oscillators that exist between frequencies \(\nu\) and \(\nu+d\nu\) is \(dN=8\pi V\nu^{2}d\nu\). That leads to the following energy density
\[\epsilon=\frac{8\pi U_{0}\nu^{2}}{\exp\left(\frac{U_{0}}{kT}\right)-1}\,, \tag{26}\]
which can be used to define the intensity of the radiation \(I=\frac{\epsilon}{4\pi}\). Such a radiation intensity should reflect the squared temperature behavior when integrated over all frequencies, i.e.,
\[\int I(\nu,T)d\nu=\sigma\,T^{2}\,. \tag{27}\]
If we assume that the quantum of energy of the gravitational radiation depends on the frequency of the radiation as a power law, then in view of the previous equation the energy is
\[U_{0}=\alpha\,\nu^{3}\,, \tag{28}\]
which is the energy of the graviton. The equation (27) may be rewritten as
\[\sigma T^{2}=\frac{2k^{2}T^{2}}{3c^{2}\alpha}\int_{0}^{\infty}\frac{x}{e^{x}-1 }\,dx\,,\]
once the variable \(x=\frac{\alpha\nu^{3}}{kT}\) is introduced. Here SI units are used; in this system of units \(c\) is the speed of light, which should not be confused with the Bondi-Sachs function \(c(u,\theta)\). Since the integral above has the well-defined value \(\int_{0}^{\infty}\frac{x}{e^{x}-1}\,dx=\frac{\pi^{2}}{6}\), the constant \(\sigma\) in terms of the Boltzmann constant and the constant \(\alpha\) is given by
\[\sigma=\frac{\pi^{2}k^{2}}{9c^{2}\alpha}\,.\]
In order to estimate the value of such a constant, it should be noted that the constant \(\alpha\) has dimensions of \(\mathrm{J\,s^{3}}\). A possible interpretation of this result is that the constant appearing in the graviton energy is Planck's constant itself times the squared Planck time, i.e., \(\alpha\sim 10^{-122}\). The order of magnitude of \(\sigma\) would then be \(\sigma\sim 10^{60}\), in SI units. It may also be that the constant \(\alpha\) is completely independent of Planck's constant; perhaps only experimental measurement can establish its value.
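The integral and the order-of-magnitude estimate can be checked numerically; the value of α used below is only the speculative Planck-scale guess quoted above, not a measured quantity.

```python
import numpy as np
from scipy.integrate import quad

# int_0^inf x/(e^x - 1) dx = pi^2/6 ~ 1.6449
val, _ = quad(lambda x: x / np.expm1(x) if x > 0 else 1.0, 0, np.inf)
print(val, np.pi**2 / 6)

# sigma = pi^2 k^2 / (9 c^2 alpha) with the speculative alpha ~ 1e-122 (SI units)
k = 1.380649e-23        # Boltzmann constant, J/K
c = 2.99792458e8        # speed of light, m/s
alpha = 1e-122          # J s^3, order of magnitude quoted in the text
sigma = np.pi**2 * k**2 / (9 * c**2 * alpha)
print(f"{sigma:.2e}")   # ~2e59, i.e. roughly the sigma ~ 1e60 order quoted in the text
```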
## IV Conclusion
In this article we show that the definition of gravitational energy and pressure, in the context of Teleparallelism Equivalent to General Relativity, leads to a well-defined equation of state for the Bondi-Sachs space-time. Such an equation of state for gravitational radiation induces a specific dependence on temperature, that is, \(p_{r}=\sigma\,T^{2}\). It is interesting to note that such an expression is a direct consequence of the first law of thermodynamics along with the existence of the equation of state. Since the Gibbs free energy is zero, we assume that the gravitational radiation also obeys Planck's hypothesis. Therefore we adjust the quantum of energy to reproduce the correct dependence on the temperature. We obtain the energy of the graviton as \(U_{0}=\alpha\,\nu^{3}\). It is worth noting that if an experiment were to confirm the dependence of gravitational energy density on temperature, this would render the metric approach to gravitation unfeasible. This is due to the fact that any dependence on temperature implies the existence of an equation of state involving energy and pressure. In this way the gravitational energy, whose existence is at least controversial in the context of general relativity, would acquire a physical reality. In addition, the first law of thermodynamics, together with an equation of state, predicts a dependence of the pressure (or the energy) on the temperature. On the other hand, the non-existence of the concept of gravitational energy would contradict the first law of thermodynamics if any temperature dependence of gravitational radiation is verified. Thus measuring the temperature of a gravitational wave is a viable way to rule out the non-localizability of the gravitational energy. In the last 15 years, a series of measurements involving pulsars has indicated the existence of background gravitational waves, which were not necessarily produced by black holes [5; 6]. These can be legitimate gravitational radiation, and it is necessary to investigate what relationship there is between such waves and thermal energy. Thus an analogous experiment can be carried out in order to correlate the frequency with the
temperature of these waves.
|
2305.14175
|
Site-Selective Enhancement of Superconducting Nanowire Single-Photon
Detectors via Local Helium Ion Irradiation
|
Achieving homogeneous performance metrics between nominally identical pixels
is challenging for the operation of arrays of superconducting nanowire
single-photon detectors (SNSPDs). Here, we utilize local helium ion irradiation
to post-process and tune single-photon detection efficiency, switching current,
and critical temperature of individual devices on the same chip. For 12nm thick
highly absorptive SNSPDs, which are barely single-photon sensitive prior to
irradiation, we observe an increase of the system detection efficiency from $<
0.05\,\%$ to $(55.3 \pm 1.1)\,\%$ following irradiation. Moreover, the internal
detection efficiency saturates at a temperature of 4.5 K after irradiation with
$1800\, \mathrm{ions}\, \mathrm{nm}^{-2}$. For irradiated 10 nm thick detectors
we observe a doubling of the switching current (to $20\, \mu\mathrm{A}$)
compared to 8 nm SNSPDs of similar detection efficiency, increasing the
amplitude of detection voltage pulses. Investigations of the scaling of
superconducting thin film properties with irradiation up to a fluence of
$2600\, \mathrm{ions}\, \mathrm{nm}^{-2}$ revealed an increase of sheet
resistance and a decrease of critical temperature towards high fluences. A
physical model accounting for defect generation and sputtering during helium
ion irradiation is presented and shows good qualitative agreement with
experiments.
|
Stefan Strohauer, Fabian Wietschorke, Lucio Zugliani, Rasmus Flaschmann, Christian Schmid, Stefanie Grotowski, Manuel Müller, Björn Jonas, Matthias Althammer, Rudolf Gross, Kai Müller, Jonathan J. Finley
|
2023-05-23T15:51:13Z
|
http://arxiv.org/abs/2305.14175v1
|
# Site-Selective Enhancement of Superconducting Nanowire Single-Photon Detectors
###### Abstract
Achieving homogeneous performance metrics between nominally identical pixels is challenging for the operation of arrays of superconducting nanowire single-photon detectors (SNSPDs). Here, we utilize local helium ion irradiation to post-process and tune single-photon detection efficiency, switching current, and critical temperature of individual devices on the same chip. For \(12\,\mathrm{nm}\) thick highly absorptive SNSPDs, which are barely single-photon sensitive prior to irradiation, we observe an increase of the system detection efficiency from \(<0.05\,\mathrm{\char 37}\) to \((55.3\pm 1.1)\,\mathrm{\char 37}\) following irradiation. Moreover, the internal detection efficiency saturates at a temperature of \(4.5\,\mathrm{K}\) after irradiation with \(1800\,\mathrm{ions\,nm^{-2}}\). For irradiated \(10\,\mathrm{nm}\) thick detectors we observe a doubling of the switching current (to \(20\,\mathrm{\SIUnitSymbolMicro A}\)) compared to \(8\,\mathrm{nm}\) SNSPDs of similar detection efficiency, increasing the amplitude of detection voltage pulses. Investigations of the scaling of superconducting thin film properties with irradiation up to a fluence of \(2600\,\mathrm{ions\,nm^{-2}}\) revealed an increase of sheet resistance and a decrease of critical temperature towards high fluences. A physical model accounting for defect generation and sputtering during helium ion irradiation is presented and shows good qualitative agreement with experiments.
## I Introduction
Superconducting Nanowire Single-Photon Detectors (SNSPDs) [1] play a significant role in quantum technologies [2; 3; 4; 5; 6; 7; 8; 9] and a wide range of applications requiring general faint light detection [10; 11]. Compared to Single-Photon Avalanche Diodes (SPADs) [12], their superior performance metrics, consisting of high detection efficiency also at long wavelengths [13; 14], low dark count rate [15], and low timing jitter [16] make them ideally suited for demanding applications such as quantum key distribution [2; 3; 4], quantum computing [17], or deep space optical communication [7]. Moreover, their waveguide-integrated form is a key component for photonic integrated circuits [18; 19; 20; 21; 22].
Since recently, SNSPDs also find application in fields such as astronomy [23], dark matter detection [24], and particle detection [25; 26]. However, these applications typically require large detector arrays or even an SNSPD camera, which to date turns out to be challenging due to the necessary readout and homogeneity within an ensemble of the order of hundreds to thousands of detectors. Recently, row-column multiplexing of a 1024-pixel array [27], and a promising readout architecture based on thermal coupling and time-of-flight measurements [28] were demonstrated. For such pixel arrays, typically amorphous materials such as MoSi and WSi are used, although SNSPDs based on polycrystalline materials like NbN and NbTiN exhibit higher critical temperatures, larger critical currents, and lower timing jitter. Compared to polycrystalline materials and their spatial inhomogeneities of the superconducting energy gap [29; 30; 31; 32], amorphous films attain better homogeneity and the associated higher yield of similarly performing detectors [33; 34; 35; 36; 37]. To enable the use of NbN for large pixel arrays, atomic layer deposition and molecular-beam epitaxy of highly homogeneous films have been investigated recently as alternatives to the common deposition of polycrystalline NbN and NbTiN films grown using reactive magnetron sputtering [38; 39; 40; 41]. In addition to methods for obtaining better homogeneity during film deposition, a method to tune detector metrics of individual devices after fabrication would also be highly advantageous. Inspired by the recent work of Zhang _et al._[42], which sparked interest in irradiating SNSPDs with helium (He) ions [43; 44; 45], we use a He ion microscope as a post-processing tool to tune detector metrics of individual NbTiN devices fabricated on the same chip. At the same time, we investigate how SNSPD properties such as detection efficiency and switching current depend on the He ion fluence. In addition to detector metrics, we explore the scaling of NbTiN thin film parameters such as sheet resistance and critical temperature with increasing irradiation.
Experimental
To study the influence of He ion irradiation on the native transport properties of NbTiN thin films and the performance of SNSPDs, we deposited NbTiN films with thicknesses of 8 nm, 10 nm, and 12 nm using DC reactive magnetron sputtering onto Si substrates with a 130 nm thick thermally grown SiO\({}_{2}\) layer. The NbTiN thickness was controlled by measuring the sputtering rate and adjusting the sputtering time correspondingly. Subsequently, we patterned the NbTiN films into cloverleaf structures and SNSPDs using electron beam lithography and reactive ion etching, followed by optical lithography and gold evaporation for contact pad fabrication [46]. The detector design consists of a 100 nm wide wire in a meander form with a fill factor of 50 %, and a total active area of 10 \(\mathrm{\SIUnitSymbolMicro m}\times 10\) \(\mathrm{\SIUnitSymbolMicro m}\). The cloverleaf structures were fabricated in order to perform magneto-transport measurements in van-der-Pauw geometry [47, 48] with an active area of 10 \(\mathrm{\SIUnitSymbolMicro m}\times 10\) \(\mathrm{\SIUnitSymbolMicro m}\) and to correlate the results of macroscopic transport with the He ion fluence dependent performance metrics of the corresponding SNSPDs. To ensure the best comparability, cloverleafs (CLs) and SNSPDs were fabricated on the same chip. For this study, they were subsequently irradiated with a He ion microscope (Zeiss Orion Nanofab) with He ion fluences ranging from 0 ions \(\mathrm{nm}^{-2}\) to 2600 ions \(\mathrm{nm}^{-2}\).
The magneto-transport measurements were performed by cooling the samples to 4.2 K before allowing them to slowly heat up to 20 K in external magnetic fields between \(-0.1\) T and 1 T, applied perpendicular to the sample plane. From these measurements, we extract the sheet resistance of the superconducting thin film at 20 K and room temperature, the critical temperature of the superconducting thin film, and the Bogoliubov quasiparticle diffusivity. Also, by measuring the CLs in Hall geometry and performing magnetic field sweeps, followed by a linear fit of the Hall voltage, we determine the Hall coefficient and electron density of the NbTiN films.[49]
Switching current \(I_{\mathrm{sw}}\) and system detection efficiency (SDE) of the SNSPDs were measured using a cryogenic probe station (Janis) at 4.5 K. To calculate the SDE, we determined the dark count rate (DCR) before we measured the count rate (CR) by homogeneous illumination of the SNSPD with an attenuated 780 nm continuous wave diode laser and polarization parallel to the nanowire. The SDE is then defined as \(\mathrm{SDE}=\frac{\mathrm{CR-DCR}}{\mathrm{PR}}\) with the photon rate PR incident on the cryogenic probe station.
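For orientation, the conversion from measured count rates to the SDE can be written out explicitly; the photon-rate formula below is the generic power-to-photon conversion, and the numbers in the example are made up rather than taken from this work.

```python
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s

def photon_rate(power_watt, wavelength_m=780e-9):
    """Photons per second corresponding to a given optical power."""
    return power_watt * wavelength_m / (H * C)

def system_detection_efficiency(count_rate, dark_count_rate, power_watt,
                                wavelength_m=780e-9):
    """SDE = (CR - DCR) / PR."""
    return (count_rate - dark_count_rate) / photon_rate(power_watt, wavelength_m)

# Example with made-up numbers: 1 fW of 780 nm light, 1700 counts/s, 10 dark counts/s
print(system_detection_efficiency(1.7e3, 10.0, 1e-15))   # ~0.43
```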
## III Results and discussion
In this section, we present the dependence of NbTiN thin film properties and detector metrics on He ion irradiation for film thicknesses of 8 nm, 10 nm, and 12 nm. Provided that the SNSPDs are sensitive to single photons, using larger thicknesses for SNSPDs generally results in stronger optical absorption [50, 51] and therefore enhances their overall system detection efficiency (SDE). Moreover, we aim for a better understanding of how He ion irradiation modifies the transport properties of the NbTiN film and focus on establishing structure-property relationships that link detector thickness, He ion fluence, and detector performance.
### Performance of He ion irradiated SNSPDs
Figure 1 shows the increase in SDE and the simultaneous decrease of switching current of a representative 10 nm thick device measured before irradiation and after fluences of 50 ions \(\mathrm{nm}^{-2}\) and 800 ions \(\mathrm{nm}^{-2}\). Irradiating the detector with 50 ions \(\mathrm{nm}^{-2}\) already results in an increase in SDE from \(<2\) % to 25 %. At a fluence of 800 ions \(\mathrm{nm}^{-2}\) the detector shows the beginning of saturating SDE at 43 %, close to the maximum absorption of 53.1 % in the detector as determined by finite-difference time-domain (FDTD) simulations (Appendix A). Simultaneously, the switching current \(I_{\mathrm{sw}}\), defined as the maximum current the detector can sustain before switching to the normal conducting state, decreases from 39.2 \(\mathrm{\SIUnitSymbolMicro A}\) to 28.8 \(\mathrm{\SIUnitSymbolMicro A}\) and 8.6 \(\mathrm{\SIUnitSymbolMicro A}\) after irradiation.
To study the scaling of the switching current with He ion fluence, we irradiated multiple detectors that have thicknesses of 8 nm, 10 nm, and 12 nm using different He ion fluence values. Figure 2 shows the resulting data, revealing a clear trend of decreasing \(I_{\mathrm{sw}}\) with He ion flu
Figure 1: System detection efficiency vs. bias current of the same 10 nm thick detector for three different He ion fluences. The relative uncertainties of He ion fluence, SDE, and bias current are 5 %, 2 % and less than 1 %, respectively (error bars not shown for clarity). With increasing fluence the efficiency rises up to 43% and shows the beginning of saturating internal detection efficiency, while the switching current decreases. The largest change in SDE and \(I_{\mathrm{sw}}\) is induced by the first He ions that hit the detector.
ence. As expected, \(I_{\rm sw}\) is higher for thicker devices of the same fluence due to the larger cross-sectional area of thicker nanowires. We explain the scattering of measured switching currents of nominally identical detectors by constrictions that limit \(I_{\rm sw}\) to a lower value than non-constricted devices have.[35] Such scatter is particularly visible by the large variation of currents of the non-irradiated devices and the small values of the two 12 nm detectors irradiated with 400 ions nm\({}^{-2}\). The inset of Figure 2 shows the switching current density \(j_{\rm sw}\) as calculated from \(I_{\rm sw}\) and the cross sectional area of the wire, given by the width and the nominal wire thickness presented in Table 2. Furthermore, we accounted for an effective reduction of the nominal thickness due to surface sputtering during He ion irradiation as derived in Section III.2 and for a native NbTiN oxide of 1.3 nm thickness [52]. The switching current density of our non-irradiated NbTiN detectors is comparable to the data Korneeva _et al._[53] present for 5.8 nm thick NbN devices. Moreover, we observe \(j_{\rm sw}\) of the 8 nm film being smaller than for the 10 nm and 12 nm films. We note that for thin and narrow wires, the depairing current density [53; 54] can limit the measurable switching current density and reveals a dependence on the film thickness [55; 56]. Thus, an increased depairing current density for the thicker devices also likely contributes to their higher \(I_{\rm sw}\) and \(j_{\rm sw}\).
Similar to \(I_{\rm sw}\), we investigated the scaling of SDE with the He ion fluence. As shown in Figure 3, we observe an increase of SDE with the He ion fluence for all detector thicknesses, despite the large scatter between data obtained from nominally identical SNSPDs. Most notably, the SDE for the 12 nm thick detectors increases from less than 0.05 % for the non-irradiated case to 55.3 % and just saturating detection efficiency for a fluence of 1800 ions nm\({}^{-2}\). As expected, the SDE increases with detector thickness due to the higher absorption. The dashed horizontal lines in Figure 3 show the upper limit for the SDE, defined by the absorption of SNSPDs of the respective thicknesses that we obtained from FDTD simulations discussed in Appendix A. We note that one can further enhance the absorption and thus the SDE over a broad wavelength range by adding a metal mirror with an optical cavity underneath the SNSPD.[57] Recently, a similar approach for He ion irradiated detectors involving a narrow-band cavity, realized with a distributed Bragg reflector, has shown to push the absorption to over 90 %.[43] The fact that the measured SDE of the highest irradiated 8 nm detector shown in Figure 3 is less than 3 % can be explained as follows: Due to the irradiation-induced reduction of the switching current (to 2.8 uA for this detector), which is also the maximum applicable bias current, the maximum voltage pulse amplitude decreases as well. However, the trigger level of the counter used to measure the efficiency can only be reduced correspond
Figure 2: Switching current vs. He ion fluence for 8 nm, 10 nm, and 12 nm detector thickness, including statistical errors. \(I_{\rm sw}\) decreases with the He ion fluence, showing the largest decrease for low fluences. A strong dependence of \(I_{\rm sw}\) on the film thickness is apparent throughout the whole fluence range studied. The inset shows the switching current density \(j_{\rm sw}\) as calculated from \(I_{\rm sw}\) and the wire width and thickness, accounting for an effective thickness reduction due to surface sputtering during He ion irradiation as well as a 1.3 nm thick native NbTiN oxide.
Figure 3: System detection efficiency vs. He ion fluence for the three detector thicknesses studied in this work. Dashed lines indicate the absorption in the SNSPD simulated with FDTD; data points with saturating SDE are highlighted with a red frame. The relative uncertainties of SDE and He ion fluence are 2 % and 5 %, respectively (error bars not shown for clarity). Each of the data points stems from a different detector that was irradiated once with the given dose except for two 10 nm and two 12 nm detectors that were irradiated twice; for some SNSPDs we measured the SDE in addition also before irradiation. Despite the large scattering of data points that can be explained by the strong variation of the initial SDE between individual devices, one clearly sees that the SDE increases with He ion fluence and that the total maximum SDE is reached by the largest detector thickness.
ingly as long as it is well above the electrical noise floor. This implies that once the pulse amplitude becomes comparable to the electrical noise floor, a substantial fraction of detection pulses will not be registered by the counter anymore. Depending on the readout electronics used and on their noise floor, this sets the limit for meaningful He ion fluences when irradiating SNSPDs. Surprisingly, one of the \(12\,\mathrm{nm}/400\,\mathrm{ions}\,\mathrm{nm}^{-2}\) detectors shown in Figure 3 exhibits a high SDE although \(I_{\mathrm{sw}}\) was lower than expected for these two SNSPDs. This hints to a relatively homogeneous current density within the nanowire that allows biasing close to the depairing current density and thus achieving high SDE.
Another key metric for SNSPDs is their recovery time since it determines the detector's maximum count rate. It can be estimated from the time constant \(\tau_{\mathrm{d}}\) of the exponential decay of a detection voltage pulse.[58; 18] Figure 4 shows how the measured decay time increases with increasing He ion fluence and demonstrates that it is smaller for thicker detectors. These observations can be understood as follows: The decay time depends on the kinetic inductance \(L_{\mathrm{k}}\) of the detector by \(\tau_{\mathrm{d}}=L_{\mathrm{k}}/R_{\mathrm{load}}\) with a typical load resistance of \(R_{\mathrm{load}}=50\,\mathrm{\SIUnitSymbolOhm}\) for the readout electronics.[58] At the same time, \(L_{\mathrm{k}}\) for a thin (\(d\ll\lambda_{\mathrm{eff}}\)) and dirty (\(\ell\ll\xi_{0}\)) film of length \(l\), width \(w\), and thickness \(d\) is given by
\[L_{\mathrm{k}}=\mu_{0}\lambda_{\mathrm{eff},\mathrm{tf}}\;\frac{l}{w}\;, \tag{1}\]
with the effective magnetic penetration depth for thin films \(\lambda_{\mathrm{eff},\mathrm{tf}}=\lambda_{\mathrm{eff}}^{2}/d\) as introduced by Pearl [59], where \(\lambda_{\mathrm{eff}}\) is the effective magnetic penetration depth of a dirty bulk superconductor like NbTiN, given by
\[\lambda_{\mathrm{eff}}=\lambda_{\mathrm{L}}\sqrt{\frac{\xi_{0}}{\ell}}=\sqrt{ \frac{\hbar\rho}{\pi\mu_{0}\Delta(0\,\mathrm{K})}} \tag{2}\]
according to Bartolf [60, Eq. (9.36)]. Here, \(\lambda_{\mathrm{L}}\) is the London penetration depth, \(\xi_{0}\) the BCS coherence length, \(\ell\) the mean free path, \(\rho\) the specific resistivity of the superconducting film in the normal conducting state, and \(\Delta(0\,\mathrm{K})\) the superconducting energy gap.[61] Hence, with the effective magnetic penetration depth one can express the kinetic inductance as
\[L_{\mathrm{k}}=\mu_{0}\frac{\lambda_{\mathrm{eff}}^{2}}{d}\frac{l}{w}=\frac{ \hbar R_{\mathrm{sheet}}}{\pi\Delta(0\,\mathrm{K})}\frac{l}{w}\;. \tag{3}\]
Thus, for detectors of similar length and width, the kinetic inductance and the decay time are smaller for detectors that exhibit a smaller sheet resistance, for example due to the use of a thicker film or due to less irradiation with He ions. In this way, we conclude that the increase of decay time due to irradiation can be compensated to a certain extent by using thicker films.
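Equation (3) together with \(\tau_{\mathrm{d}}=L_{\mathrm{k}}/R_{\mathrm{load}}\) translates directly into a small estimate; the numbers below are plausible placeholders (not measured values from this work) chosen only to illustrate the scaling.

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J s

def kinetic_inductance(r_sheet, delta_0, length, width):
    """L_k = hbar * R_sheet / (pi * Delta(0)) * l / w, Eq. (3)."""
    return HBAR * r_sheet / (np.pi * delta_0) * (length / width)

def decay_time(l_k, r_load=50.0):
    """tau_d = L_k / R_load."""
    return l_k / r_load

# Placeholder values: R_sheet = 300 Ohm/sq, Delta(0) = 2 meV, 100 nm wide wire
# meandering over 10 um x 10 um with 50 % fill factor (total length ~ 0.5 mm).
delta_0 = 2e-3 * 1.602176634e-19
l_k = kinetic_inductance(300.0, delta_0, length=0.5e-3, width=100e-9)
print(l_k, decay_time(l_k))   # ~1.6e-7 H and ~3e-9 s
```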
For applications, simultaneously high SDE and \(I_{\mathrm{sw}}\) are desired since a higher \(I_{\mathrm{sw}}\) yields a higher detection pulse, which reduces not only the requirements for pulse detection with the readout electronics but also the timing jitter induced by electrical noise [46; 16]. To compare these two performance metrics, Figure 5 shows the SDE against \(I_{\mathrm{sw}}\) (open symbols representing the non-irradiated detectors, a red frame highlighting saturating SDE, and dashed lines indicating the simulated SDE upper limit). It is interesting to note how \(I_{\mathrm{sw}}\) and SDE compare between the \(8\,\mathrm{nm}/\) and the \(10\,\mathrm{nm}/\) devices with an SDE between \(39\,\%\)
Figure 5: Maximum system detection efficiency vs. switching current for the three detector thicknesses studied in this work. The simulated maximum SDE of each thickness is indicated by dashed lines, while saturating SDE is highlighted by symbols with a red frame; open symbols represent non-irradiated devices. The relative uncertainties of SDE and switching current are \(2\,\%\) and \(6\,\%\), respectively (error bars not shown for clarity). Noteworthy are the data points at an SDE \(\approx 45\,\%\), where the \(10\,\mathrm{nm}\) SNSPDs provide a similar SDE to the \(8\,\mathrm{nm}\) ones but offer double the switching current. Furthermore, for a similar \(I_{\mathrm{sw}}\) of about \(8\,\mathrm{\SIUnitSymbolMicro A}\), one \(12\,\mathrm{nm}\) SNSPD shows up to \(55.3\,\%\) SDE, whereas the \(8\,\mathrm{nm}\) ones provide only up to \(43.8\,\%\) SDE.
Figure 4: Decay time vs. He ion fluence, including statistical errors. \(\tau_{\mathrm{d}}\) increases with increasing fluence and decreasing thickness due to the resulting higher kinetic inductance of the detector.
and 46 %: While providing a similar efficiency, the 10 nm devices offer twice as much switching current, 20 \(\mathrm{\SIUnitSymbolMicro A}\) instead of 10 \(\mathrm{\SIUnitSymbolMicro A}\). This \(I_{\mathrm{sw}}\) is also higher than that of the non-irradiated 8 nm detectors. Another comparison can be drawn between the 8 nm SNSPDs with saturating SDE close to 44 % and the 12 nm SNSPD showing 55.3 % SDE: at similar switching currents of about 8 \(\mathrm{\SIUnitSymbolMicro A}\), the 12 nm SNSPD provides a substantially higher SDE. Furthermore, it is noteworthy that the shift of the data point clouds in this two-dimensional parameter space is not monotonic toward higher \(I_{\mathrm{sw}}\) with higher thickness (except for the non-irradiated devices), which could hint at the existence of an optimum thickness between 8 nm and 12 nm to reach simultaneously high SDE and \(I_{\mathrm{sw}}\) via He ion irradiation.
To conclude, by choosing a suitable detector thickness and He ion fluence, one can tune \(I_{\mathrm{sw}}\) and SDE, even to better performance in both parameters simultaneously compared to non-irradiated detectors. Moreover, by individual irradiation of NbTiN SNSPDs with a suitable He ion fluence, one can intentionally modify the performance of selected detectors or even mitigate differences between nominally identical devices.
### Scaling of thin film metrics with He ion fluence
To investigate how He ion irradiation impacts upon the bare NbTiN film metrics such as critical temperature \(T_{\mathrm{c}}\), sheet resistance \(R_{\mathrm{sheet}}\), and electron density \(n_{\mathrm{e}}\), we fabricated cloverleaf structures together with the detectors on the same sample to perform magneto-transport measurements in van-der-Pauw geometry. In Figure 6 we present the dependence of the sheet resistance \(R_{\mathrm{sheet}}\) on the He ion fluence. As expected, \(R_{\mathrm{sheet}}\) is higher for thinner films and increases with increasing He ion fluence as the number of defects in the NbTiN film increases. Interestingly, the sheet resistance does not scale as \(R_{\mathrm{sheet}}=\rho/d_{0}\) with the nominal film thickness \(d_{0}\) as expected if all samples had the same specific resistivity \(\rho\). Even if one subtracts a 1.3 nm thick layer of oxidized NbTiN [52] from the nominal NbTiN thickness, the resulting resistivities of the non-irradiated films are still lower for the thicker films than for the 8 nm film (1.94 µΩ m, 1.73 µΩ m, and 1.73 µΩ m for the 8 nm, 10 nm, and 12 nm films, respectively). Although one might expect \(R_{\mathrm{sheet}}\) to saturate at high fluences due to a saturating defect density in the film, we experimentally observe a continuous increase of \(R_{\mathrm{sheet}}\) with He ion fluence. This could have its origin in noticeable surface sputtering [62] and intermixing [63] at the film/substrate interface by the impinging He ions and an associated reduction of the effective film thickness.
Based on these observations and taking the sheet resistance to be directly proportional to the defect density, we develop a simple physical model. In our model, each ion that passes through the film can create a defect cluster of an average volume \(v_{\mathrm{D}}\) with an efficiency \(\eta\). Moreover, we consider the film volume \(V\) as divided into many volume elements with the same size as the average defect cluster volume \(v_{\mathrm{D}}\), and defect clusters may only be created in volume elements that do not already contain a defect cluster. Those considerations imply that irradiating a film of volume \(V\), thickness \(d\), and area \(A\) using a He ion fluence \(\Delta F\) creates \(\Delta N_{\mathrm{D}}\) new defect clusters according to
\[\Delta N_{\mathrm{D}}=\left(\frac{V-N_{\mathrm{D}}\,v_{\mathrm{D}}}{V}\right) \left(\frac{d}{\sqrt[3]{v_{\mathrm{D}}}}\right)\eta\,A\,\Delta F\;. \tag{4}\]
The first fraction represents the fraction of \(V\) that does not yet contain defect clusters, the second fraction represents the number of potential defect clusters that an impinging ion could create when passing the film along its thickness. Dividing this equation by the total volume \(V\) to obtain an expression for the defect cluster density \(n_{\mathrm{D}}\) and taking the limit \(\Delta F\to 0\) yields
\[\frac{\mathrm{d}n_{\mathrm{D}}}{\mathrm{d}F}=\frac{\eta}{\sqrt[3]{v_{\mathrm{ D}}}}\left(1-v_{\mathrm{D}}\,n_{\mathrm{D}}(F)\right)\;. \tag{5}\]
This differential equation has the solution
\[n_{\mathrm{D}}(F)=\frac{1}{v_{\mathrm{D}}}\left(1-\left(1-n_{\mathrm{D},0}\,v _{\mathrm{D}}\right)e^{-\eta v_{\mathrm{D}}^{2/3}F}\right)\;, \tag{6}\]
where \(n_{\mathrm{D},0}\) is the defect cluster density of the non-irradiated film. We relate the defect cluster density to the specific resistivity \(\rho\) via direct proportionality with a film-thickness dependent constant \(a_{d_{0}}\). To arrive at a model for the sheet resistance, we further account for the previously mentioned surface sputtering due to He ion bombardment by including an effective reduction of the original film thickness \(d_{0}\) with a sputtering rate \(r_{\mathrm{s}}\) and
Figure 6: Sheet resistance vs. He ion fluence for 8 nm, 10 nm, and 12 nm film thickness, including statistical errors. The sheet resistance increases with He ion fluence and decreasing film thickness. All three data sets are described by the fit function given by Equation (7) with the parameters of Table 1.
conclude
\[R_{\rm sheet}(F)=\frac{1}{v_{\rm D}}\left(1-\left(1-n_{\rm D,0}\,v_{\rm D}\right)e^ {-\eta v_{\rm D}^{2/3}F}\right)\frac{a_{d_{0}}}{d_{0}-r_{\rm s}F}\;. \tag{7}\]
We fit this model to the experimental data and present the results of this fitting in Figure 6. Since the sputtering rate \(r_{\rm s}\) and the factors \(n_{\rm D,0}\,v_{\rm D}\) and \(\eta v_{\rm D}^{2/3}\) contain only thickness-independent quantities, we choose these factors as common fit parameters for all three thicknesses. In this way, \(a_{d_{0}}/v_{\rm D}\) is the only individual fit parameter for each thickness, while the other three previously mentioned parameters are shared between all films. As such, we fit the three data sets with six parameters. Table 1 lists the parameters that result in the fit functions shown in Figure 6. Considering the volume \(V\) as divided into many volume elements with the same size as the average defect cluster volume \(v_{\rm D}\), the parameter \(n_{\rm D,0}\,v_{\rm D}\) can be qualitatively interpreted as the fraction of volume elements of the non-irradiated film that contains transport-related defective regions such as grain boundaries and scattering centers. We obtain \(n_{\rm D,0}\,v_{\rm D}=0.79\), indicating that the initial defect cluster density is large and/or the average volume of a defect cluster induced by a single He ion collision is on the order of the NbTiN grain size. A high defect density is not unexpected for a polycrystalline material such as NbTiN, and a defect cascade induced by a single He ion can extend over a volume similar to the NbTiN grain size (few nm) according to a study of He ion irradiation induced defect clusters in copper [64]. Furthermore, the quantity \(\eta v_{\rm D}^{2/3}\) can be understood as the cross section determining the probability that an impinging He ion creates a defect cluster of volume \(v_{\rm D}\). Moreover, the sputtering rate of \(9.4\times 10^{-4}\,\)nm/(ions/nm\({}^{2}\)) implies that an irradiation by \(1000\,\)ions\(\,\)nm\({}^{-2}\) leads to an effective reduction of the NbTiN film thickness by about \(1\,\)nm. Although Zhang _et al._[42] did not observe a change in thickness after irradiating their NbN film with \(500\,\)ions\(\,\)nm\({}^{-2}\), our observation agrees well with the simulated and experimentally observed sputtering yield of typically \(1\,\)nm per \(1000\,\)ions\(\,\)nm\({}^{-2}\) found in literature [62, 65].
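As an illustration of this shared-parameter fitting procedure, a minimal Python sketch using scipy.optimize.least_squares is given below. The fluence and sheet-resistance arrays are placeholder values of roughly the right magnitude, not the measured data of this work, and all function and variable names are ours rather than part of any analysis code used here.

```python
import numpy as np
from scipy.optimize import least_squares

def r_sheet_model(F, a_over_vD, nD0_vD, eta_vD23, r_s, d0):
    """Sheet resistance of Eq. (7): the defect-cluster saturation of Eq. (6)
    combined with an effective thickness reduction d0 - r_s*F by sputtering."""
    defect_term = 1.0 - (1.0 - nD0_vD) * np.exp(-eta_vD23 * F)
    return defect_term * a_over_vD / (d0 - r_s * F)

# Placeholder data: fluence (ions/nm^2) and R_sheet (Ohm) for each thickness (nm).
data = {
    8.0:  (np.array([0.0, 300.0, 600.0, 1200.0]), np.array([292.0, 363.0, 392.0, 430.0])),
    10.0: (np.array([0.0, 300.0, 600.0, 1200.0]), np.array([207.0, 256.0, 274.0, 295.0])),
    12.0: (np.array([0.0, 300.0, 600.0, 1200.0]), np.array([164.0, 201.0, 214.0, 229.0])),
}

def residuals(p):
    # p = [a_8, a_10, a_12, nD0_vD, eta_vD23, r_s]; the last three are shared.
    a_over_vD = dict(zip(data.keys(), p[:3]))
    res = []
    for d0, (F, R) in data.items():
        res.append(r_sheet_model(F, a_over_vD[d0], p[3], p[4], p[5], d0) - R)
    return np.concatenate(res)

p0 = [3000.0, 2600.0, 2500.0, 0.8, 5e-3, 1e-3]
fit = least_squares(residuals, p0)
names = ["a_8/v_D", "a_10/v_D", "a_12/v_D", "n_D0*v_D", "eta*v_D^(2/3)", "r_s"]
print(dict(zip(names, fit.x)))
```

The three \(a_{d_{0}}/v_{\mathrm{D}}\) values are fitted individually per thickness, while \(n_{\mathrm{D,0}}\,v_{\mathrm{D}}\), \(\eta v_{\mathrm{D}}^{2/3}\), and \(r_{\mathrm{s}}\) are shared across all films, mirroring the six-parameter fit described above.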
Figure 7 shows the dependence of the critical temperature \(T_{\rm c}\) on the He ion fluence for \(8\,\)nm, \(10\,\)nm, and \(12\,\)nm thick films. Clearly, \(T_{\rm c}\) decreases continuously by about \(30\,\%\) from the non-irradiated film to the film irradiated with \(1200\,\)ions\(\,\)nm\({}^{-2}\). Similarly to \(R_{\rm sheet}\), also \(T_{\rm c}\) decreases most strongly for small He ion fluences. Interestingly, the measured values of \(T_{\rm c}\) for the \(10\,\)nm and \(12\,\)nm films are very similar for low He ion fluences, although we would expect a lower \(T_{\rm c}\) for the thinner film due to the suppression of superconducting properties when transitioning from bulk to the nanoscale.[67, 68, 69] Furthermore, we fit our experimental data for \(T_{\rm c}\) and \(R_{\rm sheet}\) with the universal scaling law introduced by Ivry _et al._[66], \(d_{0}T_{\rm c}=AR_{\rm sheet}^{-B}\), which relates critical temperature, sheet resistance, and film thickness. Combining then the resulting fit function \(T_{\rm c}(R_{\rm sheet},d_{0})\) with our physical fit function for \(R_{\rm sheet}\), Equation (7), we obtain the fits shown in Figure 7. Appendix C contains details of the fitting procedure used for \(T_{\rm c}\). A recent publication by Ruhtinas and Maasilta [70] contains a study of the critical temperature and the critical current density of comparably thick, \(35\,\)nm and \(100\,\)nm, NbTiN bridges, in which they suppressed superconductivity by He ion irradiation of a narrow line perpendicular to the bridge. Empirically, they observed a logarithmic dependence of \(T_{\rm c}\) and an exponential dependence of \(j_{\rm sw}\) on the He ion fluence \(F\). For the critical temperature, a fit of our data with \(T_{\rm c}(F)=-a\log(F+b)+c\) and fitting parameters \(a\), \(b\), and \(c\) for each of the three thicknesses describes our data even a bit better than the universal scaling law. However, using the universal scaling law, we need only the two fitting parameters \(A\) and \(B\) to describe all three data sets, while the empirical logarithmic fit function requires three fitting parameters for each thickness, a total of nine parameters for our three data sets. Moreover, our data for \(j_{\rm sw}\) as shown in the inset of Figure 2 indicates that the switching current density does not follow the exponential dependence observed by Ruhtinas and Maasilta [70], especially for the smaller He ion fluences. However, we note that compared to our work, Ruhtinas and Maasilta [70] studied the switching current density for higher fluences, ranging from \(2\times 10^{4}\,\)ions\(\,\)nm\({}^{-2}\) to \(12\times 10^{4}\,\)ions\(\,\)nm\({}^{-2}\). Furthermore, since the film thickness has a strong influence on \(T_{\rm c}\), an interesting question is how the detectors' \(T_{\rm c}\) compares between a thicker, higher irradiated SNSPD with a thinner, lower irradiated detector that both show
Figure 7: Critical temperature vs. He ion fluence, including statistical errors. \(T_{\rm c}\) decreases with He ion fluence, with the reduction in \(T_{\rm c}\) at small fluences being the strongest. Surprisingly, \(T_{\rm c}\) of the \(10\,\)nm and \(12\,\)nm films is similar for small fluences, although \(T_{\rm c}\) is typically higher for thicker films. The continuous functions were determined by fitting the \(T_{\rm c}\) and \(R_{\rm sheet}\) data with the universal scaling law introduced by Ivry _et al._[66], \(d_{0}T_{\rm c}=AR_{\rm sheet}^{-B}\), and subsequently using our physical model for \(R_{\rm sheet}\), given by Equation (7), as input to the universal scaling law.
a similar SDE. As elaborated in Appendix B, our data suggests that with the \(10\,\mathrm{n}\mathrm{m}\) detectors one can reach a similar SDE as with the \(8\,\mathrm{n}\mathrm{m}\) thick SNSPDs, while retaining a \(T_{\mathrm{c}}\) of \(8\,\mathrm{K}\) instead of \(7.5\,\mathrm{K}\). This is especially useful for applications with limited cooling powers.
Next, we discuss measurements of the quasiparticle diffusivity \(D\). For this, we measured the temperature dependence of the upper critical magnetic field \(B_{\mathrm{c2}}(T)\) by performing magneto-transport measurements while varying the temperature. From linear fits of \(B_{\mathrm{c2}}(T)\) close to \(T_{\mathrm{c}}\), we extract the slope \(\mathrm{d}B_{\mathrm{c2}}/\mathrm{d}T\) and calculate the diffusivity [50]
\[D=\frac{4k_{\mathrm{B}}}{\pi e}\left[\frac{\mathrm{d}B_{\mathrm{c2}}}{\mathrm{ d}T}\right]_{T\to T_{\mathrm{c}}}^{-1}. \tag{8}\]
Magnetic field sweeps were also performed with the film in the normal conducting state and at constant temperature, while measuring the Hall voltage \(V_{\mathrm{H}}\). Since \(V_{\mathrm{H}}\) varies linearly with the applied magnetic field \(B\) and measurement current \(I\), we determine the Hall coefficient \(R_{\mathrm{H}}=V_{\mathrm{H}}d_{0}/(IB)\) using the slope of a linear fit of the \(V_{\mathrm{H}}(B)\) data. From this, we estimate the electron density \(n_{\mathrm{e}}\) according to \(R_{\mathrm{H}}=-1/(n_{\mathrm{e}}e)\) within the free electron model (see also [49, 50, 71]). Figures 8 and 9 show the quasiparticle diffusivity and the electron density as a function of the He ion fluence. Both are almost constant within the experimental error bars, although one might see a slight decrease of the diffusivity and the electron density with increasing He ion fluence. Since we usually observe decreasing electron density and diffusivity with decreasing film thickness like Sidorova _et al._[49], this may be related to the observed effective thickness reduction of \(0.94\,\mathrm{n}\mathrm{m}\) per \(1000\,\mathrm{ions}\,\mathrm{n}\mathrm{m}^{-2}\) due to sputtering during He ion irradiation.
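To make the two extraction steps explicit, the following is a minimal Python sketch of Eq. (8) and of the Hall analysis, assuming the free-electron relation \(R_{\mathrm{H}}=-1/(n_{\mathrm{e}}e)\) quoted above. All numerical values are illustrative placeholders, not measured data, and the variable names are ours.

```python
import numpy as np
from scipy.constants import e, k as k_B
from scipy.stats import linregress

# Placeholder B_c2(T) points close to T_c (illustrative values only).
T   = np.array([13.0, 13.2, 13.4, 13.6])   # temperature (K)
Bc2 = np.array([1.90, 1.40, 0.90, 0.40])   # upper critical field (T)
dBc2_dT = linregress(T, Bc2).slope         # slope near T_c (T/K), negative

# Quasiparticle diffusivity, Eq. (8), using the magnitude of the slope.
D = 4.0 * k_B / (np.pi * e * abs(dBc2_dT))  # m^2/s
print(f"D = {D * 1e4:.2f} cm^2/s")

# Hall measurement in the normal state: V_H is linear in B at fixed current I.
B   = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])              # field (T)
V_H = np.array([2.1, 1.05, 0.0, -1.05, -2.1]) * 1e-6     # Hall voltage (V), placeholder
I, d0 = 100e-6, 10e-9                                    # current (A), thickness (m)
R_H = linregress(B, V_H).slope * d0 / I                  # Hall coefficient (m^3/C)
n_e = -1.0 / (R_H * e)                                   # free-electron-model density (1/m^3)
print(f"n_e = {n_e * 1e-6:.2e} cm^-3")
```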
## IV Conclusion and outlook
In summary, we used a He ion microscope to locally tune the performance metrics of individual SNSPDs fabricated on the same chip. At the same time, our results demonstrated the possibilities of using thick (up to \(12\,\mathrm{n}\mathrm{m}\)) NbTiN films and He ion irradiation to enhance performance metrics such as system detection efficiency, switching current, decay time, and operating temperature compared to SNSPDs of smaller thicknesses.
Figure 8: Quasiparticle diffusivity vs. He ion fluence, including statistical errors. \(D\) is almost constant within the error bars, and averaging over all fluences reveals the thickness dependence of \(D\). One might see a slight decrease of \(D\) with increasing fluence, which could be explained by the thickness reduction due to sputtering during He ion irradiation and the thickness dependence of \(D\).
Figure 9: Electron density vs. He ion fluence, including statistical errors. Despite fluctuations between measurements, \(n_{\mathrm{e}}\) seems almost constant. One might see a slight decrease with increasing He ion fluence, which could be explained by the thickness dependence of \(n_{\mathrm{e}}\) and the reduction of the effective film thickness during irradiation due to sputtering.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(d_{0}\) (nm) & \(a_{d_{0}}/v_{\mathrm{D}}\) (\(\Omega\)) & \(n_{\mathrm{D,0}}\,v_{\mathrm{D}}\) (1) & \(\eta v_{\mathrm{D}}^{2/3}\) (nm\({}^{2}\)) & \(r_{\mathrm{s}}\) (nm/(ions/nm\({}^{2}\))) \\ \hline \(8\) & \(2957\pm 36\) & & & \\ \(10\) & \(2618\pm 36\) & \(0.79\pm 0.01\) & \((4.7\pm 0.7)\times 10^{-3}\) & \((9.4\pm 0.6)\times 10^{-4}\) \\ \(12\) & \(2484\pm 32\) & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Fit parameters of the physical model according to Equation (7), describing the data in Figure 6. For each thickness, the fit function has its own fit parameter \(a_{d_{0}}/v_{\mathrm{D}}\), while the other parameters are independent of film thickness and therefore shared between the three fit functions.
Thicker detectors exhibit higher optical absorption efficiency and shorter decay times as compared to similar SNSPDs fabricated from thinner films. However, due to the reduction of single-photon sensitivity with detector thickness, such SNSPDs typically offer only small detection efficiencies. Here, we have shown how He ion irradiation can boost the initially negligible SDE (\(<0.05\,\%\)) of \(12\,\mathrm{nm}\) thick SNSPDs at \(4.5\,\mathrm{K}\) by three orders of magnitude to \(55.3\,\%\), resulting in an internal detection efficiency just within the saturated regime. This enables the use of thicker films and the associated advantages--at temperatures reachable with standard pulse-tube or Gifford-McMahon cryocoolers.[72] Furthermore, we found that by combining He ion irradiation and detectors fabricated from thicker films, one can enhance SDE and \(I_{\mathrm{sw}}\) while reducing the decay time compared to non-irradiated smaller-thickness SNSPDs. While reduced decay times result in increased maximum count rates, higher \(I_{\mathrm{sw}}\) and the associated higher detection voltage pulses imply a higher signal-to-noise ratio, which reduces the electrical-noise-induced timing jitter [16] and the necessary amplification of the electrical readout circuit.
Using a He ion microscope to irradiate individual detectors and cloverleaf structures on the same chip with different fluences allowed us to precisely study SNSPD and film properties over He ion fluences ranging from \(0\,\mathrm{ions\,nm^{-2}}\) to \(2600\,\mathrm{ions\,nm^{-2}}\), avoiding any errors that could arise from the high sensitivity of device properties to the exact sputter deposition or the subsequent fabrication process. We found that the increase of sheet resistance with the He ion fluence can be well described by a simple physical model that includes defect generation in the NbTiN film and an effective reduction of thickness due to sputtering during He ion bombardment. Moreover, the decrease of critical temperature with the He ion fluence can be described by combining our physical model for \(R_{\mathrm{sheet}}\) with the universal scaling law from Ivry _et al._[66], which relates critical temperature, film thickness, and sheet resistance. At the same time, the quasiparticle diffusivity and electron density stay almost constant for the He ion fluences studied in this work. These magneto-transport measurements also show that irradiation of SNSPDs with He ions continuously changes their properties--although one can employ irradiation to enhance the SNSPD performance, excessive He ion irradiation ultimately leads to a vanishing, non-detectable signal when a photon is absorbed, rendering the SNSPD inoperative. These findings could be particularly interesting for applications where SNSPDs are exposed to radiation and high-energy particles.[25]
Besides the general enhancement of performance metrics of NbTiN SNSPDs by using thicker films combined with He ion irradiation, one can use targeted irradiation of individual devices with a He ion microscope for example in large SNSPD arrays to mitigate inhomogeneities of detector performance between pixels (or even dark pixels). This would be challenging without a post-processing technique such as site-selective He ion irradiation. Furthermore, targeted He ion irradiation enables the optimization of detectors for different performance metrics on the same chip, also after fabrication.
###### Acknowledgements.
The authors thank Kirill Fedorov and Stefan Appel for helpful discussions. We gratefully acknowledge support from the German Federal Ministry of Education and Research (BMBF) via the projects PhotonQ (13N15760), SPINNING (13N16214), MARQUAND (BN105022), and "Photonics Research Germany" (13N14846), via the funding program "Quantum technologies - from basic research to market" (16K1SQ033, 13N15855, and 13N15982), as well as from the German Research Foundation (DFG) under Germany's Excellence Strategy EXC-2111 (390814868) and projects INST 95/1220-1 (MQCL) and INST 95/1654-1 FUGG. This research is part of the "Munich Quantum Valley", which is supported by the Bavarian state government with funds from the "High-tech Agenda Bayern Plus".
## Appendix A Simulation of optical absorption in SNSPDs
The optical absorption in a detector provides an upper limit for its SDE. To determine the absorption for the detectors fabricated in this work, we performed finite-difference time domain (FDTD) simulations (Ansys Lumerical). Input parameters for these simulations are the width and thickness of the nanowire and the optical constants of the superconducting film that provides the basis for the detectors. We controlled the thickness of the films by measuring the sputter deposition rate and selecting the deposition time accordingly. The optical constants were measured with a variable angle spectroscopic ellipsometer (M-2000, J.A. Woollam Co.). After detector fabrication, we evaluated the width of 22 representative detectors (Genesys ProSEM) and determined their mean wire width as listed in Table 2. Moreover, for the simulations we chose a plane-wave source with its polarization parallel to the nanowire, in line with the experiment. Table 2 shows the simulation input parameters and results for the optical absorption of the detectors of this work. The design of all detectors consists of \(100\,\mathrm{nm}\) wide wires and a fill factor of \(50\,\%\) as described in Section II. The measured width of the fabricated detectors deviates from this nominal value due to slight under-/overexposure during electron-beam lithography. This, however, does not change the reasoning within this work: For the same pitch, increased wire width (increased fill factor) or increased thickness both increase optical absorption and switching current, while reducing the detector's sensitivity to single photons.
## Appendix B Comparison of \(T_{\rm c}\) and SDE
The critical temperature \(T_{\rm c}\) of SNSPDs is especially important for applications with limited cooling capabilities. In this section, we address the question of how \(T_{\rm c}\) compares between thicker, higher irradiated detectors and thinner, lower irradiated SNSPDs that both show a similar SDE. For this, we compare the detector's SDE with the thin-film \(T_{\rm c}\) in Figure 10. The data suggests that the 10 nm detectors can reach an SDE of 44 %, comparable to the 8 nm SNSPDs, while retaining a \(T_{\rm c}\) of 8 K instead of 7.5 K. This can be particularly useful for applications because a higher \(T_{\rm c}\) reduces the requirements for the cooling system to operate the SNSPDs.
## Appendix C Fitting of \(T_{\rm c}\) with universal scaling law
To describe the data for the critical temperature in Figure 7, we use the universal scaling law, introduced by Ivry _et al._[66],
\[d\,T_{\rm c}=AR_{\rm sheet}^{-B}\;, \tag{1}\]
which relates film thickness, critical temperature, and sheet resistance. Figure 11 shows the critical temperature, multiplied with the thickness of the non-irradiated film, \(d_{0}T_{\rm c}\). Evidently, this quantity exhibits a linear dependence on the sheet resistance on a log-log scale. As the data of the differently irradiated 8 nm, 10 nm, and 12 nm thick films approximately collapse on a single line, we choose one joint fitting function to determine the constants \(A\) and \(B\) of the universal scaling law and obtain the unitless constants \(A=1.44\times 10^{4}\) and \(B=0.957\), provided that the data for \(d\), \(T_{\rm c}\), and \(R_{\rm sheet}\) are given in nm, K, and \(\Omega\), respectively. With these parameters, we obtain the fitting functions shown in Figure 7.
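A minimal Python sketch of this joint log-log fit is shown below, assuming illustrative placeholder values in place of the measured \((d_{0},T_{\rm c},R_{\rm sheet})\) triples; for these placeholders the fitted \(A\) and \(B\) will of course differ from the values quoted above, and all names are ours.

```python
import numpy as np

# Illustrative (d0 [nm], T_c [K], R_sheet [Ohm]) values standing in for the
# measured data of differently irradiated films; not the actual data set.
d0 = np.array([8.0, 8.0, 8.0, 10.0, 10.0, 12.0, 12.0])
Tc = np.array([9.6, 8.4, 7.5, 10.2, 8.9, 10.5, 9.3])
Rs = np.array([290.0, 380.0, 430.0, 210.0, 290.0, 165.0, 225.0])

# Universal scaling law (Ivry et al.): d0*T_c = A * R_sheet^(-B), which is a
# straight line in log(d0*T_c) vs log(R_sheet); fit one line to all films.
slope, intercept = np.polyfit(np.log(Rs), np.log(d0 * Tc), 1)
A, B = np.exp(intercept), -slope
print(f"A = {A:.3e}, B = {B:.3f}")

def tc_from_rsheet(r_sheet, d0_nm):
    """T_c predicted from a (possibly fluence-dependent) sheet resistance,
    e.g. from the R_sheet(F) model of Eq. (7), via the fitted scaling law."""
    return A * r_sheet ** (-B) / d0_nm

print(f"T_c(350 Ohm, 8 nm) = {tc_from_rsheet(350.0, 8.0):.2f} K")
```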
It is interesting to note that the linearity of the three data sets shown in the upper part of Figure 11 is lost when multiplying \(T_{\rm c}\) with the effective thickness \(d_{\rm eff}=d_{0}-r_{\rm s}\,F\) instead of the thickness before irradiation, as shown in the lower part of Figure 11. As introduced in Section III.2, this reduction of the effective thickness by 0.94 nm per 1000 ions nm\({}^{-2}\) accounts for surface sputtering and intermixing at the film/substrate interface. At present, we can only give a qualitative explanation why the effective thickness is important to describe the continuous increase of \(R_{\rm sheet}\) in Figure 6 and why it is not relevant for describing \(T_{\rm c}\): Via AFM measurements, we observed a surface roughening by He ion irradiation due to surface sputtering and redeposition. Considering now a thin slab of the rough surface, parallel to the sample plane, it consists of many connected islands of NbTiN (or an oxide thereof). On the one hand, this slab has a higher resistivity in the normal conducting state due to the voids; on the other hand, it should have a
\begin{table}
\begin{tabular}{c c c c c} \hline \(d_{0}\) (nm) & \(w\) (nm) & \(n\) (1) & \(k\) (1) & \(\alpha\) (\%) \\ \hline
8 & \(92.6\pm 4.0\) & 2.47 & 3.18 & 44.2 \\
10 & \(107.3\pm 1.9\) & 2.48 & 3.33 & 53.1 \\
12 & \(115.3\pm 3.4\) & 2.48 & 3.54 & 57.7 \\ \hline \end{tabular}
\end{table}
Table 2: Simulation parameters and results for the absorption in the 8 nm, 10 nm, and 12 nm thick detectors of this work. \(d_{0}\) is the nominal thickness of the detector, while \(w\) represents its mean wire width. \(n\) and \(k\) are refractive index and extinction coefficient, respectively. The absorption fraction \(\alpha\) denotes the percentage of light that is absorbed in the detector, obtained from FDTD simulations.
Figure 10: SDE of differently irradiated SNSPDs vs. \(T_{\rm c}\) of corresponding CLs of three different thicknesses. The relative uncertainty of the SDE is 2 %, the uncertainty of the temperature amounts to 50 mK (error bars not shown for clarity).
\(T_{\rm c}\) similar to that of a slab without voids as long as the voids are smaller than the coherence length of the superconductor. Of course, further investigation is necessary to better understand the role of surface sputtering and intermixing at the film/substrate interface as well as their influence on thickness, sheet resistance, and critical temperature of the thin film.
|
2310.20066
|
Higher-order tails and RG flows due to scattering of gravitational
radiation from binary inspirals
|
We establish and develop a novel methodology to treat higher-order non-linear
effects of gravitational radiation that is scattered from binary inspirals,
which employs modern scattering-amplitudes methods on the effective picture of
the binary as a composite particle. We spell out our procedure to study such
effects: assembling tree amplitudes via generalized-unitarity methods and
employing the closed-time-path formalism to derive the causal effective
actions, which encompass the full conservative and dissipative dynamics. We
push through to a new state of the art for these higher-order effects, up to
the third subleading tail effect, at order G5N and the 5-loop level, which
corresponds to the 8.5PN order. We formulate the consequent dissipated energy
for these higher-order corrections, and carry out a renormalization analysis,
where we uncover new subleading RG flow of the quadrupole coupling. For all
higher-order tail effects we find perfect agreement with partial observable
results in PN and self-force theories, where available.
|
Alex Edison, Michèle Levi
|
2023-10-30T22:31:34Z
|
http://arxiv.org/abs/2310.20066v2
|
# Higher-Order Tails and RG Flows due to Scattering of Gravitational Radiation from Binary Inspirals
###### Abstract
We establish and develop a novel methodology to treat higher-order non-linear effects of gravitational radiation that is scattered from binary inspirals, which employs modern scattering-amplitudes methods on the effective picture of the binary as a composite particle. We spell out our procedure to study such effects: assembling tree amplitudes via generalized-unitarity methods and employing the closed-time-path formalism to derive the causal effective actions, which encompass the full conservative and dissipative dynamics. We push through to a new state of the art for these higher-order effects, up to the third subleading tail effect, at order \(G_{N}^{5}\) and the 5-loop level, which corresponds to the 8.5PN order. We formulate the consequent dissipated energy for these higher-order corrections, and carry out a renormalization analysis, where we uncover new subleading RG flow of the quadrupole coupling. For all higher-order tail effects we find perfect agreement with partial observable results in PN and self-force theories, where available.
## 1 Introduction
The direct observation of gravitational waves (GWs) coming from binary black hole (BBH) merger events [1; 2; 3; 4; 5; 6] has shifted precision predictions of GW and BBH structure from theoretical curiosity to phenomenological imperative. With four gravitational-wave observatories active through the LIGO-VIRGO-KAGRA network [7; 8; 9] and new ground- and space-based detectors on the way [10; 11; 12], the increasing scope and depth of incoming gravitational-wave data threatens to exceed our currently available predictions. To meet this looming demand, the last few decades have seen an explosive growth on the theoretical frontier.
The longest-running framework for studying GW sources during the significant inspiral phase is post-Newtonian (PN) General Relativity, which deals with simultaneously weakly interacting and slowly moving bodies; we refer the interested reader to Ref. [13] for a comprehensive Living Review on the subject. With both the orbital velocity and the gravitational coupling as small parameters, PN computations build on classical two-body Newtonian dynamics, and compute the perturbative corrections in these small parameters induced by GR (hence the "post-Newtonian" moniker). Note that the PN approximation treats the perturbation constants as \(G_{N}\sim v^{2}/c^{2}\ll 1\), and thus admits half-PN counting through single powers of \(v\). Due to the long-standing unique prominence of inspiraling binaries as GW sources, PN calculations have been the primary basis for the generation of theoretical gravitational waveforms.
The state of the art in PN theory is currently focused on the 5, 5.5, and 6PN order via multiple approaches, including traditional GR methods [14; 15; 16], as well as particle-physics inspired [17] effective field theory (EFT) methods using Feynman technology [18; 19; 20]. Starting at 2.5PN, radiative effects become essential. The leading dissipative contribution, originally derived by Einstein, and later by Burke and Thorne [21; 22; 23], has come to be known as the "radiation-reaction" term. As of 4PN, the system dynamics must also account for a collection of phenomena known as "tail" effects, in which radiation from the system scatters off of the system's own potential background [24; 25]. The leading tail effect has also been well-studied in the EFT context [26; 27; 28; 29]. On the other hand, the subleading "tail-of-tail" has received limited direct study [26; 30], and the sub-subleading "tail-of-tail-of-tail" (T\({}^{3}\)) has only been computed via traditional GR methods [31], without a counterpart EFT computation.
It is also possible to release the small-velocity approximation used in the PN expansion, and work instead with Special Relativity as the base theory on top of which gravitational sources produce small fluctuations, with \(G_{N}\) serving as the only perturbation parameter. This aptly named "post-Minkowskian" (PM) approximation has seen a surge of interest in recent years, thanks in part to the close similarity between gravitational-wave source calculations in PM and computing effective potentials via scattering amplitudes [32]. The state of the art in PM calculations has recently been pushed to 4PM [33; 34; 35; 36; 37; 38; 39; 40], and is currently one of the driving forces in understanding certain classes of Feynman integrals. Even though these computations are carried out in the scattering regime, there are methods for extracting quantities relevant to the bound problem from them [41; 42; 43; 44; 32; 45]. However, these
mappings between the bound and scattering regimes are expected to break down beyond 4PM as a result of the nonlocal-in-time contributions coming from the tails and similar higher-order effects [24; 25; 43; 46].
Thus, the study of tail effects is critical both for their direct relevance to the real-world PN sources, as well being a key piece for possibly understanding the connection between bound and unbound systems. This work extends and pushes our study of higher-order tail effects by building on our previous letter, Ref. [47]. In that paper, we briefly introduced a novel approach to calculating higher-order tail effects by exploiting generalized unitarity methods [48; 49; 50; 51]. Using this approach, we were able to compute the quadrupole-sourced tail effects up through \(\mathrm{T}^{3}\) at \(G^{4}\), which matched state-of-the-art results from traditional GR methods [31], and surpassed previous attempts using EFT techniques [26; 27; 28; 29]. In this work, we provide a significantly more detailed discussion of the new methodology, including laying out the constituent building-block amplitudes, as well as in-depth calculations through the tail-of-tail. As further novel results of this work, we calculate through to the tail-of-tail-of-tail (\(\mathrm{T}^{4}\)) contribution at \(\mathrm{G}^{5}\) to the quadrupole-quadrupole effective action, eq. (30), and energy dissipation, eqs. (29) and (31). With the new energy-loss term, we are able to compute new subleading RG flow of the quadrupole source, eq. (32), extending the results of Refs. [26; 52] to further allow for prediction of all _subleading_ logs in tail-induced energy-loss.
The structure of this paper is as follows. In the next couple of sections, we provide a somewhat disjoint review of relevant material on EFT and amplitudes methods for our hybrid approach to the computation of tail effects. In section 2, we focus on the EFT setup for the tails problem. We discuss the relevant separation of scales and the emergence of tails as a phenomenon that signals the breakdown of this separation. In section 2.1 we lay out the closed-time-path (CTP) formalism adapted by Galley et al. [53; 54] from QFT to our classical context as a method of computing dissipative effective actions. Subsequently, in section 3 we present some of the basic results and modern methods common in the study of scattering amplitudes, and point out how they will be of use for the computation of tails.
Sections 4 and 5 contain the heart of this paper. In section 4 we elaborate on the elements of our novel methodology, and demonstrate it on the leading radiation-reaction and tail effects, which have been well-studied in terms of effective actions. In section 5, we proceed to present the computation of new effective actions of higher-order tails up through \(\mathrm{T}^{4}\), the first ever computation of this effect, using generalized unitarity methods. We relegate details about integration to appendices A and B. In section 6, we first formulate the energy loss through the use of the CTP approach for the binary inspiral, and we explicitly extract the related contributions to the radiated energy. With this collection of dissipation corrections in hand, we proceed to analyze them to determine the renormalization and RG flow of the quadrupole source, finding agreement with previous leading EFT results [26; 27], and extending the RG flow to subleading order. Finally, in section 7 we cross-check our new higher-order results against partial ones, known from PN and self-force theory, where they overlap [31; 55; 56], and find perfect agreement.
EFT of Binary as Composite Particle
The effective field theory description of binary inspirals in PN gravity has been formally defined since Goldberger and Rothstein's seminal work [17]. We briefly review it here. We direct interested readers to Refs. [57; 58] for more recent comprehensive reviews of the subject.
Starting from the binary PN assumptions of small velocity and weak field, the two constituent massive objects have _non-relativistic_ momenta given by
\[p_{i}^{\mu}\sim(m_{i}v^{2},m_{i}v) \tag{1}\]
governed by two small quantities, approximately equated by the virial theorem,
\[v^{2}\sim\frac{G_{N}m}{r}\ll 1, \tag{2}\]
with \(m\) the characteristic mass of the gravitating particles, \(r\) their orbital separation, and \(v\) their characteristic orbital velocity. The gravitational field due to the interaction between the two inspiraling bodies can then be split into two graviton modes:
\[k^{\mu}\sim\begin{cases}(v/r,1/r)&\text{potential (near zone)}\\ (v/r,v/r)&\text{radiation (far zone)}\end{cases}\,, \tag{3}\]
and the recoil from each of the massive bodies interacting with the gravitons is assumed to be negligible, which allows to handle the components as classical sources on non-dynamical worldlines. The momentum of the potential modes has a dominant spatial component, and thus they are treated as space-like instantaneous mediators. As their name suggests, these modes are responsible for the gravitational binding of the two-body system. From these considerations, a full effective action at the orbital scale was defined in Ref. [17] with manifest power-counting following eq. (2). This effective action has been used extensively for computations of the binding energy of gravitational binaries, and even extended to include spin-induced effects of the binary [59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84].
Beyond the orbital-scale conservative sector, namely when radiation modes also participate in interactions, it is beneficial to consider as a starting point the entire binary system as a single point particle moving on a worldline, with its internal structure modeled by multipole moments coupled to gravity. This effective action of the binary as a composite particle, which is analogous to that of the single compact object with its spin-induced multipoles at the orbital scale, is given by [26; 58; 63; 68; 85]:
\[S_{\text{eff(c)}}[g_{\mu\nu},y_{c}^{\mu},e_{c\,A}^{\;\mu}]=-\frac{1}{16\pi G} \int d^{4}x\sqrt{g}\,R\left[g_{\mu\nu}\right]+\,S_{\text{pp(c)}}[g_{\mu\nu}(y _{c}),y_{c}^{\mu},e_{c\,A}^{\;\mu}](\sigma_{c})\,, \tag{4}\]
with
\[S_{\text{pp(c)}}[h_{\mu\nu},y_{c}^{\mu},e_{c\,A}^{\;\mu}](t)=- \int dt\sqrt{g_{00}}\left[E(t)+\frac{1}{2}\epsilon_{ijk}L^{k}(t) \left(\Omega_{\text{LF}}^{ij}+\omega_{\mu}^{ij}u^{\mu}\right)\right.\] \[\left.-\sum_{l=2}^{\infty}\biggl{(}\frac{1}{l!}I^{L}(t)\nabla_{L- 2}\mathcal{E}_{i_{l-1}i_{l}}-\frac{2l}{(l+1)!}J^{L}(t)\nabla_{L-2}\mathcal{B}_ {i_{l-1}i_{l}}\biggr{)}\right] \tag{5}\]
in terms of the time coordinate \(t\) as the composite-particle worldline parameter. The point-particle action includes gravitational couplings to the particle's total energy \(E(t)\), its angular momentum \(L^{k}(t)\), and higher multipoles of charge and current type with definite parity, \(I^{L}(t)\) and \(J^{L}(t)\) respectively, bearing symmetric traceless SO(3) (spatial Euclidean) tensor indices. \(\mathcal{E}\) and \(\mathcal{B}\) are the respective even- and odd-parity components of the gravitational curvature tensor 1. In the current work, we limit ourselves to the leading (static) gravitating energy, \(E(t)\to E+\mathcal{O}(G_{N})\), ignoring various subleading corrections due to the gravitational interactions.
Footnote 1: \(\Omega_{\rm LF}\) is the generalized angular velocity in the local frame, and \(\omega\) the Ricci rotation coefficient or spin connection, though they will not be relevant to the present work.
Since gravity is self-interacting, integrating out the gravitational field, starting from this EFT of the composite particle, will involve fully analyzing interactions that include both potential and radiation modes, as the separation of scales inevitably breaks down at a sufficiently high perturbative order of the EFT. The simplest class of such effects is the scattering of a radiation-mode graviton with one or more potential-mode gravitons. The interaction with a single potential mode is referred to as the "tail" effect. We refer to interactions with \(n\) potential modes as (tail-of-)\({}^{n-1}\)tail or T\({}^{n}\) for brevity 2.
Footnote 2: Note that the nomenclature used by Blanchet [13, 30] makes a distinction between, for instance, tail\({}^{2}\) and tail-of-tail. In our EFT approach it is not particularly useful to make such a distinction.
Successive terms in the multipole expansion carry increasing powers of radiation-mode momenta. As such, the tails related with each of these multipole sources enter at staggered orders in perturbation theory. In this work we analyze effects that are sourced only by quadrupoles, which yield the leading PN contributions in growing orders of the gravitational coupling constant, \(G_{N}\). As we will see below, the analysis of this EFT of the composite particle at the radiation scale requires regularizing and renormalizing ultraviolet divergences. One can simply follow the standard method to handle renormalization in an EFT by introducing renormalized couplings, which means in this case modifying the coefficients of the multipole source terms to absorb the divergences. Matching with the orbital-scale EFT would align such ultraviolet divergences with infrared divergences in the small-scale theory.
Two approaches to addressing the tails have been presented in the literature. The first we refer to as the "one-point" formalism, which was used by Goldberger and Ross to construct the gravitational radiation from the tail and tail-of-tail [26]. In this setup, one computes the graviton one-point amplitude, \(\mathcal{A}_{h}(k^{\mu})\sim\varepsilon_{ij}(k^{\mu})I^{ij}(k^{0})\), with the (Fourier transform of the) quadrupole, \(I^{ij}(\omega)\), as a classical source. From the one-point amplitude it is simple to construct a graviton differential emission rate, which can then be appropriately weighted and integrated to extract radiation effects, for instance the radiated four-momentum. This approach is elegant and well-motivated physically when all one desires is extracting dissipative features. However, it is ill-suited for extracting the effect on the _conservative dynamics_ of the binary induced by the tails, which are of phenomenological import [14, 15, 19, 20, 27, 57, 58, 86].
The other approach, in which we base the current work and which does also account for the conservative dynamics, computes the effective "two-point function" of the
quadrupole on the worldline that results from _integrating out_ gravitational interactions with the quadrupole source. In this picture we can consider the gravitational field integrated out as an inaccessible degree of freedom, from which the quadrupole can gain or lose energy. The final result will be an effective action which takes the following form in frequency domain:
\[S^{\rm eff}=\int{\rm d}\omega\ f(\omega)I^{ij}(\omega)I_{ij}(-\omega)\,, \tag{6}\]
where the shorthand \(\kappa(\omega)\equiv I^{ij}(\omega)I_{ij}(-\omega)\) will be useful [87]. Because the result is an effective action for the evolution of a quadrupole, we have access to both the conservative dynamics through a Lagrangian, and to radiative observables, via some generalized calculus of variations within the Closed-Time-Path (CTP) formalism, see the following section 2.1 and later section 6.1. This CTP approach was put forward and popularized by Galley et al, starting in [88], and has since been adopted as the standard approach in EFT computations of tails [27, 89, 90, 91, 92]. Prior to our recent letter [47], there had been no attempt to use this approach to tackle higher-order (and unknown) tails.
### Closed-Time-Path Formalism
The nonconservative sector requires a more intricate treatment since the radiating binary is in fact an open system that leaks energy via gravitational waves, so time-reversal symmetry no longer holds beyond the conservative sector. As we noted above, while specific setups can be used to model the radiative features of the binary [26], these still run into difficulties disentangling the causal radiation effects [52]. Over the last decade, it has become clear that care must be taken when handling the nonconservative effects present in inspiraling binaries [53, 54, 88, 89]. In fact, the approach detailed in Galley et al. [54, 89] successfully dealt with radiation reaction and tails at the level of the effective action [88, 27, 53]. This approach is based on the closed-time-path (CTP) (or "in-in") formalism [93, 54, 53, 92], and also fully accounts for the conservative effects due to tails.
The CTP approach adopted to our worldline EFT provides a classical method of integrating out the gravitational field degrees of freedom while maintaining time-asymmetry at the level of the resulting effective action. This is achieved by formally _doubling all degrees of freedom_ in the initial full action of the system, and defining the initial CTP action as:
\[S_{\rm CTP}[\{\dots\}_{1},\{\dots\}_{2}]=S[\{\dots\}_{1}]-S^{*}[\{\dots\}_{2} ]\,, \tag{7}\]
where \(\{\dots\}_{i}\) denotes the full set of degrees of freedom of the system, including all worldline degrees of freedom, and all the field modes which we plan on integrating out. After integrating out the field degrees of freedom, we will obtain a CTP effective action of the form:
\[S^{\rm eff}_{\rm CTP}=\int{\rm d}t\ \left[L(\overline{\{\dots\}}_{1},t)-L( \overline{\{\dots\}}_{2},t)+K(\overline{\{\dots\}}_{1},\overline{\{\dots\}}_ {2},t)\right] \tag{8}\]
where \(\overline{\{\dots\}}_{1/2}\) are the remaining worldline degrees of freedom, \(L\) is identified as the _conservative_ Lagrangian, and \(K\) represents the _nonconservative_ potential. While the initial action does not contain such a history-mixing term, the process of integrating out some of
the degrees of freedom will produce an effective mixing contribution. Once we have obtained the effective action in terms of the doubled variables, we extract physical dynamics and observables by varying the action with respect to the \(\overline{\{\dots\}}_{1}\) variables, e.g., and then taking the _physical limit_ (PL), \(\left[\overline{\{\dots\}}_{1}-\overline{\{\dots\}}_{2}\right]\big|_{\rm PL}\equiv 0\).
Prior to integrating out it is more useful to perform a change of variables to \(\{\dots\}_{+}\equiv[\{\dots\}_{1}+\{\dots\}_{2}]/2\), and \(\{\dots\}_{-}\equiv\{\dots\}_{1}-\{\dots\}_{2}\), which leads to a modified propagator matrix:
\[G_{+-}=G_{\rm adv},\quad G_{-+}=G_{\rm ret};\quad G_{++}=G_{--}=0, \tag{9}\]
where the scalar component of the graviton propagator is given by:
\[G_{\rm ret/adv}(x-x^{\prime})=\int\frac{{\rm d}^{D}p}{(2\pi)^{D}}\frac{e^{-ip _{\mu}(x-x^{\prime})^{\mu}}}{(p^{0}\pm i0)^{2}-|\vec{p}|^{2}} \tag{10}\]
in the mostly-minus metric convention, with \(p^{0}_{\rm ret}\equiv p^{0}+i0\) for retarded, \(p^{0}_{\rm adv}\equiv p^{0}-i0\) for advanced, and where \(D\equiv d+1\) with \(d\) the number of spatial dimensions of \(\vec{p}\). In this basis, the conservative contribution to the resulting effective action in eq. (8) is identified as the part that is symmetric under \(\overline{\{\dots\}}_{+}\leftrightarrow\overline{\{\dots\}}_{-}\), while the remaining terms are identified as the nonconservative \(K\). Observables in this basis are extracted by performing the calculus of variations with respect to the \(\overline{\{\dots\}}_{-}\) variables, after which the physical limit \(\overline{\{\dots\}}_{+}\to\overline{\{\dots\}}_{\rm PL},\ \overline{\{\dots\}}_{-}\to 0\) is applied. In section 6.1 below, we will derive the dissipated energy in the CTP formalism, also specialized in particular to the case of tails in binary inspirals.
As discussed in section 2, the effective action due to tails, which amounts to a two-point function of the mass quadrupole, results from integrating out its coupling to the gravitational field. Carrying out this task in the CTP framework is rather straightforward [88; 89; 27]. First, we endow the quadrupoles with CTP labels, \(I^{ij}(\omega)\to I^{ij}_{a}(\omega)\), \(\kappa(\omega)\to\kappa_{ab}(\omega)\) (we use \(a\),\(b\) for CTP indices, reserving Latin letters near \(i\),\(j\) for space-like indices). Then we sum over the possible CTP labels for the two quadrupoles, while making consistent CTP label choices for the internal radiation-mode gravitons. In the case of tails, this consistent label choice amounts to having all radiation-mode propagators, \(G^{\rm rad}_{ab}\), aligned with the quadrupole labels, e.g.
\[I^{ij}_{-}(\omega)G^{\rm rad}_{-+}G^{\rm rad}_{-+}\dots I^{ij}_{+}(-\omega)\,. \tag{11}\]
Because the CTP propagators address causal propagation and the potential-mode gravitons are taken to be instantaneous, we do not dress them with CTP labels.
## 3 Amplitudes and Generalized Unitarity
The study of scattering amplitudes via the unitarity paradigm has a long and storied history, and this work is not intended as a review of the field. Instead we mention a few specific points that are relevant for the work at hand, so that non-experts have a point of reference. For readers interested in more details, we refer to Refs. [94; 95; 96; 97] as broad introductions to the subject. For discussions particularly centered on multi-loop unitarity methods, we refer the reader to Refs. [98; 99; 100; 101; 102; 103; 104; 105; 106].
### Tree Amplitudes
Tree amplitudes have a number of textbook properties that nonetheless serve as focus points for their study. They are a description of _local_ scattering interactions between _on-shell_ external particles that are _gauge invariant_ and obey _factorization_ rules. By on-shell, we mean that all of the external particles have energy-momentum vectors obeying \(p_{i}^{2}=m_{i}^{2}\) where \(m_{i}\) is the particle's rest mass. Amplitudes describing massless spin-1 particles are invariant under linearized gauge transformations, i.e. if we explicitly factor polarization vectors out of an amplitude, they will obey the Ward identity
\[\mathcal{A}_{n}=\mathcal{A}_{n}^{\mu_{1}\cdots\mu_{n}}\prod\varepsilon_{i}^{\mu _{i}}\to p_{i}^{\mu_{i}}\mathcal{A}_{n}^{\mu_{1}\cdots\mu_{n}}\prod_{j\neq i} \varepsilon_{j}^{\mu_{j}}=0\,. \tag{10}\]
Amplitudes of massless spin-2 particles (gravitons) are invariant under linear diffeomorphisms, realized via
\[\mathcal{M}_{n}=\mathcal{M}_{n}^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}\prod \varepsilon_{i}^{\mu_{i}}\varepsilon_{i}^{\nu_{i}}\to p_{i}^{\mu_{i}} \varepsilon_{i}^{\nu_{i}}\mathcal{M}_{n}^{\mu_{1}\nu_{1}\cdots\mu_{n}\nu_{n}}\prod_{j\neq i} \varepsilon_{j}^{\mu_{j}}\varepsilon_{j}^{\nu_{j}}=0\,. \tag{11}\]
Here we have also introduced a standard notation for dealing with spin-2 amplitudes: since we require graviton polarizations to be symmetric traceless tensors we write them as an outer product of two identical null polarization vectors
\[\varepsilon^{\mu\nu}\to\varepsilon^{\mu}\varepsilon^{\nu}\qquad\varepsilon^{ \mu}\varepsilon_{\mu}=0\,. \tag{12}\]
Locality is the statement that the amplitude is expressible in terms of point-like interactions, which in momentum space translates to only allowing interactions that are polynomials of momenta and polarizations. Similarly, factorization is a statement about the types of pole structures that appear in momentum space. Concretely, factorization requires that the residue of a momentum-space amplitude on a configuration where a sum of external momenta goes on-shell must be equal to a product of lower-point amplitudes summed over all theory-allowed intermediate on-shell states
\[\underset{(p_{1}+\cdots+p_{j})^{2}=0}{\text{Res}}\,\mathcal{A}(1,\ldots,j,j+1, \ldots,n)=\sum_{\text{states of }i}\delta(p_{i}^{2})\mathcal{A}(1,\ldots,j,i) \mathcal{A}(i,j+1,\ldots,n)\,. \tag{13}\]
We provide an example of the sum over states below in eq. (10). Graphically, we often represent factorization as
[Diagram omitted: the \(n\)-point amplitude factorizing on the pole \((p_{1}+\cdots+p_{j})^{2}=0\) into two lower-point amplitudes joined by the on-shell intermediate state \(i\).]
In addition to these standard properties, amplitudes involving non-Abelian color charges, e.g. Yang-Mills theory, exhibit color-kinematics duality [103; 108]. The amplitudes in these theories can be written in terms of numerators dressing only cubic diagrams, where the kinematic piece of the numerators obeys the same algebraic relations as the non-Abelian color charge factors. These numerators can then be "double-copied" by replacing the color factors with another set of kinematic numerators, leading to amplitudes in uncolored theories. The most relevant example for the current work is that tree amplitudes for gravitational interactions, both self-interactions and coupling to matter, can be generated as the double-copy of Yang-Mills amplitudes, also potentially involving matter [103; 109; 110].
Over the years, different tree construction methods have been developed which manifest different properties of amplitudes. For instance, direct calculation via textbook Feynman rules manifests locality, factorization, and connection to path integrals and action principles but obscures gauge invariance and relationships between theories. On the other hand, Britto-Cachazo-Feng-Witten recursion-based constructions [111; 112; 113; 114] manifest gauge invariance, on-shell conditions, and high-energy behavior at the cost of no longer manifesting locality. Such recursion relations have been applied in studying candidates for "black hole + gravity" Compton amplitudes [115; 116; 117; 118]. Of primary relevance to the current work is the Cachazo-He-Yuan formulation of scattering amplitudes [119; 120], which manifests the double-copy relations between gauge theory amplitudes and gravitational amplitudes at the cost of introducing an auxiliary space. One of the current authors and Fei Teng developed the publicly available package IncreasingTrees for efficiently computing gauge and gravity tree amplitudes, including minimal matter couplings, in this formalism [121]. Using it, we are able to extract all of the building-block amplitudes needed for the tail computations, which we discuss below in section 4.1.
### Generalized Unitarity Cuts
The central concept of the unitarity program is the combining of tree amplitudes into loop data via _generalized unitarity cuts_. The core of generalized unitarity methods comes from two connected ideas. First, through tensor reduction and integration relations [122; 123; 124; 125], loop amplitudes can be written in terms of a basis of scalar (and often purely-propagator) integrals. When written in this basis, the coefficient of each basis integral is a theory-dependent algebraic function of the external data and spacetime dimension. Writing the amplitude in this way often exposes significant simplifications and patterns. An important and well-studied example is the four-point one-loop amplitude for any theory, which can be written as [126; 127; 128; 129; 130]
\[\mathcal{A}_{4}^{(1)}=c_{\text{box}}\int[\text{scalar box integral}]+c_{\text{s-bub}}\int[\text{$s$-channel scalar bubble integral}]+\text{perms} \tag{13}\]
in which \(c_{\text{box}}\), \(c_{\text{s-bub}}\), and what is covered by "perms" are all theory dependent. While expressing amplitudes in terms of scalar integral bases is a useful organizing principle on its own, its real power comes with the help of the second observation: the perturbative
QFT optical theorem can be used to directly construct the basis coefficients \(c_{i}\). The key idea, developed by Bern, Dixon, Dunbar, and Kosower (BDDK) [48; 49], is that unitarity of the \(S\)-matrix perturbatively requires
\[-i(T-T^{\dagger})=T^{\dagger}T\;\Rightarrow\;2\,\mathrm{Im}\left[\text{one-loop amplitude}\right]=\sum_{\text{states}}\int\mathrm{dLIPS}\;\left[\text{tree}\right]\times\left[\text{tree}\right]\,,\]
figfig/figfigfigfigfig/figfigfigfigfig/figfigfigfigfigfig/figfigfigfigfig/figfigfigfigfigfig/figfigfigfigfigfig/figfigfigfigfigfig
to the particular case of only internal gravitons. In this situation, evaluating the state sum for each of the legs crossing the cut involves inserting a complete set of graviton states in \(D\) dimensions via
\[\sum_{\text{states}}\varepsilon_{k}^{\mu\nu}\varepsilon_{k}^{ \alpha\beta}\equiv\mathcal{P}_{k}^{\mu\nu;\alpha\beta} =\frac{1}{2}\left(P_{k}^{\mu\alpha}P_{k}^{\nu\beta}+P_{k}^{\mu \beta}P_{k}^{\nu\alpha}-\frac{2}{D-2}P_{k}^{\mu\nu}P_{k}^{\alpha\beta}\right) \tag{29}\] \[P_{k}^{\mu\nu}\equiv\eta^{\mu\nu}-\frac{k^{\mu}q^{\nu}+k^{\nu}q^ {\mu}}{k\cdot q} \tag{30}\]
in which \(q^{\mu}\) is a null reference vector. The presence of \(q\) serves to make eq. (29) analogous to a gauge-agnostic graviton propagator (with \((k^{2})^{-1}\) replaced with an implicit \(\delta(k^{2})\)). In fact, gauge invariance of a cut with respect to the internal states manifests as \(q\)-independence of the cut. Explicitly removing the \(q\) dependence can be computationally intensive for complicated cuts. Ref. [134] provides an excellent discussion for effective ways of dealing with these types of \(D\)-dimensional state sums.
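As a quick numerical sanity check, the transversality and tracelessness of this state sum can be verified directly. A minimal sketch in Python, assuming a mostly-minus metric and explicit null vectors in \(D=4\) (neither choice is fixed by the text here):

```python
import numpy as np

# Check eqs. (29)-(30) numerically in D = 4 with a mostly-minus metric (assumed signature).
D = 4
eta = np.diag([1.0, -1.0, -1.0, -1.0])
k = np.array([1.0, 0.0, 0.0, 1.0])    # on-shell graviton momentum, k^2 = 0
q = np.array([1.0, 0.0, 0.0, -1.0])   # null reference vector with k.q != 0

kq = k @ eta @ q
P = eta - (np.outer(k, q) + np.outer(q, k)) / kq   # P_k^{mu nu}, eq. (30)

# graviton state sum P^{mu nu; alpha beta}, eq. (29)
Pgrav = 0.5 * (np.einsum('ma,nb->mnab', P, P) + np.einsum('mb,na->mnab', P, P)
               - 2.0 / (D - 2) * np.einsum('mn,ab->mnab', P, P))

k_low = eta @ k
print(np.allclose(k_low @ P, 0.0))                                # transverse: k_mu P^{mu nu} = 0
print(np.allclose(np.einsum('m,mnab->nab', k_low, Pgrav), 0.0))   # transverse on the state sum
print(np.allclose(np.einsum('mn,mnab->ab', eta, Pgrav), 0.0))     # traceless on each index pair
```

The transversality checks rely on \(k\) being null, and the trace check additionally on \(q\) being null, mirroring the gauge-invariance statement above.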
When a basis of integrals for a particular problem is known, generalized unitarity cuts can be used to identify the basis coefficients via
\[\frac{\text{Cut}_{G}}{|G|}=\sum_{\begin{subarray}{c}\mathcal{I}_{i}\text{ has propagators}\\ \text{compatible with }E(G)\end{subarray}}c_{i}\mathcal{I}_{i} \tag{31}\]
with \(|G|\) the number of symmetries of the graph and where the sum is over integral basis elements which have at least the same propagators as \(E(G)\). Integral basis identification and construction is often a highly nontrivial task involving many subtleties. As such, significant effort has been put into identifying particularly good choices of bases [99] and developing basis-agnostic formalisms [135; 136]. Luckily, we will see below that the tails have relatively simple and easy-to-identify integral bases. Even better, the matching between cuts and integral basis coefficients is nearly a direct equality, modulo details about "non-planar" channels that will be discussed in situ.
## 4 The Tail Effect
The effective field theory approach has been extremely successful at using analogies with particle physics to improve the understanding of gravitational dynamics. Further bringing modern amplitudes insights to bear has pushed the frontier in the hyperbolic approach problem [33; 34; 35; 36; 37; 38; 39; 40], and led to new developments in direct observable computations [137; 138]. Following this spirit, in this work as well as our previous [47], we advocate applying the particle analogies and amplitudes methods _even to the level of the composite binary_. The long standing link between spin-\(l/2\) fundamental particles, \(S^{l}\) classical spin terms, and \(l\)-th multipole moments [139; 68] suggests that we model the multipole moments of the binary itself in terms of fundamental particles interacting with gravity. By working with scattering amplitudes for the particle interactions, rather than Feynman rules, we will be able to construct the tail effective actions by combining gauge-invariant on-shell objects via the method of generalized unitarity. Doing so removes the need to care about graviton
gauge choices, allows exploiting developments in amplitudes construction and integration, and more directly highlights the patterns that appear throughout the tails.
It is also worthwhile to discuss the link between the zero-point and one-point approaches to tails, which ties in with a unitarity-based perspective. As discussed in section 2, the one-point approach deals with calculating the unpolarized cross-section of a graviton one-point function in the normal way, \(\sum_{h}|\mathcal{A}_{h}|^{2}\). However, this process is almost exactly equivalent to computing _generalized unitarity cuts_ (see section 3) in the zero-point formalism (up to shuffling of terms related to the regularization schemes), as it entails _inserting a complete set of graviton states between two on-shell amplitudes_
\[\int\mathrm{d}\mathrm{LIPS}\sum_{h}|\mathcal{A}_{h}|^{2}=\mathrm{Cut}\big(\text{zero-point function}\big)\,,\]

i.e. the unpolarized one-point cross-section is precisely a two-graviton cut of the corresponding zero-point diagram. With this dictionary in hand, we assemble the tree-level building blocks needed for the tails. The quadrupole coupling to a single graviton is obtained by modeling the source as a
spin-1 massive particle radiating a graviton. The fact that the quadrupole is a symmetric traceless \(SO(3)\) tensor allows us to represent it as a formal product between two \(SO(3)\) vectors
\[I^{ij}\equiv I^{i}I^{j}\quad I^{i}I_{i}=0\,, \tag{4.2}\]
which may have additional internal structure that we suppress here. The \(SO(3)\) vectors can be covariantized so that in the rest frame of the binary we have \(I^{\mu}=(0,I^{i})\). The isomorphism between \(SU(2)\) and \(SO(3)\) also guides us toward placing the spin-1 particle in the \((\frac{1}{2},\frac{1}{2})\) Lorentz representation. The amplitude for such a particle coupling to gravity is straightforward to compute via the double copy of a fermion coupled to a massless vector
\[\mathcal{M}^{(\frac{1}{2},\frac{1}{2})}_{vgv}=\frac{\lambda}{4}\left(\chi_{1} \not{\varepsilon}_{g}\chi_{2}\right)^{2}\,, \tag{4.3}\]
with \(\lambda\) the coupling constant and \(\chi\) the fermion polarizations. We then need to make appropriate identifications between the fermion polarizations and the quadrupole vector element. Comparing mass dimensions of the objects in question leads us to \(\chi\chi\sim m^{2}I^{\mu}\), which can be realized covariantly, similar to Ref. [139], as
\[\chi_{1}^{\alpha}\chi_{2}^{\beta}\to(\not{\varepsilon}_{2}\not{\varepsilon}_ {1}\not{I})^{\beta\alpha}\,. \tag{4.4}\]
Inserting the identification back into eq. (4.3) leads to
\[\mathcal{M}_{Ig}\equiv\frac{\lambda}{4}\operatorname{tr}(\not{\varepsilon}_{2 }\not{\varepsilon}_{1}\not{\varepsilon}_{g})^{2}=\tilde{\lambda}((\varepsilon _{g}\cdot k_{1})(I\cdot k_{g})-(\varepsilon_{g}\cdot I)(k_{1}\cdot k_{g}))^{2} \tag{4.5}\]
after applying momentum conservation and absorbing constants into \(\tilde{\lambda}\). Evaluating in the rest frame of particle 1 yields
\[\mathcal{M}_{Ig} \to\tilde{\lambda}m_{1}^{2}(I^{i}k_{g}^{i}\varepsilon_{g}^{0}-I^ {i}\varepsilon_{g}^{i}\omega_{g})^{2}\] \[=\lambda_{I}I^{ij}(\omega_{g}k_{g}^{i}\varepsilon_{g}^{0} \varepsilon_{g}^{j}+\omega_{g}k_{g}^{j}\varepsilon_{g}^{0}\varepsilon_{g}^{i }-k_{g}^{i}k_{g}^{j}\varepsilon_{g}^{0}\varepsilon_{g}^{0}-\omega_{g}^{2} \varepsilon_{g}^{i}\varepsilon_{g}^{j})\,. \tag{4.6}\]
Up to alignment of coupling constants this exactly agrees with the one-point quadrupole-gravity operator used throughout other EFT-based approaches (see Refs. [17, 26, 88, 139, 141] for foundational discussions of these types of operators). We will often write
\[\mathcal{M}_{Ig}\equiv\lambda_{I}J_{I}^{\mu\nu}\varepsilon_{\mu\nu}\,, \tag{4.7}\]

which we represent diagrammatically as a quadrupole source \(I^{ij}\) emitting a single graviton. The conserved energy \(E\) of the binary is treated analogously: it is modeled as a heavy scalar, coupled to a single graviton through the three-point amplitude \(\mathcal{M}_{sgs}\),
which in the rest frame of the scalar evaluates to \(\lambda_{E}\varepsilon_{00}\), agreeing with the EFT definition of the static potential-mode source \(Eh^{00}\) after appropriate alignment of coupling constants. In addition to the three-point amplitude (analogous to a one-point source) for the potential-mode coupling, we will also require four-point amplitudes to capture the possible contact terms (analogous to a two-point source) in a gauge-invariant manner. The obvious choice for the desired amplitude is the two-graviton two-scalar extension of \(\mathcal{M}_{sgs}\), which is computable as the double-copy of a two-scalar two-gluon amplitude using IncreasingTrees as
\[\mathcal{M}_{ggE}\stackrel{{?}}{{=}}\mathcal{M}_{ sggs} =\frac{\lambda_{E}}{m_{s}^{2}}\lambda_{g}\left((\varepsilon_{2} \cdot k_{1})(\varepsilon_{3}\cdot k_{12})-\frac{1}{2}(\varepsilon_{2}\cdot \varepsilon_{3})(k_{1}\cdot k_{2})\right)\Bigg{[} \tag{4.1}\] \[\qquad\left((\varepsilon_{2}\cdot k_{1})(\varepsilon_{3}\cdot k_ {12})-\frac{1}{2}(\varepsilon_{2}\cdot\varepsilon_{3})(k_{1}\cdot k_{2}) \right)\left(\frac{1}{2(k_{1}\cdot k_{2})}+\frac{1}{2(k_{2}\cdot k_{3})}\right)\] \[\qquad+\left((\varepsilon_{2}\cdot k_{13})(\varepsilon_{3}\cdot k _{1})-\frac{1}{2}(\varepsilon_{2}\cdot\varepsilon_{3})(k_{1}\cdot k_{3}) \right)\frac{1}{2(k_{2}\cdot k_{3})}\Bigg{]}+(2\leftrightarrow 3)\,,\]
in which particles \(1\) and \(4\) are the scalars, and \(2,3\) the gravitons, all particles have the same orientation, and \(\lambda_{g}\) is the graviton self-coupling constant. However, the amplitude as-is contains too much information: it encodes not only the graviton self-interactions and contact terms, but also propagation of an off-shell scalar particle via the \(p_{1}\cdot p_{2}\) and \(p_{1}\cdot p_{3}\) poles. Including the off-shell propagation is in direct tension with wanting to interpret the scalar as a classical massive object. The tension can be resolved by appealing to the identification of the scalar with a black hole: we treat the scalar mass as the dominant scale in the problem, and expand the four-point amplitude in said limit. Doing so in the rest frame of the massive particle leads to
\[\mathcal{M}_{ggE}=\mathcal{M}_{sggs}(m_{s}\to\infty)= \frac{\lambda_{g}\lambda_{E}}{\omega_{2}^{2}}\frac{\delta(\omega_{2}-\omega_{3})}{2(k_{2}\cdot k_{3})}\Big{[}(k_{2}\cdot k_{3})\varepsilon_{2}^{0}\varepsilon_{3}^{0}+\omega_{2}((\varepsilon_{3}\cdot k_{2})\varepsilon_{2}^{0}\] \[\quad-(\varepsilon_{2}\cdot k_{3})\varepsilon_{3}^{0})-\omega_{2}^{2}(\varepsilon_{2}\cdot\varepsilon_{3})\Big{]}^{2}+\mathcal{O}(m_{s}^{-1})\,, \tag{4.2}\]

which we represent diagrammatically as an energy source emitting two gravitons. The remaining ingredients are the pure graviton tree amplitudes entering the cuts below. We normalize them by fixing the coefficients
of specific kinematic structures to 1 rather than against a particular choice of coupling, so a coupling factor of \(\lambda_{g}^{n-2}\) must be included on all of the graviton tree amplitudes.
Throughout the above discussion, we introduced coupling constants that should be matched with appropriate references for comparisons to be accurate. We choose to specifically match against the conventions of Refs. [26; 27], making our coupling constants:
\[\begin{split}\lambda_{I}&=\sqrt{2\pi G_{N}}\,,\\ \lambda_{E}&=-E\sqrt{8\pi G_{N}}\,,\\ \lambda_{g}&=-\sqrt{32\pi G_{N}}\,.\end{split} \tag{4.12}\]
Notably, we will use the standard fixed-dimension definition of \(G_{N}\), and will introduce a renormalization scale \(\mu\) that accounts for the scale dependence of working in dimensional regularization.
### Radiation-Reaction
The quadrupole-sourced radiation reaction (no interaction with the background potential) and leading tail have been studied extensively [13; 15; 24; 25; 26; 27; 28; 89; 91; 143], and so serve as verifications of our proposed methods. They are also simple enough that we can present the majority of intermediate steps in detail.
Beginning with radiation reaction, there is only one diagram topology that we need to consider in both the EFT/Feynman Rule and Unitarity perspectives, namely fig. 1. This is rather straightforward to see. When we label the momentum flow of the gravitons in the Feynman diagram, fig. 1a, and impose the (in this case non-existent) momentum conservation, we find only a single momentum \(\ell^{\mu}\) and only one possible momentum product, \(\ell^{2}=\omega^{2}-\ell_{E}^{2}\). Thus, the relevant integral family consists of
\[\int\frac{\mathrm{d}^{d}\ell_{E}}{(2\pi)^{d}}\frac{\ell_{E}^{i_{1}}\ldots\ell _{E}^{i_{n}}}{(-\ell_{E}^{2}+\omega^{2})^{\lambda}}\qquad\lambda\in\mathbb{Z}_ {+},\,n\geq 0\,. \tag{4.13}\]
In fact, symmetry and integral relations always allow us to write any integral of this type in terms of \(\omega^{n}\delta^{i_{1}\ldots i_{n}}\) (see appendix A for a brief discussion of one systematic method for dealing with the tensor reductions appearing in the tails) and the single basis integral
\[F^{(1)}(1;\omega^{2})=\int\frac{\mathrm{d}^{d}\ell_{E}}{(2\pi)^{d}}\frac{1}{(- \ell_{E}^{2}+\omega^{2})}=-\frac{\Gamma(1-d/2)(-\omega^{2})^{d/2-1}}{(4\pi)^{ d/2}}\,, \tag{4.14}\]
Figure 1: The quadrupole radiation reaction diagrams
where \(d\) is the dimensional regularization dimension \(d=3+\epsilon\) and \(\omega\) has an imaginary part set by the \(i0\) prescription. In the Feynman prescription, the effective action would then be written as
\[S_{\rm RR}=\int\frac{{\rm d}\omega}{2\pi}c_{\rm RR}F^{(1)}(1;\omega^{2}+i0)\,. \tag{4.15}\]
Working in the CTP prescription, we instead need to sum over advanced and retarded propagators. In turn, this means that the radiation reaction contribution to the CTP effective action _must_ be expressible as
\[S_{\rm RR}=\int\frac{{\rm d}\omega}{2\pi}\left(c_{\rm RR}^{-+}F^{(1)}(1;\omega_ {R}^{2})+c_{\rm RR}^{+-}F^{(1)}(1;\omega_{A}^{2})\right)\,. \tag{4.16}\]
Our goal is now to determine \(c_{\rm RR}\) using unitarity methods. It is also worth pointing out that the different \(i0\) prescriptions change the way that the \(\sqrt{-\omega^{2}}\) in eq. (4.14) (and later \(\log(-\omega^{2})\)) will be analytically continued. The Feynman prescription tells us to do the continuation using a _fixed_ imaginary part of \(\omega^{2}\), while the advanced and retarded prescriptions tell us to treat the imaginary part as _depending on the sign of_\(\omega\).
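The following sketch (illustrative only, with the \(i0\) shifts implemented as small finite imaginary parts) checks that the master integral, eq. (4.14), is finite as \(d\to 3\) and makes the difference between the two continuations explicit:

```python
import numpy as np
import sympy as sp

# 1) The master integral of eq. (4.14) is finite in dimensional regularization as d -> 3.
eps, w = sp.symbols('epsilon omega', positive=True)
d = 3 + eps
F1 = -sp.gamma(1 - d/2) * (-w**2)**(d/2 - 1) / (4*sp.pi)**(d/2)
print(sp.simplify(F1.subs(eps, 0)))   # finite; equals sqrt(-omega^2)/(4*pi) on the principal branch

# 2) Feynman vs retarded continuation of sqrt(-omega^2).
delta = 1e-12
for omega in (+2.0, -2.0):
    feynman  = np.sqrt(-omega**2 - 1j*delta)       # fixed imaginary part of omega^2
    retarded = np.sqrt(-(omega + 1j*delta)**2)     # imaginary part follows the sign of omega
    print(omega, feynman, retarded)
# Feynman gives -i|omega| for either sign of omega, while retarded gives -i*omega,
# which is the origin of the sgn(omega) factors appearing below.
```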
Following the ideas discussed in section 3, \(c_{\rm RR}\) should be directly related to the generalized unitarity cut of the diagram divided by the symmetry factor for the diagram. We calculate the generalized unitarity cut, following eq. (3.8), as the product of two quadrupole amplitudes, eq. (4.7), summed over the \(D=d+1\)-dimensional states of the internal graviton using eq. (3.10) to find
\[{\rm Cut}_{\rm RR}^{ab} =\sum_{\rm grav\ states}{\cal M}_{I_{a}(-\omega)}{\cal M}_{I_{b}( \omega)}\] \[=\lambda_{I}^{2}J^{\mu\nu}_{I_{a}(-\omega)}\left.{\cal P}^{\mu \nu;\alpha\beta}J^{\alpha\beta}_{I_{b}(\omega)}\right|_{\ell^{2}=\omega^{2}- \ell^{2}_{E}=0}\] \[=(2\pi G_{N})\delta(\omega^{2}-\ell^{2}_{E})\left(J^{\mu\nu}_{I_ {a}(-\omega)}J^{\mu\nu}_{I_{b}(\omega)}-\frac{J^{\mu\mu}_{I_{a}(-\omega)}J^{ \nu\nu}_{I_{b}(\omega)}}{D-2}\right)\,, \tag{4.17}\]
with CTP labels \(a,b\). Inserting the definitions for \(J^{\mu\nu}\) and performing tensor reductions following appendix A, we arrive at
\[{\rm Cut}_{\rm RR}^{ab}=(2\pi G_{N})\kappa_{ab}(\omega)\omega^{4}\frac{(d+1) (d-2)}{(2+d)(d-1)}\delta(\omega^{2}-\ell^{2}_{E})\,. \tag{4.18}\]
We see that the cut only depends on the CTP labels through \(\kappa^{ab}(\omega)\), and this turns out to be true for the rest of the calculations we will approach in this paper. As a result of this observation, we define the unindexed cut as
\[{\rm Cut}_{\rm RR}^{ab}\equiv{\rm Cut}_{\rm RR}\,\kappa_{ab}(\omega) \tag{4.19}\]
and restructure the CTP effective action slightly as
\[S_{\rm RR} =\int\frac{{\rm d}\omega}{2\pi}a_{\rm RR}\left(\kappa_{-+}(\omega) F^{(1)}(1;\omega_{R}^{2})+\kappa_{+-}(\omega)F^{(1)}(1;\omega_{A}^{2})\right)\] \[\equiv\int\frac{{\rm d}\omega}{2\pi}a_{\rm RR}F^{(1)}_{\rm CTP}(1 ;\omega^{2})\,. \tag{4.20}\]
This makes it clear that while a fully detailed treatment would involve separately matching coefficients between the different CTP branches, the net result of the matching in this case will be the same as if we had ignored the \(i0\) prescription which only contributes through the integration contour of the basis integrals. This continues to occur for all of the higher-order tails considered below as well.
We proceed with reconstructing \(a_{\rm RR}\) by matching the cut against the basis integral. Because the cut is already independent of the loop momentum, we do not have any reduction to perform. Thus the matching process is very simple
\[{\rm Cut}_{\rm RR}=(2\pi G_{N})\omega^{4}\frac{(d+1)(d-2)}{(2+d)(d-1)}\delta( \omega^{2}-\ell_{E}^{2})=|G_{\rm RR}|a_{\rm RR}\delta(\omega^{2}-\ell_{E}^{2} )\,. \tag{4.21}\]
We now need to determine the symmetry factor \(|G_{\rm RR}|\). The purpose of dividing by the symmetry factor is to compensate for possible over-counting of redundant information by the cut. Since our implementation of CTP sums over the advanced and retarded branches, we need to make sure that the cut is not double-counting contributions across branches. The simplest way to do so is to count the up/down (in our drawing convention) reflection symmetry as a true symmetry of the cut. Thus, in the current case we have \(|G_{\rm RR}|=2\). Many of the diagrams for the higher-order tails also include this reflection as part of the symmetry factor. Putting everything together leads to our CTP effective action for radiation reaction
\[S_{\rm RR}=\frac{(2\pi G_{N})}{2}\frac{(d+1)(d-2)}{(d+2)(d-1)}\int\frac{{\rm d }\omega}{2\pi}\omega^{4}F_{\rm CTP}^{(1)}(1;\omega^{2})\,. \tag{4.22}\]
Inserting the master integral definition from eq. (4.14), expanding in \(d=3+\epsilon\), and performing the CTP sum, we arrive at
\[S_{\rm RR}=-i\frac{G_{N}}{5}\int\frac{{\rm d}\omega}{2\pi}\,\omega^{5}\kappa_{ -+}(\omega)\!\left[1-\frac{\epsilon}{2}\left(i\pi\,{\rm sgn}\,\omega+\left[ \frac{9}{10}-\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right) \right]\right)+\mathcal{O}(\epsilon^{2})\right]\, \tag{4.23}\]
in which we have introduced the usual renormalization scale \(\mu\) into the logarithm, and have retained the \(\mathcal{O}(\epsilon)\) piece for later use in counterterm analysis. Note also the appearance of the \(i\pi\,{\rm sgn}\,\omega\) term, which is a result of using the modified \(i0\) prescription; the standard Feynman prescription would produce a definite sign rather than the \({\rm sgn}\,\omega\).
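The expansion in eq. (4.23) can be reproduced numerically from eqs. (4.14) and (4.22). A minimal sketch is given below; it uses the retarded branch and \(\omega>0\) only, so it targets half of the quoted coefficient (we assume the remaining factor of two arises from the sum over the two CTP branches), and it assumes that the renormalization scale enters through a single factor of \(\mu^{-\epsilon}\) on the loop measure:

```python
import sympy as sp

eps, w, mu, GN = sp.symbols('epsilon omega mu G_N', positive=True)
d = 3 + eps

# prefactor of eq. (4.22) and the retarded-branch master integral of eq. (4.14), for omega > 0:
# (-omega_R^2)^(d/2-1) = (omega^2)^(d/2-1) * exp(-i*pi*(d/2-1))
pref = sp.pi*GN*(d + 1)*(d - 2)/((d + 2)*(d - 1))
F1_ret = -sp.gamma(1 - d/2)/(4*sp.pi)**(d/2)*(w**2)**(d/2 - 1)*sp.exp(-sp.I*sp.pi*(d/2 - 1))
integrand = mu**(-eps)*pref*w**4*F1_ret           # mu^(-eps) on the measure is an assumption

# half of the bracket structure quoted in eq. (4.23), with sgn(omega) = +1:
target = -sp.I*GN*w**5/10*(1 - eps/2*(sp.I*sp.pi + sp.Rational(9, 10)
         - sp.log(w**2*sp.exp(sp.EulerGamma)/(mu**2*sp.pi))))

vals = {w: 2, mu: 3, GN: 1}
for e in (1e-2, 1e-3):
    diff = complex((integrand - target).subs(vals).subs(eps, e))
    print(e, abs(diff))   # the difference shrinks like epsilon^2, i.e. agreement through O(epsilon)
```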
### Tail
We are now ready to approach the leading tail calculation. The broad strokes are similar to the radiation reaction, with just a few new pieces necessary for the higher tails. From now on, we drop the explicit \(E\) label on the loop momenta as we will always be working with integrated Euclidean loop momenta and an explicit frequency as the scale.
The first important deviation is that there is more than one Feynman diagram that contributes to the process (depending on gauge choices), shown in fig. 2, so we should take
some care with defining our basis of momentum invariants and integral family. A maximally convenient basis of momentum invariants to use is one which contains all possible inverse propagators of the diagrams under consideration. For the tail diagrams, we can use the labelings of \(\ell_{1}\) and \(\ell_{2}\) as shown in fig. 2a to define the inverse propagator basis
\[Q_{1}=\omega^{2}-\ell_{1}^{2}\qquad Q_{2}=\omega^{2}-\ell_{2}^{2}\qquad Q_{3}=-( \ell_{1}+\ell_{2})^{2} \tag{4.24}\]
with signs set by a mostly-minus metric to best align with the standard integration convention for propagator signs (see appendix B for more details), and where the fact that the \(\ell_{i}\) are the Euclidean spatial parts of the momenta is now left implicit. Importantly, \(Q_{3}\) is chosen as a purely spatial momentum: it is the "potential-mode" propagator expected from the interaction of the quadrupole with the static background potential. From this basis, it is obvious that the integral family we need to consider is
\[F^{(2)}(\lambda_{1},\lambda_{2},\lambda_{3})=\int\prod_{i=1}^{2}\left(\frac{ \mathrm{d}^{d}\ell_{i}}{(2\pi)^{d}}\right)Q_{1}^{-\lambda_{1}}Q_{2}^{-\lambda_ {2}}Q_{3}^{-\lambda_{3}} \tag{4.25}\]
as it will cover any possible contributions from both diagram topologies. Analyzing the integral family, we again find a single basis integral, \(F^{(2)}(1,1,0)\), implying that the effective action can be written as
\[S_{\mathrm{T}} =\int\frac{\mathrm{d}\omega}{2\pi}\left(c_{\mathrm{T}}^{-+}F^{(2 )}(1,1,0;\omega_{R}^{2})+c_{\mathrm{T}}^{+-}F^{(2)}(1,1,0;\omega_{A}^{2})\right)\] \[\equiv\int\frac{\mathrm{d}\omega}{2\pi}a_{\mathrm{T}}F^{(2)}_{ \mathrm{CTP}}(1,1,0)\,. \tag{4.26}\]
There are two interesting observations about the integral basis. First, the basis integral actually factorizes! \(Q_{1}\) and \(Q_{2}\) depend separately on the two loop momenta so we have
\[F^{(2)}(1,1,0)=\left(\int\frac{\mathrm{d}^{d}\ell_{1}}{(2\pi)^{d}}\frac{1}{Q_ {1}}\right)\left(\int\frac{\mathrm{d}^{d}\ell_{2}}{(2\pi)^{d}}\frac{1}{Q_{2}} \right)=F^{(1)}(1;\omega^{2})^{2}\,. \tag{4.27}\]
Since \(F^{(1)}(1;\omega^{2})\) is finite in dimensional regularization, so is \(F^{(2)}(1,1,0)\). Second, we have the important integral relation
\[F^{(2)}(1,1,1)=-\frac{(d-2)}{2(d-3)\omega^{2}}F^{(2)}(1,1,0)\,. \tag{4.28}\]
Figure 2: The possible Feynman diagrams needed to evaluate the leading tail contribution
This relation highlights that the only source of divergences in the tail comes from terms in which all three propagators are present.
Now that we have identified the basis integral, we set out to evaluate the corresponding cut. From the basis integral, we know that the needed cut must contain two radiation-mode propagators. Within the tail framework, there is only one possible configuration of tree amplitudes with two radiation propagators: the quadrupole-mass-quadrupole contraction with a two-point mass amplitude, as shown in fig. 3a. Assembling the cut as the product of the quadrupole amplitude, eq. (4.7), and the two-graviton-energy amplitude \(\mathcal{M}_{ggE}\), and inserting the sum over graviton states, we have
\[\text{Cut}_{\text{T}}^{ab} =\sum_{\text{states}}\mathcal{M}_{I(-\omega)}\mathcal{M}_{ggE} \mathcal{M}_{I(\omega)}\Big{|}_{Q_{1}=0,Q_{2}=0}\] \[=\lambda_{I}^{2}J^{\mu\nu}_{I_{a}(-\omega)}P^{\mu\nu;\alpha\beta} \mathcal{M}^{\alpha\beta;\gamma\sigma}_{ggE}P^{\gamma\sigma;\rho\tau}J^{\rho \tau}_{I_{b}(\omega)}\delta(Q_{1})\delta(Q_{2})\,. \tag{120}\]
Evaluating the contractions and performing the tensor reduction to \(\kappa_{ab}(\omega)\) (but not employing IBP reductions yet), we arrive at
\[\text{Cut}_{\text{T}}=\frac{4\pi^{2}G_{N}^{2}E(d-2)}{(d-1)^{3}(d +2)\omega^{2}}\delta(Q_{1})\delta(Q_{2})\Bigg{(}(d-2)Q_{3}^{3}+8(d-2)Q_{3}^{2 }\omega^{2}+4(d^{2}+4d-9)Q_{3}\omega^{4}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad+16(d^{2}-1)\omega^{6}+\frac{8(d^{2}-1)(d+1)\omega^{8}}{Q_{3}}\Bigg{)}\,. \tag{121}\]
Importantly, if we had used the full two-scalar two-graviton amplitude \(\mathcal{M}_{sggs}\) instead of \(\mathcal{M}_{ggE}\), then the "virtual black hole" poles would have entered the cut carrying dependence on the reference vector \(q\) from the physical state projector, eqs. (29) and (30). The dependence on \(q\) drops exactly for the leading-order term in the large-\(m_{s}\) expansion. We have written the cut as a Laurent series in the uncut momentum invariant \(Q_{3}\) to manifest the factorization property. This allows us to verify the \(Q_{3}^{-1}\) part of the cut by evaluating a different cut, the one shown in fig. 3b. This new cut is built from the one-point mass amplitude \(\mathcal{M}_{sgs}\) as
Figure 3: The unitarity cut diagrams used for analysis of the tail.
well as a three-point all-graviton amplitude. Evaluating this cut and comparing it against the \(Q_{3}^{-1}\) channel of the Laurent series above provides the advertised consistency check. The remaining steps mirror the radiation-reaction calculation: the cut is reduced onto the basis integral \(F^{(2)}(1,1,0)\), divided by the appropriate symmetry factor, and matched to fix the tail coefficient \(a_{\mathrm{T}}\).
### Tail-of-Tail
We now proceed to the tail-of-tail calculation. As in the previous cases, our first job is to identify our basis of momentum invariants and the associated integral family. With two energy couplings, we will have 3 free loop momenta from which to build momentum invariants. A quick counting shows us that whatever basis of invariants we choose needs to cover the three \(\ell_{i}^{2}\) and the \(\binom{3}{2}=3\) independent choices of \(\ell_{i}\cdot\ell_{j}\). In line with what we did for the tail, it is useful to analyze the cubic tail-of-tail diagrams, shown in fig. 4, to cover as much of the momentum basis as possible using inverse propagators. All diagrams have 5 propagators, but only 4 can be chosen in common between fig. 4a and fig. 4b: \(\omega^{2}-\ell_{1}^{2}\), \(\omega^{2}-\ell_{3}^{2}\), \(-\ell_{4}^{2}\), \(-\ell_{5}^{2}\) are propagators in both. However, this means that the unique propagators in each diagram, \(\omega^{2}-\ell_{2}^{2}\) in fig. 4a and \(-(\ell_{1}+\ell_{3})^{2}\) in fig. 4b, can make up the fifth and sixth needed invariants. We make the particular labeling choice
\[Q_{1} =\omega^{2}-\ell_{1}^{2} Q_{2} =\omega^{2}-\ell_{3}^{2} \tag{5.1}\] \[Q_{3} =-\ell_{4}^{2} Q_{4} =-\ell_{5}^{2}=-(\ell_{1}+\ell_{4}+\ell_{3})^{2}\] \[Q_{5} =\omega^{2}-\ell_{2}^{2}=\omega^{2}-(\ell_{1}+\ell_{4})^{2} Q_{6} =-(\ell_{1}+\ell_{3})^{2}\,.\]
At this point, it is also worth pointing out that there is an additional "non-planar" propagator \(\omega^{2}-(\ell_{1}+\ell_{5})^{2}=Q_{1}+Q_{2}+Q_{3}+Q_{4}-Q_{5}-Q_{6}\) that could appear through a diagram like fig. 4c, which appears to spoil the ability to define a single integral family to cover all possible propagator structures. In particular, it will show up later through the \(u\)-channel pole of a four-graviton amplitude. However, since we are working to leading order in \(G_{N}\), we expect the energy couplings to be time-independent and thus indistinguishable. This means that we can "uncross" the legs in fig. 4c (and similar diagrams), which results in fig. 4a (or similar) _except with \(\ell_{4}\leftrightarrow\ell_{5}\)_, allowing us to rewrite any integrals that appear with a \(\omega^{2}-(\ell_{1}+\ell_{5})^{2}\) pole back in terms of the propagator basis using the relabeling \(Q_{5}\to Q_{1}+Q_{2}+Q_{3}+Q_{4}-Q_{5}-Q_{6}\).
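The relabeling relies on a simple algebraic identity among the invariants of eq. (5.1). A short symbolic check, assuming the orientation \(\ell_{5}=-(\ell_{1}+\ell_{3}+\ell_{4})\) (i.e. all loop momenta incoming at the energy couplings, which is our reading of the labeling):

```python
import sympy as sp

# Check: omega^2 - (l1+l5)^2 = Q1 + Q2 + Q3 + Q4 - Q5 - Q6, with the Q_i of eq. (5.1)
# and the (assumed) orientation l5 = -(l1 + l3 + l4).  Three spatial components suffice
# since the identity is purely algebraic in the Euclidean dot products.
w = sp.symbols('omega')
l1 = sp.Matrix(sp.symbols('l1x l1y l1z'))
l3 = sp.Matrix(sp.symbols('l3x l3y l3z'))
l4 = sp.Matrix(sp.symbols('l4x l4y l4z'))
l5 = -(l1 + l3 + l4)
sq = lambda v: (v.T * v)[0]           # Euclidean square of a spatial loop momentum

Q1, Q2, Q3 = w**2 - sq(l1), w**2 - sq(l3), -sq(l4)
Q4, Q5, Q6 = -sq(l5), w**2 - sq(l1 + l4), -sq(l1 + l3)

nonplanar = w**2 - sq(l1 + l5)
print(sp.simplify(nonplanar - (Q1 + Q2 + Q3 + Q4 - Q5 - Q6)))   # -> 0
```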
Figure 4: The cubic tail-of-tail diagrams that are used to select an advantageous basis of momentum invariants
With the non-planar consideration dealt with, we have successfully identified all propagators that can appear and used them to span the set of momentum invariants. Thus, we capture all possible contributions coming from fig. 4 or their contact diagrams using the single integral family
\[F^{(3)}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4},\lambda_{5},\lambda_{6})= \int\left(\frac{\mathrm{d}^{d}\ell}{(2\pi)^{d}}\right)^{3}\frac{1}{Q_{1}^{ \lambda_{1}}Q_{2}^{\lambda_{2}}Q_{3}^{\lambda_{3}}Q_{4}^{\lambda_{4}}Q_{5}^{ \lambda_{5}}Q_{6}^{\lambda_{6}}} \tag{100}\]
with integer values of \(\lambda_{i}\). This integral family can be reduced in terms of two relevant basis integrals
Footnote 5: The basis technically has additional integrals in it, for instance \(F^{(3)}(1,0,0,1,1,1)\), due to a rotational symmetry of the graphical representation of the family. This rotational symmetry is broken in the actual computation by the identification of which radiation propagator is sourced from which quadrupole.
\[\mathcal{I}^{(3)}=\{F^{(3)}(1,1,0,0,1,0),F^{(3)}(1,1,1,1,0,0)\} \tag{101}\]
with topologies corresponding to the unitarity cut diagrams shown in fig. 5. Thus the effective action for the tail-of-tail will be
\[S_{\mathrm{TT}}=\int\frac{\mathrm{d}\omega}{2\pi}\left[a_{\mathrm{TT},1}F_{ \mathrm{CTP}}^{(3)}(1,1,0,0,1,0)+a_{\mathrm{TT},2}F_{\mathrm{CTP}}^{(3)}(1,1, 1,1,0,0)\right]\,. \tag{102}\]
We now proceed to evaluate the cuts in fig. 5 to determine \(a_{\mathrm{TT},1}\) and \(a_{\mathrm{TT},2}\).
The process for evaluating the first cut, fig. 5a, proceeds almost identically to evaluating the tail cut. We have
\[\mathrm{Cut}_{\mathrm{fig.~5a}}=\lambda_{I}^{2}\delta(Q_{1})\delta(Q_{2})\delta(Q_{5})J_{I(-\omega)}^{\mu_{1}\nu_{1}}P^{\mu_{1}\nu_{1};\mu_{2}\nu_{2}}\mathcal{M}_{ggE,1}^{\mu_{2}\nu_{2};\mu_{3}\nu_{3}}\] \[\times P^{\mu_{3}\nu_{3};\mu_{4}\nu_{4}}\mathcal{M}_{ggE,2}^{\mu_{4}\nu_{4};\mu_{5}\nu_{5}}P^{\mu_{5}\nu_{5};\mu_{6}\nu_{6}}J_{I(\omega)}^{\mu_{6}\nu_{6}}, \tag{103}\]
where we have added subscripts to the \(\mathcal{M}_{ggE}\) to distinguish the two momentum labelings. Inserting the relevant definitions and evaluating all of the index contractions results in an expression that is hundreds of terms long, but has the general structure
\[\mathrm{Cut}_{\mathrm{fig.~{}5a}}=(512G_{N}^{3}E^{2}\pi^{3})\delta(Q_{1})\delta(Q_{2})\delta(Q_{5})\Big{(}\frac{1}{Q_{3}Q_{4}}g_{1}(Q_{6},\omega,d)+\frac{1}{Q_{3}}g_{2}(Q_{4},Q_{6},\omega,d)\] \[+\frac{1}{Q_{4}}g_{3}(Q_{3},Q_{6},\omega,d)+g_{4}(Q_{3},Q_{4},Q_{6},\omega,d)\Big{)}\,, \tag{104}\]
Figure 5: The unitarity cut diagrams needed to evaluate the tail of tail
where each \(g_{i}\) is polynomial in the \(Q\)s but rational in \(d\) and \(\omega\). This specific form of the cut highlights the available factorization channels that can be cross-checked: \(g_{2}\) and \(g_{3}\) are partial contacts that can be checked by calculating the relevant cuts; \(g_{1}\) is a channel that _overlaps_ with fig. 5b and thus serves both as a check and as a demonstration of why symmetry factors are necessary in the cut-matching process, which we will show after constructing the other cut.
To constrain the integral basis coefficients using the cut, we must reduce it to the basis. As before, we do so by treating the \(\delta(Q_{i})\)s as propagators and reducing using standard methods, but since we have more than one basis element, the \(\delta(Q_{i})\)s instruct us to only keep the parts of the reduction which have at least \(Q_{1}\), \(Q_{2}\), and \(Q_{5}\) as propagators. In particular, the \(g_{2}\), \(g_{3}\), and \(g_{4}\) terms produce only the needed contributions, but the reduction of the \(g_{1}\) term will produce both basis elements. We only keep the one involving \(F^{(3)}(1,1,0,0,1,0)\). Performing the reduction in this manner, we arrive at
\[\overline{\text{Cut}}_{\text{fig.~{}5a}} =(512G_{N}^{3}E^{2}\pi^{3})\frac{(d-2)(12-2d+5d^{2}-4d^{3}+d^{4})^{2}\omega^{4}}{4(d-3)^{2}(d-1)^{3}d^{2}(d+1)(d+2)}\delta(Q_{1})\delta(Q_{2})\delta(Q_{5})\] \[=(512G_{N}^{3}E^{2}\pi^{3})\frac{(d-2)\mathcal{P}_{4}^{2}\omega^{4}}{4(d-3)^{2}(d-1)^{3}d^{2}(d+1)(d+2)}\delta(Q_{1})\delta(Q_{2})\delta(Q_{5})\,. \tag{100}\]
We now turn our attention to the second cut, fig. 5b, which is constructed as
\[\text{Cut}_{\text{fig.~{}5b}} =\lambda_{I}^{2}(J_{I(-\omega)}P)^{\mu_{1}\nu_{1}}(\mathcal{M}_{sgs,1}P)^{\mu_{2}\nu_{2}}\mathcal{M}_{4}^{\mu_{1}\nu_{1}...\mu_{4}\nu_{4}}(P\mathcal{M}_{sgs,2})^{\mu_{3}\nu_{3}}(PJ_{I(\omega)})^{\mu_{4}\nu_{4}}\,. \tag{101}\]
This cut is also almost one hundred terms long, but has the schematic form
\[\text{Cut}_{\text{fig.~{}5b}} =(512G_{N}^{3}E^{2}\pi^{3})\delta(Q_{1})\delta(Q_{2})\delta(Q_{3})\delta(Q_{4})\Big{(}\frac{1}{Q_{5}}h_{1}(Q_{6},\omega,d)+\frac{1}{Q_{6}}h_{2}(Q_{5},\omega,d)\] \[\qquad\qquad\qquad\qquad+\frac{1}{Q_{5}{+}Q_{6}}h_{3}(Q_{5}-Q_{6},\omega,d)+h_{4}(Q_{5},Q_{6},\omega,d)\Big{)}\,, \tag{102}\]
with the \(h_{i}\) having similar properties as the \(g_{i}\) above. The arrangement is again chosen to manifest pole structures and factorization: \(h_{2}\) contains all contributions with the pole structures of fig. 4b, \(h_{1}\) all those with the poles of fig. 4a, \(h_{3}\) from fig. 4c, and \(h_{4}\) the contact contribution.
From here we can delve into the overlapping channels to highlight the internal consistency checks and the need for relative symmetry factors. The objects of interest for the discussion are the overlapping channels from eq. (100) and eq. (102)
\[g_{1}(Q_{6},\omega,d) =h_{1}(Q_{6},\omega,d)=-h_{3}(Q_{5}-Q_{6},\omega,d)\] \[=\frac{1}{8(d-1)^{3}(d+2)}\Big{[}(-2+d)^{2}Q_{6}^{4}+4(-2+d)^{2}Q_ {6}^{3}\omega^{2}\] \[+4(d-1)(-9+d+d^{2})Q_{6}^{2}\omega^{4}+8(-2+d)(-1+d)(1+d)Q_{6} \omega^{6}\] \[+8(-2+d)(-1+d)^{2}(1+d)\omega^{8}\Big{]}\,. \tag{103}\]
We immediately see that the overlapping channels agree: when we evaluate the \(Q_{3}=Q_{4}=0\) residue of eq. (100), essentially picking out the \(g_{1}\) contribution, we get exactly the same
expression as when evaluating the \(Q_{5}=0\) residue of eq. (111). However, when we "uncross" the \(Q_{5}+Q_{6}\) pole using \(Q_{5}\to-Q_{5}-Q_{6}\) (the consequence of the \(\ell_{4}\leftrightarrow\ell_{5}\) relabeling, with the help of the on-shell conditions), the cut becomes
\[\text{Cut}_{\text{fig.~{}5b}}=(512G_{N}^{3}E^{2}\pi^{3}) \delta(Q_{1})\delta(Q_{2})\delta(Q_{3})\delta(Q_{4})\Big{(}\frac{1}{Q_{5}} 2h_{1}(Q_{6},\omega,d)+\frac{1}{Q_{6}}h_{2}(Q_{5},\omega,d)\] \[+h_{4}(Q_{5},Q_{6},\omega,d)\Big{)}\,. \tag{112}\]
Through the \(Q_{5}^{-1}2h_{1}(Q_{6},\omega,d)\) term, we see that \(\text{Cut}_{\text{fig.~{}5b}}\) is actually double-counting its contribution with respect to \(\text{Cut}_{\text{fig.~{}5a}}\). This misalignment is exactly what is compensated for by the \(|G|\) normalization of cut matching, eq. (110). In the current case, the diagram of fig. 5b has an extra symmetry of swapping the energy sources (or \(\ell_{4}\leftrightarrow\ell_{5}\) as we actually use it) that fig. 5a does not have.
After the uncrossing, it is now also straightforward to reduce the cut. Again we only keep the reductions with propagators aligning with the \(\delta(Q_{i})\), namely those producing \(F^{(3)}(1,1,1,1,0,0)\). Doing so yields
\[\overline{\text{Cut}}_{\text{fig.~{}5b}} =(512G_{N}^{3}E^{2}\pi^{3})\omega^{6}\delta(Q_{1})\delta(Q_{2}) \delta(Q_{3})\delta(Q_{4}) \tag{113}\] \[\times\frac{(2d-3)\left(960-1696d+424d^{2}-476d^{3}+330d^{4}-39d^ {5}+53d^{6}-45d^{7}+9d^{8}\right)}{3(d-3)(d-1)^{3}d(d+1)(d+2)(3d-4)(3d-2)}\,.\]
We similarly name the \(d\) polynomial in the numerator
\[\mathcal{P}_{8}=960-1696d+424d^{2}-476d^{3}+330d^{4}-39d^{5}+53d^{6}-45d^{7}+ 9d^{8}\,. \tag{114}\]
With both cuts in hand, we can now construct the effective action. The cuts are again in one-to-one correspondence with the coefficients,
\[a_{TT,1}\,\delta(Q_{1})\delta(Q_{2})\delta(Q_{5}) =\frac{\overline{\text{Cut}}_{\text{fig.~{}5a}}}{|G_{\text{fig.~{}5a}}|}=\frac{\overline{\text{Cut}}_{\text{fig.~{}5a}}}{2} \tag{115a}\] \[a_{TT,2}\,\delta(Q_{1})\delta(Q_{2})\delta(Q_{3})\delta(Q_{4}) =\frac{\overline{\text{Cut}}_{\text{fig.~{}5b}}}{|G_{\text{fig.~{}5b}}|}=\frac{\overline{\text{Cut}}_{\text{fig.~{}5b}}}{4}\,. \tag{115b}\]
The integrals themselves are straightforward to evaluate. \(F^{(3)}(1,1,0,0,1,0)\) is just the cube of a one-propagator integral, while \(F^{(3)}(1,1,1,1,0,0)\) is evaluatable via bubble iteration. More details on the evaluations are provided in appendix B.1. We thus have all of the information required to construct the tail-of-tail effective action via eq. (109). Expanding in \(d=3+\epsilon\) and performing the CTP sum yields
\[S_{\text{TT}}=\frac{214}{525}G_{N}^{3}E^{2}\int\frac{\text{d} \omega}{2\pi} \omega^{7}\kappa_{-+}(\omega)\Bigg{\{}\frac{i}{\epsilon}+\left[\frac{3}{ 2}i\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)+\frac{3\pi\, \text{sgn}(\omega)}{2}-\frac{420}{107}i\zeta_{2}-\frac{675359}{89880}i\right]\] \[+\epsilon\Bigg{[}\pi\,\text{sgn}(\omega)\left(\frac{9}{4}\log \left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)-\frac{(352800\zeta_ {2}+675359)}{59920}\right)\] \[+i\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right) \left(\frac{9}{8}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)- \frac{(352800\zeta_{2}+675359)}{59920}\right)\] \[+\frac{4569}{856}i\zeta_{2}-\frac{1050}{107}i\zeta_{3}+\frac{1259 125247}{37749600}i\Bigg{]}+\mathcal{O}(\epsilon^{2})\Bigg{\}} \tag{116}\]
with \(\zeta_{n}\) the Riemann zeta values: \(\zeta_{2}=\frac{\pi^{2}}{6}\), \(\zeta_{3}=1.20206\dots\), \(\zeta_{4}=\frac{\pi^{4}}{90}\), etc. This result, first reported in Ref. [47], is the first time the tail-of-tail effective action has been computed in the "zero-point" formalism.
### Tail-of-Tail-of-Tail
There are few new features to the process of calculating TTT. It is primarily "more" of everything. First, we have a momentum product basis with \(\binom{4}{2}+4=10\) elements that we need to choose. We use the cubic diagrams shown in fig. 6 to define the set of propagators we select as a basis. There are significantly more "uncrossing" relabelings that can be used to return non-planar pole structures back to the basis, all of which we will need to employ when expanding higher-point graviton amplitudes. Thus we still only need a single integral family, given by
\[F^{(4)}(\lambda_{1},\dots,\lambda_{10})=\int\left(\frac{\mathrm{d}^{d}\ell}{(2 \pi)^{d}}\right)^{4}\frac{1}{\prod_{i=1}^{10}Q_{i}^{\lambda_{i}}}\,. \tag{5.16}\]
The relevant basis integrals are
\[\mathcal{I}^{(4)}=\{F^{(4)}(1,1,1,1,0,0,0,0,0,0),F^{(4)}(1,0,0,1, 1,1,1,0,0,0),\] \[\qquad\qquad F^{(4)}(1,0,1,1,1,1,0,0,0,0),F^{(4)}(1,1,0,1,0,1,1, 0,0,0)\}\,, \tag{5.17}\]
which correspond to the topologies in fig. 7. The first, third, and fourth basis integrals are all factorizable, while the second is bubble iterable. Notably, the last two integrals are actually equal: both factorize in exactly the same way, so it does not matter whether the one-propagator piece occurs first or last in the evaluation. Thus, at the level of the CTP sum, the \(-+\) orientation of one diagram exactly matches the \(+-\) of the other. We could use this symmetry to remove one of the two integrals and the corresponding cut from the basis, at which point it would _no longer carry the reflection symmetry factor_. Instead we keep both contributions, and the reflection symmetry factor removes the over-count of keeping both. This allows an explicit verification that both diagrams have identical contributions, so either approach would produce the same result.
Figure 6: The cubic TTT diagrams that are used to select a basis of momentum invariants
The cuts are still one-to-one with the basis coefficients, so we simply need the symmetry factors of each cut, which come from the number of equivalent rearrangements of the \(E\)-connected lines along with the reflection symmetry
\[|G_{\text{fig.~{}7a}}|=2\qquad|G_{\text{fig.~{}7b}}|=2\times 3!=12\qquad|G_{\text{fig.~{}7c}}| =|G_{\text{fig.~{}7d}}|=2\times 2!=4 \tag{111}\]
along with each of the reduced cuts. The corresponding cuts are all thousands of terms long prior to reduction, but after reduction each one produces a basis coefficient that is a relatively simple rational function of \(d\) and \(\omega\):
\[a_{\text{TTT,fig.~{}7a}} =\frac{-(8192G_{N}^{4}E^{3}\pi^{4})\omega^{4}(d-2)\mathcal{P}_{4} ^{3}}{8(d-3)^{3}(d-1)^{4}d^{3}(d+1)^{2}(d+2)}\,, \tag{112a}\] \[a_{\text{TTT,fig.~{}7b}} =\frac{(8192G_{N}^{4}E^{3}\pi^{4})\omega^{6}(d-2)(3d-5)\mathcal{P} _{11}}{12(d-3)^{2}(d-1)^{4}d(d+1)^{2}(d+2)(2d-3)(3d-4)(3d-2)}\,,\] (112b) \[a_{\text{TTT,fig.~{}7c}} =\frac{-(8192G_{N}^{4}E^{3}\pi^{4})\omega^{6}(2d-3)\mathcal{P}_{ 4}\mathcal{P}_{8}}{12(d-3)^{2}(d-1)^{4}d^{2}(d+1)^{2}(d+2)(3d-4)(3d-2)}\,,\] (112c) \[a_{\text{TTT,fig.~{}7d}} =\frac{-(8192G_{N}^{4}E^{3}\pi^{4})\omega^{6}(2d-3)\mathcal{P}_{ 4}\mathcal{P}_{8}}{12(d-3)^{2}(d-1)^{4}d^{2}(d+1)^{2}(d+2)(3d-4)(3d-2)}\,, \tag{112d}\]
with \(\mathcal{P}_{4}\) and \(\mathcal{P}_{8}\) from eq. (105) and eq. (106) respectively, and
\[\mathcal{P}_{11} =-3024+3720d+2980d^{2}+996d^{3}-2426d^{4}-737d^{5}+799d^{6}\] \[-284d^{7}-36d^{8}+223d^{9}-117d^{10}+18d^{11}\,. \tag{113}\]
Assembling the effective action by evaluating the integrals (see appendix B.1), combining
Figure 7: The tail of tail of tail unitarity cuts.
with the coefficients, and expanding in \(d=3+\epsilon\) results in
\[S_{\rm TTT} =-\frac{214}{525}G_{N}^{4}E^{3}\int\frac{\mathrm{d}\omega}{2\pi} \omega^{8}\kappa_{-+}(\omega)\Bigg{\{}\frac{1}{\epsilon^{2}}+\frac{1}{\epsilon }\left[2\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)-2i\pi{\rm sgn }(\omega)-\frac{252583}{29960}\right]\] \[+\Bigg{[}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi} \right)\left(2\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)- \frac{252583}{14980}\right)+\left(\frac{252583}{14980}-4\log\left(\frac{\omega ^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)\right)i\pi{\rm sgn}(\omega)\] \[\qquad-\frac{29}{2}\zeta_{2}-\frac{840}{107}\zeta_{3}+\frac{1583 459537}{37749600}\Bigg{]}\] \[+\epsilon\Bigg{[}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{ 2}\pi}\right)\bigg{\{}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi} \right)\left[\frac{4}{3}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi} \right)-\frac{252583}{14980}\right]\] \[\qquad+\left(-29\zeta_{2}-\frac{1680\zeta_{3}}{107}+\frac{158345 9537}{18874800}\right)\bigg{\}}\] \[\qquad+\frac{7324907\zeta_{2}}{59920}+\frac{14309\zeta_{3}}{642}+ \frac{420\zeta_{4}}{107}-\frac{104414536729}{634193280}\] \[\qquad-i\pi\,{\rm sgn}(\omega)\bigg{\{}2\log\left(\frac{\omega^{ 2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)\left(2\log\left(\frac{\omega^{2}e^{\gamma _{E}}}{\mu^{2}\pi}\right)-\frac{252583}{14980}\right)\] \[\qquad-13\zeta_{2}-\frac{1680}{107}\zeta_{3}+\frac{1583459537}{ 18874800}\bigg{\}}\Bigg{]}+\mathcal{O}(\epsilon^{2})\Bigg{\}}\,. \tag{5.21}\]
We first reported this result in Ref. [47], which was the first time an EFT-related approach has attempted the tail-of-tail-of-tail.
### Tail-of-Tail-of-Tail-of-Tail
Analyzing all of the patterns between the previous tails, we are able to make predictions about both which cuts we need to evaluate and what their values should be. For the cut basis, we expect diagrams involving no bulk graviton contacts, as well as all diagrams involving at least four-point bulk contacts. Thus a reasonable initial guess for the needed integrals and cuts would be those shown in figs. 8a to 8g. However, it turns out that this set is slightly incomplete. We use these initial seven diagrams to help define the integral family, with 9 of the \(\binom{5}{2}+5=15\) momentum invariants accounted for by the explicit propagators in figs. 8a to 8g,
and the remaining 6 from the internal "planar" potential-mode propagators of \({\cal M}_{6}\)6. Note that again we are including contributions that can be identified with each other, with the resulting over-count compensated by the symmetry factors below. We also note that the pole structure of fig. 8h is kinematically contained within that of fig. 8g, but we keep it as a separate cut.
Footnote 6: Since they do not appear as part of the integral basis, the choice of these propagators is for convenience when handling the intermediate steps during reduction.
Figure 8: The unitarity cut diagrams used to evaluate the T\({}^{4}\) contribution, labeled 8a–8h.
The explicit rational functions of propagators for each of the cuts are tens of thousands of terms long. The cuts for figs. 8a to 8f were all just barely computable using a personal computer; fig. 8g required the use of the Quest computing cluster at Northwestern University. For all of the cuts except fig. 8g, we can observe patterns occurring in the reduced cuts and basis coefficients. Starting with the "all-radiation" cut, fig. 8a: comparing with the analogous all-radiation cuts at the previous tail orders, we see that they follow a simple iteration, which suggests that
\[\overline{\text{Cut}}_{\text{fig.~{}8a}}=(2\pi G_{N})\left(\frac{(d+1)(d-2)}{ (d+2)(d-1)}\omega^{4}\right)\left(\frac{(-8\pi G_{N}E)\mathcal{P}_{4}}{(d-3)(d -1)d(d+1)}\right)^{4}\prod_{i=1}^{5}\delta(Q_{i})\,. \tag{100}\]
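Since the same iterated structure should, with the exponent lowered, land back on the all-radiation expressions quoted at the lower tail orders, a quick symbolic cross-check is possible. The sketch below (sympy; the naming is ours, not code released with this work) confirms that the \(n=2\) and \(n=3\) cases of the iteration reproduce the TT all-radiation reduced cut and the TTT all-radiation coefficient given earlier.

```python
import sympy as sp

d, w, G, E = sp.symbols('d omega G_N E')
P4 = 12 - 2*d + 5*d**2 - 4*d**3 + d**4

# Master formula of the all-radiation iteration (delta functions suppressed);
# n counts the insertions of the energy coupling E.
def all_rad(n):
    return (2*sp.pi*G) * ((d + 1)*(d - 2)/((d + 2)*(d - 1)) * w**4) \
           * ((-8*sp.pi*G*E)*P4/((d - 3)*(d - 1)*d*(d + 1)))**n

# TT all-radiation reduced cut quoted in the previous subsection:
cut_TT = (512*G**3*E**2*sp.pi**3)*(d - 2)*P4**2*w**4 \
         / (4*(d - 3)**2*(d - 1)**3*d**2*(d + 1)*(d + 2))

# TTT all-radiation basis coefficient quoted above:
coeff_TTT = -(8192*G**4*E**3*sp.pi**4)*w**4*(d - 2)*P4**3 \
            / (8*(d - 3)**3*(d - 1)**4*d**3*(d + 1)**2*(d + 2))

print(sp.simplify(all_rad(2) - cut_TT))     # expect 0
print(sp.simplify(all_rad(3) - coeff_TTT))  # expect 0
```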
We have verified this prediction through direct calculation of the cut. Similarly, comparing eqs. (101) and (102) to eq. (100), we are led to
\[\overline{\text{Cut}}_{\text{fig.~{}8b}} =\overline{\text{Cut}}_{\text{fig.~{}8c}}=\overline{\text{Cut} }_{\text{fig.~{}8d}}\] \[=\frac{(512G_{N}^{3}E^{2}\pi^{3})\omega^{6}(2d-3)\mathcal{P}_{8} \left(\frac{(-8\pi G_{N}E)\mathcal{P}_{4}}{(d-3)(d-1)d(d+1)}\right)^{2}}{3(d- 3)(d-1)^{3}d(d+1)(d+2)(3d-4)(3d-2)}\,, \tag{101}\]
where from now on the \(\delta(Q_{i})\) are implicit. These equalities are borne out through explicit computation. Moving on to figs. 8e and 8f, we find that their reduced cuts are
\[\overline{\text{Cut}}_{\text{fig.~{}8e}}=\overline{\text{Cut}}_{\text{fig.~ {}8f}}=\frac{-(8192G_{N}^{4}E^{3}\pi^{4})\omega^{6}(d-2)(3d-5)\mathcal{P}_{11 }\left(\frac{(-8\pi G_{N}E)\mathcal{P}_{4}}{2(d-3)(d-1)d(d+1)}\right)}{(d-3) ^{2}(d-1)^{4}d(d+1)^{2}(d+2)(2d-3)(3d-4)(3d-2)}\,, \tag{102}\]
yet again in line with an iteration, this time between the tail-type factor and a factor from \(\mathcal{M}_{5}\). Figure 8h is the last of the "easy" cuts. While we mentioned that kinematically it is contained within fig. 8g, we explicitly include it with the other iterated cuts because it also turns out to be iterative. Specifically, we find that the direct calculation of the cut yields
\[\overline{\text{Cut}}_{\text{fig.~{}8h}} =\frac{(131072G_{N}^{5}E^{4}\pi^{5})(2d-3)^{2}\omega^{8}\mathcal{ P}_{8}^{2}}{9(d-3)^{2}(d-2)(d-1)^{5}d^{2}(d+1)^{3}(d+2)(3d-4)^{2}(3d-2)^{2}} \tag{103}\] \[=(2\pi G_{N})\left(\frac{(d+1)(d-2)}{(d+2)(d-1)}\omega^{4}\right) \left(\frac{(256G_{N}^{2}E^{2}\pi^{2})(2d-3)\omega^{2}\mathcal{P}_{8}}{3(d-3) (d-2)(d-1)^{2}d(d+1)^{2}(3d-4)(3d-2)}\right)^{2}\,,\]
which is the square of eq. (100) with an additional prefactor matching the radiation-reaction term eq. (101).
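The factorization displayed in the second line above can be confirmed symbolically; a minimal sketch (sympy, our own naming) is:

```python
import sympy as sp

d, w, G, E = sp.symbols('d omega G_N E')
P8 = 960 - 1696*d + 424*d**2 - 476*d**3 + 330*d**4 - 39*d**5 + 53*d**6 - 45*d**7 + 9*d**8

# Directly computed reduced cut of fig. 8h (first line above, delta functions suppressed):
lhs = (131072*G**5*E**4*sp.pi**5)*(2*d - 3)**2*w**8*P8**2 / \
      (9*(d - 3)**2*(d - 2)*(d - 1)**5*d**2*(d + 1)**3*(d + 2)*(3*d - 4)**2*(3*d - 2)**2)

# Radiation-reaction-type prefactor times the square of the TT-type factor (second line above):
rr_prefactor = (2*sp.pi*G)*(d + 1)*(d - 2)/((d + 2)*(d - 1))*w**4
tt_factor = (256*G**2*E**2*sp.pi**2)*(2*d - 3)*w**2*P8 / \
            (3*(d - 3)*(d - 2)*(d - 1)**2*d*(d + 1)**2*(3*d - 4)*(3*d - 2))

print(sp.simplify(lhs - rr_prefactor*tt_factor**2))   # expect 0
```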
Last, we turn our attention to fig. 8g. Since we have already determined fig. 8h, we report only the part of \(\overline{\text{Cut}}_{\text{fig.~{}8g}}\) that dresses \(F^{(5)}\)(fig. 8g)
\[\overline{\text{Cut}}_{\text{fig.~{}8g}}\big{|}_{F^{(5)}(\text{fig.~{}8g})} =(131072G_{N}^{5}E^{4}\pi^{5})\omega^{2}\mathcal{P}_{22}\left(\frac{(d+1)(d- 2)}{(d+2)(d-1)}\omega^{4}\right) \tag{104}\] \[\qquad\times\frac{1}{30(d-3)^{3}(d-2)^{2}(d-1)^{4}d^{2}(d+1)^{4}(2 d-3)(3d-4)^{2}(3d-2)^{2}(5d-8)(5d-6)}\]
with
\[\mathcal{P}_{22} =291600d^{22}-5364630d^{21}+43610967d^{20}-198558153d^{19}+478961469d^ {18}\] \[\quad+14340403d^{17}-4908635433d^{16}+21042703665d^{15}-53485147433d^ {14}\] \[\quad+101573518519d^{13}-174143104426d^{12}+309327324244d^{11}-55 5238832200d^{10}\] \[\quad+944837229872d^{9}-1503589473248d^{8}+2203112130496d^{7}-29116 08239232d^{6}\] \[\quad+3432661203456d^{5}-3454174264320d^{4}+2688616654848d^{3}-143 0856658944d^{2}\] \[\quad+447389982720d-60914073600\,. \tag{112}\]
The final piece remaining to assemble the effective action is to enumerate the symmetry factors. Since we are using the over-complete integral basis, all diagrams come with the reflection symmetry factor in addition to the permutation factor for the \(E\) coupling
\[|G_{\text{fig.~{}8a}}| =2\qquad|G_{\text{fig.~{}8b}}|=|G_{\text{fig.~{}8c}}|=|G_{\text{ fig.~{}8d}}|=2\times 2!\] \[|G_{\text{fig.~{}8e}}| =|G_{\text{fig.~{}8f}}|=2\times 3!\qquad|G_{\text{fig.~{}8g}}|=2 \times 4!\] \[|G_{\text{fig.~{}8h}}| =2\times(2!\times 2!)\,. \tag{12}\]
The integrals for figs. 8a to 8g are all straightforward to evaluate using the same techniques as the previous calculations. Unfortunately, for fig. 8h we must resort to numerical evaluation and reconstruction, discussed in appendix B.2. Combining all of the above data, expanding in \(d=3+\epsilon\) and evaluating the CTP sums results in the effective action contribution
\[S_{\text{TTTT}}=-\frac{5}{2}\frac{214^{2}}{525^{2}}G_{N}^{5}E^{4} \int\frac{\text{d}\omega}{2\pi}\omega^{9}\kappa_{-+}(\omega) \Bigg{\{}\frac{i}{\epsilon^{2}}+\frac{1}{\epsilon}\Bigg{[}\frac{5}{2}i\log \left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)+\frac{5\pi\operatorname {sgn}(\omega)}{2} \tag{13}\] \[\quad-i\left(\frac{840}{107}\zeta_{2}+\frac{5249287}{343470} \right)\Bigg{]}\] \[+\Bigg{[}\frac{5}{2}i\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{ \mu^{2}\pi}\right)\left(\frac{5}{4}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{ \mu^{2}\pi}\right)-\frac{840}{107}\zeta_{2}-\frac{5249287}{343470}\right)\] \[\quad+\frac{5}{2}\pi\operatorname{sgn}(\omega)\left(\frac{5}{2} \log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)-\frac{840}{107} \zeta_{2}-\frac{5249287\pi\operatorname{sgn}(\omega)}{343470}\right)\] \[\quad+\frac{5541865i\zeta_{2}}{91592}-\frac{2940}{107}i\zeta_{3}- \frac{176400i\zeta_{4}}{11449}+\frac{90973869743i}{673201200}\Bigg{]}+\mathcal{ O}(\epsilon^{1})\Bigg{\}}\,.\]
This is the first time, to our knowledge, that the T\({}^{4}\) contribution has been derived in terms of generic quadrupoles.
## 6 Analysis of Dissipative Sector
The effective actions derived in sections 4 and 5 contain divergences in the dimensional regularization parameter \(\epsilon\). The divergences in the dissipative sector, namely of the part of the action that is odd-in-\(\omega\), can be handled completely within the tower of tails. This dissipative analysis takes place purely within the "binary-as-composite" EFT at the radiation
scale. As mentioned in section 2, the UV (small-scale) theory that defines the quadrupoles, namely the description of the binary dynamics at the orbital scale, allows one to directly compute at sufficiently high PN order the renormalization of the multipole couplings in terms of the IR divergences of the small-scale theory. To highlight that we are focusing on the purely dissipative contributions, we will work at the level of the dissipated energy and energy spectra, \(\Delta E\) and \(\frac{\mathrm{d}E}{\mathrm{d}\omega}\), respectively. We will then proceed to demonstrate the renormalization of these dissipative contributions up to new subleading orders. We take inspiration from Ref. [26], which is essentially textbook QFT renormalization.
### Energy Loss in the CTP Formalism
The CTP formalism provides a generic extension of Noether's theorem which accounts for the effects of the non-conservative dynamics on the total energy of the accessible degrees of freedom. Specifically, we have [54]:
\[\frac{\mathrm{d}E^{\mathrm{CTP}}}{\mathrm{d}t}=-\frac{\partial L}{\partial t}+\dot{q}^{I}\left[\frac{\partial K}{\partial q_{-}^{I}(t)}\right]_{\mathrm{PL}}+\ddot{q}^{I}\left[\frac{\partial K}{\partial\dot{q}_{-}^{I}(t)}\right]_{\mathrm{PL}}+\big(\text{higher-derivative terms}\big) \tag{6.1}\]
for \(E^{\mathrm{CTP}}\) the energy of the accessible degrees of freedom \(q\). To apply this formula in the case of gravitational tails, we need to analyze the splitting of the effective action into its conservative and non-conservative parts. Then we must transform the time-domain expression of eq. (6.1) into the frequency domain.
Splitting the full CTP Lagrangian into the conservative and non-conservative parts only requires knowledge about the functional structure of the generalized coordinates. In the current case of tails, the quadrupoles themselves, \(I_{\pm}(\omega)\) (suppressing \(SO(3)\) tensor indices in the following discussion), are the relevant generalized coordinates. We have seen in the computations in sections 4 and 5 that the CTP effective actions for the tails take the form:
\[S_{\mathrm{CTP}}=\int_{-\infty}^{\infty}\mathrm{d}\omega f(\omega)\kappa_{- +}(\omega), \tag{101}\]
in which \(\omega\) is purely real, but the function \(f(\omega)\) may be complex. The conservative part of the CTP effective Lagrangian is defined as the piece expressible as the difference between two distinct histories, which in this case means the part symmetric in the \((-,+)\) variables:
\[\kappa_{\mathrm{C}}(\omega)\equiv\kappa_{11}(\omega)-\kappa_{22}(\omega) \sim\kappa_{-+}(\omega)+\kappa_{+-}(\omega). \tag{102}\]
We then adopt an orthogonal change of basis to define:
\[\kappa_{\mathrm{NC}}(\omega)\equiv\kappa_{-+}(\omega)-\kappa_{+-}(\omega), \tag{103}\]
so that the CTP effective action becomes:
\[S_{\mathrm{CTP}}=\int_{-\infty}^{\infty}\mathrm{d}\omega\left[\frac{1}{2}f( \omega)(\kappa_{\mathrm{C}}+\kappa_{\mathrm{NC}})\right]. \tag{104}\]
Finally, we can use the parity properties of the integral, and \(\kappa_{-+}(-\omega)\to\kappa_{+-}(\omega)\) under \(\omega\leftrightarrow-\omega\) to write:
\[S_{\mathrm{CTP}}=\frac{1}{2}\int_{-\infty}^{\infty}\mathrm{d}\omega\Big{[}f_ {\mathrm{even}}(\omega)\kappa_{\mathrm{C}}(\omega)+f_{\mathrm{odd}}(\omega) \kappa_{\mathrm{NC}}(\omega)\Big{]}\,. \tag{105}\]
We then identify in eq. (6.1) \(L(\omega)=\frac{1}{2}f_{\rm even}(\omega)\kappa_{\rm C}(\omega)\), and \(K(\omega)=\frac{1}{2}f_{\rm odd}(\omega)\kappa_{\rm NC}(\omega)\). We assume that the conservative piece does not have an explicit time dependence, and so it does not contribute to eq. (6.1).
In order to apply eq. (6.1) to the Fourier-space non-conservative potential, we need to switch from the frequency dependence in the CTP quadrupole, \(I_{-}(\omega)\), back to time dependence via the Fourier transform
\[I_{a}(\omega)=\int{\rm d}te^{-i\omega t}I_{a}(t),\qquad I_{a}(t)=\frac{1}{2\pi} \int{\rm d}\omega e^{i\omega t}I_{a}(\omega)\,. \tag{6.7}\]
This allows us to define \(K(t)\) as:
\[K(t)=\frac{1}{2}\int{\rm d}\omega f_{\rm odd}(\omega)\big{[}I_{+}(\omega)e^{i \omega t}-I_{+}(-\omega)e^{-i\omega t}\big{]}I_{-}(t)\,, \tag{6.8}\]
and in turn the needed pieces for eq. (6.1) read:
\[\bigg{[}\frac{\partial K}{\partial I_{-}(t)}\bigg{]}_{\rm PL} =\frac{1}{2}\int{\rm d}\omega f_{\rm odd}(\omega)\left(I(\omega) e^{i\omega t}-I(-\omega)e^{-i\omega t}\right), \tag{6.9}\] \[\frac{{\rm d}I(t)}{{\rm d}t} =\frac{1}{2\pi}\frac{{\rm d}}{{\rm d}t}\int{\rm d}\omega^{\prime }e^{i\omega^{\prime}t}I(\omega^{\prime})=\frac{1}{2\pi}\int{\rm d}\omega^{ \prime}(i\omega^{\prime})e^{i\omega^{\prime}t}I(\omega^{\prime})\,. \tag{6.10}\]
Importantly, this step allows us to ignore terms that contain \(I_{+}(-\omega)I_{+}(\omega)\) and \(I_{-}(-\omega)I_{-}(\omega)\), since the first carries no dependence on \(I_{-}\), and the second vanishes in the physical limit \([\ldots]_{\rm PL}\) after taking the derivative.
We then assemble
\[\Delta E =\int{\rm d}t\frac{{\rm d}E^{\rm CTP}}{{\rm d}t}\] \[=\frac{1}{2}\frac{1}{2\pi}\int{\rm d}t\,{\rm d}\omega\,{\rm d} \omega^{\prime}(i\omega^{\prime})e^{i\omega^{\prime}t}I(\omega^{\prime})f_{ \rm odd}(\omega)\big{[}I(\omega)e^{i\omega t}-I(-\omega)e^{-i\omega t}\big{]}\,. \tag{6.11}\]
Resolving the Fourier transforms of the delta functions that arise:
\[\int{\rm d}te^{it(\omega+\omega^{\prime})}=2\pi\delta(\omega+\omega^{\prime}) \,,\quad\int{\rm d}te^{it(\omega-\omega^{\prime})}=2\pi\delta(\omega-\omega^{ \prime})\,, \tag{6.12}\]
leads to
\[\Delta E=\int{\rm d}\omega\Big{[}(-i\omega)f_{\rm odd}(\omega)\kappa(\omega) \Big{]}\,. \tag{6.13}\]
We reiterate that this analysis accounts for the energy change of the binary system, and that the _radiated energy_ carried by the gravitational field must, by conservation of energy, be opposite.
### Radiated Energy from Tails
We begin by applying energy loss formula, eq. (6.13), to the CTP effective actions derived in sections 4 and 5. Note that since we have not performed the renormalization at the level of the effective action, these initial energy contributions will still carry dimensional
regularization divergences. We will perform the renormalization at the level of the energy loss in the following subsections.
We begin with the radiation reaction term, eq. (104). Applying eq. (103), we can easily extract the energy loss of the quadrupoles into the gravitational field
\[\left(\Delta E\right)_{\text{RR}}=-\frac{G_{N}}{5\pi}\int_{-\infty}^{\infty} \text{d}\omega\kappa(\omega)\omega^{6}\left[1-\frac{\epsilon}{2}\left(\frac{9} {10}-\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)\right)+ \mathcal{O}(\epsilon^{2})\right]\,. \tag{105}\]
By energy balance, this must be opposite to the energy carried away by the gravitational field in the form of gravitational waves. The finite part of this energy loss is the Fourier transform of the well-known Einstein quadrupole radiation formula. We present the \(\mathcal{O}(\epsilon^{1})\) term for later use in renormalization. Similarly, from the tail effective action, eq. (103), we compute the energy loss contribution
\[\left(\Delta E\right)_{\text{T}}=-\frac{2}{5}G_{N}^{2}E\int_{-\infty}^{\infty} \text{d}\omega\kappa(\omega)\omega^{7}\left[1+\epsilon\left(\log\left(\frac{ \omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)-\frac{41}{30}\right)+\mathcal{O} (\epsilon^{2})\right]\,. \tag{106}\]
This correction is in agreement with previous derivations [24; 25; 26; 27; 28; 145], again up to sign conventions.
The energy loss induced from the tail-of-tail comes from eq. (106), giving
\[\left(\Delta E\right)_{\text{TT}}=\frac{214G_{N}^{3}E^{2}}{525\pi} \int_{-\infty}^{\infty}\text{d}\omega\kappa(\omega)\omega^{8}\Bigg{\{} \frac{1}{\epsilon}+\left[\frac{3}{2}\log\left(\frac{\omega^{2}e^{\gamma_{E}} }{\mu^{2}\pi}\right)-\frac{420\zeta_{2}}{107}-\frac{675359}{89880}\right]\] \[\qquad+\epsilon\Bigg{[}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{ \mu^{2}\pi}\right)\left(\frac{9}{8}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{ \mu^{2}\pi}\right)-\frac{(352800\zeta_{2}+675359)}{59920}\right)\] \[\qquad\qquad+\frac{4569\zeta_{2}}{856}-\frac{1050\zeta_{3}}{107} +\frac{1259125247}{37749600}\Bigg{]}+\mathcal{O}(\epsilon^{2})\Bigg{\}}\,. \tag{107}\]
From \(\text{T}^{3}\), eq. (104), we find
\[\left(\Delta E\right)_{\text{TTT}}=\frac{428}{525}G_{N}^{4}E^{3} \int_{-\infty}^{\infty}\text{d}\omega\kappa(\omega)\omega^{9} \Bigg{\{}\frac{1}{\epsilon}+\left[2\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{ \mu^{2}\pi}\right)-\frac{252583}{29960}\right]\] \[\qquad\qquad+\epsilon\Bigg{[}\log\left(\frac{\omega^{2}e^{\gamma_ {E}}}{\mu^{2}\pi}\right)\left(2\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^ {2}\pi}\right)-\frac{252583}{14980}\right)\] \[\qquad\qquad-\frac{13}{2}\zeta_{2}-\frac{840}{107}\zeta_{3}+ \frac{1583459537}{37749600}\Bigg{]}+\mathcal{O}(\epsilon^{2})\Bigg{\}}\,. \tag{108}\]
Finally, the \(\text{T}^{4}\) contribution to the energy loss is computed from eq. (105) as
\[\left(\Delta E\right)_{\text{TTTT}}=-\frac{5}{2}\frac{214^{2}}{525^{2}\pi}G_{N}^{5}E^{4}\int_{-\infty}^{\infty}\text{d}\omega\kappa(\omega)\omega^{10}\Bigg\{\frac{1}{\epsilon^{2}}+\frac{1}{\epsilon}\Bigg[\frac{5}{2}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)-\frac{840}{107}\zeta_{2}-\frac{5249287}{343470}\Bigg]\] \[\qquad\qquad+\Bigg[\frac{5}{2}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)\left(\frac{5}{4}\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)-\frac{840}{107}\zeta_{2}-\frac{5249287}{343470}\right)\] \[\qquad\qquad+\frac{5541865\zeta_{2}}{91592}-\frac{2940\zeta_{3}}{107}-\frac{176400\zeta_{4}}{11449}+\frac{90973869743}{673201200}\Bigg]+\mathcal{O}(\epsilon^{1})\Bigg\}\,. \tag{109}\]
With the emitted energy contributions computed, we can begin renormalization analysis.
### Going to Subleading RG Flow
The first appearance of an \(\epsilon\) divergence in the dissipated energy occurs in the tail-of-tail as a simple pole
\[(\Delta E)_{\rm TT}\Big{|}_{\epsilon^{-1}}=\frac{2}{3\pi\epsilon}\times\frac{107 }{175}G_{N}^{3}E^{2}\int_{-\infty}^{\infty}\mathrm{d}\omega\,\omega^{8}\kappa( \omega)\,. \tag{111}\]
Since no pole appears in the dissipative sector at the tail level, any counterterms and the renormalization must carry a factor of \((G_{N}E)^{2}\) to skip tail orders. With this in mind, we introduce a renormalized coupling to the quadrupoles via:
\[\kappa(\omega)\to\kappa^{\prime}(\omega)\equiv\kappa(\omega,\mu)\left(1+\frac{ (G_{N}E)^{2}X(\omega)}{\epsilon}+\dots\right)\,, \tag{112}\]
where \(X\) is an unknown, independent of \((G_{N}E)^{2}\) and \(\epsilon\), \(\mu\) is the renormalization scale of the logs, and the ellipsis indicates higher-order terms in \(G_{N}E\). To find \(X\), we substitute eq. (112) into eq. (108), and demand that the _total_ energy dissipation:
\[(\overline{\Delta E})_{\rm TT}\equiv\left[\left(\Delta E\right)_{\rm RR}+ \left(\Delta E\right)_{\rm T}+\left(\Delta E\right)_{\rm TT}\right]\Big{|}_{ \kappa\to\kappa^{\prime}} \tag{113}\]
is free from \(\epsilon\) poles up to the TT at order \(G_{N}^{3}\). Notably, since the pole in \(\kappa^{\prime}\) carries a factor of \(G_{N}^{2}\), the \(\epsilon^{-1}\) part will only contribute to \((\overline{\Delta E})_{\rm TT}\) at the appropriate order in \(G_{N}\) via \((\Delta E)_{\rm RR}\), as \(G_{N}^{2}(\left(\Delta E\right)_{\rm T}+\left(\Delta E\right)_{\rm TT})\) is beyond \(\mathcal{O}(G_{N}^{3})\). This pole cancellation requires:
\[X(\omega)=\frac{214}{105}\omega^{2}\Rightarrow\kappa^{\prime}(\omega)\equiv \kappa(\omega,\mu)\left(1+\frac{214\omega^{2}(G_{N}E)^{2}}{105\epsilon}+\dots \right)\,. \tag{114}\]
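The pole-cancellation condition can be solved for \(X(\omega)\) mechanically. A minimal sketch (sympy; it keeps only the pieces that feed the \(1/\epsilon\) pole at \(\mathcal{O}(G_{N}^{3})\), namely the finite radiation-reaction term multiplied by the counterterm and the tail-of-tail pole) is:

```python
import sympy as sp

w, G, E, eps = sp.symbols('omega G_N E epsilon', positive=True)
X = sp.Symbol('X')   # counterterm coefficient X(omega) to be determined

# Per-unit-kappa integrands: finite radiation-reaction piece and the tail-of-tail pole.
dE_RR_finite = -G/(5*sp.pi)*w**6
dE_TT_pole = sp.Rational(214, 525)/sp.pi * G**3 * E**2 * w**8 / eps

counterterm = (G*E)**2 * X / eps   # leading correction inside the renormalized coupling kappa'

# Coefficient of 1/eps at order G_N^3 must vanish:
pole = sp.simplify((dE_RR_finite*counterterm + dE_TT_pole)*eps)
print(sp.solve(sp.Eq(pole, 0), X))   # expect [214*omega**2/105]
```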
Moving on to the tail-of-tail-of-tail (T\({}^{3}\)), we might suspect a new term in \(\kappa^{\prime}\) carrying \(G_{N}^{3}\). However, performing the explicit calculation using only eq. (114), we find:
\[(\overline{\Delta E})_{\rm TTT} =\int_{-\infty}^{\infty}\mathrm{d}\omega\kappa(\omega,\mu)\Bigg{[} -\frac{\omega^{6}G_{N}}{5\pi}-\frac{2}{5}\omega^{7}G_{N}^{2}E\] \[+\frac{1}{\pi}G_{N}^{3}E^{2}\omega^{8}\left(\frac{214}{525}\log \left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)-\frac{634913}{220500} -\frac{8\zeta_{2}}{5}\right)\] \[+G_{N}^{4}E^{3}\omega^{9}\left(\frac{428}{525}\log\left(\frac{ \omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)-\frac{634913}{110250}\right)+ \mathcal{O}(G_{N}^{5})\Bigg{]}\,, \tag{115}\]
which is also completely free of \(\epsilon\) poles. This means there is no term in the renormalized coupling, eq. (112), of \(\mathcal{O}(G_{N}^{3})\). Further terms in eq. (112) must then contribute only at \(G_{N}^{4}\), and thus enter via T\({}^{4}\).
With a renormalized coupling comes a renormalization-group (RG) flow. Using counterterm analysis to determine RG flow equations in gravity is ambiguous due to the presence of topological operators like the Gauss-Bonnet term [146]. Instead, we will study the renormalization-scale dependence of the observable \((\overline{\Delta E})_{\rm TTT}\) directly. All of the logarithms in \((\overline{\Delta E})_{\rm TTT}\) carry a renormalization scale \(\mu\), which comes from compensating for the misalignment between the mass dimension of \(G_{N}\) and the required mass dimension of the coupling constant in a dimensionally-regulated action. We also introduced a scale
dependence in \(\kappa\) via the renormalization of the source coupling for similar reasons. The RG flow then follows from demanding that \((\overline{\Delta E})_{\rm TTT}\), a perturbative observable, must be invariant under shifts of the scale:
\[\frac{\mathrm{d}}{\mathrm{d}\mu}(\overline{\Delta E})_{\rm TTT}=0+{\cal O}(G_{N} ^{5})\,, \tag{108}\]
which leads to the RG equation:
\[\frac{\mathrm{d}}{\mathrm{d}\log\mu}\kappa(\omega,\mu)=-\frac{428 }{105}(G_{N}E\omega)^{2}\kappa(\omega,\mu)+{\cal O}(G_{N}^{4})\,, \tag{109}\] \[\Rightarrow\kappa(\omega,\mu)=\left(\frac{\mu}{\mu_{0}}\right)^{ -\frac{428}{105}(G_{N}E\omega)^{2}}\kappa(\omega,\mu_{0})\,, \tag{110}\]
where \(\mu_{0}\) is an arbitrary but fixed reference scale at which \(\kappa\) is measured (or otherwise known, e.g. through a matching calculation with the small-scale theory). This is in exact agreement with the RG flow originally found by Goldberger and Ross [26], which can be seen by substituting in \(\kappa(\omega,\mu)=I_{ij}(-\omega,\mu)I^{ij}(\omega,\mu)\), which introduces a factor of 2 on the LHS of eq. (109) but not on the RHS.
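That the power-law solution indeed renders the renormalized energy loss \(\mu\)-independent through \(\mathcal{O}(G_{N}^{3})\) can be checked directly. A minimal sketch (sympy; it retains only the \(\mu\)-dependent pieces quoted above, with \(c\) standing in for the \(\mu\)-independent content \(\omega^{2}e^{\gamma_{E}}/\pi\) of the logarithm's argument) is:

```python
import sympy as sp

w, G, E, mu, mu0, k0, c = sp.symbols('omega G_N E mu mu_0 kappa_0 c', positive=True)

# Claimed power-law solution of the RG flow for the quadrupole coupling:
kappa = (mu/mu0)**(-sp.Rational(428, 105)*(G*E*w)**2) * k0

# mu-dependent pieces of the renormalized energy-loss integrand through O(G_N^3);
# c stands in for the mu-independent content omega^2 e^gamma_E / pi of the logarithm.
L = sp.log(c/mu**2)
integrand = -w**6*G/(5*sp.pi)*kappa + sp.Rational(214, 525)/sp.pi*G**3*E**2*w**8*kappa*L

ddlogmu = mu*sp.diff(integrand, mu)
print(sp.simplify(sp.series(ddlogmu, G, 0, 4).removeO()))   # expect 0 through O(G_N^3)
```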
With the leading dissipative renormalization dealt with, we now turn to the subleading corrections induced by the TTTT terms. As we saw in eq. (100), the unrenormalized \((\Delta E)_{\rm TTTT}\) has both a double pole, \(\epsilon^{-2}\), as well as a single pole. Since we successfully removed the divergence in the construction of \((\overline{\Delta E})_{\rm TTT}\) with only a \(G_{N}^{2}\) counterterm, we know that one of the new terms in \(\kappa^{\prime}\) must be of the form \(Y(G_{N}E\omega)^{4}\epsilon^{-2}\). This term will bring the \({\cal O}(G_{N})\) term from eqs. (102) and (111) up to \(G_{N}^{5}\) while shifting the \(\epsilon^{0}\) term into a double pole. We find that after adding the new term to \(\kappa^{\prime}\), the coefficient of the double pole of \((\overline{\Delta E})_{\rm TTTT}\) is given by:
\[(\overline{\Delta E})_{\rm TTTT}\Big{|}_{\epsilon^{-2}}=\frac{G_{N}^{5}E^{4}} {5\pi}\int_{-\infty}^{\infty}\mathrm{d}\omega\kappa(\omega,\mu)\omega^{10} \left[\underbrace{-Y}_{\rm RR}+\underbrace{\frac{45796}{11025}}_{\rm TT}- \underbrace{\frac{22898}{11025}}_{\rm TTTT}\right]\,. \tag{111}\]
Cancellation of this pole then requires:
\[Y=\frac{22898}{11025}=2\frac{107^{2}}{105^{2}}\,, \tag{112}\]
in accordance with the expected iteration of the previous counterterm.
However, the iterated counterterm is not the only required correction to \(\kappa^{\prime}\). The single pole has been altered, but not completely removed:
\[(\overline{\Delta E})_{\rm TTTT}\Big{|}_{\epsilon^{-1}}=\frac{G_{N}^{5}E^{4}} {5\pi}\times\frac{1695233}{105^{3}}\int_{-\infty}^{\infty}\mathrm{d}\omega \kappa(\omega,\mu)\omega^{10}\,. \tag{113}\]
Removing this pole necessitates a second new term in \(\kappa^{\prime}\) of the form \(Z(G_{N}E\omega)^{4}\epsilon^{-1}\). Incorporating this new correction to \((\overline{\Delta E})_{\rm TTTT}\) will allow the finite piece of RR, eq. (102),
to also contribute an \(\epsilon^{-1}\) pole at \(G_{N}^{5}\). We then find that \(Z=\frac{1695233}{105^{3}}\). Thus, with two new terms, \(\kappa^{\prime}\) becomes:
\[\kappa^{\prime}(\omega)\equiv\kappa(\omega,\mu)\left(1+2\left(\frac{107}{105} \frac{(G_{N}E\omega)^{2}}{\epsilon}+\frac{107^{2}}{105^{2}}\frac{(G_{N}E\omega )^{4}}{\epsilon^{2}}\right)+\frac{1695233}{105^{3}}\frac{(G_{N}E\omega)^{4}}{ \epsilon}+\ldots\right)\,, \tag{111}\]
and the inclusive energy loss is:
\[(\overline{\Delta E})_{\rm TTTT} =(\overline{\Delta E})_{\rm TTT}+\frac{G_{N}^{5}E^{4}}{5\pi}\int_{-\infty}^{\infty}{\rm d}\omega\kappa(\omega,\mu)\omega^{10}\Bigg\{32\zeta_{4}+2^{4}\frac{107}{105}\zeta_{3}-\frac{1132438}{105^{2}}\zeta_{2}\] \[\quad-\frac{275977249}{1944810}-2\frac{107^{2}}{105^{2}}\log^{2}\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)\] \[\quad+\log\left(\frac{\omega^{2}e^{\gamma_{E}}}{\mu^{2}\pi}\right)\left[\frac{8301847}{257250}+2^{4}\frac{107}{105}\zeta_{2}\right]\Bigg\}\,. \tag{112}\]
Since the TTTT energy loss required a new counterterm (and has a subleading log), there will be a new term in the RG flow associated to it. We again simply demand that:
\[\frac{{\rm d}}{{\rm d}\mu}(\overline{\Delta E})_{\rm TTTT}=0+\mathcal{O}(G_{N} ^{6})\,, \tag{113}\]
which leads to the RG equation:
\[\frac{{\rm d}}{{\rm d}\log\mu}\kappa(\omega,\mu)=-(2G_{N}E\omega)^{2}\kappa( \omega,\mu)\left(\frac{107}{105}+\frac{1695233}{105^{3}}(G_{N}E\omega)^{2} \right)+\mathcal{O}(G_{N}^{5})\,. \tag{114}\]
This new RG equation necessarily includes the leading RG flow, and now allows for prediction of subleading logs at all higher-order tails.
## 7 Post-Newtonian and Self-Force Results
In this section, we present comparisons of our energy loss with those derived via traditional GR results in PN and self-force theory for further checks that go beyond the tail. Observable results from these GR approaches are presented in a PN expansion, eventually specified to a quasi-circular orbit. Our results are also in the PN regime due to the multipole expansion of the inspiral. Thus, matching against the known PN results primarily entails inserting a PN-expanded quadrupole expression, and then aligning related scheme choices.
### PN Mapping to the Binary Inspiral
We will focus on the leading PN expansion, which for a binary system is simply a circular orbit. WLOG, we take the circular orbit to be in the \(x\)-\(y\) plane, in which the frequency-space quadrupole components can be chosen as
\[I_{xx}(\omega) =I_{yy}(\omega)=\nu Er^{2}\pi/2(\delta(\omega-2\Omega)+\delta( \omega+2\Omega))\,, \tag{115}\] \[I_{xy}(\omega) =I_{yx}(\omega)=-i\nu Er^{2}\frac{\pi}{2}(\delta(\omega-2\Omega)- \delta(\omega+2\Omega))\,,\] \[I_{zj}(\omega) =0\,,\]
with \(\omega\) the radiation frequency, \(\Omega>0\) the orbital frequency, \(r\) the radius of the circular orbit, \(\nu\) the symmetric mass ratio of the binary, and \(E\) the energy of the binary. From these expressions, we easily arrive at the quadrupole-quadrupole "two-point" contraction:
\[\kappa(\omega)=E^{2}\pi^{2}r^{4}\nu^{2}\left(\delta(\omega-2\Omega)^{2}+\delta( \omega+2\Omega)^{2}\right)\,. \tag{110}\]
Upon integration in \(\omega\), the \(\delta(\omega\pm 2\Omega)^{2}\) will leave behind a \(\delta(0)\). These are resolved by noting that the PN energy loss for the binary is actually computed as the time-averaged energy loss of the system over a sufficiently long period of time [13]. Invoking one of the definitions of the frequency space \(\delta(0)\),
\[\delta(0)\equiv\lim_{T\to\infty}\frac{1}{2\pi}\int_{-T/2}^{T/2}e^{-it0}\,{\rm d }t=\lim_{T\to\infty}\frac{T}{2\pi}\,, \tag{111}\]
we can formally align the time-averaging interval with the \(T\) in eq. (111), canceling said \(T\) dependence from the final result. The net result is that we can effectively use
\[\kappa(\omega)=\frac{E^{2}\pi r^{4}\nu^{2}}{2}\left(\delta(\omega-2\Omega)+ \delta(\omega+2\Omega)\right) \tag{112}\]
as the quadrupole contraction in order to match against the PN results.
We will also eventually need Kepler's law, \(GE/r=(r\Omega)^{2}\), to rewrite expressions in terms of the PN parameter \(x\equiv(GE\Omega)^{2/3}\). All of the above are leading-order expressions in the PN expansion, which have subleading corrections that we ignore here. These leading expressions are sufficient for us to verify critical features of our results: the leading logs and leading transcendental numbers.
### Direct Comparisons
We begin by looking at the radiation-reaction and the tail. Since these actions contain no divergences or logarithms in their dissipative part, there is no subtlety about aligning choices of logarithm scales. Thus, we simply insert eq. (112) into eqs. (109) and (110) which gives:
\[(\overline{\Delta E})^{\text{LO PN}}_{G_{N},\,G_{N}^{2}}=-\frac{32x^{5}\nu^{ 2}}{5G_{N}}-\frac{256\pi x^{13/2}\nu^{2}}{5G_{N}}+\ldots \tag{113}\]
in agreement (up to the energy balance sign) with the long-known results of Blanchet and Damour [13; 24; 145].
For higher-order tails, we need to carefully process the total PN energies, including the renormalization of the quadrupoles, and align subtraction schemes. The RG flows from section 6.3 trade out dependence on \(\log(\mu)\), the renormalization flow parameter, for \(\log(\mu_{0})\), the scale at which we perform EFT matching to the short-scale theory, the orbital separation \(r\) in our case. Thus we take \(\mu_{0}^{-1}\to r\). A proportionality constant is left undetermined, and amounts to different choices of subtraction scheme, which we consider next.
We must align subtraction schemes between our results and traditional GR literature. In section 6, the counterterms we introduced only absorbed the divergences, and not
any additional constants. Thus we are technically working in a pure minimal subtraction scheme. However, we have explicitly packaged the logarithms into \(\log\frac{\omega^{2}\exp(\gamma_{E})}{\mu^{2}\pi}\), which makes it easy to switch to \(\overline{\text{MS}}\) subtraction by sending \(\mu^{2}\to\frac{\exp(\gamma_{E})}{4\pi}\mu^{2}\), or to other nonstandard schemes via similar replacements. For the TT and TTT, we compare against the work of Blanchet et al [13, 30, 31, 147, 148], whose results include \(\gamma_{E}\) but not \(\log\pi\), suggesting that their implicit renormalization scheme is not equivalent to either \(\text{MS}\) or \(\overline{\text{MS}}\). We will thus adopt a generic subtraction via \(\mu^{2}\to A\pi^{-1}\mu^{2}\), and determine the proper choice of \(A\) to align schemes.
Pushing our \(\mathcal{O}(G_{N}^{3})\) term from eq. (6.23) through the transformation to PN variables, including the generic subtraction and renormalization considerations, we arrive at:
\[(\overline{\Delta E})^{\text{LO PN}}_{G_{N}^{3}}\to-\frac{32}{5G_{N}}\nu^{2}x^ {8}\left[\frac{634913}{11025}+32\zeta_{2}-\frac{856}{105}\left(\log x+2\log 2+ \gamma_{E}-\log A\right)\right]\,. \tag{7.6}\]
Comparing with known PN results [13, 30, 147, 148, 31]
\[\mathcal{F}_{\nu^{2}x^{8}}=\frac{32}{5G_{N}}\nu^{2}x^{8}\left[\frac{6643739519 }{69854400}+32\zeta_{2}-\frac{856}{105}\left(\log x+4\log 2+2\gamma_{E} \right)\right]\,. \tag{7.7}\]
The \(\log x\) terms match exactly, up to the energy balance sign. Since all of the transcendental numbers are newly-appearing at this order, we expect to also match them exactly up to choice of subtraction scheme and the energy balance sign. We see that choosing a subtraction scheme with \(A=(4\exp(\gamma_{E}))^{-1}\) leads to the desired matching. Note that this subtraction scheme is equivalent to using a dimensionally-regulated gravitational constant:
\[\mu^{2}\to\frac{\mu^{2}}{4\pi e^{\gamma_{E}}}\Rightarrow G_{N}\to G_{d}\equiv G _{N}\left(\frac{\sqrt{4\pi e^{\gamma_{E}}}}{\mu}\right)^{d-3}\,, \tag{7.8}\]
which is also motivated by PN calculations in the conservative sector, see for instance Refs. [149, 78]. Matching the rational number would require inserting the higher-order PN terms of \(\kappa(\omega)\) (as well as \(E\)) into the RR contribution, which would allow the rational terms at \(\mathcal{O}(G_{N})\) to contribute at \(\mathcal{O}(x^{8})\).
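The bookkeeping of the logarithm's argument under this scheme choice is easy to verify symbolically. A minimal sketch (sympy; it assumes, as in the text, \(\mu_{0}^{-1}\to r\), \(\omega=2\Omega\), and Kepler's law in the form \(\Omega^{2}r^{2}=x\)) is:

```python
import sympy as sp

x, A, r, Om, gamma_E = sp.symbols('x A r Omega gamma_E', positive=True)

# Tail logarithm log(omega^2 e^gamma_E/(mu^2 pi)) after the generic subtraction
# mu^2 -> A mu^2/pi, evaluated at mu = mu0 = 1/r and omega = 2*Omega:
log_tail = sp.log((2*Om)**2 * sp.exp(gamma_E) * r**2 / A)

# Kepler's law, G_N E = r^3 Omega^2, gives Omega^2 r^2 = x with x = (G_N E Omega)^(2/3):
log_tail = log_tail.subs(Om, sp.sqrt(x)/r)

# Structure appearing in the generic-scheme result:
target_generic = sp.log(x) + 2*sp.log(2) + gamma_E - sp.log(A)
print(sp.simplify(sp.logcombine(log_tail - target_generic, force=True)))   # expect 0

# Scheme choice A = (4 e^gamma_E)^(-1) reproduces the PN-literature combination:
log_PN = log_tail.subs(A, sp.Rational(1, 4)*sp.exp(-gamma_E))
target_PN = sp.log(x) + 4*sp.log(2) + 2*gamma_E
print(sp.simplify(sp.logcombine(log_PN - target_PN, force=True)))          # expect 0
```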
Similarly, we extract the PN terms coming from our \(\mathcal{O}(G_{N}^{4})\) contribution to eq. (6.23) using the above fixed subtraction scheme:
\[(\overline{\Delta E})^{\text{LO PN}}_{G_{N}^{4}}\to-\frac{32}{5G_{N}}\nu^{2}x^ {19/2}4\pi\left[\frac{634913}{11025}-\frac{856}{105}\left(\log x+4\log 2+2 \gamma_{E}\right)\right]\,, \tag{7.9}\]
and compare against the known PN result [31]:
\[\mathcal{F}_{\nu^{2}x^{19/2}}=\frac{32}{5G_{N}}\nu^{2}x^{19/2}4\pi\left[\frac{ 265978667519}{2980454400}-\frac{856}{105}\left(\log x+4\log 2+2\gamma_{E} \right)\right]\,, \tag{7.10}\]
with which we again find agreement for all terms except the rational contribution, where the piece from TTT is only a partial contribution to the full PN correction.
For T\({}^{4}\), the only available results are from self-force theory. Results at the order we need here are available in Refs. [55, 56], written in terms of the orbital velocity \(v\sim x^{1/2}\). With our leading quadrupole in the PN expansion we should exactly match terms like \(\zeta_{4}\) and \(\log^{2}x\). The \(\log 3\) and \(\log 5\) terms come from higher-order multipoles, while the other terms receive contributions from higher-order PN terms in the quadrupole, which bring terms from RR and TT up to \(x^{11}\sim v^{22}\). We organize the results so that the terms we should match appear first and the ones that we cannot match appear later, and we ignore completely the \(\log 3\) and \(\log 5\) terms from Refs. [55, 56]. From our results in eq. (6.31), we obtain:
\[(\overline{\Delta E})^{\rm LO~{}PN}_{\rm S} =-\frac{32}{5G_{N}}\nu^{2}v^{22}\bigg{[}-512\zeta_{4}-\frac{27392 }{105}\zeta_{3}+\frac{1465472}{11025}(\log(v)+\gamma_{E}+2\log 2)^{2}\] \[\quad-\frac{54784}{105}\zeta_{2}\left(\log v+\gamma_{E}+2\log 2\right)\] \[\quad+\frac{2207817992}{972405}-\frac{132829552}{128625}\gamma_{ E}-\frac{265659104}{128625}\log 2-\frac{132829552}{128625}\log v\] \[\quad+\frac{18119008}{11025}\zeta_{2}\bigg{]}\,, \tag{7.11}\]
which we compare against the expressions from Refs. [55; 56]:
\[\frac{\mathrm{d}E^{(12)}}{\mathrm{d}t} =\bigg{(}\frac{\mathrm{d}E}{\mathrm{d}t}\bigg{)}_{\rm N}\bigg{[}- 512\zeta_{4}-\frac{27392}{105}\zeta_{3}+\frac{1465472}{11025}(\log(v)+\gamma_{ E}+2\log 2)^{2}\] \[\qquad-\frac{54784}{105}\zeta_{2}(\log v+\gamma_{E}+2\log 2)\] \[\qquad+\frac{2067586193789233570693}{60238740004430000}-\frac{2461 37536815857}{157329572400}\gamma_{E}\] \[\qquad-\frac{271272899815409}{157329572400}\log 2-\frac{2461375368158 7}{157329572400}\log v\] \[\qquad+\frac{3803225263}{1746360}\zeta_{2}\bigg{]}\,. \tag{7.12}\]
We find that all of the terms match as expected, namely the first two lines in both expressions.
## 8 Conclusions
In this paper we presented in detail a novel methodology to treat higher-order non-linear effects of gravitational radiation scattered from binary inspirals, where conservative and dissipative dynamics are inevitably intertwined. The primary new idea, first introduced in [47], which enabled our distinct approach to these types of effects, is that we carry out our analysis directly at the level of the whole binary taken as a single composite particle interacting with gravity. This distinguishes our current line of study from the many amplitudes-driven works, which study the unbound 2-to-2 scattering problem rather than the actual bound two-body problem that is the primary focus of present and planned GW experiments.
We treat the \(l\)-th multipole moments of the whole radiating binary coupled to gravity in the EFT of the composite particle in analogy to massive elementary particles of spin \(l/2\) and their gravitational scattering amplitudes. In this paper we go one step forward
in grounding our approach, where in section 4.1 we start from pure tree amplitudes as our analogous building blocks from which we construct the necessary unitarity cuts, rather than using the EFT vertices as a given. We verified that these pure amplitude replacements work through to the highest orders reached in the present work. In section 4, we spelled out our new method for the well-studied lower-order effects of radiation-reaction and tail, where the CTP formalism adapted to our problem is layered on top of our integral basis and generalized-unitarity inspired procedure.
In view of the pressing need to push such predictions to high PN orders, we proceeded in section 5 to study higher-order tails all through to the third subleading tail effect: This is the 5-loop tail-of-tail-of-tail-of-tail, or \(\mathrm{T}^{4}\), at order \(G^{5}\) corresponding to 8.5PN. One interesting benefit of our method is that it naturally organizes the results at each tail level according to an iterative pattern. We pointed out explicit examples for the cut coefficients in section 5.3. However, there are other interesting hints at iterative and recursive structure. For instance, the number of _actually distinct_ cut diagrams and contributions at each tail order so far tracks the Fibonacci sequence:
\[\mathrm{RR}\to 1\quad\mathrm{T}\to 1\quad\mathrm{TT}\to 2\quad\mathrm{TTT}\to 3 \quad\mathrm{T}^{4}\to 5. \tag{110}\]
It would be interesting to explore these patterns and attempt to identify a structure which directly produces the various cut coefficients.
Let us also highlight that, in contrast with all other amplitudes-driven studies, we deal only with classical propagating gravitons, and land directly on causal effective actions, which encompass the full conservative and dissipative dynamics of these effects. For the lower-order results in section 4, these could be checked against previous EFT results [27; 88]. However, from the TT level onward, the causal effective actions we obtained in our approach in section 5 have not been derived before. Yet, in section 6, after formulating the consequent energy loss of the tails, we could verify through a renormalization analysis that the related leading RG flow of the quadrupole coupling is in perfect agreement with that of [26], where only the TT level was reached.
Additionally, the new \(\mathrm{T}^{4}\) corrections we obtained to the effective action, eq. (111), and its associated correction to the emitted energy, eq. (110), led us to identify a novel counterterm in the quadrupole coupling, eq. (112), and an associated new term in the RG flow of the renormalized quadrupoles, eq. (111). Because the new effects continue the pattern of skipping loop orders, it would be interesting to check our results by computing the \(\mathrm{T}^{5}\) contributions, which should produce the same counterterms and RG flows. Beyond that we could only establish that our energy emissions are consistent with those derived via traditional PN-theory results [13; 31], available only up through \(T^{3}\), as well as specific pieces of the results from self-force theory [55; 56].
###### Acknowledgments.
We thank John Joseph Carrasco, Sasank Chava, and Radu Roiban for feedback on the manuscript. AE is supported by the USDOE under contract DE-SC0015910 and by Northwestern University via the Amplitudes and Insight Group, Department of Physics and
Astronomy, and Weinberg College of Arts and Sciences. ML has been supported by the Science and Technology Facilities Council (STFC) Rutherford Grant ST/V003895 _"Harnessing QFT for Gravity"_, and by the Mathematical Institute University of Oxford.
This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.
fillTeX was used as part of writing the bibliography [150].
## Appendix A Handling Tensor Reductions
Throughout our tail calculations, we encounter cuts which are functions of the loop momenta \(\ell_{x}\), the radiation frequency \(\omega\) and the Euclidean metric \(\delta^{op}\) with four Euclidean indices contracted against the quadrupoles,
\[\mathcal{N}^{ij;mn}(\{\ell_{x},\omega\})I^{ij}(\omega)I^{mn}(-\omega) \tag{A.1}\]
that will require tensor reduction to reach the final state factor \(\kappa(\omega)=I^{ij}(-\omega)I^{ij}(\omega)\). While we could perform this reduction term-by-term at the level of the individual numerator and propagator combinations that appear, this order of processing delays the introduction of \(\ell^{2}\) that should be zeroed by on-shell conditions in the construction of the cut. We will instead introduce a generic tensor reduction scheme that can easily be applied during the process of cut assembly by constructing a tensor \(\mathbf{U}^{ij;mn}\) such that
\[\mathcal{N}^{ij;mn}I^{ij}I^{mn}=\mathcal{N}^{ij;mn}\mathbf{U}^{ij;mn}\kappa(\omega) \tag{A.2}\]
subject to the symmetry and trace constraints
\[\mathbf{U}^{ij;mn} =\mathbf{U}^{ji;mn}=\mathbf{U}^{ij;nm} \tag{A.3}\] \[\mathbf{U}^{ii;mn} =\mathbf{U}^{ij;mm}=0\,. \tag{A.4}\]
With the spatial Euclidean metric as the only available (parity-even) tensor to construct \(\mathbf{U}\) from, we find a unique object
\[\mathbf{U}^{ij;mn}=-2\frac{\delta^{ij}\delta^{mn}}{(d+2)d(d-1)}+\frac{\delta^{im}\delta^{jn}+\delta^{in}\delta^{jm}}{(d+2)(d-1)}\,. \tag{A.5}\]
We can then insert eq. (A.2) during the evaluation of a cut, after splitting the four-momenta into frequencies and spatial momenta.
We have explicitly checked that this method reproduces the term-by-term reduction method for a number of integrals relevant to the tails.
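As a concrete illustration of this reduction scheme, the following small Python sketch (ours, not part of the paper's toolchain; exact rational arithmetic, with an arbitrarily chosen symmetric traceless matrix standing in for the quadrupole) verifies in \(d=3\) that \(\mathbf{U}\) of eq. (A.5) satisfies the constraints (A.3)-(A.4) and that the replacement eq. (A.2) holds for numerators built purely from Kronecker deltas:

```python
# Sanity-check sketch (ours): verify the tensor-reduction object U in d = 3
# using exact rational arithmetic.
from fractions import Fraction as F
from itertools import product

d = 3
delta = lambda p, q: F(1) if p == q else F(0)

def U(i, j, m, n):
    # the unique delta-built tensor of eq. (A.5)
    return (-2 * delta(i, j) * delta(m, n) / ((d + 2) * d * (d - 1))
            + (delta(i, m) * delta(j, n) + delta(i, n) * delta(j, m)) / ((d + 2) * (d - 1)))

idx = range(d)
# symmetry in (ij) and (mn), eq. (A.3)
assert all(U(i, j, m, n) == U(j, i, m, n) == U(i, j, n, m)
           for i, j, m, n in product(idx, repeat=4))
# tracelessness in both index pairs, eq. (A.4)
assert all(sum(U(i, i, m, n) for i in idx) == 0 for m, n in product(idx, repeat=2))
assert all(sum(U(i, j, m, m) for m in idx) == 0 for i, j in product(idx, repeat=2))

# an arbitrary symmetric, traceless matrix playing the role of the quadrupole I^{ij}
I = [[F(1), F(2), F(3)], [F(2), F(4), F(5)], [F(3), F(5), F(-5)]]
kappa = sum(I[i][j] * I[i][j] for i, j in product(idx, repeat=2))

# for any numerator built from deltas, contracting with I^{ij} I^{mn} agrees with
# contracting with U^{ij;mn} times kappa, which is the content of eq. (A.2)
for N in (lambda i, j, m, n: delta(i, m) * delta(j, n),
          lambda i, j, m, n: delta(i, n) * delta(j, m),
          lambda i, j, m, n: delta(i, j) * delta(m, n)):
    lhs = sum(N(i, j, m, n) * I[i][j] * I[m][n] for i, j, m, n in product(idx, repeat=4))
    rhs = kappa * sum(N(i, j, m, n) * U(i, j, m, n) for i, j, m, n in product(idx, repeat=4))
    assert lhs == rhs
print("U passes symmetry, trace, and replacement checks in d = 3")
```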
## Appendix B Evaluating Basis Integrals
### Analytic Evaluation and Bubble Iteration
The two most important integrals we need for evaluating all of the basis integrals appear in Chapter 10 of Smirnov's _Analytic Tools for Feynman Integrals_[125]. Important to note
is that Ref. [125] works in mostly-minus Minkowski signature, whereas the integrals we need to evaluate are in Euclidean signature. To compensate for this, we need to Wick rotate the Minkowski integrals to Euclidean signature via \(\ell_{0}\to i\ell_{E}\) which takes \(\ell^{2}\to-\ell_{E}^{2}\) and \(\mathrm{d}^{d}\ell\to i\,\mathrm{d}^{d}\ell_{E}\). The \(i\) from the change in measure cancels against the \(i\) as part of the \(i\pi^{d/2}\) normalization, and the change-in-sign of the propagators will induce an extra phase \((-1)^{\lambda}\) for each propagator, as well as _change the relative sign_ between \(\ell^{2}\) and \(m^{2}\) for the massive propagators. For example, the massive one-propagator integral, the "tadpole", in Minkowski signature is
\[\int\frac{\mathrm{d}^{d}k}{(-k^{2}+m^{2})^{\lambda}}=i\pi^{d/2}\frac{\Gamma(\lambda-d/2)}{\Gamma(\lambda)}\frac{1}{(m^{2})^{\lambda-d/2}}\,. \tag{B.1}\]
Switching to Euclidean signature, we get
\[\int_{E}\frac{\mathrm{d}^{d}k_{E}}{(-k_{E}^{2}-m^{2})^{\lambda}}=(-1)^{\lambda}\pi^{d/2}\frac{\Gamma(\lambda-d/2)}{\Gamma(\lambda)}\frac{1}{(m^{2})^{\lambda-d/2}}\,. \tag{B.2}\]
From here on, we drop the explicit \(E\) label on the integrated momenta. The scale of the tadpole integrals that actually occurs throughout the basis integrals used in section 4 is really the graviton frequency \(\omega^{2}\), and always appears with the wrong sign compared with an actual mass as in eq. (B.2). In addition, we need to switch to the physical \(\pi\) normalization. Thus, the integral we need for evaluation is
\[F^{(1)}(\lambda;\omega^{2})=G^{(1)}(\lambda)=\int_{E}\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{1}{(-k^{2}-(-\omega^{2}))^{\lambda}}=\frac{\Gamma(\lambda-d/2)}{\Gamma(\lambda)}\frac{(-1)^{\lambda}(4\pi)^{-d/2}}{(-\omega^{2})^{\lambda-d/2}}\,, \tag{B.3}\]
where we have suppressed the explicit \(i0\) component of \(\omega\) because _we are not integrating over it as part of \(\mathrm{d}^{d}k\)._
Similarly, the \(m_{1}=m_{2}=m,m_{3}=0\) two-loop three-propagator integral in Euclidean signature is given by
\[\int_{E} \frac{\mathrm{d}^{d}k\ \mathrm{d}^{d}l}{(-k^{2}-m^{2})^{\lambda_{1}}(-l^{2}-m^{2})^{\lambda_{2}}[-(k+l)^{2}]^{\lambda_{3}}}\] \[=\left(\pi^{d/2}\right)^{2}(-1)^{\lambda_{1}+\lambda_{2}+\lambda_{3}}\frac{\Gamma(\lambda_{1}+\lambda_{3}-d/2)\Gamma(\lambda_{2}+\lambda_{3}-d/2)\Gamma(d/2-\lambda_{3})}{\Gamma(\lambda_{1})\Gamma(\lambda_{2})}\] \[\times\frac{\Gamma(\lambda_{1}+\lambda_{2}+\lambda_{3}-d)}{\Gamma(\lambda_{1}+\lambda_{2}+2\lambda_{3}-d)\Gamma(d/2)(m^{2})^{\lambda_{1}+\lambda_{2}+\lambda_{3}-d}}\] \[=(\pi^{d/2})^{2}(-1)^{\lambda_{1}+\lambda_{2}+\lambda_{3}}B_{\lambda_{1},\lambda_{2},\lambda_{3};d}(m^{2})^{-\lambda_{1}-\lambda_{2}-\lambda_{3}+d}\,. \tag{B.4}\]
As in the case of the tadpole, the basis integrals we encounter have the opposite relative sign between \(k^{2}\) and \(\omega^{2}\), meaning instead we are interested in
\[G^{(2)}(\lambda_{1},\lambda_{2},\lambda_{3})=(4\pi)^{-d}(-1)^{\lambda_{1}+\lambda_{2}+\lambda_{3}}B_{\lambda_{1},\lambda_{2},\lambda_{3};d}(-\omega^{2})^{-\lambda_{1}-\lambda_{2}-\lambda_{3}+d}\,. \tag{B.5}\]
The generic bubble integral is also useful as an intermediate step, namely
\[\int_{E}\frac{\mathrm{d}^{d}k}{(2\pi)^{d}}\frac{1}{(-k^{2})^{\lambda_{1}}[-(q-k)^{2}]^{\lambda_{2}}} =\frac{(-1)^{d/2}}{(4\pi)^{d/2}}\frac{\Gamma(d/2-\lambda_{1})\Gamma(d/2-\lambda_{2})\Gamma(\lambda_{1}+\lambda_{2}-d/2)}{\Gamma(\lambda_{1})\Gamma(\lambda_{2})\Gamma(d-\lambda_{1}-\lambda_{2})(-q_{E}^{2})^{\lambda_{1}+\lambda_{2}-d/2}}\] \[=(-1)^{d/2}(4\pi)^{-d/2}A_{\lambda_{1},\lambda_{2};d}(-q_{E}^{2})^{-\lambda_{1}-\lambda_{2}+d/2}\,. \tag{B.6}\]
Importantly, the Euclidean bubble produces a new Euclidean propagator, and matching the sign choice for this new propagator to the one used for integration absorbs the ubiquitous phase \((-1)^{\lambda_{1}+\lambda_{2}}\). Note that, as expected from the topology, the expression is symmetric in \(\lambda_{1}\) and \(\lambda_{2}\).
With these ingredients, we can begin evaluating the higher-loop basis integrals recursively. The first non-trivial integral we need to evaluate is the TT bulk contact integral, \(F^{(3)}(1,1,1,1,0,0)\) from eq. (4.26). The two potential mode propagators can be integrated together as a bubble using eq. (B.6) with \(q=\ell_{1}+\ell_{3}\), resulting in a single new potential mode propagator. The remaining integral is now of the form eq. (B.4), with a shifted index on the scaleless propagator. Putting everything together, we have
\[F^{(3)}(1,1,1,1,0,0) =(-1)^{d/2}(4\pi)^{-d/2}A_{1,1;d}G^{(2)}(1,1,2-d/2)\] \[=(4\pi)^{-3d/2}A_{1,1;d}B_{1,1,2-d/2;d}(-\omega^{2})^{-4+3d/2}\,.\] (B.7)
This expression is readily expandable near \(d=3\). Evaluating the TTT and \(\mathrm{T}^{4}\) bulk contacts proceeds in a similar manner, just with more levels of bubble iteration, yielding
\[F^{(4)}(\text{fig.~{}\ref{fig:TT}}) =(-1)^{d}(4\pi)^{-d}A_{1,1;d}A_{1,2-d/2;d}G^{(2)}(1,1,3-d)\] \[=(4\pi)^{-2d}A_{1,1;d}A_{1,2-d/2;d}B_{1,1,3-d;d}(-\omega^{2})^{-5 +2d}\] (B.8) \[F^{(5)}(\text{fig.~{}\ref{fig:TT}}) =(4\pi)^{-5d/2}A_{1,1;d}A_{1,2-d/2;d}A_{1,3-d;d}B_{1,1,4-3d/2;d}(- \omega^{2})^{-6+5d/2}\,.\] (B.9)
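To make the bubble iteration concrete, here is a small sympy sketch (ours); the helpers A_coef and B_coef stand in for \(A_{\lambda_{1},\lambda_{2};d}\) of eq. (B.6) and \(B_{\lambda_{1},\lambda_{2},\lambda_{3};d}\) of eq. (B.4), and the final expansion near \(d=3\) exposes the \(1/\epsilon\) pole of the TT bulk contact, eq. (B.7):

```python
# Sketch (ours) of the bubble iteration for the TT bulk contact integral.
import sympy as sp

d, eps = sp.symbols('d epsilon')
g = sp.gamma

def A_coef(l1, l2, dim):
    # bubble coefficient A_{l1,l2;d} of eq. (B.6)
    return (g(dim/2 - l1) * g(dim/2 - l2) * g(l1 + l2 - dim/2)
            / (g(l1) * g(l2) * g(dim - l1 - l2)))

def B_coef(l1, l2, l3, dim):
    # two-loop coefficient B_{l1,l2,l3;d} of eq. (B.4)
    return (g(l1 + l3 - dim/2) * g(l2 + l3 - dim/2) * g(dim/2 - l3) * g(l1 + l2 + l3 - dim)
            / (g(l1) * g(l2) * g(l1 + l2 + 2*l3 - dim) * g(dim/2)))

# coefficient of (4*pi)**(-3*d/2) * (-omega**2)**(3*d/2 - 4) in F^(3)(1,1,1,1,0,0), eq. (B.7)
coeff = A_coef(1, 1, d) * B_coef(1, 1, 2 - d/2, d)

# expand near d = 3 + epsilon; the expansion exhibits the expected 1/epsilon pole
print(sp.series(coeff.subs(d, 3 + eps), eps, 0, 1))
```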
### Evaluation of Figure 8h
We do not know of a way to analytically evaluate the \(\mathrm{T}^{4}\) integral corresponding to the \(\mathcal{M}_{4}\otimes\mathcal{M}_{4}\) topology, fig. 8h. However, we can exploit the fact that, like all the other integrals we consider, the \(\omega^{2}\) scale of the integral is completely factorizable
\[F^{(5)}(\text{fig.~{}\ref{fig:TT}})\equiv(-\omega^{2})^{5d/2-7}\mathcal{I}_{4 \otimes 4}(1)\,,\] (B.10)
so that we just need to numerically determine \(\mathcal{I}_{4\otimes 4}(1)\) as an expansion in the dimensional regularization parameter. We use the program AMFlow [151] in conjunction with Kira [152] to evaluate \(\mathcal{I}_{4\otimes 4}(1)\) up to \(\mathcal{O}(\epsilon^{2})\) with 500 digits of precision at each order. We can then use the PSLQ algorithm [153; 154] to reconstruct the transcendental numbers, using the transcendental numbers appearing in the other \(\mathrm{T}^{4}\) integrals as a guide for guessing the basis. We find, using AMFlow's definition of \(d=3-2\epsilon\) instead of the \(d=3+\epsilon\) that we
use in the rest of the paper,
\[\mathcal{I}_{4\otimes 4}(1) =\frac{1}{8(4\pi)^{5}}\Bigg{[}\frac{1}{\epsilon^{2}}+\frac{16+5(\log(\pi)-\gamma_{E})}{\epsilon}\] \[\qquad\qquad\qquad+\big{(}184+80(\log(\pi)-\gamma_{E})+\frac{25}{2}(\log(\pi)-\gamma_{E})^{2}+\frac{47}{2}\zeta_{2}\big{)}\] \[\qquad\qquad+\epsilon\Big{(}1888+920(\log(\pi)-\gamma_{E})+200(\log(\pi)-\gamma_{E})^{2}+\frac{125}{6}(\log(\pi)-\gamma_{E})^{3}\] \[\qquad\qquad+408\zeta_{2}-\frac{611}{3}\zeta_{3}+\frac{235}{2}\zeta_{2}(\log(\pi)-\gamma_{E})\Big{)}\] \[\qquad\qquad+\epsilon^{2}\Big{(}18544+9440(\log(\pi)-\gamma_{E})+2300(\log(\pi)-\gamma_{E})^{2}\] \[\qquad\qquad+\frac{1000}{3}(\log(\pi)-\gamma_{E})^{3}+\frac{625}{24}(\log(\pi)-\gamma_{E})^{4}\] \[\qquad\qquad+5092\zeta_{2}+2040\zeta_{2}(\log(\pi)-\gamma_{E})+\frac{1175}{4}\zeta_{2}(\log(\pi)-\gamma_{E})^{2}\] \[\qquad\qquad+\frac{42193}{40}\zeta_{2}^{2}-\frac{9872}{3}\zeta_{3}-\frac{3055}{3}\zeta_{3}(\log(\pi)-\gamma_{E})\Big{)}+\mathcal{O}(\epsilon^{3})\Bigg{]}\,.\] (B.11)
We have verified the reconstruction using an additional numerical evaluation in AMFlow to 1000 digits. This depth in the \(\epsilon\) expansion is more than sufficient, after matching \(\epsilon\) conventions, to obtain up through the \(\mathcal{O}(\epsilon^{0})\) part of the \(\mathrm{T}^{4}\) effective action.
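The reconstruction step can be reproduced with standard tools. The sketch below (ours) runs mpmath's pslq on a toy value whose exact decomposition is known beforehand; the value, basis, and precision are illustrative only and are not the ones that enter eq. (B.11):

```python
# Illustrative sketch (ours) of a PSLQ reconstruction of rational coefficients
# on a guessed basis of transcendentals, using mpmath.
from mpmath import mp, mpf, zeta, euler, log, pslq

mp.dps = 120   # working precision (the actual reconstruction used 500 digits per order)

# pretend this number came out of AMFlow: 184 + (47/2) zeta_2 - (611/3) zeta_3
target = mpf(184) + mpf(47) / 2 * zeta(2) - mpf(611) / 3 * zeta(3)

# guessed basis of transcendentals, informed by the other T^4 integrals
basis = [mpf(1), zeta(2), zeta(3), log(2), +euler]

# pslq returns integers c with c[0]*target + c[1]*1 + c[2]*zeta(2) + ... = 0
relation = pslq([target] + basis, tol=mpf(10)**-100, maxcoeff=10**6, maxsteps=10**4)
print(relation)   # [-6, 1104, 141, -1222, 0, 0] (up to an overall sign)
```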
# The geometry and arithmetic of bielliptic Picard curves
###### Abstract.
We study the geometry and arithmetic of the curves \(C\colon y^{3}=x^{4}+ax^{2}+b\) and their associated Prym abelian surfaces \(P\). We prove a Torelli theorem in this context and give a geometric proof of the fact that \(P\) has quaternionic multiplication (QM) by the quaternion order of discriminant \(6\). This allows us to describe the Galois action on the geometric endomorphism algebra of \(P\). As an application, we classify the torsion subgroups of the Mordell-Weil groups \(P(\mathbb{Q})\), as both abelian groups and \(\operatorname{End}(P)\)-modules.
###### Contents
* 1 Introduction
* 2 Bielliptic Picard curves and their Pryms
* 3 A Torelli theorem
* 4 Shimura curves and quaternionic multiplication
* 5 Identifying bielliptic Picard curves in the pencil
* 6 Explicit quaternionic multiplication
* 7 6-torsion points in the Prym variety
* 8 Classifying rational torsion in Prym varieties
* 9 Classifying rational torsion in Pryms of \(\operatorname{GL}_{2}\)-type
## 1. Introduction
Let \(k\) be a field of characteristic neither \(2\) nor \(3\). A bielliptic Picard curve over \(k\) is a smooth projective curve \(C\) with an affine model of the form
\[y^{3}=x^{4}+ax^{2}+b\]
for some \(a,b\in k\). Such a curve is equipped with both a \(\mu_{3}\)-action \((x,y)\mapsto(x,\omega y)\) and a commuting involution \(\tau\colon(x,y)\mapsto(-x,y)\). The induced involution \(\tau^{*}\) on the Jacobian variety \(J=\operatorname{Jac}_{C}\) allows us to define the Prym variety \(P=\ker(1+\tau^{*})\). The abelian surface \(P/k\) inherits a \(\mu_{3}\)-action and carries a \((1,2)\)-polarization; it need not be principally polarizable over \(k\).
The goal of this paper is to explore the remarkably rich geometry and arithmetic of bielliptic Picard curves and their associated Pryms.
### Results
We first prove some foundational results for bielliptic Picard curves. We
1. prove a Torelli theorem for the association \(C\mapsto P\), once suitable data on both sides is fixed (Theorem 3.13);
2. show \(\operatorname{End}(P_{\bar{k}})\) contains a maximal order \(\mathcal{O}\) in a discriminant \(6\) quaternion algebra (Proposition 6.1), in other words \(P_{\bar{k}}\) has quaternionic multiplication (QM) by \(\mathcal{O}\);
3. give an explicit description of the Artin representation \(\rho_{\operatorname{End}}\colon\operatorname{Gal}_{k}\to\operatorname{Aut}( \mathcal{O})\hookrightarrow\operatorname{GL}_{4}(\mathbb{Q})\) describing the Galois action on the endomorphism ring \(\mathcal{O}\) (Corollary 6.6);
4. relate the moduli space of bielliptic Picard curves to a unitary Shimura curve \(Y/\mathbb{Q}\) and a quaternionic Shimura curve (Section 4.3);
5. determine the finitely many \(\bar{\mathbb{Q}}\)-isomorphism classes of Pryms \(P/\mathbb{Q}\) which are geometrically non-simple (Proposition 6.17), corresponding to the CM points on \(Y\).
Our Torelli result (1) is perhaps surprising because Barth [4] has shown that the Prym construction on all bielliptic genus \(3\) curves does not satisfy a Torelli theorem, but has one-dimensional fibres. There are other instances of algebraic families of abelian surfaces with QM in the literature [3, 22], but (2) is interesting because of the very simple nature of this family. Moreover, contrary to previous approaches our proof constructs the quaternionic action explicitly and geometrically, using the automorphisms and the polarization of \(P\). Part (3) gives an interesting source of abelian surfaces with large endomorphism field and, in the notation of [18], produces the first published examples of geometrically simple abelian surfaces with Sato-Tate group \(\mathpzc{J}(E_{3})\) and \(\mathpzc{J}(E_{6})\) (Example 6.7). Part (4) and Shimura reciprocity allow us to calculate all geometrically non-simple Prym varieties in (5).
To further illustrate the rich and accessible nature of these curves, we classify the finite torsion subgroups that arise in the Mordell-Weil groups \(P(\mathbb{Q})\) of Pryms of bielliptic Picard curves over \(\mathbb{Q}\).
**Theorem 1.1**.: _Let \(P/\mathbb{Q}\) be the Prym surface of a bielliptic Picard curve \(C/\mathbb{Q}\). Then_
\[P(\mathbb{Q})_{\operatorname{tors}}\simeq\begin{cases}\mathbb{Z}/n\mathbb{Z}& \text{for some }n\in\{1,2,3,6\},\text{ or}\\ \mathbb{Z}/n\mathbb{Z}\times\mathbb{Z}/n\mathbb{Z}&\text{for some }n\in\{2,3\},\text{ or}\\ \mathbb{Z}/6\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}.&\end{cases}\]
_Conversely, for each finite abelian group \(G\) above, there exist infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes of bielliptic Picard Prym surfaces \(P/\mathbb{Q}\) with \(P(\mathbb{Q})_{\operatorname{tors}}\simeq G\)._
Theorem 1.1 is the analogue of Mazur's classification of rational torsion points of elliptic curves [34] in our setting. As with elliptic curves, each finite group allowed by Theorem 1.1 can be realized by an explicit family (in fact, sometimes multiple families) of Pryms whose coefficients can be rationally parameterized. This is in accordance with the philosophy of Mazur and Ogg, namely that torsion should only occur as a consequence of the ambient geometry of the relevant moduli space; see [35] for a recent survey. Theorem 1.1 is significantly easier to prove than Mazur's theorem, essentially because \(P\) has everywhere potentially good reduction. Nonetheless, it appears to be the first classification of rational torsion on a universal abelian variety over a Shimura variety which is not a modular curve.
**Remark 1.2**.: In a separate work with Schembri and Voight [29], we significantly extend some of our arguments here to show that for _any_ abelian surface \(A/\mathbb{Q}\) such that \(\operatorname{End}(A_{\overline{\mathbb{Q}}})\) is a maximal order in a non-split quaternion algebra over \(\mathbb{Q}\), there is the uniform bound \(|A(\mathbb{Q})_{\operatorname{tors}}|\leq 18\). Theorem 1.1 shows that this bound is sharp. Contrary to [29, Theorem 1.3], Theorem 1.1 is a complete classification, which we achieve using arguments specific to the geometry of the abelian surfaces considered here.
**Example 1.3**.: By Proposition 7.16 below, for each \(t\in\mathbb{Q}\setminus\{0,1,-1\}\) the Prym \(P_{t}\) associated to the curve
\[2(t^{2}-1)^{2}y^{3}=(x^{2}-1)(x^{2}-t^{2})\]
satisfies \(P_{t}(\mathbb{Q})_{\operatorname{tors}}\simeq\mathbb{Z}/3\mathbb{Z}\times \mathbb{Z}/6\mathbb{Z}\), achieving the maximum order allowed by Theorem 1.1.
**Remark 1.4**.: Theorem 1.1 implies that the torsion subgroup \(J(\mathbb{Q})_{\mathrm{tors}}\) of the Jacobian of a bielliptic Picard curve is \(12\)-torsion. A complete classification of the groups \(J(\mathbb{Q})_{\mathrm{tors}}\) would require parameterizing points of order \(4\) in \(J(\mathbb{Q})\). We hope to return to this in follow-up work.
It is natural to ask how the torsion subgroup of \(P(\mathbb{Q})\) is affected by the presence of endomorphisms defined over \(\mathbb{Q}\). Generically, we have \(\mathrm{End}(P)=\mathbb{Z}\), but it can also happen that \(\mathrm{End}(P)\) is an order in a quadratic field. Such \(P\) are said to be of \(\mathrm{GL}_{2}\)-type, and are modular by work of Khare-Wintenberger [26]. After proving explicit conditions on \(a\) and \(b\) for \(P\) to be of \(\mathrm{GL}_{2}\)-type (see §6.3), we classify all the possible \(\mathrm{End}(P)\)-modules that arise as \(P(\mathbb{Q})_{\mathrm{tors}}\).
**Theorem 1.5**.: _Let \(P/\mathbb{Q}\) be the Prym surface of a non-CM bielliptic Picard curve over \(\mathbb{Q}\), with \(\mathrm{End}(P)\neq\mathbb{Z}\). Then \(\mathrm{End}(P)\simeq\mathbb{Z}[\sqrt{D}]\) for some \(D\in\{2,6\}\). Furthermore, there is an isomorphism of \(\mathbb{Z}[\sqrt{D}]\)-modules_
\[P(\mathbb{Q})_{\mathrm{tors}}\simeq\begin{cases}\{0\}&\text{or}\\ \mathbb{Z}[\sqrt{D}]/\mathfrak{a}_{p}&\text{for some $p\in\{2,3\}$},\end{cases}\]
_where \(\mathfrak{a}_{p}\) is the unique prime ideal of \(\mathbb{Z}[\sqrt{D}]\) above \(p\). Conversely, for each \(D\in\{2,6\}\), and for each of the three cyclic \(\mathbb{Z}[\sqrt{D}]\)-modules \(G\) that appear above, there are infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes of Pryms \(P/\mathbb{Q}\) such that \(\mathrm{End}(P)\simeq\mathbb{Z}[\sqrt{D}]\) and \(P(\mathbb{Q})_{\mathrm{tors}}\simeq G\) as \(\mathbb{Z}[\sqrt{D}]\)-modules._
When \(D=2\) and \(p=3\), Theorem 1.5 gives examples of \(\mathrm{GL}_{2}\)-type Pryms such that \(P(\mathbb{Q})_{\mathrm{tors}}\simeq\mathbb{F}_{9}\), showing that the upper bound proven in [29, Theorem 1.4] for the order of the rational torsion subgroup of a QM abelian surface of \(\mathrm{GL}_{2}\)-type over \(\mathbb{Q}\) is in fact sharp.
### Methods
The geometry of \((1,2)\)-polarized surfaces was analyzed in detail by Barth [4] and his results are crucial to our study of \(C\) and \(P\). The intersection of \(P\) with a well chosen theta divisor on \(J\) gives a curve \(\widehat{C}\subset P\) called the bigonal dual of \(C\), whose corresponding line bundle \(\mathscr{M}=\mathcal{O}_{P}(\widehat{C})\) represents the \((1,2)\)-polarization on \(P\). The curve \(\widehat{C}\) turns out to be a bielliptic Picard curve as well, with equation \(y^{3}=x^{4}+8ax^{2}+16(a^{2}-4b)\). The linear system \(|\mathscr{M}|\) is a pencil of bielliptic genus \(3\) curves on \(P\). The \(\mu_{3}\)-action on \(C\) induces a \(\mu_{3}\)-action on \(P\) and on the pencil. The technical heart of this paper is to analyze the interaction of this \(\mu_{3}\)-action with the pencil \(|\mathscr{M}|\), both theoretically and explicitly. Our Torelli theorem (1) boils down to showing that \(\widehat{C}\) is the unique bielliptic Picard curve in this pencil of the correct signature. For the QM property (2), we give two different proofs. The first is indirect, relating the moduli of bielliptic Picard curves to a unitary Shimura curve (in other words, we first prove (4) and then deduce (2)). The second proof constructs the QM explicitly, by showing that a specific sextic twist of the original curve \(C\) also lives in the pencil \(|\mathscr{M}|\), but carrying the opposite signature. This second proof allows us to prove (3) as well.
To prove Theorems 1.1 and 1.5 we must first exhibit explicit families with specified torsion subgroups, and then we must rule out all other finite abelian groups. To exhibit groups, we describe elements of \(P[2]\) and \(P[3]\) geometrically, as coming from the fixed points for the \(\mu_{6}\)-action. To rule out groups which are not \(6\)-torsion, we first use general arguments for QM abelian surfaces which are developed in greater generality in [29]. Instead of quoting the main results there, we give a simpler proof (tailored to our special family of surfaces), which also allows us to handle the geometrically non-simple case not treated in [29].1 However, these general arguments only go so far; for example, they are not enough to rule out \(4\)-torsion in general QM abelian surfaces \(A/\mathbb{Q}\). To eliminate groups such as \(\mathbb{Z}/4\mathbb{Z}\) and \((\mathbb{Z}/2\mathbb{Z})^{2}\times(\mathbb{Z}/3\mathbb{Z})\), we use arguments that are very much specific to the geometry of bielliptic Picard curves (see Propositions 8.5 and 8.6).
### Previous results
Petkova-Shiga [37] considered bielliptic Picard curves from a complex analytic viewpoint. Using period matrix computations, they show that bielliptic Picard Pryms over \(\mathbb{C}\) have quaternionic multiplication. Hashimoto-Murabayashi [22] and Baba-Granath [3] have studied genus two curves whose Jacobians \(J^{\prime}\) have QM by \(\mathcal{O}\). Over \(\overline{k}\), each surface \(J^{\prime}\) becomes isomorphic to a bielliptic Picard Prym \(P\), but generally not over \(k\).
### Future directions
To give a bielliptic Picard curve \(C\) over a field \(k\) is to give a triple \((E,X,\iota)\), where \(E/k\) is an elliptic curve with a \(\mu_{3}\)-action, \(X\subset E\) is a \(\mu_{3}\)-orbit of size \(3\), and \(\iota\) is an embedding of line bundles \(\mathcal{O}_{E}(-2O_{E})^{\otimes 2}\hookrightarrow\mathcal{O}_{E}\) whose cokernel has support \(X\cup O_{E}\). One reconstructs \(C\) as the double cover of \(E\) determined by \(\iota\), branched along \(X\cup O_{E}\). In the standard model for \(C\), we have \(E\colon y^{3}=x^{2}+ax+b\) and \(X=\{(0,s)\colon s^{3}=b\}\). It is natural to wonder: which properties of \(C\) can be easily read off from the data \((E,X,\iota)\)? A simple example is Proposition 7.6 below, which shows that there exists an embedding \((\mathbb{Z}/2\mathbb{Z})^{2}\hookrightarrow P(\mathbb{Q})\) if and only if \(E[2](\mathbb{Q})\neq 0\) and \(X\cap 2E(\mathbb{Q})\neq\varnothing\). Thus, divisibility properties of \(X\) in \(E(k)\) are related to torsion properties of \(J(k)=\operatorname{Pic}_{C}^{0}(k)\).
Remarkably, it also appears that torsion properties of \(X\) are directly related to torsion properties of Ceresa cycles of bielliptic Picard curves. Recall that the Ceresa cycle is the class
\[\kappa_{\infty}(C)=[\iota_{\infty}(C)]-(-1)^{*}[\iota_{\infty}(C)]\in \operatorname{CH}_{1}(J)\]
in the Chow group of \(1\)-cycles on \(J\), where \(\iota_{\infty}\colon C\to J\) is the Abel-Jacobi map sending the point at infinity to \(0\). There are now three known examples [5, 7, 31] of non-hyperelliptic genus \(3\) curves over \(\overline{\mathbb{Q}}\) which have torsion Ceresa cycle (assuming well-known conjectures). Whether there are more exceptions and what characterizes them remains an interesting open question. For bielliptic Picard curves, we suggest the following answer. Interestingly, it relates the Ceresa class of the bigonal dual \(\widehat{C}\) (and not of \(C\) itself) to the branch points of the original double cover \(C\to E\).
**Conjecture 1.6**.: _Let \(\widehat{J}\) be the Jacobian of \(\widehat{C}\). Then the Ceresa cycle \(\kappa_{\infty}(\widehat{C})\in\operatorname{CH}_{1}(\widehat{J})\) is torsion if and only if every branch point \((0,s)\) of the double cover \(C\to E\) is torsion in \(E(\overline{\mathbb{Q}})\)._
Conjecture 1.6 is motivated by a result of Lilienfeldt and the second author showing that the Ceresa cycle of any bielliptic Picard curve of the form \(y^{3}=x^{4}+b\) is torsion in the Griffiths group [31]. In this case the branch locus \(X\cup O_{E}\subset E\) is equal to \(E[2]\), hence consists of torsion points. It also draws inspiration from [42, §1.3] via an analogy between bielliptic Picard curves over \(E\) and shtukas over a curve over \(\mathbb{F}_{p}\) (where the \(\mu_{3}\)-action plays the role of Frobenius). Adam Logan was kind enough to gather evidence for Conjecture 1.6 in both directions, including rigorously verifying the "only if" direction for many curves \(C_{a,b}\), using forthcoming work of Ellenberg-Logan-Srinivasan-Venkatesh. If Conjecture 1.6 is true, then there are infinitely many isomorphism classes of non-hyperelliptic genus \(3\) curves \(C/\overline{\mathbb{Q}}\) with torsion Ceresa cycle. On the other hand, only finitely many of them are defined over a given number field.
There are many other avenues of research related to bielliptic Picard curves and their Pryms that are ripe for investigation. For example, in forthcoming work we study the average rank of the Mordell-Weil group \(P(\mathbb{Q})\); the average rank of \(P(\mathbb{Q})\) in cubic and sextic twist families was already considered in [1] and [44]. To help explore other arithmetic questions, it would be worthwhile to develop more robust algorithms that are tailored towards these curves, such as explicit descent algorithms, explicit addition laws, an analogue of Tate's algorithm, etc. We hope this work encourages others to study some of these topics.
### Structure of paper
We start with some generalities on bielliptic Picard curves and their (dual) Prym varieties in §2. In particular, we discuss bigonal duality in §2.7. We prove our Torelli theorem in §3. In §4 we make the connection with unitary and quaternionic Shimura curves. In §5 we perform some explicit calculations with a pencil of curves on the Prym variety. These calculations are used in §6 to explicitly construct the quaternionic multiplication. In §7 we give explicit descriptions of the \(2\)- and \(3\)-torsion subgroups of Pryms of bielliptic Picard curves. Finally, in §8 and §9 we classify rational torsion subgroups and prove Theorems 1.1 and 1.5.
### Acknowledgements
We thank Adam Logan, Adam Morgan, Ciaran Schembri, Michael Stoll, and John Voight for helpful conversations and remarks. The second author was funded by the European Research Council (ERC, CurveArithmetic, 101078157). Part of this research was carried out while the first author was visiting the second author in Jerusalem. We thank the Hebrew University of Jerusalem and the Einstein Institute of Mathematics for their hospitality.
### Notation and conventions
* Our base field will typically be denoted by \(k\), with choice of separable and algebraic closure \(k^{\mathrm{sep}}\subset\bar{k}\) and absolute Galois group \(\mathrm{Gal}_{k}=\mathrm{Gal}(k^{\mathrm{sep}}/k)\). All fields in this paper will be assumed to have characteristic \(\neq 2,3\) unless explicitly stated otherwise. Galois actions will typically be right actions.
* A variety over a field \(k\) is a separated finite type scheme over \(k\). A variety is called nice if it is smooth, projective and geometrically integral.
* If \(X,Y/k\) are varieties, \(f\colon X_{k^{\mathrm{sep}}}\to Y_{k^{\mathrm{sep}}}\) is a morphism of \(k^{\mathrm{sep}}\)-varieties and \(\sigma\in\mathrm{Gal}_{k}\), then \(f^{\sigma}\) denotes the \(k^{\mathrm{sep}}\)-morphism \(x\mapsto f(x^{\sigma^{-1}})^{\sigma}\).
* For a nice variety \(X/k\), denote by \(\mathrm{Pic}(X)\) its Picard group and by \(\mathrm{Pic}_{X}\) its Picard scheme.
* If \(X/k\) is a nice curve, let \(\mathrm{Pic}^{n}(X)\subset\mathrm{Pic}(X)\) be the subset of line bundles of degree \(n\), let \(\mathrm{Jac}_{X}=\mathrm{Pic}_{X}^{0}\) denote its Jacobian, let \(\mathrm{Aut}(X)\) denote the \(k\)-automorphism group of \(X\) and let \(\mathbf{Aut}(X)\) be the _scheme_ of automorphisms of \(X\), with \(\mathbf{Aut}(X)(K)=\mathrm{Aut}(X_{K})\) for every field extension \(K/k\).
* For every integer \(n\geq 1\) we define the group scheme \(\mu_{n}=\mathrm{Spec}\,(k[t]/(t^{n}-1))\). If \(X/k\) is a variety, we define a \(\mu_{n}\)-action on \(X\) to be a morphism of \(k\)-schemes \(\mu_{n}\times_{k}X\to X\) satisfying the axioms of a group action. If \(m,n\) are coprime, giving a \(\mu_{mn}\)-action is the same as giving commuting \(\mu_{m}\)- and \(\mu_{n}\)-actions, using the isomorphism \(\mu_{m}\times\mu_{n}\to\mu_{mn}\) induced by the inclusion maps.
* We will typically write \(\omega\in\bar{k}\) for a choice of primitive third root of unity and \(k(\omega)\) for the smallest field extension of \(k\) containing such \(\omega\).
* If \(V\) is a vector space over a field \(k\), we let \(\mathbb{P}(V)=\mathrm{Proj}(\mathrm{Sym}^{\bullet}(V^{\vee}))\) be the projective space parametrizing lines in \(V\). If \(X/k\) is variety, a morphism \(X\to\mathbb{P}(V)\) is the same as a pair \((\mathscr{L},\phi)\), where \(\mathscr{L}\) is a line bundle on \(X\) and \(\phi\colon V^{\vee}\otimes_{k}\mathcal{O}_{X}\to\mathscr{L}\) is a surjection (up to a suitable notion of isomorphism).
* If \(\mathscr{L}\) is a line bundle on a variety \(X/k\) we denote by \(|\mathscr{L}|:=\mathbb{P}(\mathrm{H}^{0}(X,\mathscr{L})^{\vee})\) the linear system of effective divisors in \(\mathscr{L}\). If \(D\) is a Cartier divisor on \(X\) we also write \(|D|\) for \(|\mathcal{O}_{X}(D)|\). If \(\mathscr{L}\) has no base points, we thus get a natural morphism \(\phi_{\mathscr{L}}\colon X\to|\mathscr{L}|\).
* If \(A/k\) is an abelian variety, let \(A^{\vee}=\mathrm{Pic}_{A}^{0}\) be its dual abelian variety. If \(\mathscr{L}\in\mathrm{Pic}(A)\), denote by \(\lambda_{\mathscr{L}}\colon A\to A^{\vee}\) the homomorphism \(x\mapsto t_{x}^{*}\mathscr{L}\otimes\mathscr{L}^{-1}\).
## 2. Bielliptic Picard curves and their Pryms
### Basic definitions
Let \(k\) be a field (always assumed of characteristic \(\neq 2,3\)).
**Definition 2.1**.: _Let \(C/k\) be a nice genus \(3\) curve. We call a faithful \(\mu_{6}\)-action \(\gamma\colon\mu_{6}\hookrightarrow\mathbf{Aut}(C)\) on \(C\) bielliptic if the quotient \(C/\mu_{2}\) is a genus one curve. We say \(C\) is a bielliptic Picard curve over \(k\) if it admits a bielliptic \(\mu_{6}\)-action._
For \(a,b\in k\), consider the projective plane curve \(C_{a,b}\) over \(k\) with affine equation
\[C_{a,b}\colon y^{3}=x^{4}+ax^{2}+b. \tag{2.1}\]
The curve \(C_{a,b}\) has a unique (\(k\)-rational) point \(\infty\) at infinity, and \(C_{a,b}\) is smooth if and only if
\[\Delta_{a,b}:=16b(a^{2}-4b)\]
is nonzero. The \(\mu_{2}\)-action \(\pm 1\cdot(x,y)=(\pm x,y)\) and \(\mu_{3}\)-action \(\omega\cdot(x,y)=(x,\omega y)\) (where \(\omega^{3}=1\)) combine to a faithful bielliptic \(\mu_{6}\)-action \(\gamma_{a,b}\), given by \(\zeta\cdot(x,y)=(-x,\zeta^{4}y)\), for every sixth root of unity \(\zeta\in k^{\text{sep}}\). The unique fixed point for this action is \(\infty\).
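As a quick symbolic sanity check (ours, not part of the original text): a singular point of the affine model forces \(y=0\) together with a repeated root of the quartic \(x^{4}+ax^{2}+b\), while the unique point at infinity is always smooth, so smoothness of \(C_{a,b}\) is governed exactly by the vanishing locus of \(\Delta_{a,b}\). In sympy:

```python
# Sketch (ours): the discriminant of the quartic x^4 + a x^2 + b has the same zero
# locus as Delta_{a,b} = 16 b (a^2 - 4b), confirming the smoothness criterion.
import sympy as sp

x, a, b = sp.symbols('x a b')
f = x**4 + a*x**2 + b
print(sp.factor(sp.discriminant(f, x)))   # 16*b*(a**2 - 4*b)**2
```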
**Theorem 2.2**.: _Let \(C/k\) be nice genus \(3\) curve with faithful \(\mu_{6}\)-action \(\gamma\). Then the following conditions are equivalent:_
1. \(\gamma\) _is a bielliptic_ \(\mu_{6}\)_-action (and consequently_ \(C\) _is a bielliptic Picard curve)._
2. \(\gamma\) _has a unique fixed point._
3. \(C/\mu_{3}\) _has genus zero._
4. \(C\) _is a plane curve._
5. _There is an isomorphism_ \(C\simeq C_{a,b}\) _for some_ \(a,b\in k\) _such that_ \(\gamma\) _corresponds to_ \(\gamma_{a,b}\) _or_ \(\gamma_{a,b}^{-1}\)_._
Proof.: For each \(i\) dividing \(6\), let \(n_{i}\) be the number of \(\mu_{i}\)-fixed points. By Riemann-Hurwitz, we have \(n_{2}\in\{0,4,8\}\), \(n_{3}\in\{2,5\}\), and \(n_{6}\leq 3\). Moreover, \(\mu_{2}\) permutes the \(\mu_{3}\)-fixed points and vice versa. So \(n_{2}-n_{6}\) is divisible by \(3\) and \(n_{3}-n_{6}\) is divisible by \(2\). There are then three possibilities:
1. \((n_{2},n_{3},n_{6})=(0,2,0)\), or
2. \((n_{2},n_{3},n_{6})=(4,5,1)\), or
3. \((n_{2},n_{3},n_{6})=(8,2,2)\).
The equivalence of (1), (2), and (3) follows immediately from the above cases. To show the equivalence of (1) and (4), first observe that case \((c)\) is hyperelliptic because \(C/\mu_{2}\) is genus \(0\). Case \((a)\) is also hyperelliptic since an unramified double cover of a genus two curve is again hyperelliptic [17]. It remains to show that \(C\) is non-hyperelliptic in case \((b)\). In that case, there is only one \(\mu_{6}\)-fixed point \(\infty\), which is therefore \(k\)-rational. Moreover, \(\infty\) must have different local monodromy type for the \(\mu_{3}\)-cover \(C\to C/\mu_{3}\simeq\mathbb{P}^{1}\) compared to the other four ramification points (which are permuted by the \(\mu_{2}\)-action). By Kummer theory, there is a model \(C\colon y^{3}=f(x)\), for some quartic polynomial \(f(x)\in k[x]\), and in particular \(C\) is a plane quartic curve. This shows that \((1)-(4)\) are equivalent.
It is clear that (5) implies all the rest, so it remains to show the converse. However, we have seen that such a \(C\) has a model \(y^{3}=f(x)\) which we may assume takes the form
\[y^{3}=x^{4}+ax^{2}+cx+d \tag{2.2}\]
with the \(\mu_{3}\)-action \(\omega\cdot(x,y)=(x,\omega y)\) or \(\omega\cdot(x,y)=(x,\omega^{-1}y)\) for every \(\omega\in\mu_{3}\). We see from this model (as in the proof of [11, Lemma 1.21(a)]) that \(C\) admits an involution commuting with the \(\mu_{3}\)-action only if \(c=0\) and this involution is given by \((x,y)\mapsto(-x,y)\), proving (5).
**Remark 2.3**.: A Picard curve is a plane quartic curve of the form (2.2). Such a curve is bielliptic (admitting a \(\mu_{3}\)-equivariant double cover to an elliptic curve) precisely when \(c=0\), explaining our terminology.
There is one \(k^{\text{sep}}\)-isomorphism class of bielliptic Picard curves that behaves so differently that it deserves its own name.
**Definition 2.4**.: _A bielliptic Picard curve \(C\) is special if it is \(k^{\text{sep}}\)-isomorphic to \(C_{0,1}:y^{3}=x^{4}+1\)._
A bielliptic Picard curve \(C\) is special precisely when \(\operatorname{Aut}(C_{k^{\operatorname{sep}}})\) is larger than \(\mathbb{Z}/6\mathbb{Z}\); in that case, it is a group of order 48 [30, Theorem 3.1], with GAP label (48,33). We will see in Lemma 2.9 that \(C_{a,b}\) is special if and only if \(a=0\).
### Marked bielliptic Picard curves
It is often useful to pin down a \(\mu_{6}\)-action on a bielliptic Picard curve with specified signature around the (unique, by Theorem 2.2) \(\mu_{6}\)-fixed point \(\infty\).
**Definition 2.5**.: _A marked bielliptic Picard curve is a bielliptic Picard curve \(C\) with bielliptic \(\mu_{6}\)-action \(\gamma\) such that \(\mu_{6}\) acts on the tangent space \(T_{\infty}C\) via the identity character \(\mu_{6}\to\mu_{6}\)._
An isomorphism of marked bielliptic Picard curves \((C,\gamma)\to(C^{\prime},\gamma^{\prime})\) is by definition an isomorphism of curves \(C\to C^{\prime}\) that is equivariant with respect to the \(\mu_{6}\)-actions. A calculation using the generator \(\frac{x}{y}\in(T_{\infty}C_{a,b})^{\vee}\) shows that the pair \((C_{a,b},\gamma_{a,b})\) defined in SS2.1 is a marked bielliptic Picard curve.
**Lemma 2.6**.: _Let \(C/k\) be a bielliptic Picard curve with bielliptic \(\mu_{6}\)-action \(\gamma\). Then one of \((C,\gamma^{\pm 1})\) is a marked bielliptic Picard curve, and if \(C\) is not special this marked \(\mu_{6}\)-action is the unique one._
Proof.: Since \(\gamma\) is faithful, it acts on \(T_{\infty}C\) via the identity character or its inverse, so either \(\gamma\) or \(\gamma^{-1}\) acts via the identity. If \(C\) is not special, \(\operatorname{\mathbf{Aut}}(C)\simeq\mu_{6}\) by [30, Theorem 3.1], proving the uniqueness in that case.
### Automorphisms
**Lemma 2.7**.: _Every isomorphism \((C_{a,b},\gamma_{a,b})\to(C_{a^{\prime},b^{\prime}},\gamma_{a^{\prime},b^{ \prime}})\) of marked bielliptic Picard curves over \(k\) is of the form \((x,y)\mapsto(\lambda^{3}x,\lambda^{4}y)\) for some \(\lambda\in k^{\times}\) such that \((a^{\prime},b^{\prime})=(\lambda^{6}a,\lambda^{12}b)\)._
Proof.: Let \(\phi\colon C_{a,b}\to C_{a^{\prime},b^{\prime}}\) be an isomorphism preserving the \(\mu_{6}\)-actions. Since \(C_{a,b}\) and \(C_{a^{\prime},b^{\prime}}\) are canonically embedded, \(\phi\) is induced by a linear isomorphism of the ambient projective space \(\mathbb{P}^{2}_{k}\). This isomorphism preserves the point at infinity (being the unique \(\mu_{6}\)-fixed point) and the \(\mu_{6}\)-eigenspaces of \(\operatorname{H}^{0}(\mathbb{P}^{2},\mathcal{O}(1))=\operatorname{span}\{1,x,y\}\). A short calculation shows that \(\phi\) must be of the form above.
**Lemma 2.8**.: _Every marked bielliptic Picard curve over \(k\) is of the form \((C_{a,b},\gamma_{a,b})\) for some \(a,b\in k\). Two marked bielliptic Picard curves \((C_{a,b},\gamma_{a,b})\) and \((C_{a^{\prime},b^{\prime}},\gamma_{a^{\prime},b^{\prime}})\) are isomorphic if and only if there exists \(\lambda\in k^{\times}\) such that \(a^{\prime}=\lambda^{6}a\) and \(b^{\prime}=\lambda^{12}b\)._
Proof.: The first part follows from Theorem 2.2(5). The second part follows from Lemma 2.7.
**Lemma 2.9**.: _The bielliptic Picard curve \(C_{a,b}\) is special if and only if \(a=0\)._
Proof.: We may assume that \(k\) is algebraically closed. Lemma 2.8 shows that \(C_{a,b}\) is special if \(a=0\). Conversely, suppose that \(C_{a,b}\) is special. By [30, Theorem 3.1], \(G=\operatorname{Aut}(C_{a,b})\) is a group of order 48 with GAP label (48,33). Moreover, \(\operatorname{Aut}(C_{a,b},\gamma_{a,b})\) is the centralizer in \(G\) of a cyclic order 6 subgroup. A group theory calculation similar to [11, Lemma 4.1.5(c)] shows that all cyclic order 6 subgroups of \(G\) are conjugate and have centralizer of order 12, so \(\operatorname{Aut}(C_{a,b},\gamma_{a,b})\) has order 12. On the other hand, Lemma 2.7 shows that \(\operatorname{Aut}(C_{a,b},\gamma_{a,b})=\{\lambda\in k^{\times}\mid(a,b)=(\lambda^{6}a,\lambda^{12}b)\}\). If \(a\neq 0\), the latter set has size 6, so if \(C\) is special then \(a=0\).
**Lemma 2.10**.: _Let \((C,\gamma)\) be a marked bielliptic Picard curve over \(k\). Then \(\operatorname{\mathbf{Aut}}(C,\gamma)=\mu_{6}\) if \(C\) is not special and \(\operatorname{\mathbf{Aut}}(C,\gamma)=\mu_{12}\) if \(C\) is special._
Proof.: Combine Lemmas 2.7 and 2.9.
Lemma 2.8 shows that the moduli stack of marked bielliptic Picard curves can be identified with the weighted projective stack \(\mathbb{P}(6,12)\) minus the discriminant locus \(\Delta_{a,b}=0\). As with the moduli stack of elliptic curves, the coarse space has a simpler description. Given a bielliptic Picard curve \(C/k\), its \(j\)-invariant is defined as
\[j(C):=j_{a,b}:=\frac{4b-a^{2}}{4b}\in k\setminus\{0\}, \tag{2.3}\]
where \(a,b\in k\) are such that \(C\simeq C_{a,b}\). Lemmas 2.6, 2.8 and 2.9 show that this is well defined and that \(C_{k^{\mathrm{sep}}}\simeq C^{\prime}_{k^{\mathrm{sep}}}\) if and only if \(j(C)=j(C^{\prime})\), even if \(C\) or \(C^{\prime}\) is special.
### Twists
Let \((C,\gamma)\) be a marked bielliptic Picard curve over \(k\). By the twisting principle, every cocycle \(\xi\in\mathrm{H}^{1}(k,\mathbf{Aut}(C,\gamma))\) determines a bielliptic Picard curve \((C_{\xi},\gamma_{\xi})\) that is \(k^{\mathrm{sep}}\)-isomorphic to \((C,\gamma)\). Its \(k\)-isomorphism class is characterized by the existence of a \(k^{\mathrm{sep}}\)-isomorphism \(\phi\colon(C_{\xi},\gamma_{\xi})_{k^{\mathrm{sep}}}\to(C,\gamma)_{k^{\mathrm{ sep}}}\) such that \(\sigma\mapsto\phi^{\sigma}\circ\phi^{-1}\) represents the class of \(\xi\).
In particular, since \(\gamma\) determines a subgroup \(\mu_{6}\subset\mathbf{Aut}(C,\gamma)\) and since \(\mathrm{H}^{1}(k,\mu_{6})\simeq k^{\times}/k^{\times 6}\), every \(\delta\in k^{\times}\) gives rise to a sextic twist \((C_{\delta},\gamma_{\delta})\) of \((C,\gamma)\). Concretely, if \(C=C_{a,b}\) then the sextic twist of \(C_{a,b}\) by \(\delta\) is isomorphic to \(C_{\delta a,\delta^{2}b}\). If \(C\) is not special, Lemmas 2.6 and 2.7 show that \(\mathbf{Aut}(C,\gamma)=\mathbf{Aut}(C)=\mu_{6}\) and so every twist of \(C\) (as a marked and unmarked curve) is isomorphic to a sextic twist.
### The Prym variety \(P\)
Let \((C,\gamma)\) be a marked bielliptic Picard curve over \(k\). We associate to \(C\) an abelian surface \(P\) whose study will occupy the rest of this paper.
Restricting the \(\mu_{6}\)-action to \(\mu_{2}\) defines an involution \(\tau\colon C\to C\). The quotient \(\pi\colon C\to E:=C/\tau\) is a genus \(1\) curve. When endowed with the \(k\)-point \(\pi(\infty)\) it has the structure of an elliptic curve, so we can identify \(\mathrm{Jac}_{E}\) with \(E\). Let \(J=\mathrm{Jac}_{C}\) be the Jacobian of \(C\).
**Definition 2.11**.: _The Prym variety of \(C\) is defined as \(P:=\ker(1+\tau^{*}\colon J\to J)\)._
This is an abelian surface. We refer to [28, §3.3] for more details concerning Prym varieties of bielliptic genus \(3\) curves. The properties we will use are summarized by the following two dual exact sequences, in which \(A:=P^{\vee}\) denotes the dual abelian surface:
\[0\to P\to J\xrightarrow{\;\pi_{*}\;}E\to 0,\qquad\qquad 0\to E\xrightarrow{\;\pi^{*}\;}J\to A\to 0,\]
where \(\pi_{*}\) (resp. \(\pi^{*}\)) denotes pushforward (resp. pullback) of divisors. The principal polarization on \(J\) restricts to a \((1,2)\)-polarization \(\lambda\colon P\to A\) with kernel \(P[\lambda](k^{\mathrm{sep}})\simeq(\mathbb{Z}/2\mathbb{Z})^{2}\). The intersection of \(P\) and \(\pi^{*}(E)\) is \(\pi^{*}(E[2])=P[\lambda]\). The sum of the two maps \(P\to J\) and \(E\to J\) is an isogeny \(P\times E\to J\) with kernel \(\{(\pi^{*}(x),x)\mid x\in E[2]\}\simeq E[2]\).
If \(C=C_{a,b}\) for some \(a,b\in k\), we may write \(E_{a,b}\), \(J_{a,b}\), \(\lambda_{a,b}\colon P_{a,b}\to A_{a,b}\) et cetera, but we drop the subscripts when convenient.
**Remark 2.12**.: \(P\) need not be principally polarizable over \(k\); indeed we prove in Corollary 6.13 that if \(P\) is geometrically simple, then it is principally polarizable if and only if \(16b(a^{2}-4b)\) is a sixth power in \(k^{\times}\). However, if \(b=s^{3}\) is a cube in \(k^{\times}\), then the Jacobian of the following genus two curve is in the isogeny class of \(P_{a,s^{3}}\):
\[-asy^{2}=(x^{2}+2x-2)(s^{3}x^{4}+4s^{3}x^{3}+2dx-d).\]
where \(d=a^{2}-4s^{3}\); this follows from [40, Theorem 1.1].
We now incorporate the \(\mu_{3}\)-action on \(C\) into the picture. By Albanese functoriality, there is a unique \(\mu_{6}\)-action on \(J\) such that the Abel-Jacobi map \(\operatorname{AJ}_{\infty}\colon C\hookrightarrow J\) is \(\mu_{6}\)-equivariant. This induces a \(\mu_{3}\)-action on the subvariety \(P\) and the quotient \(E\). We record the following important signature calculation.
**Lemma 2.13**.: _For every primitive third root of unity \(\omega\in k^{\operatorname{sep}}\), the action of \(\omega\) on \(\operatorname{H}^{0}(P_{k^{\operatorname{sep}}},\Omega^{1}_{P_{k^{ \operatorname{sep}}}})\) has characteristic polynomial \(T^{2}+T+1\)._
Proof.: We may assume that \((C,\gamma)=(C_{a,b},\gamma_{a,b})\) by Lemma 2.8 and that \(k=k^{\operatorname{sep}}\), so fix such a \(\omega\in k\). Then \(\omega\) acts on \(C_{a,b}\) via \((x,y)\mapsto(x,\omega y)\). The vector space \(\operatorname{H}^{0}(C,\Omega^{1}_{C})\) has basis \(\frac{dx}{y^{2}},x\frac{dx}{y^{2}},\frac{dx}{y}\), hence the \(\omega\)-action on this vector space has eigenvalues \(\omega,\omega,\omega^{2}\). The vector space \(\operatorname{H}^{0}(E,\Omega^{1}_{E})\) has basis \(\frac{dx}{y^{2}}\), hence \(\omega\) has eigenvalue \(\omega\). Since the map \(P\times E\to J,(p,e)\mapsto p+\pi^{*}(e)\) is an isogeny with kernel \(E[2]\), it induces a \(\mu_{3}\)-equivariant isomorphism \(\operatorname{H}^{0}(J,\Omega^{1}_{J})\simeq\operatorname{H}^{0}(P,\Omega^{ 1}_{P})\oplus\operatorname{H}^{0}(E,\Omega^{1}_{E})\). Combining the last three sentences with the isomorphism \(\operatorname{H}^{0}(C,\Omega^{1}_{C})\simeq\operatorname{H}^{0}(J,\Omega^{ 1}_{J})\) proves the lemma.
**Lemma 2.14**.: _The subgroup of \(\mu_{3}\)-fixed points of \(P\) is of size \(9\) and contained in \(P[3]\)._
Proof.: We may assume that \(k=k^{\operatorname{sep}}\). So let \(\omega\in k\) be a third root of unity and \(\alpha\colon P\to P\) the corresponding automorphism. Let \(\beta=1-\alpha\). By Lemma 2.13, \(\beta\) induces an isomorphism on differentials so \(\beta\) is an isogeny. Since \(\beta\circ(\alpha^{2}+\alpha+1)=0\) and \(\beta\) is surjective,
\[\alpha^{2}+\alpha+1=0. \tag{2.4}\]
Therefore \(\beta^{2}=1-2\alpha+\alpha^{2}=-3\alpha\), so \(\beta\) has degree \(9\) and \(P[\beta]\subset P[-3\alpha]=P[3]\).
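The key step in the proof is the identity \(\beta^{2}=-3\alpha\) modulo the relation \(\alpha^{2}+\alpha+1=0\); a one-line symbolic check (ours):

```python
# Sketch (ours): (1 - alpha)^2 + 3*alpha is divisible by alpha^2 + alpha + 1.
import sympy as sp

alpha = sp.symbols('alpha')
beta = 1 - alpha
print(sp.rem(sp.expand(beta**2 + 3*alpha), alpha**2 + alpha + 1, alpha))  # 0
```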
### The dual Prym variety \(A\)
Keep the notations from §2.5. Since \(P[\lambda]\subset P[2]\), there exists a unique isogeny\({}^{2}\) \(\hat{\lambda}\colon A\to P\) such that \(\hat{\lambda}\circ\lambda=[2]\). The next proposition describes this isogeny geometrically and characterizes \(A\) in terms of the curve \(C\) with its involution \(\tau\colon C\to C\).
Footnote 2: We warn the reader that \(\hat{\lambda}\) is not the same as the dual \(\lambda^{\vee}\) of \(\lambda\). Being a polarization, \(\lambda\) is in fact self-dual!
**Proposition 2.15**.: _Let \(i\colon C\to A\) be the composite of the Abel-Jacobi map \(C\to J\) with respect to \(\infty\in C(k)\) and the projection map \(J\to A\)._
1. _The morphism_ \(i\colon C\hookrightarrow A\) _is a closed embedding._
2. _The divisor_ \(i(C)\) _is ample and induces the polarization_ \(\hat{\lambda}\)_._
3. _If_ \(B/k\) _is an abelian surface and_ \(j\colon C\hookrightarrow B\) _a closed embedding mapping_ \(\infty\) _to_ \(0\) _and such that_ \([-1]\circ j=j\circ\tau\)_, then there exists a unique isomorphism of abelian surfaces_ \(\phi\colon A\to B\) _such that_ \(\phi\circ i=j\)_._
Proof.: This is due to Barth [4]; see [28, Proposition 3.5] for a detailed proof.
Let \(\operatorname{Sym}^{2}C\) be the symmetric square of \(C\), a nice surface over \(k\) parameterizing effective divisors of degree \(2\) on \(C\). In analogy with Jacobians of genus \(2\) curves, we will describe the fibres of the surjective map \(i^{(2)}\colon\operatorname{Sym}^{2}C\to A\) given by \(p+p^{\prime}\mapsto i(p)+i(p^{\prime})\); this is due to Ikeda [24].
To this end, we define two involutions on \(\mathrm{Sym}^{2}C\). The first is \(\tau^{(2)}(p+p^{\prime})=\tau(p)+\tau(p^{\prime})\). The second involution \(\kappa\) sends \(p+p^{\prime}\) to the unique effective degree \(2\) divisor linearly equivalent to \(4\infty-p-p^{\prime}\); this is well defined by Riemann-Roch and the fact that \(C\) is not hyperelliptic. Since \(\tau(\infty)=\infty\), the involutions \(\tau^{(2)}\) and \(\kappa\) on \(\mathrm{Sym}^{2}C\) commute. Finally, the double cover \(\pi\colon C\to E\) induces a map \(\pi^{*}\colon E\to\mathrm{Sym}^{2}C\), giving an embedding \(E\simeq\pi^{*}(E)\hookrightarrow\mathrm{Sym}^{2}C\).
**Proposition 2.16** (Ikeda).: _The map \(i^{(2)}\colon\mathrm{Sym}^{2}C\to A\) contracts \(\pi^{*}(E)\) to the origin. Two points \(D,D^{\prime}\in\mathrm{Sym}^{2}C\) not in \(\pi^{*}(E)\) map to the same point of \(A\) if and only if \(D^{\prime}=\kappa(\tau^{(2)}(D))\), i.e. \(D^{\prime}+\tau(D)\sim 4\infty\)._
Proof.: This follows from the proof of [24, Lemma 3.1].
We see that \(A\) is obtained from \(\mathrm{Sym}^{2}C/\langle\tau^{(2)}\circ\kappa\rangle\) by contracting the rational curve \(\pi^{*}(E)/\langle-1\rangle\). This description will be useful in Section 7.5, where we relate the bitangents of \(C\) to the group \(A[2]\).
### Bigonal duality
Given a marked bielliptic Picard curve \((C,\gamma)\), it turns out that we can define another marked bielliptic Picard curve \((\widehat{C},\hat{\gamma})\) such that the roles of the Prym variety and its dual are reversed: the Prym variety of \(\widehat{C}\) is isomorphic to \(A=P^{\vee}\). This is called the bigonal dual of \(C\), originally defined by Pantazis [36] (inspired by ideas of Donagi [13]) and analyzed by Barth [4] in this specific situation; see [28, §3.4] for more details. Let \(\Theta_{2\infty}\subset J\) be the image of the Abel-Jacobi map \(\mathrm{Sym}^{2}C\to J\) sending \(p+p^{\prime}\) to \(p+p^{\prime}-2\infty\).
**Definition 2.17**.: _The bigonal dual of \(C\) is defined by \(\widehat{C}:=P\cap\Theta_{2\infty}\)._
The \(\mu_{3}\)-action on \(P\) restricts to a \(\mu_{3}\)-action on \(\widehat{C}\). Inversion \([-1]\) on \(P\) restricts to an involution \(\hat{\tau}\) on \(\widehat{C}\). Using the isomorphism \(\mu_{2}\times\mu_{3}\to\mu_{6}\) induced by the two inclusions, we obtain a \(\mu_{6}\)-action \(\hat{\gamma}\) on \(\widehat{C}\).
**Lemma 2.18**.: _The pair \((\widehat{C},\hat{\gamma})\) is a marked bielliptic Picard curve._
Proof.: By [28, Lemma 3.5], \(\widehat{C}\) is a nice genus \(3\) curve and \(\widehat{C}/\mu_{2}\) is a genus \(1\) curve. Since the \(\mu_{3}\)-action on \(P\) has only isolated fixed points (Lemma 2.14), the \(\mu_{3}\)-action on \(\widehat{C}\) is nontrivial. Therefore \(\hat{\gamma}\) is a faithful bielliptic action and \(\widehat{C}\) is a bielliptic Picard curve. The origin \(0\in P(k)\) is a \(\mu_{6}\)-fixed point of \(\widehat{C}\), which is unique by Theorem 2.2.
It remains to show that \((\widehat{C},\widehat{\gamma})\) is a marked bielliptic Picard curve, in other words that \(\mu_{6}\) acts on \(T_{0}\widehat{C}\) via the identity character. Equivalently, we will show that \(\mu_{6}\) acts on \((T_{0}\widehat{C})^{\vee}\) via the _inverse_ of the identity character. Choose a uniformizer \(t\in\mathcal{O}_{C,\infty}\). This induces a \(k\)-basis \(\{t_{1},t_{2}\}\) of \((T_{(\infty,\infty)}(C\times C))^{\vee}=(T_{\infty}C)^{\vee}\oplus(T_{\infty}C )^{\vee}\) and an isomorphism of completed local rings \(\widehat{\mathcal{O}}_{C\times C,(\infty,\infty)}\simeq k[[t_{1},t_{2}]]\). In turn, this induces an isomorphism \(\widehat{\mathcal{O}}_{\mathrm{Sym}^{2}C,2\infty}\simeq k[[u,v]]\), where \(u=t_{1}+t_{2}\) and \(v=t_{1}t_{2}\). It follows that \((T_{2\infty}\mathrm{Sym}^{2}C)^{\vee}\simeq(T_{0}\Theta_{2\infty})^{\vee}\) has basis \(\{u,v\}\) and \([-1]\) sends \(u,v\) to \(-u,v\) respectively. Therefore \(T_{0}\widehat{C}=T_{0}\Theta_{2\infty}\cap T_{0}P\) has basis \(\{u\}\) which indeed has the correct character.
It follows that the objects defined in SS2.5 also apply to \((\widehat{C},\hat{\gamma})\), giving \(\widehat{J}=\mathrm{Jac}_{\widehat{C}}\), \(\widehat{P}\), \(\widehat{A}\), \(\hat{\pi}\colon\widehat{C}\to\widehat{E}\) in this context. The inclusion \(\widehat{C}\hookrightarrow P\) induces a homomorphism \(\widehat{J}\to P\).
**Proposition 2.19**.: _The homomorphism \(\widehat{J}\to P\) factors through the projection \(\widehat{J}\to\widehat{A}\) and induces an isomorphism \(\widehat{A}\xrightarrow{\sim}P\) of \((1,2)\)-polarized surfaces._
Proof.: See [28, Proposition 3.6]. The fact that the isomorphism preserves the polarizations means that the maps \(A\to P\) and \(\widehat{P}\to\widehat{A}\) that are both denoted by \(\hat{\lambda}\) are the same under these identifications.
Choosing an isomorphism \((C,\gamma)\simeq(C_{a,b},\gamma_{a,b})\) for some \(a,b\in k\) using Lemma 2.8, we can write down equations for \(\widehat{C}\) [28, §3.4, Eq. (3.7), (3.8)]:
\[C_{a,b}:y^{3} =x^{4}+ax^{2}+b, \tag{2.5}\] \[E_{a,b}:y^{2} =x^{3}+16(a^{2}-4b), \tag{2.6}\] \[\widehat{C}_{a,b}:y^{3} =x^{4}+8ax^{2}+16(a^{2}-4b), \tag{2.7}\] \[\widehat{E}_{a,b}:y^{2} =x^{3}+b. \tag{2.8}\]
The coordinates on \(E_{a,b}\) and \(\widehat{E}_{a,b}\) are chosen so that the double covers \(C_{a,b}\to E_{a,b}\) and \(\widehat{C}_{a,b}\to\widehat{E}_{a,b}\) are given by \((x,y)\mapsto(4y,8x^{2}+4a)\) and \((x,y)\mapsto(y/4,x^{2}/8+a/2)\) respectively.
The third equation (2.7) shows that bigonal duality takes the following elegant form on the \(j\)-invariant (2.3):
**Lemma 2.20**.: _Let \(C\) be a bielliptic Picard curve over \(k\). Then \(j(\widehat{C})=1/j(C)\)._
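Both the covering maps just quoted and Lemma 2.20 are easy to confirm symbolically; the following sympy sketch (ours) substitutes the curve equations (2.5) and (2.7) into the stated maps and compares the \(j\)-invariants:

```python
# Sketch (ours): verify the double covers C -> E and \hat{C} -> \hat{E}, and Lemma 2.20.
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

def on_target(src_a, src_b, X, Y, rhs):
    # check Y^2 - rhs vanishes modulo the source relation y^3 = x^4 + src_a*x^2 + src_b
    expr = sp.expand(Y**2 - rhs)
    return sp.simplify(expr.subs(y**3, x**4 + src_a*x**2 + src_b))

# C_{a,b} -> E_{a,b}: (x, y) |-> (4y, 8x^2 + 4a), with E: Y^2 = X^3 + 16(a^2 - 4b)
X, Y = 4*y, 8*x**2 + 4*a
print(on_target(a, b, X, Y, X**3 + 16*(a**2 - 4*b)))          # 0

# \hat{C}_{a,b} -> \hat{E}_{a,b}: (x, y) |-> (y/4, x^2/8 + a/2), with \hat{E}: Y^2 = X^3 + b
X, Y = y/4, x**2/8 + a/2
print(on_target(8*a, 16*(a**2 - 4*b), X, Y, X**3 + b))        # 0

# Lemma 2.20: j(\hat{C}_{a,b}) = 1/j(C_{a,b}), with j_{a,b} = (4b - a^2)/(4b)
j = lambda A, B: (4*B - A**2) / (4*B)
print(sp.simplify(j(8*a, 16*(a**2 - 4*b)) - 1/j(a, b)))       # 0
```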
The quartic polynomial \(\hat{f}:=x^{4}+8ax^{2}+16(a^{2}-4b)\) defining the bigonal dual curve can also be constructed from the Galois theory of the original quartic \(f:=x^{4}+ax^{2}+b\). A calculation shows:
**Lemma 2.21**.: _Suppose that \(f\) has roots \(\{\pm\alpha,\pm\beta\}\) in \(k^{\mathrm{sep}}\). Then \(\hat{f}\) has roots \(\{\pm 2\alpha\pm 2\beta\}\)._
This description of \(\hat{f}\) will be useful when analyzing \(3\)-torsion in \(P\) in SS7.2.
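The lemma is also easy to confirm symbolically; the following sympy check (ours, not part of the argument) substitutes the claimed roots into \(\hat{f}\).

```python
# Quick symbolic verification of Lemma 2.21: if f = x^4 + a*x^2 + b has roots
# {±alpha, ±beta}, then fhat = x^4 + 8a*x^2 + 16(a^2 - 4b) has roots {±2alpha ± 2beta}.
import sympy as sp

x, alpha, beta = sp.symbols('x alpha beta')

# Express a, b in terms of the roots of the even quartic f:
# f = (x^2 - alpha^2)(x^2 - beta^2) = x^4 + a*x^2 + b.
a = -(alpha**2 + beta**2)
b = alpha**2 * beta**2

fhat = x**4 + 8*a*x**2 + 16*(a**2 - 4*b)

for root in (2*alpha + 2*beta, 2*alpha - 2*beta,
             -2*alpha - 2*beta, -2*alpha + 2*beta):
    assert sp.expand(fhat.subs(x, root)) == 0

print("Lemma 2.21 verified.")
```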
## 3. A Torelli theorem
A bielliptic Picard curve \(C\) gives rise to two \((1,2)\)-polarized abelian surfaces with \(\mu_{3}\)-actions: the Prym surface \(P\) and its dual \(A\). We show that one can recover \(C\) from \(A\), together with its polarization and \(\mu_{3}\)-action; in other words, we prove a Torelli-type theorem (Theorem 3.13). We do this by studying the pencil of genus three curves in the linear system \(|C|\) on \(A\). In fact, we will prove that _any_ indecomposable \((1,2)\)-polarized abelian surface with compatible \(\mu_{3}\)-action is the dual Prym variety of a bielliptic Picard curve, so we work in this generality below.
### Generalities on \((1,2)\)-polarized surfaces
We recall and complement some results of Barth [4]. Barth works over \(\mathbb{C}\), but the proofs of the basic results quoted here are valid in arbitrary characteristic \(\neq 2\).
Let \((A,\lambda)\) be a \((1,2)\)-polarized abelian surface over \(k\). We assume that \((A,\lambda)\) is indecomposable, that is, not isomorphic to a product of two elliptic curves \((E,O)\times(E^{\prime},2O^{\prime})\) as polarized abelian varieties. Recall our notations concerning linear systems and abelian varieties in SS1.7.
**Lemma 3.1**.: _Let \(\mathscr{L}\in\operatorname{Pic}(A_{k^{\mathrm{sep}}})\) be a line bundle representing the polarization: \(\lambda\mathscr{L}=\lambda\)._
1. \(\dim_{k}\operatorname{H}^{0}(A,\mathscr{L})=2\)_._
2. _The linear system_ \(|\mathscr{L}|\) _has four base-points on which_ \(A[\lambda](k^{\mathrm{sep}})\) _acts simply transitively._
3. _At most finitely many members of the pencil are singular; a singular member is an (arithmetic) genus \(3\) curve of geometric genus \(2\) with one node, or the union of two genus \(1\) curves \(X_{1},X_{2}\) with intersection number \((X_{1}\cdot X_{2})=2\)_.
Proof.: See [4, SS1.2].
**Lemma 3.2**.: _There exists a unique \(\mathscr{L}\in\operatorname{Pic}(A_{k^{\mathrm{sep}}})\) representing \(\lambda\) whose base locus is \(A[\lambda]\). This line bundle is symmetric: \([-1]^{*}\mathscr{L}\simeq\mathscr{L}\). Moreover, for every divisor \(D\in|\mathscr{L}|\) we have \([-1]^{*}D=D\)._
Proof.: Denote the base locus of a line bundle \(\mathscr{L}\) by \(\operatorname{BL}(\mathscr{L})\). Given \(a\in A(k^{\mathrm{sep}})\), let \(t_{a}\colon A\to A\) be the translation map and \(\lambda(a)\in A^{\vee}(k^{\mathrm{sep}})=\operatorname{Pic}^{0}(A_{k^{\mathrm{sep} }})\) be its image under \(\lambda\). Then for any \(\mathscr{L}\) representing \(\lambda\), the following identity holds:
\[\operatorname{BL}(\lambda(a)\otimes\mathscr{L})=t_{a}^{*}\operatorname{BL}( \mathscr{L}). \tag{3.1}\]
Indeed, by definition \(\lambda(a)\otimes\mathscr{L}=t_{a}^{*}\mathscr{L}\). Let \(D,D^{\prime}\) be two distinct members of the pencil \(|\mathscr{L}|\). Then \(t_{a}^{*}D,t_{a}^{*}D^{\prime}\) are two distinct members of \(|t_{a}^{*}\mathscr{L}|\). So \(\operatorname{BL}(t_{a}^{*}\mathscr{L})=t_{a}^{*}D\cap t_{a}^{*}D^{\prime}=t_{ a}^{*}(D\cap D^{\prime})=t_{a}^{*}\operatorname{BL}(\mathscr{L})\), proving (3.1). This identity, together with Lemma 3.1(2), proves the existence and uniqueness of \(\mathscr{L}\). The remainder of the lemma follows from [4, Proposition 1.6].
For the remainder of this section fix \(\mathscr{L}\in\operatorname{Pic}(A_{k^{\mathrm{sep}}})\) satisfying the properties of Lemma 3.2. Since there is a unique such line bundle up to isomorphism and \(A\) has a \(k\)-point, \(\mathscr{L}\) is automatically Galois invariant and descends to a unique line bundle on \(A\) that we will continue to denote by \(\mathscr{L}\).
**Remark 3.3**.: This marks an interesting contrast with principal polarizations on abelian surfaces, which may not be represented by a \(k\)-rational line bundle. Indeed, this is the case for a positive proportion of genus \(2\) Jacobians over \(\mathbb{Q}\) ordered by height [38, Theorem 23].
The next proposition shows that every smooth member of the pencil \(|\mathscr{L}|\) realizes \(A\) as the dual Prym variety of a bielliptic genus \(3\) curve, as in Proposition 2.15. A bielliptic genus \(3\) curve \((X,\tau)\) is a nice genus \(3\) curve together with an involution \(\tau\) on \(X\) with \(4\) fixed points. For such a curve, let \(\operatorname{Prym}(X,\tau)=\ker(1+\tau^{*}\colon\operatorname{Jac}_{X}\to \operatorname{Jac}_{X})\) be its Prym variety.
**Proposition 3.4**.: _Suppose that \(X\in|\mathscr{L}|(k)\) is smooth, and let \(\tau\) be the restriction of \([-1]\) to \(X\). Then \((X,\tau)\) is a bielliptic genus \(3\) curve and the inclusion \(X\to A\) induces an isomorphism \(\operatorname{Prym}(X,\tau)^{\vee}\simeq A\) of \((1,2)\)-polarized surfaces._
Proof.: See [4, Proposition (1.8) and Theorem (1.12)].
The abelian surface \(A\) can also be reconstructed from the singular members of the pencil. By Lemma 3.1(3), there are two cases to consider.
For the first case, let \(X\in|\mathscr{L}|(k)\) be a curve with one node, whose normalization \(\tilde{X}\to X\) is a genus \(2\) curve. Let \(p\in X(k)\) be the node and \(q_{1},q_{2}\) be its preimages in \(\tilde{X}(k^{\mathrm{sep}})\). The morphism \(\tilde{X}\to X\hookrightarrow A\) induces a homomorphism of abelian varieties \(\phi\colon\operatorname{Jac}_{\tilde{X}}\to A\), sending a degree zero divisor class on \(\tilde{X}\) to its sum in \(A\).
**Proposition 3.5**.: _The homomorphism \(\phi\colon\operatorname{Jac}_{\tilde{X}}\to A\) is an isogeny with kernel \(\{0,q_{1}-q_{2}\}\)._
Proof.: The image of \(\phi\) is an abelian subvariety containing \(X\), so \(\phi\) is surjective. By construction, \(\phi(q_{1}-q_{2})=0\). It suffices to prove \(\phi\) has degree \(2\). This follows from an intersection number calculation. Given divisors \(D,E\) on a nice surface, denote by \([D],[E]\) their class in the Picard group and their intersection number by \((D\cdot E)\). The origin \(0\in A(k)\) lifts to a unique point \(\infty\in\tilde{X}(k)\) and we identify \(\tilde{X}\) with its image in \(\operatorname{Jac}_{\tilde{X}}\) using the point \(\infty\). Then \(\phi^{*}([X])=(\deg\phi)[\tilde{X}]\) and by the projection formula
\[(\deg\phi)(X\cdot X)=(\phi^{*}X\cdot\phi^{*}X)=(\deg\phi)^{2}(\tilde{X}\cdot\tilde{X}).\]
The adjunction formula tells us that \((X\cdot X)=2\cdot 3-2=4\) and \((\tilde{X}\cdot\tilde{X})=2\). Therefore \(\deg\phi=2\), as claimed.
For the second case, let \(X=X_{1}\cup X_{2}\in|\mathscr{L}|(k)\) be a pair of genus \(1\) curves with intersection number \(2\) and let \(\tilde{X}=X_{1}\sqcup X_{2}\to X\) be its normalization. The map \(\tilde{X}\to X\hookrightarrow A\) induces a homomorphism \(\phi\colon\operatorname{Jac}_{\tilde{X}}:=\operatorname{Jac}_{X_{1}}\times \operatorname{Jac}_{X_{2}}\to A\).
**Proposition 3.6**.: _The map \(\phi\) is an isogeny of degree \(2\)._
Proof.: The proof is very similar to that of Proposition 3.5; we omit the details.
### Compatible \(\mu_{3}\)-actions on \((1,2)\)-polarized abelian surfaces
**Definition 3.7**.: _A \(\mu_{3}\)-abelian surface over \(k\) is a triple \((A,\lambda,\alpha)\), where \((A,\lambda)\) is a \((1,2)\)-polarized abelian surface and \(\alpha\colon\mu_{3}\to\operatorname{Aut}(A_{k^{\mathrm{sep}}})\) is a \(\operatorname{Gal}_{k}\)-equivariant group homomorphism such that for every primitive third root of unity \(\omega\in k^{\mathrm{sep}}\):_
1. \(\alpha(\omega)^{\vee}\circ\lambda\circ\alpha(\omega)=\lambda\)_;_
2. \(\alpha(\omega)\) _has characteristic polynomial_ \(X^{2}+X+1\) _on_ \(T_{0}A=\operatorname{H}^{0}(A,\Omega^{1}_{A})^{\vee}\)_._
The signature condition (2) is equivalent to \(\alpha(\omega)^{2}+\alpha(\omega)+1=0\), and is motivated by Lemma 2.13. Condition (1) can be reformulated as follows: after extending \(\mu_{3}\hookrightarrow\operatorname{Aut}(A_{k^{\mathrm{sep}}})\) to a \(\mathbb{Q}\)-algebra homomorphism \(\mathbb{Q}(\omega)\hookrightarrow\operatorname{End}(A_{k^{\mathrm{sep}}}) \otimes\mathbb{Q}\), the Rosati involution associated to \(\lambda\) restricts to the conjugation involution \(a+b\omega\mapsto a+b\bar{\omega}\) on \(\mathbb{Q}(\omega)\).
**Remark 3.8**.: It would be more accurate to call such triples '\((1,2)\)-polarized abelian surfaces with compatible \(\mu_{3}\)-action of signature \((1,1)\)', but we stick with \(\mu_{3}\)-abelian surfaces for brevity.
Let \((A,\lambda,\alpha)\) be a \(\mu_{3}\)-abelian surface over \(k\). Assume that \((A,\lambda)\) is indecomposable, so the results of SS3.1 apply. Let \(\mathscr{L}\) be the unique line bundle on \(A\) representing \(\lambda\) with base locus \(A[\lambda]\). The compatibility of \(\alpha\) and \(\lambda\) and the uniqueness of \(\mathscr{L}\) show that \(\alpha(\omega)^{*}\mathscr{L}\simeq\mathscr{L}\) for all \(\omega\in\mu_{3}(k^{\mathrm{sep}})\). Each such isomorphism is unique up to \(k^{\times}\)-scaling, and so determines a _canonical_ isomorphism \(|\alpha(\omega)^{*}\mathscr{L}|\simeq|\mathscr{L}|\). Via pullback this induces a \(\mu_{3}\)-action on the pencil \(|\mathscr{L}|\).
**Proposition 3.9**.: _Every \(\mu_{3}\)-fixed point of \(|\mathscr{L}|\) is a smooth curve._
Proof.: We may assume that \(k=k^{\mathrm{sep}}\). Let \(\omega\in k\) be a third root of unity and for ease of notation denote \(\alpha(\omega)\colon A\to A\) by \(\alpha\). Let \(X\in|\mathscr{L}|\) be fixed by \(\alpha\). Assume for the sake of contradiction that \(X\) is singular, with normalization \(\tilde{X}\to X\). By Lemma 3.1(3) and Propositions 3.5 and 3.6, the map \(\tilde{X}\to A\) induces an isogeny \(\phi\colon\operatorname{Jac}_{\tilde{X}}\to A\) whose kernel \(\{0,D\}\) has order \(2\). Since \(\alpha\) preserves \(X\), it induces an automorphism \(\tilde{\alpha}\) of \(\operatorname{Jac}_{\tilde{X}}\) and \(\phi\) is equivariant with respect to \(\tilde{\alpha}\) and \(\alpha\). It follows that \(\tilde{\alpha}\) fixes \(D\). Since \(\alpha^{2}+\alpha+1=0\) (Condition (2) in Definition 3.7) and \(\phi\) is an isogeny, \(\tilde{\alpha}^{2}+\tilde{\alpha}+1=0\). Therefore the degree of \(1-\tilde{\alpha}\) is a power of \(3\), contradicting the fact that \(\mathbb{Z}/2\mathbb{Z}\simeq\{0,D\}\subset\operatorname{Jac}_{\tilde{X}}[1- \tilde{\alpha}]\).
**Corollary 3.10**.: _Assume that \(k=k^{\mathrm{sep}}\). Then there are exactly \(2\) curves in \(|\mathscr{L}|\) that are preserved by the \(\mu_{3}\)-action._
Proof.: Let \(\omega\in k\) be a third root of unity. By Proposition 3.9, every fixed point of \(\alpha(\omega)\) on \(|\mathscr{L}|\) is smooth. Since \(|\mathscr{L}|\) always contains singular curves [4, SS1.2], this implies that \(\alpha(\omega)\) acts nontrivially on \(|\mathscr{L}|\). By the classification of order \(3\) elements of \(\operatorname{PGL}_{2}(k)\), we may choose coordinates on \(\mathbb{P}^{1}\) so that \(\alpha(\omega)\) is given by \((x:y)\mapsto(\omega x:y)\). This map clearly has \(2\) fixed points.
Let \(X_{1},X_{2}\in|\mathscr{L}_{k^{\mathrm{sep}}}|\) be the members of the pencil that are preserved by the \(\mu_{3}\)-action \(\alpha\). By Lemma 3.2, the involution \([-1]\) also preserves \(X_{1}\) and \(X_{2}\). These combine to a \(\mu_{6}\)-action \(\gamma_{i}\) on \(X_{i}\) for each \(i=1,2\). The next proposition shows that each \(X_{i}\) is a bielliptic Picard curve and the \(\mu_{6}\)-actions have 'opposite' signature.
**Proposition 3.11**.: _Reordering \(X_{1}\) and \(X_{2}\) if necessary, the pairs \((X_{1},\gamma_{1})\) and \((X_{2},\gamma_{2}^{-1})\) are marked bielliptic Picard curves. Moreover, these pairs are defined over \(k\)._
Proof.: The second sentence follows from the first, since \(\operatorname{Gal}_{k}\) preserves the \(\mu_{3}\)-fixed points of \(|\mathscr{L}_{k^{\operatorname{sep}}}|\) and the signatures of the \(\mu_{6}\)-action. To show that \((X_{1},\gamma_{1})\) and \((X_{2},\gamma_{2})\) are bielliptic Picard curves of opposite signature, we may assume that \(k=k^{\operatorname{sep}}\).
Let \((X,\gamma)=(X_{i},\gamma_{i})\) for some \(i=1,2\). Proposition 3.9 shows that \(X\) is smooth, and Proposition 3.4 shows that \((X,[-1]|_{X})\) is a bielliptic genus \(3\) curve. Since \(\alpha^{2}+\alpha+1=0\), the \(\mu_{3}\)-action on \(A\) has isolated fixed points, so the induced \(\mu_{3}\)-action on \(X\) is faithful. It follows that \(\gamma\) is a faithful bielliptic action and hence \(X\) is a bielliptic Picard curve. Note that \(0\in X(k)\subset A(k)\) is the unique \(\mu_{6}\)-fixed point.
To prove the claim about signatures, it suffices to show that the \(\mu_{3}\)-action on \(T_{0}X_{1}\) and \(T_{0}X_{2}\) act via inverse characters. Since \(\mathscr{L}\) has self-intersection \(4\) and \(|\mathscr{L}|\) has four base points (Lemma 3.1), \(X_{1}\) and \(X_{2}\) intersect transversally at all these base points. Since \(0\in A(k)\) is such a base point,
\[T_{0}A=T_{0}X_{1}\oplus T_{0}X_{2}. \tag{3.2}\]
Condition (2) of Definition 3.7 says that the \(\mu_{3}\)-action on \(T_{0}A\) is a direct sum of the two nontrivial characters of \(\mu_{3}\). Since the decomposition (3.2) is \(\mu_{3}\)-stable, the proposition follows.
### A Torelli theorem for marked bielliptic Picard curves
Let \((C,\gamma)\) be a marked bielliptic Picard curve over \(k\). The \(\mu_{6}\)-action on \(C\) induces a \(\mu_{6}\)-action on \(J=\operatorname{Jac}_{C}\) such that the Abel-Jacobi map \(C\hookrightarrow J\) is \(\mu_{6}\)-equivariant. This induces \(\mu_{3}\)-actions \(\alpha\) on \(P\) and \(\hat{\alpha}\) on \(A\).
**Lemma 3.12**.: \(\operatorname{Prym}(C,\gamma):=(P,\lambda,\alpha)\) _and \(\operatorname{Prym}(C,\gamma)^{\vee}:=(A,\hat{\lambda},\hat{\alpha})\) are indecomposable \(\mu_{3}\)-abelian surfaces over \(k\)._
Proof.: The first condition of Definition 3.7 for \(\operatorname{Prym}(C,\gamma)^{\vee}\) follows from Proposition 2.15 and the fact that \(\hat{\alpha}\) preserves \(C\hookrightarrow A\). Moreover, since \(C\) is irreducible, \((A,\hat{\lambda})\) is indecomposable. The second condition for \(\operatorname{Prym}(C,\gamma)\) follows from Lemma 2.13. The lemma now follows from bigonal duality (Proposition 2.19).
The next theorem is a strong Torelli statement for the association \((C,\gamma)\mapsto\operatorname{Prym}(C,\gamma)^{\vee}\). To state it, we upgrade the set of marked bielliptic Picard curves (respectively \(\mu_{3}\)-abelian surfaces) to a category, where the morphisms are isomorphisms of curves (respectively polarized abelian surfaces) respecting the \(\mu_{6}\)-structure (respectively \(\mu_{3}\)-structure).
**Theorem 3.13**.: _The association \((C,\gamma)\mapsto\operatorname{Prym}(C,\gamma)^{\vee}\) induces an equivalence of categories:_
\[\left\{\begin{aligned} &\text{Marked bielliptic}\\ &\text{Picard curves over $k$}\end{aligned}\right\}\to \left\{\begin{aligned} &\text{Indecomposable}\\ &\mu_{3}\text{-abelian surfaces over $k$}\end{aligned}\right\}. \tag{3.3}\]
Proof.: This association is well defined by Lemma 3.12. It is functorial: an isomorphism of marked bielliptic Picard curves \((C,\gamma)\to(C^{\prime},\gamma^{\prime})\) induces an isomorphism \(\operatorname{Prym}(C,\gamma)\to\operatorname{Prym}(C^{\prime},\gamma^{\prime})\) of marked \(\mu_{3}\)-abelian surfaces. We explicitly construct a quasi-inverse. To a triple \((A,\lambda,\alpha)\), in the notation of Proposition 3.11, we associate the marked bielliptic Picard curve \((X_{1},\gamma_{1})\). Since the constructions of SS3.1 and SS3.2 are canonical, this association is also functorial. The fact that this is a quasi-inverse follows from Propositions 2.15 and 3.4.
By Lemma 2.8, the left hand side of (3.3) is in bijection with the set of equivalence classes \(\{(a,b)\in k\times k\mid b(a^{2}-4b)\neq 0\}/\sim\), where \((a,b)\sim(a^{\prime},b^{\prime})\) if \((a,b)=(\lambda^{6}a^{\prime},\lambda^{12}b^{\prime})\) for some \(\lambda\in k^{\times}\). If \(k=k^{\operatorname{sep}}\), the \(j\)-invariant (2.3) maps this set bijectively to the '\(j\)-line' \(k\setminus\{0\}\). Theorem 3.13 thus shows that this description also applies to the right hand side of (3.3). In particular, we may define
the \(j\)-invariant of a \(\mu_{3}\)-abelian surface \((A,\lambda,\alpha)\) as \(j((C,\gamma))\), where \((C,\gamma)\) is any bielliptic Picard curve with \(\operatorname{Prym}(C,\gamma)^{\vee}=(A,\lambda,\alpha)\).
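For concreteness, the equivalence can be realized by an explicit weighted scaling; the snippet below (ours, assuming the weights of Lemma 2.8, which match the map \((x,y)\mapsto(\varepsilon^{3}x,\varepsilon^{4}y)\) used later in the definition of \(s_{\varepsilon}\)) checks that \((x,y)\mapsto(\lambda^{3}x,\lambda^{4}y)\) carries \(C_{a,b}\) to \(C_{\lambda^{6}a,\lambda^{12}b}\).

```python
# Symbolic sanity check (ours) of the equivalence used above: (a, b) and
# (lam^6*a, lam^12*b) define isomorphic curves via (x, y) |-> (lam^3*x, lam^4*y).
import sympy as sp

a, b, lam, x, y = sp.symbols('a b lam x y')

X, Y = lam**3 * x, lam**4 * y
relC = y**3 - (x**4 + a*x**2 + b)                         # C_{a,b}
relC_scaled = Y**3 - (X**4 + lam**6*a*X**2 + lam**12*b)   # C_{lam^6 a, lam^12 b} at the image

# The image relation is lam^12 times the original one, so the map is an
# isomorphism of plane models wherever lam is invertible.
assert sp.expand(relC_scaled - lam**12 * relC) == 0
print("Scaling isomorphism verified.")
```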
**Remark 3.14**.: Since \(\operatorname{Prym}(C,\gamma)\simeq\operatorname{Prym}(\widehat{C},\hat{ \gamma})^{\vee}\) (Proposition 2.19) and bigonal duality induces an autoequivalence on the category of marked bielliptic Picard curves, Theorem 3.13 implies that the association \((C,\gamma)\mapsto\operatorname{Prym}(C,\gamma)\) also induces an equivalence of categories of the form (3.3). On the other hand, the association \((A,\lambda,\alpha)\mapsto(A,\lambda,\alpha^{-1})\) is again an auto-equivalence on the category of marked \(\mu_{3}\)-abelian surfaces. In Corollary 5.5 we will prove a precise relationship between these equivalences.
**Remark 3.15**.: Barth has shown that Torelli fails for the family of all bielliptic genus \(3\) curves \((X,\tau)\). Suppose that \(k\) is algebraically closed of characteristic zero and \(A\) is an indecomposable \((1,2)\)-polarized abelian surface \((A,\lambda)/k\). Then there exist many bielliptic genus \(3\) curves \((X,\tau)\) over \(k\) with dual Prym variety isomorphic to \((A,\lambda)\); by Proposition 3.4 any smooth member of the pencil \(|\mathscr{L}|\) on \(A\) will do. Theorem 3.13 shows that we can correct this when \(C\) is a bielliptic Picard curve and we fix the \(\mu_{6}\)-action on \(C\) and the \(\mu_{3}\)-action on \(A\).
**Remark 3.16**.: Let \(S\) be a \(\mathbb{Z}[1/6]\)-scheme. A marked bielliptic Picard curve over \(S\) is a smooth proper morphism \(C\to S\) together with a \(\mu_{6}\)-action \(\gamma\colon\mu_{6}\times_{S}C\to C\) such that for every geometric point \(\bar{s}\in S\), \((C_{\bar{s}},\gamma_{\bar{s}})\) is a marked bielliptic Picard curve. With the evident notion of isomorphisms, this defines the stack \(\mathfrak{M}\) of (marked) bielliptic Picard curves over \(\mathbb{Z}[1/6]\) (in the fppf topology), isomorphic to an open substack of the weighted projective stack \(\mathbb{P}(6,12)\). A \(\mu_{3}\)-abelian surface over \(S\) is a \((1,2)\)-polarized abelian scheme \((A,\lambda)\) over \(S\) together with a \(\mu_{3}\)-action \(\alpha\colon\mu_{3}\times_{S}A\to A\) such that for every geometric point \(\bar{s}\in S\), \((A_{\bar{s}},\lambda_{\bar{s}},\alpha_{\bar{s}})\) is a \(\mu_{3}\)-abelian surface. This defines the stack \(\mathfrak{Y}\) of \(\mu_{3}\)-abelian surfaces over \(\mathbb{Z}[1/6]\). There exists an open substack \(\mathfrak{Y}^{\mathrm{indec}}\subset\mathfrak{Y}\) of those surfaces such that each geometric fiber is indecomposable. Theorem 3.13 then generalizes to the statement that the association \((C,\gamma)\mapsto\operatorname{Prym}(C,\gamma)^{\vee}\) induces an isomorphism of stacks \(\mathfrak{M}\xrightarrow{\sim}\mathfrak{Y}^{\mathrm{indec}}\).
## 4. Shimura curves and quaternionic multiplication
We relate the moduli space of bielliptic Picard curves to certain unitary and quaternionic Shimura curves, which will allow us to deduce that the Prym surfaces \(P\) of these curves have quaternionic multiplication over \(\overline{k}\) (Corollary 4.8). This will also help us study which \(P\) are geometrically non-simple in SS6.4, since these correspond to CM points on the Shimura curve side.
### The quaternion order \(\mathcal{O}\)
Let \(B\) be the quaternion algebra \((-3,2)_{\mathbb{Q}}\) with \(\mathbb{Q}\)-basis \(\{1,i,j,ij\}\) and relations \(i^{2}=-3\), \(j^{2}=2\), \(ij=-ji\). This is an indefinite quaternion algebra of discriminant \(6\). Let \(\omega:=\frac{1}{2}(-1+i)\), so that \(\omega^{2}+\omega+1=0\). Let \(\mathcal{O}:=\mathbb{Z}[\omega,j]=\mathbb{Z}+\mathbb{Z}\omega+\mathbb{Z}j+ \mathbb{Z}\omega j\). A discriminant calculation shows that \(\mathcal{O}\) is a maximal order in \(B\). As an associative ring, \(\mathcal{O}\) has generators \(\omega,j\) and relations \(\omega^{2}+\omega+1=0\), \(j^{2}=2\) and \(j\omega=\omega^{-1}j\).
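These relations can be confirmed by direct computation. The following self-contained sketch (ours; plain Python with exact rational arithmetic, helper names `mul` and `add` are our own) multiplies elements of \(B\) in the basis \(\{1,i,j,ij\}\) and checks the relations above, together with the identity \(i=1+2\omega\) and the conjugation formula \(j^{[1-\omega]}=-\omega^{-1}j\) used later in the proof of Lemma 6.4.

```python
# Numerical check (ours) of the relations in the maximal order O = Z[omega, j]
# of B = (-3, 2)_Q. Elements are coefficient 4-tuples in the basis (1, i, j, k),
# where k = i*j, i^2 = -3, j^2 = 2 and ij = -ji.
from fractions import Fraction as F

BASIS = ('1', 'i', 'j', 'k')
TABLE = {  # products of basis elements, as coefficient tuples
    ('1', '1'): (1, 0, 0, 0), ('1', 'i'): (0, 1, 0, 0),
    ('1', 'j'): (0, 0, 1, 0), ('1', 'k'): (0, 0, 0, 1),
    ('i', '1'): (0, 1, 0, 0), ('i', 'i'): (-3, 0, 0, 0),
    ('i', 'j'): (0, 0, 0, 1), ('i', 'k'): (0, 0, -3, 0),
    ('j', '1'): (0, 0, 1, 0), ('j', 'i'): (0, 0, 0, -1),
    ('j', 'j'): (2, 0, 0, 0), ('j', 'k'): (0, -2, 0, 0),
    ('k', '1'): (0, 0, 0, 1), ('k', 'i'): (0, 0, 3, 0),
    ('k', 'j'): (0, 2, 0, 0), ('k', 'k'): (6, 0, 0, 0),
}

def add(p, q):
    return tuple(F(s) + F(t) for s, t in zip(p, q))

def mul(p, q):
    out = [F(0)] * 4
    for m, cm in enumerate(p):
        for n, cn in enumerate(q):
            prod = TABLE[(BASIS[m], BASIS[n])]
            for t in range(4):
                out[t] += F(cm) * F(cn) * prod[t]
    return tuple(out)

one   = (F(1), 0, 0, 0)
i     = (0, F(1), 0, 0)
j     = (0, 0, F(1), 0)
omega = (F(-1, 2), F(1, 2), 0, 0)      # omega = (-1 + i)/2
zero  = (F(0), F(0), F(0), F(0))

# Relations of O: omega^2 + omega + 1 = 0, j^2 = 2, j*omega = omega^{-1}*j.
omega_inv = mul(omega, omega)          # omega^3 = 1, so omega^{-1} = omega^2
assert add(add(mul(omega, omega), omega), one) == zero
assert mul(j, j) == (F(2), 0, 0, 0)
assert mul(j, omega) == mul(omega_inv, j)

# i = 1 + 2*omega, as recalled when describing the Rosati involution.
assert add(one, add(omega, omega)) == i

# Conjugation by 1 - omega sends j to -omega^{-1}*j (used in Lemma 6.4's proof).
one_minus_omega = add(one, tuple(-c for c in omega))
inv = tuple(c / 3 for c in add(one, tuple(-c for c in omega_inv)))  # (1 - omega^2)/3
assert mul(one_minus_omega, inv) == (F(1), 0, 0, 0)
conj_j = mul(mul(inv, j), one_minus_omega)
assert conj_j == mul(tuple(-c for c in omega_inv), j)
print("All relations in O verified.")
```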
Let \(\operatorname{Aut}(\mathcal{O})\) denote the set of ring automorphisms of \(\mathcal{O}\), which we will think of as acting on the right on \(\mathcal{O}\). By the Skolem-Noether theorem, every element of \(\operatorname{Aut}(\mathcal{O})\) is given by conjugating by an element \(b\in B^{\times}\) (uniquely defined up to \(\mathbb{Q}^{\times}\)) normalizing \(\mathcal{O}\); denote the conjugation action of such an element \(b\) by \([b]\in\operatorname{Aut}(\mathcal{O})\).
For reasons that will become clear in SS6.2, we define the groups
\[\operatorname{Aut}_{i}^{+}(\mathcal{O}) :=\{\varphi\in\operatorname{Aut}(\mathcal{O})\mid i^{\varphi}=i\},\] \[\operatorname{Aut}_{i}(\mathcal{O}) :=\{\varphi\in\operatorname{Aut}(\mathcal{O})\mid i^{\varphi}=\pm i\}.\]
**Lemma 4.1**.: \(\operatorname{Aut}_{i}^{+}(\mathcal{O})=\langle[1-\omega]\rangle\) _is cyclic of order \(6\) and \(\operatorname{Aut}_{i}(\mathcal{O})=\langle[1-\omega],[j]\rangle\) is dihedral of order \(12\)._
Proof.: A direct calculation shows that \(\langle[1-\omega],[j]\rangle\subset\operatorname{Aut}_{i}(\mathcal{O})\), that this subgroup is dihedral of order \(12\) and that \(\langle[1-\omega],[j]\rangle\cap\operatorname{Aut}_{i}^{+}(\mathcal{O})=\langle[1-\omega]\rangle\). Conversely, suppose \([b]\in\operatorname{Aut}_{i}(\mathcal{O})\). After possibly multiplying \(b\) by \(j\), we may assume \(i^{[b]}=b^{-1}ib=i\). In that case \(b\) centralizes \(i\), so \(b\in\mathbb{Q}(i)\). By [29, Lemma 2.2.1], \([b]\) can be represented by an element of \(\mathcal{O}\cap\mathbb{Q}(i)=\mathbb{Z}[\omega]\) of norm dividing \(6\). Therefore \([b]\in\langle[1-\omega]\rangle\), as desired.
**Lemma 4.2**.: _Let \(G\leq\operatorname{Aut}_{i}(\mathcal{O})\) be a subgroup. Then the subring \(\mathcal{O}^{G}\) fixed by \(G\) is: \(\mathcal{O}\) if \(G=\{1\}\); \(\mathbb{Z}[\omega]\) if \(\{1\}\neq G\subset\langle[1-\omega]\rangle\); isomorphic to \(\mathbb{Z}[\sqrt{2}]\) if \(G\) is conjugate to \(\langle[j]\rangle\); isomorphic to \(\mathbb{Z}[\sqrt{6}]\) if \(G\) is conjugate to \(\langle[ij]\rangle\). In all other cases, \(G\) is dihedral of order \(6\) or \(12\) and \(\mathcal{O}^{G}=\mathbb{Z}\)._
Proof.: This is a direct calculation using the explicit description of \(\operatorname{Aut}_{i}(\mathcal{O})\) from Lemma 4.1.
### Moduli of marked \(\mu_{3}\)-abelian surfaces
In Remark 3.16, we have introduced the stack \(\mathfrak{Y}\) of \(\mu_{3}\)-abelian surfaces \((A,\lambda,\alpha)\) over \(\mathbb{Z}[1/6]\), which we now view as a stack over \(\mathbb{Q}\). This is a Deligne-Mumford stack; let \(Y/\mathbb{Q}\) be its coarse space. By a variant of [27, Proposition 2.1] for \((1,2)\)-polarizations, \(Y\) is smooth and purely one-dimensional. The next lemma shows that \(Y\) may be viewed as a compactification of the \(j\)-line \(\mathbb{P}^{1}\setminus\{0,\infty\}\) from SS2.3.
**Lemma 4.3**.: _There exists a unique isomorphism \(j\colon Y\xrightarrow{\sim}\mathbb{P}^{1}_{\mathbb{Q}}\) with the property that \(j([\operatorname{Prym}(C,\gamma)^{\vee}])=j(C)\) for all marked bielliptic Picard curves \((C,\gamma)\) over \(\overline{\mathbb{Q}}\)._
Proof.: The open substack \(\mathfrak{Y}^{\operatorname{indec}}\subset\mathfrak{Y}\) parametrizing those triples \((A,\lambda,\alpha)\) for which \((A,\lambda)\) is indecomposable has coarse space an open subset \(Y^{\operatorname{indec}}\subset Y\). Every decomposable \(\mu_{3}\)-abelian surface \((A,\lambda,\alpha)\) over \(\mathbb{C}\) is isomorphic to \((E\times E,\lambda_{O}\times 2\lambda_{O},\alpha_{1}\times\alpha_{2})\), where \(E:y^{2}=x^{3}+1\), \(\lambda_{O}\) is the unique principal polarization on \(E\) and \(\alpha_{i}\) are mutually inverse nontrivial \(\mu_{3}\)-actions on \(E\). Since there are two choices for the pair \((\alpha_{1},\alpha_{2})\), even up to isomorphism, \((Y\setminus Y^{\operatorname{indec}})(\mathbb{C})\) has size \(2\). Since \(Y\) is purely one-dimensional, \(Y^{\operatorname{indec}}\) is dense in \(Y\). Theorem 3.13 and Lemma 2.8 show that the morphism \(\mathbb{A}^{2}_{\mathbb{Q}}\setminus\{\Delta_{a,b}=0\}\to Y^{\operatorname{indec}},(a,b)\mapsto[\operatorname{Prym}(C_{a,b},\gamma_{a,b})^{\vee}]\) is surjective, factors through the \(j\)-invariant morphism \(\mathbb{A}^{2}_{\mathbb{Q}}\setminus\{\Delta_{a,b}=0\}\to\mathbb{P}^{1}_{\mathbb{Q}}\setminus\{0,\infty\},(a,b)\mapsto j_{a,b}\) of (2.3), and induces an isomorphism \(\mathbb{P}^{1}_{\mathbb{Q}}\setminus\{0,\infty\}\simeq Y^{\operatorname{indec}},j\mapsto[\operatorname{Prym}(C_{a,b},\gamma_{a,b})^{\vee}]\), where \(a,b\) are any elements with \(j=j_{a,b}\). Putting everything together, \(Y\) is a purely one-dimensional smooth variety having an open subset isomorphic to \(\mathbb{P}^{1}\setminus\{0,\infty\}\) whose complement has size \(2\). We conclude that \(Y\simeq\mathbb{P}^{1}_{\mathbb{Q}}\) and that the isomorphism \(j\colon Y^{\operatorname{indec}}\to\mathbb{P}^{1}_{\mathbb{Q}}\setminus\{0,\infty\}\) uniquely extends to an isomorphism \(Y\to\mathbb{P}^{1}_{\mathbb{Q}}\).
**Remark 4.4**.: The curve \(Y_{\mathbb{C}}\) is a disjoint union of Shimura varieties for groups of type \(\operatorname{GU}(1,1)\). Similarly to [27, Proposition 3.1], the connected components of \(Y_{\mathbb{C}}\) are parametrized by isometry classes of rank \(2\) Hermitian \(\mathbb{Z}[\omega]\)-lattices of discriminant \(2\). Lemma 4.3 shows that there is precisely one such Hermitian lattice, although this is certainly not the easiest way to see that.
**Remark 4.5**.: The isomorphism \(j\colon Y\to\mathbb{P}^{1}\) identifies \(Y^{\operatorname{indec}}\) with \(\mathbb{P}^{1}\setminus\{0,\infty\}\). The two decomposable points \(0,\infty\) might be considered 'cusps'.
### Comparison with a quaternionic Shimura curve
We now compare \(Y\) to a quaternionic Shimura curve, in the same spirit as [23, SS4.2]. However, our analysis is a little different since we must consider twists and non-principal polarizations. We refer the reader to [47, Chapter 43] for background on Shimura curves.
**Definition 4.6**.: _Let \(X\) be the (quaternionic) Shimura curve of discriminant \(6\) over \(\mathbb{Q}\)._
The nice curve \(X\) is the coarse space of the moduli stack of triples \((A,\lambda,\iota)\), where \((A,\lambda)\) is a \((1,2)\)-polarized abelian surface and \(\iota\colon\mathcal{O}\to\operatorname{End}(A)\) is an embedding such that the Rosati involution restricts to \(b\mapsto i\bar{b}i^{-1}\), where we recall from SS4.1 that \(i=1+2\omega\). (The moduli problem is typically
formulated for principal polarizations, but the results of [47, SS43.6] have analogues for the polarized order \((\mathcal{O},i)\), see [10, SS12].) To such a triple \((A,\lambda,\iota)\) we would like to associate a \(\mu_{3}\)-abelian surface by restricting \(\iota\) to \(\mathbb{Z}/3\mathbb{Z}\simeq\langle\omega\rangle\subset\mathbb{Z}[\omega]^{ \times}\subset\mathcal{O}^{\times}\). However, this defines a \(\mathbb{Z}/3\mathbb{Z}\)-action on \(A\); to get a \(\mu_{3}\)-action we need to suitably twist the situation.
To this end, recall the action of the Atkin-Lehner group \(W=\{1,w_{2},w_{3},w_{6}\}\simeq\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}\) on \(X\): if \(b\in\mathcal{O}\) is any element whose (reduced) norm has absolute value \(d\in\{2,3,6\}\), then \(w_{d}([A,\lambda,\iota])=[A,\lambda,\iota\circ[b]]\), where \([b]\in\operatorname{Aut}(\mathcal{O})\) denotes conjugation by \(b\). The following curves will be important:
* Let \(X/w_{3}\) be the quotient of \(X\) by \(w_{3}\). The \(W\)-action on \(X\) induces an action of \(W/\langle w_{3}\rangle=\{1,\bar{w}\}\) on \(X/w_{3}\).
* Let \((X/w_{3})^{\bar{w}}\) be the twist of \(X/w_{3}\) along the involution \(\bar{w}\) and extension \(\mathbb{Q}(\omega)/\mathbb{Q}\).
* For each \(d\in\{2,6\}\), let \(X^{w_{d}}\) be the twist of \(X\) along the involution \(w_{d}\) and extension \(\mathbb{Q}(\omega)/\mathbb{Q}\).
We recall that the twist of a nice curve \(C/k\) along an involution \(\tau\) and quadratic extension \(K/k\) corresponds to the cocycle given by the image of the nontrivial element under \(\operatorname{H}^{1}(\operatorname{Gal}(K/k),\langle\tau\rangle)\to \operatorname{H}^{1}(k,\operatorname{\mathbf{Aut}}(C))\). Note that \(W\) acts on \(X^{w_{2}}\) and \(X^{w_{6}}\), \(\bar{w}\) acts on \((X/w_{3})^{\bar{w}}\), and we have canonical isomorphisms \(X^{w_{2}}/w_{3}\simeq X^{w_{6}}/w_{3}\simeq(X/w_{3})^{\bar{w}}\).
To compare \(X\) and \(Y\), we will construct morphisms \(\pi_{d}\colon X^{w_{d}}\to Y\) for each \(d\in\{2,6\}\). In the following paragraph, for a field \(k/\mathbb{Q}\) we write \(K:=k\otimes_{\mathbb{Q}}\mathbb{Q}(\omega)\) and \(\operatorname{Aut}_{k}(K)=\{1,\sigma\}\). The curve \(X^{w_{2}}\) is the coarse space of the moduli stack whose \(k\)-points parametrize triples \((A,\lambda,\iota)\), where \((A,\lambda)\) is a \((1,2)\)-polarized abelian surface over \(k\) and \(\iota\colon\mathcal{O}\to\operatorname{End}(A_{K})\) is an embedding such that the Rosati involution restricts to \(b\mapsto i\bar{b}i^{-1}\) and such that \(\iota^{\sigma}=\iota\circ[j]\). We claim that such a triple naturally gives rise to a \(\mu_{3}\)-abelian surface \((A,\lambda,\iota|_{\langle\omega\rangle})\) over \(k\). Indeed, restricting \(\iota\) to \(\langle\omega\rangle\subset\mathbb{Z}[\omega]^{\times}\subset\mathcal{O}^{\times}\) defines a homomorphism \(\mathbb{Z}/3\mathbb{Z}\to\operatorname{End}(A_{K})\) which, due to the twisting, descends to a \(\mu_{3}\)-action on \(A\). The triple \((A,\lambda,\iota|_{\langle\omega\rangle})\) satisfies the first condition of Definition 3.7 since the Rosati involution sends \(\iota(\omega)\) to \(\iota(\omega^{-1})\). The second condition follows from the identity \(\omega^{2}+\omega+1=0\). The association \((A,\lambda,\iota)\mapsto(A,\lambda,\iota|_{\langle\omega\rangle})\) also works on the level of \(\mathbb{Q}\)-schemes, hence it induces a morphism \(\pi_{2}\colon X^{w_{2}}\to Y\). Repeating the above discussion but insisting that \(\iota^{\sigma}=\iota\circ[ij]\) instead of \(\iota^{\sigma}=\iota\circ[j]\), we obtain a morphism \(\pi_{6}\colon X^{w_{6}}\to Y\).
**Proposition 4.7**.: _For each \(d\in\{2,6\}\), \(\pi_{d}\colon X^{w_{d}}\to Y\) induces an isomorphism \(\phi_{d}\colon X^{w_{d}}/w_{3}\xrightarrow{\sim}Y\). Under the canonical isomorphisms \(X^{w_{2}}/w_{3}\simeq X^{w_{6}}/w_{3}\simeq(X/w_{3})^{\bar{w}}\), \(\phi_{2}\) and \(\phi_{6}\) correspond to the same isomorphism \((X/w_{3})^{\bar{w}}\simeq Y\)._
Proof.: It suffices to check these claims over \(\mathbb{C}\), so let \(\pi=\pi_{2,\mathbb{C}}=\pi_{6,\mathbb{C}}\). The morphism \(\pi\) is given by mapping an isomorphism class of triples \([A,\lambda,\iota]\) to \([A,\lambda,\iota|_{\langle\omega\rangle}]\). We have \(\pi\circ w_{3}=\pi\), since \(w_{3}([A,\lambda,\iota])=[A,\lambda,\iota\circ[i]]\) and \(\iota|_{\langle\omega\rangle}=(\iota\circ[i])|_{\langle\omega\rangle}\). (Recall that \(i=1+2\omega\).)
Since \(\pi\) is a morphism between nice curves, it is either constant or a finite covering map. To prove the proposition, it suffices to show that \(\pi\) has degree \(2\), equivalently infinitely many fibres \(\pi^{-1}(y)\) have size \(2\). Let \(x=[A,\lambda,\iota]\in X(\mathbb{C})\) be a non-CM point and \(y=\pi(x)=[A,\lambda,\iota|_{\langle\omega\rangle}]\); there are infinitely many (in fact uncountably many) such points \(x\) by the theory of Shimura curves [14, SS2.4]. The set \(\pi^{-1}(y)\) is in bijection with the set of isomorphism classes of triples \((A^{\prime},\lambda^{\prime},\iota^{\prime})\) such that \((A,\lambda,\iota|_{\langle\omega\rangle})\simeq(A^{\prime},\lambda^{\prime}, \iota^{\prime}|_{\langle\omega\rangle})\). Replacing \((A^{\prime},\lambda^{\prime},\iota^{\prime})\) by an isomorphic triple, we may assume that \((A^{\prime},\lambda^{\prime})=(A,\lambda)\) and \(\iota^{\prime}|_{\mathbb{Z}[\omega]}=\iota|_{\mathbb{Z}[\omega]}\). Therefore \(\pi^{-1}(y)\) is in bijection with the orbit space
\[\{\iota^{\prime}\colon\mathcal{O}\to\operatorname{End}(A)\mid\iota^{\prime} \text{ compatible with Rosati involution and }\iota^{\prime}|_{\mathbb{Z}[\omega]}=\iota|_{\mathbb{Z}[\omega]}\}/ \operatorname{Aut}(A,\lambda). \tag{4.1}\]
Since \(x\) is assumed to be a non-CM point and \(\mathcal{O}\) is maximal, \(\operatorname{End}(A)\simeq\mathcal{O}\). It follows that every \(\iota^{\prime}\) of (4.1) is of the form \(\iota\circ\alpha\) for some \(\alpha\in\operatorname{Aut}(\mathcal{O})\). Since \(\iota^{\prime}|_{\mathbb{Z}[\omega]}=\iota|_{\mathbb{Z}[\omega]}\), \(\alpha\) lies in the subgroup \(\operatorname{Aut}_{k}^{+}(\mathcal{O})=\langle[1-\omega]\rangle\) of Lemma 4.1 (using the notation of SS4.1). Conversely, every element \([b]\in\langle[1-\omega]\rangle\) defines an element of (4.1), and two elements \([b],[b^{\prime}]\) lie in the same \(\operatorname{Aut}(A,\lambda)\)-orbit if and
only if \(b^{\prime}b^{-1}\in\mathbb{Q}^{\times}\mathcal{O}^{\times}\). It follows that this orbit space has size \(|\pi^{-1}(y)|=|\langle[1-\omega]\rangle/\langle[\omega]\rangle|=2\), as desired.
**Corollary 4.8**.: _Every \(\mu_{3}\)-abelian surface \((A,\lambda,\alpha)\) over a field \(k\) of characteristic \(0\) admits an embedding \(\mathcal{O}\hookrightarrow\operatorname{End}(A_{k^{\mathrm{sep}}})\)._
Proof.: Proposition 4.7 shows that the point \([A,\lambda,\alpha]\in Y(k)\) lifts to a \(k^{\mathrm{sep}}\)-point of \(X^{w_{2}}\) under \(\pi_{2}\). The corollary follows from the moduli interpretation of \(X\).
**Remark 4.9**.: Corollary 4.8 was proved by Petkova and Shiga over \(\mathbb{C}\) using period matrix calculations [37, Proposition 7.1].
Our proof of Corollary 4.8 shows that there exists an action of \(\mathcal{O}\) on \(P_{k^{\mathrm{sep}}}\), but it sheds no light on what this action actually is. In Proposition 6.1 we will give a more geometric proof of the quaternionic multiplication of \(P_{k^{\mathrm{sep}}}\), which works in all characteristics \(\neq 2,3\) and gives finer arithmetic information, such as the field of definition of the quaternionic multiplication, see SS6.2.
## 5. Identifying bielliptic Picard curves in the pencil
Let \((C,\gamma)\) be a marked bielliptic Picard curve. Its Prym variety \(\operatorname{Prym}(C,\gamma)=(P,\lambda,\alpha)\) is an indecomposable \(\mu_{3}\)-abelian surface (Lemma 3.12). Therefore the results of SS3.1-3.2 apply, and the pencil of curves on \(P\) contains exactly two bielliptic Picard curves by Proposition 3.11. We have already seen in SS2.7 that this pencil contains the bigonal dual curve \(\widehat{C}\). In this section we will determine the other bielliptic Picard curve in the pencil: it turns out to be a sextic twist of the original curve \(C\)! This calculation is a crucial ingredient for the results of SS6.
Let \(\widehat{C}\subset P\) be the dual curve of \(C\) from SS2.7. Then \(\mathscr{M}:=\mathcal{O}_{P}(\widehat{C})\) is the unique line bundle on \(P\) representing the \((1,2)\)-polarization with base locus \(P[\lambda]\) (Lemma 3.2). By Lemma 2.18, \(\widehat{C}\) is a bielliptic Picard curve preserved by the \(\mu_{3}\)-action \(\alpha\). By Proposition 3.11, there is exactly one other curve in \(|\mathscr{M}|\) that is preserved by the \(\mu_{3}\)-action; call this curve \(C_{\mathrm{twist}}\). We will explicitly describe \(C_{\mathrm{twist}}\) using the work of Ikeda [24].
We will use the notations from SS2.5. If \(p\in E(k)\), let \(\Theta_{p}\) be the image of the map \(\operatorname{Sym}^{2}C\to J\) sending \(Q+Q^{\prime}\) to \(Q+Q^{\prime}-\pi^{*}(p)\). Then \(D_{p}:=\Theta_{p}\cap P\) is a divisor on \(P\). Note that \(D_{\pi(\infty)}=\widehat{C}\). Recall that we view \(E\) as an elliptic curve with origin \(O_{E}:=\pi(\infty)\in E(k)\).
**Lemma 5.1**.:
1. _For every_ \(p\in E(k)\)_,_ \(D_{p}\in|\mathscr{M}|(k)\)_._
2. _If_ \(k=k^{\mathrm{sep}}\)_, every element of_ \(|\mathscr{M}|\) _is of the form_ \(D_{p}\) _for some_ \(p\in E(k)\)_._
3. _For every_ \(p\in E(k)\)_,_ \(D_{\iota(p)}=D_{p}\)_, where_ \(\iota(p)\) _denotes the inverse of_ \(p\) _under the group law._
Proof.: This is [24, Lemma 3.4] and [24, Remark 3.5].
Therefore the map \(E\to|\mathscr{M}|,p\mapsto D_{p}\) factors through the map \(\phi\colon E\to|2O_{E}|\) and gives rise to a \(\mu_{3}\)-equivariant map \(|2O_{E}|\to|\mathscr{M}|,y\mapsto D_{y}\). We note that if \(y\) is \(k\)-rational, then so is \(D_{y}\).
By Lemma 2.8, we may and do assume that \((C,\gamma)=(C_{a,b},\gamma_{a,b})\) for some \(a,b\in k\). Then \(E_{a,b}\) has equation \(y^{3}=x^{2}+ax+b\) and \(y\) induces an isomorphism \(|2O_{E}|\simeq\mathbb{P}^{1}\). There are two fixed points for the \(\mu_{3}\)-action on \(|2O_{E}|\), corresponding to \(0,\infty\in\mathbb{P}^{1}(k)\); call them \(y_{0},y_{\infty}\in|2O_{E}|\). Then \(D_{y_{\infty}}=D_{O_{E}}=\widehat{C}\) and \(D_{y_{0}}=C_{\mathrm{twist}}\).
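The plane model \(y^{3}=x^{2}+ax+b\) of \(E_{a,b}\) is compatible with the Weierstrass model \(y^{2}=x^{3}+16(a^{2}-4b)\) recalled earlier; the following check (ours, not from the paper) records one explicit identification, chosen so that composing with \(u=x^{2}\) recovers the cover map \((x,y)\mapsto(4y,8x^{2}+4a)\).

```python
# Check (ours) that (u, y) |-> (X, Y) = (4y, 8u + 4a) identifies the plane model
# y^3 = u^2 + a*u + b of E_{a,b} with the Weierstrass model Y^2 = X^3 + 16(a^2 - 4b).
import sympy as sp

a, b, u, y = sp.symbols('a b u y')

X, Y = 4*y, 8*u + 4*a
rel_plane = y**3 - (u**2 + a*u + b)              # plane model of E_{a,b}
rel_weier = Y**2 - (X**3 + 16*(a**2 - 4*b))      # Weierstrass model at the image

assert sp.expand(rel_weier + 64*rel_plane) == 0  # the two relations agree up to a unit
print("Model identification verified.")
```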
In order to compute equations for \(D_{y_{0}}\), we will give a different description of the family of divisors \(\{D_{p}\}_{p\in E}\). For ease of notation, write \(C^{(2)}=\operatorname{Sym}^{2}C\). For each \(\xi\in\operatorname{Pic}^{2}(E)\), consider the closed subscheme
\[F_{\xi}:=\{Q+Q^{\prime}\in C^{(2)}\mid\pi(Q)+\pi(Q^{\prime})\sim\xi\}\subset C ^{(2)}.\]
Since \(C\) is not hyperelliptic, the map \(Q+Q^{\prime}-\pi^{*}(p)\mapsto Q+Q^{\prime}\) induces an isomorphism \(\phi_{p}\colon D_{p}\xrightarrow{\sim}F_{2p}\). Consider the morphism
\[\psi\colon C^{(2)} \longrightarrow\mathbb{P}^{2}\] \[([x:y:z],[x^{\prime}:y^{\prime}:z^{\prime}]) \mapsto[yz^{\prime}-y^{\prime}z:xz^{\prime}-x^{\prime}z:xy^{ \prime}-x^{\prime}y],\]
which sends a pair of distinct points on \(C\) to the line it spans in the canonical embedding \(C\subset\mathbb{P}^{2}\). Ikeda has shown that for every \(\xi\in\operatorname{Pic}^{2}(E)\) such that \(F_{\xi}\) is smooth, \(\psi\) maps \(F_{\xi}\) isomorphically onto a plane quartic in \(\mathbb{P}^{2}\) and determined this quartic explicitly [24, Lemma 3.12]. (In fact, he has shown this for any bielliptic genus \(3\) curve.) We will use his calculation to determine \(C_{\text{twist}}\), together with a descent argument along a quadratic extension.
Let \(\alpha,\alpha^{\prime}\) be the roots of \(x^{2}+ax+b\). Let \(p=(\alpha,0)\) and \(p^{\prime}=(\alpha^{\prime},0)\). Let \(K=k(\alpha,\alpha^{\prime})=k(\sqrt{a^{2}-4b})\). Then \(p,p^{\prime}\in E(K)\) and \(C_{\text{twist}}=D_{y_{0}}=D_{p}=D_{p^{\prime}}\). Both \(p\) and \(p^{\prime}\) give rise to \(K\)-isomorphisms \(\phi_{p}\colon D_{p}\to F_{2p}\) and \(\phi_{p^{\prime}}\colon D_{p^{\prime}}\to F_{2p^{\prime}}\). Let \([u:v:1]\) be affine coordinates for the target of the morphism \(\psi\colon\operatorname{Sym}^{2}C\to\mathbb{P}^{2}\).
**Lemma 5.2**.: \(\psi(F_{2p})=\psi(F_{2p^{\prime}})\subset\mathbb{P}^{2}_{K}\) _is cut out by the projective closure of the equation_
\[(2a^{2}-8b)v^{3}=bu^{4}+au^{2}+1. \tag{5.1}\]
Proof.: This is a special case [24, Lemma 3.12]4. For the reader's convenience, we provide the translation to Ikeda's notation. In his notation: \(C\) has equation \((z^{2}+\frac{a}{2}y^{2})^{2}=x^{3}y+\left(\frac{1}{4}a^{2}-b\right)y^{4}\), \(S(x,y)=\frac{1}{4}a^{2}y^{2}\), \(T(x,y)=x^{3}y+(\frac{1}{4}a^{2}-b)y^{4}\), and \(p=[0:\alpha:1]\).
Footnote 4: The results of loc. cit. assume that \(k=\mathbb{C}\), but the proof of the lemma works over any field of characteristic \(\neq 2\).
Let \(\mathcal{Q}=\psi(F_{2p})\subset\mathbb{P}^{2}_{K}\). Lemma 5.2 shows that \(\mathcal{Q}=\psi(F_{2p^{\prime}})\) and that this quartic is defined over \(k\). The substitution \((u,v)\mapsto((2a^{2}-8b)^{2}bu,(2a^{2}-8b)^{3}bv)\) shows that \(\mathcal{Q}\) is \(k\)-isomorphic to the curve \(y^{3}=x^{4}+dax^{2}+d^{2}b\), where \(d=16b(a^{2}-4b)^{4}\). Therefore \(\mathcal{Q}\) and \(C\) are sextic twists in the sense of SS2.4 and thus isomorphic over \(k^{\text{sep}}\). However, these curves are not necessarily isomorphic over \(k\), since the isomorphisms \(C_{\text{twist},K}=D_{p}\xrightarrow{\phi_{p}}F_{2p}\) and \(F_{2p}\xrightarrow{\psi}\mathcal{Q}_{K}\) are only defined over the (at most quadratic) extension \(K/k\). We descend to \(k\) by determining the Galois action on these isomorphisms; see SS1.7 for our conventions on how \(\operatorname{Gal}_{k}\) acts on morphisms.
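A symbolic check of this substitution (ours, not part of the proof):

```python
# Check (ours) that the displayed substitution turns equation (5.1) into the
# model y^3 = x^4 + d*a*x^2 + d^2*b with d = 16*b*(a^2 - 4*b)^4.
import sympy as sp

a, b, u, v = sp.symbols('a b u v')
c = 2*a**2 - 8*b
d = 16*b*(a**2 - 4*b)**4

x = c**2 * b * u      # image of u under the substitution
y = c**3 * b * v      # image of v under the substitution

twisted = y**3 - (x**4 + d*a*x**2 + d**2*b)   # target model, evaluated at the image
original = c*v**3 - (b*u**4 + a*u**2 + 1)     # equation (5.1)

# The two relations agree up to the nonzero factor c^8 * b^3:
assert sp.expand(twisted - c**8 * b**3 * original) == 0
print("Substitution verified.")
```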
**Lemma 5.3**.: _Suppose that \(K\neq k\) and let \(\sigma\in\operatorname{Gal}(K/k)\) be the unique nontrivial element. Let \(\chi=\psi|_{F_{2p}}\circ\phi_{p}\colon C_{\text{twist},K}\xrightarrow{\sim} \mathcal{Q}_{K}\). Then \(\chi^{\sigma}\circ\chi^{-1}\colon\mathcal{Q}_{K}\to\mathcal{Q}_{K}\) equals the involution \((u,v)\mapsto(-u,v)\)._
Proof.: Let \((u,v)\in\mathcal{Q}_{K}\) and let \(Q_{1}+Q_{2}\in F_{2p}\) be the unique element mapping to \((u,v)\) under \(\psi|_{F_{2p}}\). Since \(p^{\sigma}=p^{\prime}\), a calculation shows that \(\chi^{\sigma}\circ\chi^{-1}\) maps \((u,v)\) to \(\psi(R_{1}+R_{2})\), where \(R_{1}+R_{2}\) is the unique effective divisor on \(C\) such that \(R_{1}+R_{2}-\pi^{*}(p^{\prime})\sim Q_{1}+Q_{2}-\pi^{*}(p)\). We interpret this geometrically: let \(\ell=\psi(Q_{1}+Q_{2})\) be the line spanned by the points \(Q_{1},Q_{2}\) in the embedding \(C=C_{a,b}\subset\mathbb{P}^{2}\). The condition \(Q_{1}+Q_{2}\in F_{2p}\) implies that \(Q_{1}+Q_{2}+\tau(Q_{1})+\tau(Q_{2})\sim 2\pi^{*}(p)\), where we recall that \(\tau\colon C\to C\) denotes the bielliptic involution. Combining the above two linear equivalences shows that \(\tau(Q_{1})+\tau(Q_{2})+R_{1}+R_{2}\sim\pi^{*}(p)+\pi^{*}(p^{\prime})\sim 4\infty\). Since \(4\infty\) is canonical, it follows that \(\tau(Q_{1}),\tau(Q_{2}),R_{1},R_{2}\) are collinear and \(\psi(R_{1}+R_{2})=\psi(\tau(Q_{1})+\tau(Q_{2}))=\tau(\ell)\). In the coordinates \((u,v)\), this shows that \(\psi(R_{1}+R_{2})=(-u,v)\), proving the lemma.
**Theorem 5.4**.: _Let \(C\) be a bielliptic Picard curve of the form \(C_{a,b}\) for some \(a,b\in k\). Then \(C_{\text{twist}}\) is again a bielliptic Picard curve, isomorphic to \(C_{\delta a,\delta^{2}b}\) where \(\delta=\Delta_{a,b}=16b(a^{2}-4b)\). In particular, \(C\) and \(C_{\text{twist}}\) are isomorphic over \(k^{\text{sep}}\)._
Proof.: By Lemma 5.2, \(\mathcal{Q}\simeq C_{da,d^{2}b}\), where \(d=16b(a^{2}-4b)^{4}\). Suppose first that \(K=k\). Then the isomorphism \(\psi|_{F_{2p}}\circ\phi_{p}\colon C_{\text{twist}}\to\mathcal{Q}\) are defined over \(k\), hence \(C_{\text{twist}}\simeq C_{da,d^{2}b}\) too. Since \(K=k(\sqrt{a^{2}-4b})\), \(a^{2}-4b\) is a square in \(k^{\times}\). Therefore \(d\) and \(\delta\) have the same image in \(k^{\times}/k^{\times 6}\), so \(C_{\text{twist}}\simeq C_{da,d^{2}b}\simeq C_{\delta a,\delta^{2}b}\) by Lemma 2.8.
Suppose now that \(K/k\) is a quadratic extension and let \(\sigma\in\operatorname{Gal}(K/k)\) be the nontrivial element. Lemma 5.3 shows that \(C_{\text{twist}}\) is isomorphic to the quadratic twist of \(\mathcal{Q}\) along the extension \(K/k\) and involution \((u,v)\mapsto(-u,v)\). A computation (using SS2.4) shows that this twist is exactly \(C_{\delta a,\delta^{2}b}\).
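The comparison of \(d\) and \(\delta\) in \(k^{\times}/k^{\times 6}\) used in the proof rests on the identity \(d=\delta\,(a^{2}-4b)^{3}\), so that \(d/\delta\) is a sixth power whenever \(a^{2}-4b\) is a square; a one-line check (ours):

```python
# Verify (ours) that d = delta * (a^2 - 4b)^3, so d and delta have the same
# image in k^x / (k^x)^6 whenever a^2 - 4b is a square in k^x.
import sympy as sp

a, b = sp.symbols('a b')
d = 16*b*(a**2 - 4*b)**4
delta = 16*b*(a**2 - 4*b)
assert sp.expand(d - delta*(a**2 - 4*b)**3) == 0
print("d = delta * (a^2 - 4b)^3.")
```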
Theorem 5.4 has consequences for the Prym variety of \(C\) and its dual. If \((X,\mu,\beta)\) is a \(\mu_{3}\)-abelian surface in the sense of Definition 3.7, the \(\mu_{3}\)-action \(\beta\) and \(\mu_{2}\)-action \([-1]\) combine to a \(\mu_{6}\)-action. Therefore, just as in SS2.4, every \(\delta\in k^{\times}/k^{\times 6}\) gives rise to a sextic twist \((X_{\delta},\mu_{\delta},\beta_{\delta})=(X,\mu,\beta)_{\delta}\) of \((X,\mu,\beta)\). The formation of the \(\mu_{3}\)-abelian surfaces \(\operatorname{Prym}(C,\gamma)\) and \(\operatorname{Prym}(C,\gamma)^{\vee}\) from SS3.3 is compatible with sextic twisting: \(\operatorname{Prym}((C,\gamma)_{\delta})\simeq\operatorname{Prym}(C,\gamma)_ {\delta}\) and \(\operatorname{Prym}((C,\gamma)_{\delta})^{\vee}\simeq\operatorname{Prym}(C, \gamma)_{\delta}^{\vee}\).
**Corollary 5.5**.: _Let \((C,\gamma)=(C_{a,b},\gamma_{a,b})\) be a marked bielliptic Picard curve over \(k\). Let \(\delta=16b(a^{2}-4b)\). Let \(\operatorname{Prym}(C,\gamma)=(P,\lambda,\alpha)\) and \(\operatorname{Prym}(C,\gamma)^{\vee}=(A,\hat{\lambda},\hat{\alpha})\). Then \((P,\lambda,\alpha)\simeq(A,\hat{\lambda},\hat{\alpha}^{-1})_{\delta}\). In particular, \(P_{k^{\text{sep}}}\) and \(A_{k^{\text{sep}}}\) are isomorphic as \((1,2)\)-polarized abelian surfaces and \(P_{k^{\text{sep}}}\) is self-dual._
Proof.: The \(\mu_{3}\)-action \(\alpha\) and the involution \([-1]\) on \(P\) combine to a \(\mu_{6}\)-action that restricts to \(\mu_{6}\)-actions \(\hat{\gamma}\) and \(\gamma_{\text{twist}}\) on \(\widehat{C}\) and \(C_{\text{twist}}\) respectively. Lemma 2.18 shows that \((\widehat{C},\hat{\gamma})\) is a marked bielliptic Picard curve; so is \((C_{\text{twist}},\gamma_{\text{twist}}^{-1})\) by Proposition 3.11. Therefore \((P,\lambda,\alpha)\simeq\operatorname{Prym}(\widehat{C},\hat{\gamma})^{\vee}\) and \((P,\lambda,\alpha^{-1})\simeq\operatorname{Prym}(C_{\text{twist}},\gamma_{ \text{twist}}^{-1})^{\vee}\) by Proposition 3.4.
It remains to show that \((P,\lambda,\alpha)\simeq(A,\hat{\lambda},\hat{\alpha}^{-1})_{\delta}\), or equivalently that \(\operatorname{Prym}(C_{\text{twist}},\gamma_{\text{twist}}^{-1})^{\vee} \simeq(A,\hat{\lambda},\hat{\alpha})_{\delta}\). Recall that \((C_{\delta},\gamma_{\delta})\) denotes the sextic twist of \(C\) along \(\delta\). Theorem 5.4 shows that \((C_{\text{twist}},\gamma_{\text{twist}}^{-1})\simeq(C_{\delta},\gamma_{ \delta})\). Therefore
\[\operatorname{Prym}(C_{\text{twist}},\gamma_{\text{twist}}^{-1})^{\vee}\simeq \operatorname{Prym}(C_{\delta},\gamma_{\delta})^{\vee}\simeq(A,\hat{\lambda}, \hat{\alpha})_{\delta},\]
as desired.
**Remark 5.6**.: In contrast to the results of SS2.6, 2.7, 3.1, Corollary 5.5 crucially uses properties specific to bielliptic Picard curves. Indeed, the last sentence of the corollary might fail for a general bielliptic genus \(3\) curve.
**Remark 5.7**.: The isomorphism \(A\simeq P_{\delta^{-1}}\) of Corollary 5.5 can be chosen 'universally'. More precisely, let \(R:=\mathbb{Z}[1/6,a,b,\Delta_{a,b}^{-1}]\) and \(S:=\operatorname{Spec}(R)\). Equation (2.1) defines the universal marked bielliptic Picard curve (in the sense of Remark 3.16) with Prym variety \(\mathcal{P}\to S\) and dual \(\mathcal{A}\to S\). Then there exists an isomorphism \(\Phi\colon\mathcal{A}\to\mathcal{P}_{\delta^{-1}}\) that restricts to an isomorphism of Corollary 5.5 for every geometric point of \(S\). (This can be seen, for example, by spreading out an isomorphism over the generic point to all of \(S\) using [16, SS2, Lemma 1].)
As a first application of Corollary 5.5, we show that the Atkin-Lehner involution on \(X/w_{3}\) corresponds to inversion on the \(j\)-line \(Y\simeq\mathbb{P}_{j}^{1}\).
**Lemma 5.8**.: _Under the isomorphisms \((X/w_{3})^{\bar{w}}\simeq Y\simeq\mathbb{P}_{j}^{1}\) of Proposition 4.7 and Lemma 4.3, the involution \(\bar{w}\) on \((X/w_{3})^{\bar{w}}\) corresponds to the involution \(j\mapsto 1/j\) on \(\mathbb{P}_{j}^{1}\)._
Proof.: We use the notations of SS4.2, 4.3. It suffices to check the claim over \(\mathbb{C}\) on the locus of indecomposable \(\mu_{3}\)-abelian surfaces \(Y^{\text{indec}}\subset Y\), so let \(y=[A,\lambda,\alpha]\in Y^{\text{indec}}(\mathbb{C})\) be such a point. Let \(x=[A,\lambda,\iota]\in X^{w_{2}}(\mathbb{C})=X(\mathbb{C})\) be a lift of \(y\) under \(\pi_{2}\), so \(\alpha=\iota_{|\langle\omega\rangle}\). Then \(\bar{w}(y)=\pi_{2}(w_{2}(x))\). Since \(j\) has norm \(2\), \(w_{2}(x)=[A,\lambda,\iota\circ[j]]\). Moreover \([j]\) maps \(\omega\) to \(\omega^{-1}\), so \(\iota\circ[j]\) restricts to \(\alpha^{-1}\). It
follows that \(\pi_{2}(w_{2}(x))=[A,\lambda,\alpha^{-1}]\). On the other hand, if \((A,\lambda,\alpha)\simeq\operatorname{Prym}(C,\gamma)^{\vee}\), then Corollary 5.5 shows that \((A,\lambda,\alpha^{-1})\simeq\operatorname{Prym}(C,\gamma)\), which is itself isomorphic to \(\operatorname{Prym}(\widehat{C},\hat{\gamma})^{\vee}\), where \((\widehat{C},\hat{\gamma})\) is the bigonal dual curve from SS2.7. Since the \(j\)-invariant of \(\widehat{C}\) equals the inverse of the \(j\)-invariant of \(C\) (Lemma 2.20), the lemma follows from the explicit description of the isomorphism \(Y\simeq\mathbb{P}^{1}_{j}\).
## 6. Explicit quaternionic multiplication
In Corollary 4.8 we proved that there exists an action of \(\mathcal{O}\) on \(P_{k^{\mathrm{sep}}}\). Next we use Theorem 5.4 to describe this action explicitly. For the remainder of this paper, we fix a universal isomorphism \(\Phi\colon\mathcal{A}\to\mathcal{P}_{\delta^{-1}}\) as in Remark 5.7.
### Explicit quaternionic multiplication
Let \((C,\gamma)=(C_{a,b},\gamma_{a,b})\) be a marked bielliptic Picard curve over \(k\). Let \(\delta=\Delta_{a,b}=16b(a^{2}-4b)\), which is nonzero since \(C\) is smooth. Consider the \(\operatorname{Gal}_{k}\)-set:
\[\mathcal{S}:=\{(\zeta,\varepsilon)\in k^{\mathrm{sep}}\times k^{\mathrm{sep}} \mid\zeta\text{ is a primitive sixth root of unity and }\varepsilon^{6}=\delta\}. \tag{6.1}\]
Consider \(\mu_{3}\)-abelian surfaces \(\operatorname{Prym}(C,\gamma)=(P,\lambda,\alpha)\) and \(\operatorname{Prym}(C,\gamma)^{\vee}=(A,\hat{\lambda},\hat{\alpha})\). For every \((\zeta,\varepsilon)\in\mathcal{S}\), we will define elements \(r_{\zeta},s_{\varepsilon}\in\operatorname{End}(P_{k^{\mathrm{sep}}})\). Since \(\zeta^{2}\) is a third root of unity, we may set \(r_{\zeta}:=\alpha(\zeta^{2})\in\operatorname{End}(P_{k^{\mathrm{sep}}})\).
To define \(s_{\varepsilon}\), we use Corollary 5.5. By that corollary and Remark 5.7, the specialization of the universal isomorphism \(\Phi\) at the \(k\)-point \((a,b)\in S(k)\) is an isomorphism \(\phi\colon(A,\hat{\lambda},\hat{\alpha}^{-1})\xrightarrow{\sim}(P,\lambda, \alpha)_{\delta^{-1}}\). The element \(\varepsilon\) determines an isomorphism \(C_{\delta^{-1},k^{\mathrm{sep}}}\to C_{k^{\mathrm{sep}}},(x,y)\mapsto( \varepsilon^{3}x,\varepsilon^{4}y)\), hence an isomorphism \(\chi_{\varepsilon}\colon(P,\lambda,\alpha)_{\delta^{-1},k^{\mathrm{sep}}} \to(P,\lambda,\alpha)_{k^{\mathrm{sep}}}\) between their Prym varieties. Let \(s_{\varepsilon}\) be the composition:
\[P_{k^{\mathrm{sep}}}\xrightarrow{\lambda}A_{k^{\mathrm{sep}}}\xrightarrow{ \phi}(P_{\delta^{-1}})_{k^{\mathrm{sep}}}\xrightarrow{\chi_{\varepsilon}}P_{ k^{\mathrm{sep}}}. \tag{6.2}\]
**Proposition 6.1**.: _For every \(x=(\zeta,\varepsilon)\in\mathcal{S}\) the assignment \(\omega\mapsto r_{\zeta},j\mapsto s_{\varepsilon}\) extends to an embedding_
\[\iota_{x}\colon\mathcal{O}\to\operatorname{End}(P_{k^{\mathrm{sep}}})= \operatorname{End}(P_{\bar{k}}). \tag{6.3}\]
Proof.: We may assume \(k=k^{\mathrm{sep}}\). Fix \(x=(\zeta,\varepsilon)\in\mathcal{S}\) and write \(r=r_{\zeta}\), \(s=s_{\varepsilon}\). We will show that \(r^{2}+r+1=0\), \(s^{2}=2\) and \(sr=r^{-1}s\). Since the associative ring \(\mathcal{O}\) from SS4.1 is generated by \(\omega,j\) and satisfies the same three relations, this will imply that there exists a unique ring homomorphism \(\iota_{x}\colon\mathcal{O}\to\operatorname{End}(P_{k^{\mathrm{sep}}})\) with \(\iota_{x}(\omega)=r\) and \(\iota_{x}(j)=s\). Such a homomorphism is automatically an embedding since \(\mathcal{O}\otimes\mathbb{Q}\) is a simple algebra.
Equation (2.4) of Lemma 2.14 shows that \(r^{2}+r+1=0\). To check that \(sr=r^{-1}s\), note that the surfaces \(P,P_{\delta^{-1}},A\) have \(\mu_{3}\)-actions \(\alpha,\alpha_{\delta^{-1}},\hat{\alpha}\) respectively. The morphisms \(\lambda\) and \(\chi_{\varepsilon}\) from (6.2) are equivariant with respect to these \(\mu_{3}\)-actions. On the other hand, \(\phi\) intertwines the \(\mu_{3}\)-action on the domain with the _inverse_ of the \(\mu_{3}\)-action on the target. The same must therefore be true for their composition \(s\), so \(sr=r^{-1}s\).
It suffices to prove that \(s^{2}=2\); we will use the results of SS4.3 to achieve this. Write \(\psi=\chi_{\varepsilon}\circ\phi\). The morphism \(\psi\) is compatible with the polarizations since \(\phi\) and \(\chi_{\varepsilon}\) are, so \(\psi^{\vee}\circ\lambda\circ\psi=\hat{\lambda}\). Therefore
\[s^{2}=\psi\lambda\psi\lambda=(\psi(\psi^{\vee})^{-1})(\psi^{\vee}\lambda\psi \lambda)=(\psi(\psi^{\vee})^{-1})(\hat{\lambda}\lambda)=2(\psi(\psi^{\vee})^{ -1}). \tag{6.4}\]
It follows that \(s^{2}=2\) if and only if \(\psi^{\vee}=\psi\). To prove the latter statement, we will first show that \((\psi^{\vee})^{-1}\circ\psi\in\operatorname{Aut}(A,\hat{\lambda},\hat{\alpha})\). The identity \(\psi^{\vee}\circ\lambda=\hat{\lambda}\circ\psi\) implies that \(\lambda\circ(\psi\circ\lambda\circ\psi^{\vee}-\hat{\lambda})\circ\lambda=0\). Since \(\lambda\) is an isogeny, the inner expression must be zero, so \(\psi\circ\lambda\circ\psi^{\vee}=\hat{\lambda}\). In other words \(\psi^{\vee}\) is compatible with the polarizations. Moreover, \(\psi\) intertwines the \(\mu_{3}\)-action \(\hat{\alpha}\) on \(A\) with the _inverse_ of the \(\mu_{3}\)-action \(\alpha\) on \(P\): \(\psi(\hat{\alpha}(\omega)(x))=\alpha(\omega)^{-1}(\psi(x))\) for all \(x\in A\) and \(\omega\in\mu_{3}\). Applying duality to this identity and using that \(\alpha^{\vee}(\omega)=\hat{\alpha}(\omega^{-1})\), \(\psi^{\vee}\) again intertwines the \(\mu_{3}\)-action with the inverse of the other. We conclude that \((\psi^{\vee})^{-1}\circ\psi\in\operatorname{Aut}(A,\lambda,\alpha)\).
To show that this automorphism is the identity, we will use that its construction can be carried out in families. More precisely, let \(R=\mathbb{Z}[1/6,a,b,\Delta_{a,b}^{-1}]\) and \(\tilde{R}=R[\varepsilon]/(\varepsilon^{6}-\Delta_{a,b})\). Equation (2.1) defines a universal marked bielliptic Picard curve \((\mathcal{C},\gamma)\) over \(S=\operatorname{Spec}(R)\subset\mathbb{A}_{\mathbb{Z}[1/6]}^{2}\) with Prym variety \((\mathcal{P},\lambda,\alpha)\) and dual \((\mathcal{A},\hat{\lambda},\hat{\alpha})\). On the cover \(\tilde{S}=\operatorname{Spec}(\tilde{R})\to S\) given by adjoining a sixth root of \(\Delta_{a,b}\), there exists an isomorphism \(\Psi\colon\mathcal{A}_{\tilde{S}}\to\mathcal{P}_{\tilde{S}}\) with the property that \(\Psi\) specializes to \(\psi\) for all morphisms \(\operatorname{Spec}(k)\to\tilde{S}\) and that \((\Psi^{\vee})^{-1}\circ\Psi\in\operatorname{Aut}_{\tilde{S}}(\mathcal{A}, \lambda,\alpha)\). Since the moduli stack of \((1,2)\)-polarized abelian surfaces is separated and Deligne-Mumford, \(\operatorname{\mathbf{Aut}}_{\tilde{S}}(\mathcal{A},\lambda,\alpha)\to\tilde{S}\) is finite and unramified. Since finite unramified morphisms are etale locally on the target disjoint unions of closed immersions [46, Tag 04HJ] and the base \(\tilde{S}\) is connected, it suffices to show that \((\Psi^{\vee})^{-1}\Psi\) is the identity for a _single_ specialization \(s\in\tilde{S}\).
The specialization \((a_{0},b_{0},\varepsilon_{0})=(1,1/8,1)\) defines a \(\mathbb{Q}\)-point \(s_{0}\) of \(\tilde{S}\). Theorem 3.13 and Lemma 2.10 show that \(\operatorname{\mathbf{Aut}}((\mathcal{A},\lambda,\alpha)_{s_{0}})=\mu_{6}\). Since \(\mu_{6}(\mathbb{Q})=\{\pm 1\}\), either \((\Psi^{\vee})^{-1}\Psi=1\) or \((\Psi^{\vee})^{-1}\Psi=-1\). Assume for the sake of contradiction that \((\Psi^{\vee})^{-1}\Psi=-1\). Then (6.4) shows that \(s^{2}=-2\). This implies that \(r\) and \(s\) determine an embedding of the _definite_ quaternion algebra \((-3,-2)_{\mathbb{Q}}\) into \(\operatorname{End}(\mathcal{P}_{s})\otimes\mathbb{Q}\) for every specialization \(s\in\tilde{S}(\mathbb{C})\). By the classification of endomorphism algebras of complex abelian surfaces [6, Proposition 5.5.7, Exercise 9.10(1) and Exercise 9.10(4)], this implies that \(\mathcal{P}_{s}\) is isogenous to the square of a CM elliptic curve. This is a contradiction, since there are (uncountably) many specializations \(s\) for which \(\mathcal{P}_{s}\) is simple, using Proposition 4.7 and CM theory for the quaternionic Shimura curve \(X\). We conclude that \(\psi^{\vee}=\psi\) and hence by (6.4) that \(s^{2}=2\), as desired.
If \(P_{\tilde{k}}\) is simple, then \(\mathcal{O}\) is the full ring of endomorphisms of \(P_{\tilde{k}}\):
**Lemma 6.2**.: _Let \(C/k\) be a bielliptic Picard curve with Prym variety \(P\) and suppose that \(k\) is algebraically closed. Then \(P\) is not simple \(\Leftrightarrow\mathcal{O}\not\simeq\operatorname{End}(P)\)\(\Leftrightarrow\)\(P\) is isogenous to the square of a CM elliptic curve. Consequently, if \(P\) is simple then \(\operatorname{End}(P)=\iota_{x}(\mathcal{O})\) for every \(x\in\mathcal{S}\)._
Proof.: This is a direct consequence of the Albert classification [6, Proposition 5.5.7, Exercise 9.10(1) and Exercise 9.10(4)].
**Remark 6.3**.: If \(\operatorname{char}(k)=0\) or if \(k=\overline{\mathbb{F}}_{p}\) with \(p>3\), then these conditions are also equivalent to the condition that \(P\) is _isomorphic_ to a product of elliptic curves. See [41] for the former case and [39, Theorem 3.1] for the latter.
We will study geometrically split Prym varieties in more detail in §6.4.
### Field of definition of the endomorphisms
Keep the notations of the previous section. We proceed to determine how the various embeddings \(\iota_{x}\colon\mathcal{O}\to\operatorname{End}(P_{k^{\mathrm{sep}}})\) are related and deduce an explicit description of the Galois action on \(\mathcal{O}\).
Let \(\operatorname{Sym}(\mathcal{S})\) be the group of bijections \(\mathcal{S}\to\mathcal{S}\) acting on the right on the \(\operatorname{Gal}_{k}\)-set \(\mathcal{S}\) of (6.1). Define the subgroup \(D_{6}\subset\operatorname{Sym}(\mathcal{S})\) generated by \(\rho,\tau\), where \((\zeta,\varepsilon)^{\rho}=(\zeta,\zeta^{-1}\varepsilon)\) and \((\zeta,\varepsilon)^{\tau}=(\zeta^{-1},\varepsilon)\). Then \(D_{6}\) is a dihedral group of order \(12\) and \(D_{6}\) acts simply transitively on \(\mathcal{S}\).
Recall the subgroup \(\operatorname{Aut}_{i}(\mathcal{O})\) from Lemma 4.1. The assignment \(\rho\mapsto[1-\omega]\), \(\tau\mapsto[j]\) induces an isomorphism \(\varphi\colon D_{6}\xrightarrow{\sim}\operatorname{Aut}_{i}(\mathcal{O})\).
**Lemma 6.4**.: _For all \(x\in\mathcal{S}\), \(g\in D_{6}\) and \(b\in\mathcal{O}\), \(\iota_{x^{g}}(b^{\varphi(g)})=\iota_{x}(b)\)_
Proof.: It suffices to prove the claimed identity when \(g\in\{\tau,\rho\}\) and \(b\in\{\omega,j\}\), for which it can be checked using routine but intricate calculations. Write \(x=(\zeta,\varepsilon)\). Suppose first that \(g=\tau\), then \(\iota_{x^{g}}(b^{\varphi(g)})=\iota_{(\zeta^{-1},\varepsilon)}(j^{-1}bj)\). If \(b=\omega\), this equals \(\iota_{(\zeta^{-1},\varepsilon)}(j^{-1}\omega j)=\iota_{(\zeta^{-1}, \varepsilon)}(\omega^{-1})=r_{\zeta^{-1}}^{-1}=r_{\zeta}=\iota_{x}(\omega)\). If \(b=j\), this equals \(\iota_{(\zeta^{-1},\varepsilon)}(j)=s_{\varepsilon}=\iota_{x}(j)\).
Suppose next that \(g=\rho\). If \(b=\omega\), both sides are simply \(r_{\zeta}\). If \(b=j\), then
\[\iota_{x^{g}}(b^{\varphi(g)})=\iota_{x^{g}}(j^{[1-\omega]})=\iota_{x^{g}}(- \omega^{-1}j)=-\iota_{(\zeta,\zeta^{-1}\varepsilon)}(\omega^{-1})\iota_{(\zeta, \zeta^{-1}\varepsilon)}(j)=-r_{\zeta}^{-1}s_{\zeta^{-1}\varepsilon}.\]
The definition of \(s_{\varepsilon}\) in (6.2) shows that \(s_{\zeta^{-1}\varepsilon}\) is given by postcomposing \(s_{\varepsilon}\) with the automorphism of \(P\) induced by \(\gamma(\zeta^{-1})\). A simple calculation shows that this is exactly \(-r_{\zeta}\), so \(s_{\zeta^{-1}\varepsilon}=-r_{\zeta}s_{\varepsilon}\). Therefore \(\iota_{x^{g}}(j^{g})=-r_{\zeta}^{-1}(-r_{\zeta}s_{\varepsilon})=s_{\varepsilon }=\iota_{x}(b)\).
The Galois action on \(\mathcal{S}\) determines a homomorphism \(\operatorname{Gal}_{k}\to\operatorname{Sym}(\mathcal{S})\). This homomorphism factors through an injection \(\operatorname{Gal}(L/k)\hookrightarrow D_{6}\), where \(L=k(\omega,\sqrt[6]{\delta})\) is the splitting field of \(f(T)=T^{6}-\delta\). Denote the composition \(\operatorname{Gal}_{k}\to D_{6}\xrightarrow{\varphi}\operatorname{Aut}_{i}(\mathcal{O})\) again by \(\varphi\). Recall from §1.7 that if \(f\colon X_{k^{\mathrm{sep}}}\to Y_{k^{\mathrm{sep}}}\) is a \(k^{\mathrm{sep}}\)-morphism between \(k\)-varieties and \(\sigma\in\operatorname{Gal}_{k}\), then \(f^{\sigma}\) denotes the \(k^{\mathrm{sep}}\)-morphism \(x\mapsto f(x^{\sigma^{-1}})^{\sigma}\).
**Theorem 6.5**.: _For all \(x\in\mathcal{S}\), \(\sigma\in\operatorname{Gal}_{k}\) and \(b\in\mathcal{O}\), \(\iota_{x}(b)^{\sigma}=\iota_{x}(b^{\varphi(\sigma^{-1})})\). Consequently, there exists an embedding \(\mathcal{O}\subset\operatorname{End}(P_{k^{\mathrm{sep}}})\) such that \(f^{\sigma}=f^{\varphi(\sigma^{-1})}\) for all \(f\in\mathcal{O}\) and \(\sigma\in\operatorname{Gal}_{k}\). In particular, \(\mathcal{O}\) is \(\operatorname{Gal}_{k}\)-stable and defined over \(L=k(\omega,\sqrt[6]{\delta})\)._
Proof.: Write \(x=(\zeta,\varepsilon)\). The explicit construction of \(r_{\zeta}\) and \(s_{\varepsilon}\) shows that \(r_{\zeta}^{\sigma}=r_{\zeta^{\sigma}}\) and \(s_{\varepsilon}^{\sigma}=s_{\varepsilon^{\sigma}}\). Therefore \(\iota_{x}(b)^{\sigma}=\iota_{x^{\sigma}}(b)\). By the previous lemma, \(\iota_{x}(b)^{\sigma}=\iota_{x^{\sigma}}(b)=\iota_{x}(b^{\varphi(\sigma^{-1})})\).
The endomorphism field of an abelian variety is the smallest field extension over which all endomorphisms are defined [21]. If \(P\) is geometrically simple, then \(\operatorname{End}(P_{\tilde{k}})=\mathcal{O}\) by Lemma 6.2. Since the action of \(D_{6}\) on \(\mathcal{O}\) is faithful, Theorem 6.5 has the following immediate corollary.
**Corollary 6.6**.: _Let \(C=C_{a,b}\) be a bielliptic Picard curve over \(k\). Suppose that \(P_{\tilde{k}}\) is simple. Then the endomorphism field of \(P/k\) is \(k(\omega,\sqrt[6]{16b(a^{2}-4b)})\)._
Since the endomorphism field is invariant under isogeny, the corollary holds verbatim for the dual Prym \(A/k\).
**Example 6.7**.: Corollary 6.6 gives many examples of abelian surfaces with large endomorphism fields. For example, for the curve \(C=C_{3,4}:y^{3}=x^{4}+3x^{2}+4\) over \(\mathbb{Q}\), the Prym \(P=P_{3,4}\) is geometrically simple (see Proposition 6.17) and its endomorphism field is \(\mathbb{Q}(\sqrt{-3},\sqrt[6]{-7})\), a dihedral degree \(12\) extension of \(\mathbb{Q}\). In the notation of [18], this seems to be the first published example of a geometrically simple abelian surface over \(\mathbb{Q}\) with Sato-Tate group \(J(E_{6})\). Similarly, \(P_{-4,2}\) is a geometrically simple abelian surface with endomorphism field \(\mathbb{Q}(\sqrt{-3},\sqrt[3]{2})\) and Sato-Tate group \(J(E_{3})\).
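The sextic radicands in this example are easy to recover by machine. The following SymPy sketch is illustrative only; the helper `sixth_power_free` is an ad hoc utility and not part of the paper.

```python
from sympy import Integer, factorint

def sixth_power_free(n):
    """Return (m, c) with n = m * c**6 and m sixth-power-free (n a positive integer)."""
    m, c = Integer(1), Integer(1)
    for p, e in factorint(n).items():
        c *= Integer(p)**(e // 6)
        m *= Integer(p)**(e % 6)
    return m, c

for a, b in [(3, 4), (-4, 2)]:
    delta = 16*b*(a**2 - 4*b)
    m, c = sixth_power_free(abs(delta))
    print((a, b), delta, (m if delta > 0 else -m, c))
# (3, 4): Delta = -448 = -7 * 2**6, so the endomorphism field is Q(omega, (-7)^(1/6)).
# (-4, 2): Delta = 256 = 4 * 2**6, and Q(omega, 4^(1/6)) = Q(omega, 2^(1/3)).
```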
### Prym surfaces of \(\operatorname{GL}_{2}\)-type
Let \(C=C_{a,b}\) be a bielliptic Picard curve over \(k\) with Prym variety \(P\), and let \(\iota\colon\mathcal{O}\hookrightarrow\operatorname{End}(P_{k^{\mathrm{sep}}})\) be an embedding satisfying the conclusion of Theorem 6.5. Recall that \(k(\omega)\) denotes the smallest field extension of \(k\) containing a primitive third root of unity.
**Corollary 6.8**.: _The following conditions are equivalent:_
1. \(16b(a^{2}-4b)\) _is a sixth power in_ \(k(\omega)\)_;_
2. \(\iota(\mathcal{O})\subset\operatorname{End}(P_{k(\omega)})\)_;_
3. \(\iota(\mathcal{O})\cap\operatorname{End}(P)\) _is either an order in a real quadratic field or all of_ \(\iota(\mathcal{O})\)_._
Proof.: (1) and (2) are equivalent by Theorem 6.5. To prove the equivalence between (2) and (3), let \(G\) be the image of the homomorphism \(\varphi\colon\operatorname{Gal}_{k}\to\operatorname{Aut}_{i}(\mathcal{O})\) of Theorem 6.5, and let \(H=\operatorname{Im}(\operatorname{Gal}_{k(\omega)}\to\operatorname{Aut}_{i}^{+}( \mathcal{O}))\). Then \(H=G\cap\operatorname{Aut}_{i}^{+}(\mathcal{O})\), where \(\operatorname{Aut}_{i}^{+}(\mathcal{O})=\langle[1-\omega]\rangle\). A group theory
calculation using Lemma 4.2 shows that \(H=\{1\}\) (in other words, (2) holds) if and only if \(\mathcal{O}^{G}\) contains an order in a real quadratic field (in other words (3) holds).
**Remark 6.9**.: Recall that an abelian surface \(X/k\) is of \(\text{GL}_{2}\)-type if \(\text{End}(X)\) contains a quadratic ring. If \(P\) is geometrically simple and \(k\neq k(\omega)\) then \(\iota(\mathcal{O})\cap\text{End}(P)=\text{End}(P)\) (Lemma 6.2) and \(\text{End}(P)\) is either \(\mathbb{Z}\) or an order in a real quadratic field by Lemma 4.2, so the conditions above are equivalent to \(P\) being of \(\text{GL}_{2}\)-type. If \(k=k(\omega)\) then \(\mathbb{Z}[\omega]\hookrightarrow\text{End}(P)\), so \(P\) is always of \(\text{GL}_{2}\)-type.
We can be more precise about the ring of endomorphisms \(\text{End}(P)\). A calculation shows that an element \(\delta\in k^{\times}\) is a sixth power in \(k(\omega)\) if and only if either \(\delta\) or \(-27\delta\) is a sixth power in \(k\). This dichotomy breaks up Corollary 6.8 in the following two cases.
**Corollary 6.10**.: _Assume that \(j(C)\neq 1\). Then the following conditions are equivalent._
1. \(\Delta_{a,b}=16b(a^{2}-4b)\) _is a sixth power in_ \(k\)_._
2. \(\text{End}(P)\cap\iota(\mathcal{O})\) _contains a subring isomorphic to_ \(\mathbb{Z}[\sqrt{2}]\)_._
3. _There exist_ \(t,d\in k^{\times}\) _such that_ \((a,b)=(2(t^{2}+1)^{2}td^{3},(t^{2}+1)^{3}t^{2}d^{6})\)_._
_If these conditions are satisfied then \(P\) and \(A=P^{\vee}\) are \(k\)-isomorphic. A sextic twist of \(P\) satisfies the above conditions if and only if \(j(C)=-t^{2}\) for some \(t\in k^{\times}\)._
Proof.: (1) implies \(P\simeq A\) by Corollary 5.5. To show (1) \(\Leftrightarrow\) (2), let \(H\leq G\) be the subgroups of \(\text{Aut}_{i}(\mathcal{O})\) defined in the proof of Corollary 6.8. Then \(\mathcal{O}^{G}\) contains a subring isomorphic to \(\mathbb{Z}[\sqrt{2}]\) if and only if \(G\) is conjugate to a subgroup of \(\langle[j]\rangle\) by Lemma 4.2. Theorem 6.5 and Galois theory show that this is equivalent to \(T^{6}-\delta\) having a \(k\)-rational root. This proves that (1) \(\Leftrightarrow\) (2). The equivalence of (1) and (3) as well as the last claim of the corollary are routine algebra.
**Corollary 6.11**.: _Assume that \(j(C)\neq 1\). Then the following conditions are equivalent._
1. \(-27\Delta_{a,b}=-432b(a^{2}-4b)\) _is a sixth power in_ \(k\)_._
2. \(\text{End}(P)\cap\iota(\mathcal{O})\) _contains a subring isomorphic to_ \(\mathbb{Z}[\sqrt{6}]\)_._
3. _There exist_ \(t,d\in k^{\times}\) _such that_ \((a,b)=(18d^{3}t(1-3t^{2})^{2},3^{4}d^{6}t^{2}(1-3t^{2})^{3})\)_._
_If these conditions hold then \(P\simeq A_{-27}\), i.e. \(P\) is isomorphic to the quadratic twist of \(A\) along \(k(\omega)\). A sextic twist of \(P\) satisfies the above conditions if and only if \(j(C)=3t^{2}\) for some \(t\in k^{\times}\)._
Proof.: The proof is similar to that of Corollary 6.10 and is omitted.
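Both parametrizations can be verified symbolically: on the family of Corollary 6.10 the quantity \(16b(a^{2}-4b)\) is a perfect sixth power, and on the family of Corollary 6.11 the quantity \(-432b(a^{2}-4b)\) is. A minimal SymPy check (illustrative only) is:

```python
from sympy import symbols, simplify, factor

t, d = symbols('t d')

# Corollary 6.10: (a, b) = (2(t^2+1)^2 t d^3, (t^2+1)^3 t^2 d^6)
a = 2*(t**2 + 1)**2 * t * d**3
b = (t**2 + 1)**3 * t**2 * d**6
Delta = 16*b*(a**2 - 4*b)
print(factor(Delta))                               # 64*d**12*t**6*(t**2 + 1)**6
print(simplify(Delta - (2*t*(t**2 + 1)*d**2)**6))  # 0

# Corollary 6.11: (a, b) = (18 d^3 t (1-3t^2)^2, 3^4 d^6 t^2 (1-3t^2)^3)
a = 18*d**3*t*(1 - 3*t**2)**2
b = 81*d**6*t**2*(1 - 3*t**2)**3
print(simplify(-27*16*b*(a**2 - 4*b) - (18*t*(1 - 3*t**2)*d**2)**6))  # 0
```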
**Remark 6.12**.: Corollaries 6.10 and 6.11 show that a \(j\)-invariant \(j\in Y(\mathbb{Q})=\mathbb{P}^{1}(\mathbb{Q})\) lifts to a \(\mathbb{Q}\)-rational point under the morphism \(\pi_{2}\) (respectively \(\pi_{6}\)) from SS4.3 if and only if \(j\in-\mathbb{Q}^{\times 2}\) (respectively \(j\in 3\mathbb{Q}^{\times 2}\)). We also note that the equivalence of (1) and (2) holds even when \(j(C)=1\).
**Corollary 6.13**.: _Suppose \(k\neq k(\omega)\) and \(P\) is geometrically simple. Then_
1. \(\text{End}(P)\) _is isomorphic to either_ \(\mathbb{Z}\)_,_ \(\mathbb{Z}[\sqrt{2}]\) _or_ \(\mathbb{Z}[\sqrt{6}]\)_._
2. \(P\) _carries a principal polarization if and only if_ \(\text{End}(P)\simeq\mathbb{Z}[\sqrt{2}]\)_, and in this case the unique principal polarization is_ \(\frac{1}{2}\lambda(2-\sqrt{2})\)_._
Proof.: (1) follows from Lemma 4.2. For (2), if \(\text{End}(P)=\mathbb{Z}\), then \(\text{Hom}(P,A)=\mathbb{Z}\lambda\) so cannot contain a principal polarization. Otherwise, we use the fact that any abelian surface with \(\mathbb{Z}[\sqrt{2}]\)-RM is principally polarized, whereas any \((1,2)\)-polarized abelian surface \(P\) with \(\text{End}(P)\simeq\mathbb{Z}[\sqrt{6}]\) is not, by [20, Proposition 3.11]. If \(\text{End}(P)=\mathbb{Z}[\sqrt{2}]\), the unique principal polarization \(\lambda^{\prime}\colon P\to A\) is \(\frac{1}{2}\lambda(2-\sqrt{2})\); see [20, Proposition 2.1]. Equivalently, \(\lambda^{\prime}=\lambda-\psi^{-1}\), in the notation of the proof of Proposition 6.1.
### Geometrically split Prym varieties
In this subsection, we assume that \(k\) has characteristic zero. Let \(C/k\) be a bielliptic Picard curve with Prym variety \(P\). If any of the equivalent conditions of Lemma 6.2 is satisfied for \(P_{\bar{k}}\) we call \(C\) (or \(P\)) CM. Being CM only depends on the \(j\)-invariant (2.3) of \(C\). Choose an embedding \(\iota\colon\mathcal{O}\hookrightarrow\operatorname{End}(P_{\bar{k}})\) of the form \(\iota_{x}\) for some \(x\in\mathcal{S}\) as in §6.1. Let
\[\operatorname{End}_{\mathcal{O}}(P_{\bar{k}}):=\{f\in\operatorname{End}(P_{ \bar{k}})\mid f\circ\iota(b)=\iota(b)\circ f\text{ for all }b\in\mathcal{O}\}. \tag{6.5}\]
This subring does not depend on the choice of \(x\in\mathcal{S}\). If \(P\) is not CM, then \(\operatorname{End}_{\mathcal{O}}(P_{\bar{k}})=\mathbb{Z}\); if \(P\) is CM, then \(\operatorname{End}_{\mathcal{O}}(P_{\bar{k}})\) is an order \(R\) in an imaginary quadratic field, and we say that \(P\) has CM by the order \(R\).
**Example 6.14**.: The Prym variety \(P\) of the special bielliptic Picard curve \(C:y^{3}=x^{4}+1\) has CM by the order \(\mathbb{Z}[i]\), hence \(j(C)=1\) is a CM \(j\)-invariant. Indeed, over \(\overline{\mathbb{Q}}\), the curve \(C\) has an automorphism \(\beta(x,y)=(ix,y)\) commuting with the \(\mu_{6}\)-action on \(C\). A signature calculation similar to Lemma 2.13 shows that \(\beta\) induces an order \(4\) automorphism \(\beta_{*}\) on \(P\) with characteristic polynomial \(X^{2}+1\). In the notation of §6.1, choose a pair \((\zeta,\varepsilon)\in\mathcal{S}\). By construction, the elements \(r_{\zeta}\) and \(s_{\varepsilon}\) commute with \(\beta_{*}\), so \(\mathbb{Z}[\beta_{*}]=\mathbb{Z}[i]\subset\operatorname{End}_{\mathcal{O}}(P_{\bar{k}})\).
We now state some basic properties of the set of CM \(j\)-invariants, completely analogous to the elliptic curve setting, using the results of §4.3. In the notation of that subsection, a \(j\)-invariant \(j\in Y(\mathbb{C})\) is CM if and only if it lifts to a CM point of \(X\) under the map \(\pi_{2}\).
**Lemma 6.15**.: _Every CM \(j\)-invariant is algebraic over \(\mathbb{Q}\). Moreover, for every \(d\in\mathbb{Z}_{\geq 1}\), there are only finitely many CM \(j\)-invariants \(j\) with \([\mathbb{Q}(j):\mathbb{Q}]\leq d\)._
Proof.: Follows immediately from Proposition 4.7 and the corresponding facts concerning CM points on Shimura curves [14, §2.4].
We conclude by determining all \(\mathbb{Q}\)-rational CM \(j\)-invariants, equivalently all geometrically split \(\operatorname{Prym}(C,\gamma)\) that can be defined over \(\mathbb{Q}\). To this end, we will use Elkies' computations of all rational CM points on the full Atkin-Lehner quotient \(X^{*}:=X/W\simeq\mathbb{P}^{1}_{\mathbb{Q}}\). Following Elkies [14, §3.1], let \(t\) be the unique coordinate \(X^{*}\xrightarrow{\sim}\mathbb{P}^{1}_{t}\) with the property that \(t=0,1,\infty\) correspond to the unique CM points by the orders \(\mathbb{Z}[\sqrt{-6}]\), \(\mathbb{Z}[i]\), \(\mathbb{Z}[\omega]\), respectively. Recall from Lemma 4.3 the isomorphism \(j\colon Y\to\mathbb{P}^{1}\). The isomorphism \(Y\simeq(X/w_{3})^{\bar{w}}\) of Proposition 4.7 determines a quotient map \(\Pi\colon Y\simeq(X/w_{3})^{\bar{w}}\to((X/w_{3})^{\bar{w}})/\langle\bar{w}\rangle=X^{*}\).
**Lemma 6.16**.: _In the coordinates \(j\) and \(t\) of \(Y\) and \(X^{*}\) respectively, the map \(\Pi\colon Y\to X^{*}\) is given by \(j\mapsto\frac{(j+1)^{2}}{4j}\). A \(k\)-point \(t\in k\) on \(X^{*}\) lifts to \(Y\) if and only if \(t(t-1)\) is a square in \(k\)._
Proof.: Elkies [14, §3.1, end of p. 17] has shown that the biquadratic extension of function fields \(\mathbb{Q}(X)/\mathbb{Q}(X^{*})\) is given by adjoining \(\sqrt{-t}\) and \(\sqrt{3(t-1)}\) to \(\mathbb{Q}(X^{*})=\mathbb{Q}(t)\). The three intermediate quadratic subfields correspond to the three intermediate Atkin-Lehner quotients of \(X\); analyzing the ramification shows that the morphism \(X/w_{3}\to X^{*}\) can be realized as the double cover of \(X^{*}=\mathbb{P}^{1}_{t}\) branched along the function \(-3t(t-1)\), so \(X/w_{3}\) has equation \(s^{2}=-3t(t-1)\). Moreover, the involution \((t,s)\mapsto(t,-s)\) corresponds to the involution \(\bar{w}\). Proposition 4.7 shows that \(Y\to X^{*}\) is the quadratic twist of \(X/w_{3}\) along \(\mathbb{Q}(\sqrt{-3})/\mathbb{Q}\), so \(Y\to\mathbb{P}^{1}_{t}\) has equation \(s^{2}=t(t-1)\). Rationally parametrizing this conic shows that \(Y\) has a rational coordinate \(r\) such that \(s=(1-r^{2})/(4r)\), \(t=(1+r)^{2}/(4r)\) and the involution \(\bar{w}\) corresponds to \(r\mapsto 1/r\).
We claim that after possibly replacing \(r\) by \(1/r\), which does not affect the expression \(t=(1+r)^{2}/(4r)\), we have \(j=r\). Analyzing the ramification of \(\Pi\), we see that \(Y\) has two CM points by \(\mathbb{Z}[\omega]\) (corresponding to \(r=0,\infty\)) and one CM point by \(\mathbb{Z}[i]\) (corresponding to \(r=1\)). The points \(j=0,\infty\) correspond to decomposable \(\mu_{3}\)-abelian surfaces, described in the proof of Lemma 4.3, and
have CM by \(\mathbb{Z}[\omega]\). Example 6.14 shows that \(j=1\) corresponds to the CM point by \(\mathbb{Z}[i]\). So \(j\) and \(r\) (or \(1/r\)) agree on the three points \(0,1,\infty\), so they must agree everywhere, proving the claim.
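The two identities used in this proof, namely \(s^{2}=t(t-1)\) for \(s=(1-r^{2})/(4r)\), \(t=(1+r)^{2}/(4r)\), and the invariance of \(t\) under \(r\mapsto 1/r\), can be checked directly; a short SymPy sketch (illustrative only) is:

```python
from sympy import symbols, simplify

r = symbols('r')
t = (1 + r)**2 / (4*r)
s = (1 - r**2) / (4*r)
print(simplify(s**2 - t*(t - 1)))    # 0
print(simplify(t - t.subs(r, 1/r)))  # 0, so t is invariant under r -> 1/r
```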
**Proposition 6.17**.: _Table 1 is a complete list of rational CM \(j\)-invariants and the discriminants of the corresponding quadratic orders. Consequently, a bielliptic Picard curve \(C/\mathbb{Q}\) has geometrically non-simple Prym variety if and only if \(j(C)\) appears in this table._
Proof.: A point in \(Y(\mathbb{Q})\) is CM if and only if its image under \(\Pi\colon Y\to X^{*}\) is CM. It therefore suffices to find all rational CM points on \(X^{*}\) and determine which ones lift to \(Y(\mathbb{Q})\). Elkies [14, Table 1] has determined all rational CM points in his \(t\)-coordinate. Some of these were only conjecturally CM, but [15] confirms this table unconditionally. The proposition then follows from Lemma 6.16 and an elementary calculation.
## 7. 6-torsion points in the Prym variety
For the remainder of the paper, we analyze the Galois modules \(P[n]\) of bielliptic Picard Pryms, culminating in a proof of Theorem 1.1. In this section we study \(P[2]\), \(P[3]\) and \(P[6]\) explicitly and give various criteria for the existence of rational torsion points.
For the remainder of this section, let \((C,\gamma)=(C_{a,b},\gamma_{a,b})\) be a marked bielliptic Picard curve over a field \(k\) (always assumed of characteristic \(\neq 2,3\)).
### \(2\)-torsion
Recall from §§2.5-2.6 that \([2]=\hat{\lambda}\lambda\), giving rise to a short exact sequence
\[0\to P[\lambda]\to P[2]\to A[\hat{\lambda}]\to 0. \tag{7.1}\]
The diagram from §2.5 shows that there is a canonical isomorphism \(P[\lambda]\simeq E[2]\) and by bigonal duality we have \(A[\hat{\lambda}]\simeq\widehat{E}[2]\) as well. We conclude:
**Lemma 7.1**.: _There is a short exact sequence of \(\operatorname{Gal}_{k}\)-modules_
\[0\to E[2]\to P[2]\to\widehat{E}[2]\to 0. \tag{7.2}\]
The \(k\)-points of the outer terms in (7.2) are easy to determine.
**Lemma 7.2**.: _Let \(E=E_{a,b}\) and \(\widehat{E}=\widehat{E}_{a,b}\) be as above. Then_
1. \(E[2](k)\neq 0\) _if and only if_ \(16(a^{2}-4b)\) _is a cube in_ \(k\)_, in which case_ \(P[2](k)\neq 0\)_._
2. \(\widehat{E}[2](k)\neq 0\) _if and only if_ \(b\) _is a cube in_ \(k\)_, in which case_ \(A[2](k)\neq 0\)_._
_Moreover if \(k(\omega)\neq k\), then \(|E[2](k)|\leq 2\) and \(|\widehat{E}[2](k)|\leq 2\)._
Proof.: This can be read off the models \(E\colon y^{2}=x^{3}+16(a^{2}-4b)\) and \(\widehat{E}\colon y^{2}=x^{3}+b\) of §2.7.
Table 1. Rational CM \(j\)-invariants and their discriminants

| \(\operatorname{Disc}(R)\) | \(\lvert\operatorname{Disc}(R)\rvert\) | \(j\) or \(1/j\) |
| --- | --- | --- |
| \(-3\) | \(3\) | \(0\) |
| \(-4\) | \(2^{2}\) | \(1\) |
| \(-24\) | \(2^{3}\cdot 3\) | \(-1\) |
| \(-84\) | \(2^{2}\cdot 3\cdot 7\) | \(-27\) |
| \(-120\) | \(2^{3}\cdot 3\cdot 5\) | \(27/125\) |
| \(-228\) | \(2^{2}\cdot 3\cdot 19\) | \(15625/729\) |
| \(-147\) | \(3\cdot 7^{2}\) | \(-48384/15625\) |
| \(-372\) | \(2^{2}\cdot 3\cdot 31\) | \(-1771561/421875\) |
| \(-408\) | \(2^{3}\cdot 3\cdot 17\) | \(-11390625/4913\) |
To determine \(P[2](k)\), we study the extension class of (7.2) and determine when an element of \(\widehat{E}[2](k)\) lifts to \(P[2](k)\). We will use the following explicit geometric description of \(P[2]\); we thank Adam Morgan for pointing it out to us. Recall that \(E\) is an elliptic curve with origin \(O_{E}=\pi(\infty)\).
**Proposition 7.3**.: _Each divisor class in \(P[2](\bar{k})\) is represented by a unique divisor of the form \(R+\pi^{*}(T)-3\infty\), where \(R\in C(\overline{k})\) is a ramification point of the map \(\pi\colon C\to E\) and \(T\in E(\overline{k})\) is such that \([2](T)=-\pi(R)\)._
Proof.: We first show that every such divisor defines an element of \(P[2](\bar{k})\), so let \(x=R+\pi^{*}(T)-3\infty\) be such a divisor class. Recall that \(\pi(\infty)=O_{E}\) denotes the origin of \(E\). The condition \([2](T)=-\pi(R)\) translates to an equivalence \(2T+\pi(R)\sim 3O_{E}\). Pulling this equivalence back along \(\pi\) shows that \(2x=0\), so \(x\in J[2](\bar{k})\). Since both \(R\) and \(\pi^{*}(T)\) are fixed by \(\tau\), \(\tau(x)=x\) so \(\tau(x)+x=2x=0\) so \(x\in P[2](\bar{k})\). We now claim that two such divisors \(R+\pi^{*}(T)-3\infty\) and \(R^{\prime}+\pi^{*}(T^{\prime})-3\infty\) are linearly equivalent if and only if \(R=R^{\prime}\) and \(T=T^{\prime}\). Indeed, Proposition 2.15(1) shows that \(R=R^{\prime}\), and then \(T=T^{\prime}\) follows from the fact that \(\pi^{*}\colon E\to J\) is injective.
Since there are 16 divisors of this form and \(P[2](\bar{k})\) has order 16 too, every element of \(P[2](\bar{k})\) has a unique representative of this form.
This description of \(P[2]\) is compatible with the sequence (7.2): the map \(E[2]\to P[2]\) sends \(T\) to \(\pi^{*}(T)-2\infty\); the map \(P[2]\to\widehat{E}[2]\) sends \(R+\pi^{*}(T)-3\infty\) with \(R=(0,t)\in C(\bar{k})\) to \((-t,0)\in\widehat{E}[2](\bar{k})\), using the coordinates of the equations in §2.7.
**Theorem 7.4**.: _An element \((-t,0)\in\widehat{E}[2](k)\) with \(t\in k\) lifts to \(P[2](k)\) under (7.2) if and only if \(g_{a,t}(z):=z^{4}-6tz^{2}+4az-3t^{2}\) has a \(k\)-rational root. Consequently, \((P[2]\setminus P[\lambda])(k)\neq\varnothing\) if and only if there exists \(t\in k\) such that \(t^{3}=b\) and such that \(g_{a,t}\) has a \(k\)-rational root._
Proof.: By the above proposition, the point lifts to \(P[2](k)\) if and only if the corresponding point \((4t,4a)\in E(k)\) is divisible by 2 in \(E(k)\). Using the 2-descent map \(E(k)/2E(k)\hookrightarrow\operatorname{H}^{1}(k,E[2])\) and its interpretation via binary quartic forms, this is equivalent to \(z^{4}-24tz^{2}+32az-48t^{2}\) having a root in \(k\) [2, §2, (11)]. Substituting \(2z\) for \(z\) and dividing by 16 results in the polynomial \(g_{a,t}\).
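The rescaling in the last step is elementary but easy to get wrong; a one-line SymPy check (illustrative only) confirms it:

```python
from sympy import symbols, expand

z, a, t = symbols('z a t')
quartic = z**4 - 24*t*z**2 + 32*a*z - 48*t**2
g = z**4 - 6*t*z**2 + 4*a*z - 3*t**2
print(expand(quartic.subs(z, 2*z)/16 - g))  # 0
```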
**Corollary 7.5**.: \((P[2]\setminus P[\lambda])(k)\neq\varnothing\) _if and only if \((a,b)=((4s+3)(4s^{2}-3)d^{3},(4s+3)^{3}d^{6})\) for some \(s,d\in k\)._
Proof.: Theorem 7.4 shows that \(P[2]\setminus P[\lambda]\) has a \(k\)-point if and only if there exists a \(t\in k\) with \(t^{3}=b\) such that \(g_{a,t}(z)=z^{4}-6tz^{2}+4az-3t^{2}\) has a \(k\)-rational root. A calculation shows that if \((a,b)\) is of the above form for some \(s,d\in k\), then \(t=(4s+3)d^{2}\) satisfies \(t^{3}=b\) and \(g_{a,t}(z)\) has the root \(z=-(4s+3)d\) in \(k\). Conversely, suppose that \(t\in k\) is a cube root of \(b\) such that \(g_{a,t}(z)\) has a root in \(k\). If \(a=0\), a calculation shows that we may take \(s=\pm\sqrt{3}/2\), so we may assume that \(a\neq 0\). Note that \(\lambda g_{a,t}(z)=g_{\lambda^{3}a,\lambda^{2}t}(\lambda z)\) for all \(\lambda\in k\). Choosing \(\lambda=t/a\) and setting \(v=t^{3}/a^{2}\), it follows that \(g_{v,v}(z)\) has a \(k\)-rational root. Since \(t\neq 0\), this root must be nonzero. The locus of \((v,z)\) with \(g_{v,v}(z)=0\) and \(z\neq 0\) is isomorphic to an open subset of a smooth conic. After rationally parametrizing this conic we find that \(v\) must be of the form \((4s+3)/(4s^{2}-3)^{2}\) for some \(s\in k\). It follows that \((a,b)\) is of the above form with \(d=a/t\).
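The forward direction of this calculation can be confirmed symbolically. The following SymPy sketch (illustrative only; the choices of \(t\) and of the root are the ones written above) checks that \(t^{3}=b\) and that \(g_{a,t}\) vanishes at the stated root:

```python
from sympy import symbols, simplify

s, d = symbols('s d')
a = (4*s + 3)*(4*s**2 - 3)*d**3
b = (4*s + 3)**3 * d**6
t = (4*s + 3)*d**2
z0 = -(4*s + 3)*d
g = z0**4 - 6*t*z0**2 + 4*a*z0 - 3*t**2
print(simplify(t**3 - b))  # 0
print(simplify(g))         # 0
```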
In the following we assume for simplicity that \(k(\omega)\neq k\), so that \(E[2](k)\neq E[2]\). If \(G\) and \(G^{\prime}\) are abstract groups, we will sometimes write \(G\subset G^{\prime}\) as shorthand for "there exists an embedding \(G\hookrightarrow G^{\prime}\)".
**Proposition 7.6**.: _Suppose \(k(\omega)\neq k\). Then the following conditions are equivalent:_
1. \((\mathbb{Z}/2\mathbb{Z})^{2}\subset P(k)\)_;_
2. \(E[2](k)\neq 0\) _and there exists a branch point of_ \(C\to E\) _different from_ \(O_{E}\) _lying in_ \(2E(k)\)_;_
3. \((a,b)=((16w^{6}+40w^{3}-2)d^{3},\left(8w^{3}+1\right)^{3}d^{6})\) _for some pair_ \(w,d\in k\)_;_
4. \((\mathbb{Z}/2\mathbb{Z})^{2}\subset A(k)\)_._
Proof.: That (1) and (2) are equivalent follows from (7.2), Lemma 7.2, the fact that \(k(\omega)\neq k\) and Proposition 7.3. Similarly, Lemma 7.2 and Corollary 7.5 show that \((\mathbb{Z}/2\mathbb{Z})^{2}\subset P_{a,b}(k)\) if and only if \((a,b)=((4s+3)(4s^{2}-3)d^{3},(4s+3)^{3}d^{6})\) for some \(s,d\in k\) and \(16(a^{2}-4b)=16d^{6}(2s-3)(2s+1)^{3}(4s+3)^{2}\) is a cube. The latter is equivalent to \((3-2s)/(4(4s+3))\) being a cube \(w^{3}\) in \(k\). Solving for \(s\), plugging back into \(a,b\) and absorbing common factors into \(d\) proves the equivalence between (1) and (3).
A calculation shows that if \((a,b)\) is of the form (3) for some \(w,d\in k\), then \((8a,16(a^{2}-4b))\) is of the form (3) with \(w^{\prime}=-1/(2w)\) and \(d^{\prime}=-(2w)^{2}d\). Therefore \((3)\Leftrightarrow(4)\) by bigonal duality from §2.7.
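The duality computation in this proof is again routine to verify by machine; a SymPy sketch of the check (illustrative only) is:

```python
from sympy import symbols, simplify

w, d = symbols('w d')

def family(w, d):
    # the pair (a, b) of condition (3) in Proposition 7.6
    return ((16*w**6 + 40*w**3 - 2)*d**3, (8*w**3 + 1)**3 * d**6)

a, b = family(w, d)
a_dual, b_dual = 8*a, 16*(a**2 - 4*b)
a2, b2 = family(-1/(2*w), -(2*w)**2 * d)
print(simplify(a_dual - a2), simplify(b_dual - b2))  # 0 0
```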
Note that when \(k(\omega)\neq k\), we have \(|P[2](k)|\leq 4\) by (7.2) and Lemma 7.2. So the conditions in Proposition 7.6 are also equivalent to the condition \(P[2](k)\simeq(\mathbb{Z}/2\mathbb{Z})^{2}\).
### \(\sqrt{-3}\)-torsion
We use the \(\mu_{3}\)-action on \(P\) to explicitly analyze a \(\operatorname{Gal}_{k}\)-submodule of \(P[3]\). Let \(\mathfrak{p}\) be the ideal \((1-\omega)\subset\mathbb{Z}[\omega]\), viewed as a subgroup of \(\operatorname{End}(P_{k^{\mathrm{sep}}})\). Since \(\mathfrak{p}\) is \(\operatorname{Gal}_{k}\)-stable, so is the subgroup \(P[\mathfrak{p}]\) of points fixed by \(\omega\). We have \(\mathfrak{p}=(\sqrt{-3})\), so \(P[\mathfrak{p}]\) has order \(9\), and is the kernel of an isogeny \(P\to B\) over \(k\). In fact, the quotient \(B=P/P[\mathfrak{p}]\) is isomorphic to the sextic twist \(P_{-27}\), which is also the quadratic twist of \(P\) by the extension \(k(\omega)/k\) [43, Rem 2.8]. We therefore have an exact sequence
\[0\to P[\mathfrak{p}]\to P[3]\to P_{-27}[\mathfrak{p}]\to 0 \tag{7.3}\]
Of course, \(A=P^{\vee}\) also has a \(\mu_{3}\)-action, so we can similarly define \(A[\mathfrak{p}]\subset A[3]\), a subgroup of order \(9\) that sits in an analogous short exact sequence.
**Lemma 7.7**.: \(A[\mathfrak{p}]\simeq P[\mathfrak{p}]\) _as \(\operatorname{Gal}_{k}\)-modules._
Proof.: The restriction of \(\lambda\) induces such an isomorphism.
Let \(f(x):=x^{4}+ax^{2}+b\in k[x]\) and let \(\mathcal{R}=\{\pm\alpha_{1},\pm\alpha_{2}\}\) be the \(4\) roots of \(f\) in \(k^{\mathrm{sep}}\).
**Lemma 7.8**.: _The Galois module \(P[\mathfrak{p}]\) is generated by the classes \(D_{1}:=(\alpha_{1},0)-(-\alpha_{1},0)\) and \(D_{2}:=(\alpha_{2},0)-(-\alpha_{2},0)\)._
Proof.: These classes live in \(P\), are non-trivial, and are annihilated by \(1-\omega\) so lie in \(P[\mathfrak{p}]\). They are linearly independent over \(\mathbb{F}_{3}\) since otherwise we would have \(D_{2}\sim\pm D_{1}\), which is impossible since \(C\) is not hyperelliptic.
Let \(D_{4}\subset\operatorname{Sym}(\mathcal{R})\) be the subgroup of permutations \(\sigma\) of \(\mathcal{R}\) such that \((-\alpha)^{\sigma}=-\alpha^{\sigma}\) for all \(\alpha\in\mathcal{R}\). This is a dihedral group of order \(8\), generated by its four reflections \(\{\tau_{1},\tau_{2},\hat{\tau}_{1},\hat{\tau}_{2}\}\), where \(\tau_{i}\) maps \(\alpha_{i}\) to \(-\alpha_{i}\) and fixes the other roots, \(\hat{\tau}_{1}\) swaps \(\alpha_{1}\leftrightarrow\alpha_{2}\) and \(-\alpha_{1}\leftrightarrow-\alpha_{2}\), and \(\hat{\tau}_{2}\) swaps \(\alpha_{1}\leftrightarrow-\alpha_{2}\) and \(-\alpha_{1}\leftrightarrow\alpha_{2}\). (The reader is invited to picture \(D_{4}\) acting on a square with labels \(\{\alpha_{1},\alpha_{2},-\alpha_{1},-\alpha_{2}\}\).) Let \(V\) be the \(\mathbb{F}_{3}\)-vector space with basis \(\{v_{\alpha}\mid\alpha\in\mathcal{R}\}\), modulo the relations \(v_{-\alpha}=-v_{\alpha}\). The \(D_{4}\)-action on \(\mathcal{R}\) induces a linear \(D_{4}\)-action on \(V\). After choosing the basis \(\{v_{\alpha_{1}},v_{\alpha_{2}}\}\), this induces a representation \(\varphi\colon D_{4}\to\operatorname{GL}_{2}(\mathbb{F}_{3})\), isomorphic to the mod \(3\) reduction of the reflection representation of \(D_{4}\). The \(\operatorname{Gal}_{k}\)-action on \(\mathcal{R}\) determines a homomorphism \(\chi\colon\operatorname{Gal}_{k}\to D_{4}\). Lemma 7.8 shows:
**Proposition 7.9**.: _The \(\operatorname{Gal}_{k}\)-action on \(P[\mathfrak{p}](k^{\mathrm{sep}})\) in the \(\mathbb{F}_{3}\)-basis \(\{D_{1},D_{2}\}\) is given by the composition \(\operatorname{Gal}_{k}\xrightarrow{\chi}D_{4}\xrightarrow{\varphi}\operatorname {GL}_{2}(\mathbb{F}_{3})\)._
**Corollary 7.10**.: _We have \(P[\mathfrak{p}](k)\simeq(\mathbb{Z}/3\mathbb{Z})^{2}\) if and only if \(f(x)\) splits completely in \(k[x]\)._
Proof.: Use Proposition 7.9 and the fact that \(\varphi\) is injective.
**Corollary 7.11**.: _The \(\mathbb{F}_{3}[\operatorname{Gal}_{k}]\)-module \(P[\mathfrak{p}]\) is semisimple._
Proof.: Use Proposition 7.9 and the fact that \(3\) is coprime to the order of \(D_{4}\).
Let \(\hat{f}:=x^{4}+8ax^{2}+16(a^{2}-4b)\).
**Corollary 7.12**.: \(P[\mathfrak{p}](k)\neq 0\) _if and only if either \(f\) or \(\hat{f}\) has a root in \(k\)._
Proof.: Let \(G\) be the image of \(\chi\colon\operatorname{Gal}_{k}\to D_{4}\). Then \(P[\mathfrak{p}](k)\neq 0\) if and only if \(G\) fixes some nonzero element of \(V=\mathbb{F}_{3}^{2}\). A group theory calculation shows that only reflections have fixed points, so this is equivalent to \(G\subset\langle\tau_{i}\rangle\) or \(G\subset\langle\hat{\tau}_{i}\rangle\) for some \(i=1,2\). We have \(G\subset\langle\tau_{i}\rangle\) for some \(i\) if and only if \(f\) has a \(k\)-rational root. By Lemma 2.21, \(G\subset\langle\hat{\tau}_{i}\rangle=\operatorname{Stab}_{D_{4}}(\alpha_{1}+ \alpha_{2})\) for some \(i\) if and only if \(\hat{f}\) has a \(k\)-rational root.
**Proposition 7.13**.: \(\mathbb{Z}/3\mathbb{Z}\subset P[\mathfrak{p}](k)\) _if and only if either_
1. \(P=P_{-(c+1)d^{2},cd^{4}}\) _for some_ \(c,d\in k\) _or_
2. \(P=P_{-8(c+1)d^{2},16(c-1)^{2}d^{4}}\) _for some_ \(c,d\in k\)_._
Proof.: The polynomial \(f(x)\) has a \(k\)-rational root if and only if \(f(x)=(x^{2}-t^{2})(x^{2}-c)\) for some \(c,t\in k\), if and only if \(P\simeq P_{-c-t^{2},ct^{2}}\) for some \(c,t\in k\) if and only if \(P\) is as in (1). Similarly \(\widehat{f}\) has a \(k\)-root if and only if \(P\) is as in (2). The proposition then follows from Corollary 7.12.
**Proposition 7.14**.: \((\mathbb{Z}/3\mathbb{Z})^{2}\subset P[\mathfrak{p}](k)\) _if and only if \(P=P_{-(c^{2}+1)d^{2},c^{2}d^{4}}\) for some \(c,d\in k\)._
Proof.: By Corollary 7.10, this happens if and only if \(f(x)\) splits completely, in which case we have \(f(x)=(x^{2}-t^{2})(x^{2}-c^{2})\) for some \(c,t\in k\). Up to cubic twist we may take \(t=1\), hence we arrive at the form \(P_{-(c^{2}+1)d^{2},c^{2}d^{4}}\), for some \(c,d\in k\).
**Question 1**.: Do there exist curves \(C/\mathbb{Q}\) such that \(P[3](\mathbb{Q})\neq 0\) but \(P[\mathfrak{p}](\mathbb{Q})=0\)?
### \(6\)-torsion
We combine our results on \(P[2]\) and \(P[3]\) to produce examples of points of order \(6\) in \(P(k)\). Here we assume \(k=\mathbb{Q}\) for simplicity.
**Proposition 7.15**.: \(\mathbb{Z}/6\mathbb{Z}\subset P[2\mathfrak{p}](\mathbb{Q})\) _if and only if one of the following holds_
1. \(P=P_{2^{4}(c+1)(c-1)^{2},2^{8}c(c-1)^{4}}\) _for some_ \(c\in\mathbb{Q}\)_;_
2. \(P=P_{8c(1-c),16c^{2}(1+c)^{2}}\) _for some_ \(c\in\mathbb{Q}\)_;_
3. \(P=P_{-(c+1)c^{4},16(1-c)^{2}c^{8}}\) _where_ \(c=\frac{(3t-2)(5t-2)^{3}}{t(7t-4)^{3}}\) _for some_ \(t\in\mathbb{Q}\)_; or_
4. \(P=P_{\frac{1}{4}(1-v)^{3}(3v+1)^{3}(3v^{4}+6v^{2}-1),v^{6}(v-1)^{6}(3v+1)^{6}}\) _for some_ \(v\in\mathbb{Q}\)_._
Proof.: Each family above is the fiber product of one of the two families of rational \(2\)-torsion in \(P\) (Lemma 7.2 and Corollary 7.5) with one of the two families in Proposition 7.13. We omit the details, as it is tedious but straightforward algebra, in a spirit similar to Proposition 7.6.
**Proposition 7.16**.: \(\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/6\mathbb{Z}\subset P[2\mathfrak{p}]( \mathbb{Q})\) _if and only if there exists \(c\in\mathbb{Q}\setminus\{0,\pm 1\}\) such that \(a=-16(c^{2}+1)(c^{2}-1)^{2}\) and \(b=2^{8}c^{2}(c^{2}-1)^{4}\)._
Proof.: This follows by combining Proposition 7.14 with both Lemma 7.2 and Corollary 7.5. In the first case, we are forced to set \(d=4(c^{2}-1)\) which leads to the formulas for \(a\) and \(b\) above. In the second case, we must first set \(d=c\) in order for \(b\) to be a cube, with \(b=t^{3}\) where \(t=c^{2}\). Then we must see when the polynomial
\[g_{a,t}(z)=g_{c}(z)=z^{4}-6c^{2}z^{2}-4c^{4}z-4c^{2}z-3c^{4}\]
has a root. The plane curve \(\{(c,z)\colon g_{c}(z)=0\}\) is irreducible of geometric genus \(1\), and with the help of Magma [8] we find that it is birational to an elliptic curve with Mordell-Weil group of order \(8\). None of these rational points correspond to smooth bielliptic Picard curves, so this second case gives no new examples.
Simple algebra shows that for \((a,b)\) as in Proposition 7.16, the curve \(C_{a,b}\) has affine model
\[2(c^{2}-1)^{2}y^{3}=(x^{2}-1)(x^{2}-c^{2}),\]
recovering the equation given in the introduction.
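One way to carry out this algebra is the substitution \((x,y)\mapsto(4(c^{2}-1)x,\,8(c^{2}-1)^{2}y)\), which is our own choice of scaling and is not spelled out in the text; the following SymPy sketch (illustrative only) verifies that it transforms \(y^{3}=x^{4}+ax^{2}+b\) into the displayed model:

```python
from sympy import symbols, simplify

x, y, c = symbols('x y c')
a = -16*(c**2 + 1)*(c**2 - 1)**2
b = 2**8 * c**2 * (c**2 - 1)**4
X, Y = 4*(c**2 - 1)*x, 8*(c**2 - 1)**2 * y
lhs = Y**3 - (X**4 + a*X**2 + b)
rhs = 256*(c**2 - 1)**4 * (2*(c**2 - 1)**2 * y**3 - (x**2 - 1)*(x**2 - c**2))
print(simplify(lhs - rhs))  # 0
```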
**Remark 7.17**.: Combining Proposition 7.16 and Remark 2.12, we are led to the family of genus two curves
\[C_{t}\colon(t^{2}+1)y^{2}=(2x^{2}+2x-1)((t^{2}-1)^{2}x^{4}+2(t^{2}-1)^{2}x^{3}+ 4t^{2}x-t^{2}).\]
For all but finitely many \(t\in\mathbb{Q}\), the Jacobian \(J=J_{t}=\operatorname{Jac}(C_{t})\) satisfies \(\operatorname{End}^{0}(J_{\overline{\mathbb{Q}}})\simeq B\). For all integers \(1\leq t\leq 10000\) we compute \(J_{t}(\mathbb{Q})_{\operatorname{tors}}\simeq\mathbb{Z}/2\mathbb{Z}\times( \mathbb{Z}/3\mathbb{Z})^{2}\) in Magma [8]. We suspect this holds for all (but finitely many) \(t\in\mathbb{Q}\), since the isogeny of Remark 2.12 should have degree \(2\), but we have not tried to carefully prove this.
### Infinite families
Table 2 summarizes many of the computations of Section 7. For each group \(G\) in Theorem 1.1, the table gives an infinite family of curves \(C_{a,b}\) such that \(G\hookrightarrow P(\mathbb{Q})_{\operatorname{tors}}\). Since the \(j\)-invariant of each family is non-constant, each family contains infinitely many distinct \(\overline{\mathbb{Q}}\)-isomorphism classes, and all but finitely many of these are geometrically simple by Lemma 6.15. For certain groups \(G\) there exist multiple such families, but we list only one. In particular, we do not claim that every Prym \(P\) such that \(G\hookrightarrow P(\mathbb{Q})_{\operatorname{tors}}\) appears in this table.
### \(2\)-torsion in \(A\) using bitangents
Even though it is possible to use the description of \(P[2]\) from §7.3 and bigonal duality to describe \(A[2]\), we give a more intrinsic description of the \(\operatorname{Gal}_{k}\)-module using the bitangents of the quartic curve \(C\). This will be used in §8.3 to rule out rational points of order \(4\) in \(P\).
We first recall the connection between the bitangents on a smooth plane quartic curve over a general field \(k\) (of characteristic not \(2\) nor \(3\)) and \(2\)-torsion points in its Jacobian; see [12, §6] for more details. Let \(X\subset\mathbb{P}_{k}^{2}\) be a smooth plane quartic curve. A bitangent of \(X\) is a line \(\ell\subset\mathbb{P}_{k}^{2}\) which intersects \(X\) with even multiplicity at every point, i.e. \(\ell\cap X=2D\) for some effective divisor \(D\) of \(X\) of degree \(2\). Since \(X\) is canonically embedded, \(2D\sim K_{X}\) and so \(D\) is a theta characteristic. This sets up a bijection between the \(28\) bitangents of \(X_{\bar{k}}\) and the odd theta characteristics of \(X_{\bar{k}}\). If \(D,E\) are two odd theta characteristics of \(X_{\bar{k}}\), then \(D-E\in\operatorname{Jac}_{X}[2](\bar{k})\). Every element of \(\operatorname{Jac}_{X}[2](\bar{k})\) is a sum of points of this form.
We now specialize to our setting where \((C,\gamma)\) is a bielliptic Picard curve over \(k\). Note that the line at infinity is a bitangent with theta characteristic \(2\infty\).
**Lemma 7.18**.: _The action of \(\langle\tau\rangle\) on the \(28\) bitangents of \(C_{\bar{k}}\) has four fixed points \((\)including the line at infinity\()\) and \(12\) orbits of size \(2\)._
Proof.: Using the explicit equation (2.1), \(\tau(x,y)=(-x,y)\). Therefore \(\tau\) fixes a bitangent line (different from the line at infinity) if and only if it is of the form \(y=c\). Such an equation defines a bitangent line if and only if \(x^{4}+ax^{2}+b-c^{3}\) is the square of a polynomial, which happens if and only if \(c^{3}=b-a^{2}/4\), which has \(3\) solutions in \(\bar{k}\).
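The square condition in this proof amounts to completing the square in \(x^{2}\); a SymPy sketch (illustrative only) makes the comparison explicit:

```python
from sympy import symbols, expand

x, a, b, c = symbols('x a b c')
quartic = x**4 + a*x**2 + (b - c**3)      # intersection of y = c with y^3 = x^4 + a x^2 + b
print(expand(quartic - (x**2 + a/2)**2))  # b - c**3 - a**2/4, so a square iff c^3 = b - a^2/4
```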
Write \(p\colon J\to A\) for the projection map, the dual of \(P\hookrightarrow J\). Given any bitangent \(\ell\) of \(C_{\bar{k}}\) with theta characteristic \(D\), write \(x_{\ell}=D-2\infty\) for the associated \(2\)-torsion point in \(J(\bar{k})\).
**Proposition 7.19**.: _If \(\ell\) is a bitangent line fixed by \(\tau\), then \(p(x_{\ell})=0\). If \(\ell\) and \(\ell^{\prime}\) are two distinct bitangent lines of \(C\) not fixed by \(\tau\), then \(p(x_{\ell})=p(x_{\ell^{\prime}})\) if and only if \(\ell^{\prime}=\tau(\ell)\). The map \(\ell\mapsto p(x_{\ell})\) induces a bijection between the \(\langle\tau\rangle\)-orbits of bitangents of \(C_{\bar{k}}\) of size \(2\) and \(A[2](\bar{k})\setminus A[\widehat{\lambda}](\bar{k})\)._
Proof.: We may assume that \(k=\bar{k}\). If \(\ell\) is fixed by \(\tau\), then \(\ell\cap C=2\pi^{-1}(R)\) for some \(R\in E[2]\). Since the morphism \(p\) has kernel \(\pi^{*}(E)\), \(p(x_{\ell})=0\). We claim that if \(\ell\) is not fixed by \(\tau\) then \(p(x_{\ell})\not\in A[\widehat{\lambda}]\). Since \(J\simeq(P\times E)/(P\cap\pi^{*}(E))\), this is equivalent to saying that \(x_{\ell}\notin P[2]\). This follows from the assumption that \(D\not\sim\tau(D)\) and the fact that \(\tau\) acts as \(-1\) on \(P\). Since \(A[2]\setminus A[\widehat{\lambda}]\) and the set of \(\langle\tau\rangle\)-orbits of bitangents of size \(2\) both have size \(12\), the remainder of the proposition follows from Proposition 2.16.
## 8. Classifying rational torsion in Prym varieties
In Section 7 we showed that the finite groups mentioned in Theorem 1.1 all arise as subgroups of Prym surfaces, and in fact infinitely often. In this section we finish the proof of the theorem by showing that no other finite abelian groups arise.
Let \(C=C_{a,b}\) be a bielliptic Picard curve over \(\mathbb{Q}\), with double cover \(\pi\colon C\to E\), Prym \(P\), and dual Prym \(A=P^{\vee}\), as usual.
### Eliminating \(\ell\)-torsion for \(\ell\geq 5\)
We first show that \(P[\ell](\mathbb{Q})=0\) for all primes \(\ell\geq 5\). When \(P\) is geometrically simple, this follows from the more general result [29, Theorem 1.1]. We briefly explain how the techniques of that paper can be adapted to handle the geometrically split case as well and in the process give a simplified version of the proof (which is possible due to the specific nature of our family).
Fix a prime \(\ell\geq 5\). Theorem 6.5 shows that there exists an embedding \(\iota\colon\mathcal{O}\to\operatorname{End}(P_{\bar{\mathbb{Q}}})\) whose image is \(\operatorname{Gal}_{\mathbb{Q}}\)-stable; we will fix such an embedding in what follows. Let \(\mathcal{O}_{\ell}:=\mathcal{O}\otimes_{\mathbb{Z}}\mathbb{F}_{\ell}\). Since \(\mathcal{O}\) has discriminant \(6\) and \(\mathcal{O}\) is maximal at \(\ell\), there is an isomorphism \(\mathcal{O}_{\ell}\simeq\operatorname{Mat}_{2}(\mathbb{F}_{\ell})\). Let \(M:=P[\ell](\bar{\mathbb{Q}})\), a free left \(\mathcal{O}_{\ell}\)-module of rank \(1\). Both \(\mathcal{O}_{\ell}\) and \(M\) are (right) \(\operatorname{Gal}_{\mathbb{Q}}\)-modules, and the action of \(\mathcal{O}_{\ell}\) on \(M\) is \(\operatorname{Gal}_{\mathbb{Q}}\)-equivariant.
**Proposition 8.1**.: _If \(P[\ell](\mathbb{Q})\neq 0\), then \(\mathcal{O}\subset\operatorname{End}(P_{\mathbb{Q}(\omega)})\)._
Proof.: The proof of [29, Theorem 6.0.1] carries over essentially without change; we briefly sketch the details. Suppose that \(m\in M^{\operatorname{Gal}_{\mathbb{Q}}}\) is nonzero. An argument identical to [29, Lemma 6.2.3] shows that \(\mathcal{O}_{\ell}\cdot m\subset M\) has order \(\ell^{2}\). Let \(S:=\mathbb{Z}[\omega]\subset\mathcal{O}\) and \(S_{\ell}:=S\otimes_{\mathbb{Z}}\mathbb{F}_{\ell}\). Then \(S\) is a \(\operatorname{Gal}_{\mathbb{Q}}\)-stable subring of \(\mathcal{O}\), and the induced action of \(\operatorname{Gal}_{\mathbb{Q}}\) on \(S\) factors through \(\operatorname{Gal}(\mathbb{Q}(\omega)/\mathbb{Q})\). Since \(S_{\ell}\) has no \(\operatorname{Gal}_{\mathbb{Q}}\)-stable proper nonzero ideals, the map \(S_{\ell}\to\mathcal{O}_{\ell}\cdot m\) sending \(x\mapsto x\cdot m\) is injective. For cardinality reasons, it is also surjective, so \(S_{\ell}\cdot m=\mathcal{O}_{\ell}\cdot m\). The (purely linear-algebraic) [29, Lemma 6.2.6] then shows that \(\operatorname{Gal}_{\mathbb{Q}(\omega)}\) acts trivially on \(\mathcal{O}_{\ell}\). Since \(\ell\geq 5\), by [29, Lemma 3.5.7] this implies that \(\operatorname{Gal}_{\mathbb{Q}(\omega)}\) acts trivially on \(\mathcal{O}\). In other words, \(\mathcal{O}\subset\operatorname{End}(P_{\mathbb{Q}(\omega)})\), as desired.
The quaternionic multiplication places strong restrictions on the reduction type of \(P\) at a prime \(p\). First of all, \(P\) has potentially good reduction at \(p\), and acquires good reduction over a totally ramified extension \(K\) of \(\mathbb{Q}_{p}\) [29, Lemma 4.1.2]. The special fiber of the Neron model of \(P_{K}\) is an abelian surface over \(\mathbb{F}_{p}\); we will denote this abelian surface by \(P_{\mathbb{F}_{p}}\) and (by slight abuse of language) call it the reduction of \(P\) mod \(p\). (Its isomorphism class might depend on the choice of \(K\).) The reduction \(P_{\mathbb{F}_{p}}\) is geometrically isogenous to the square of an elliptic curve. If \(p\) divides the discriminant of \(\mathcal{O}\) (that is, if \(p\mid 6\)), then \(P_{\mathbb{F}_{p}}\) is supersingular [25, §2]. Additionally, the prime-to-\(p\) torsion subgroup of \(P(\mathbb{Q})\) embeds in \(P_{\mathbb{F}_{p}}(\mathbb{F}_{p})\) for every prime \(p\) by formal group considerations. We use these remarks to prove the following:
**Proposition 8.2**.: _Let \(\ell\geq 5\) be a prime. Then \(P[\ell](\mathbb{Q})=0\)._
Proof.: Suppose instead that \(P[\ell](\mathbb{Q})\neq 0\). Then \(\ell\) divides the order of \(P_{\mathbb{F}_{3}}(\mathbb{F}_{3})\). Moreover, \(P_{\mathbb{F}_{3}}\) is supersingular. Proposition 8.1 shows that \(\mathcal{O}\subset\operatorname{End}(P_{\mathbb{Q}(\omega)})\), so the base change \(P_{\mathbb{Q}(\omega)}\) has quaternionic multiplication over \(\mathbb{Q}(\omega)\). Since \(3\) is ramified in \(\mathbb{Q}(\omega)\), we must also have \(\mathcal{O}\subset\operatorname{End}(P_{\mathbb{F}_{3}})\). By Honda-Tate theory (conveniently recorded in the results of the following LMFDB [32] search), we see that the only option is \(\ell=7\).
We conclude by excluding the case \(\ell=7\). Let \(L/\mathbb{Q}\) be the endomorphism field, namely the smallest field extension with the property that \(\operatorname{End}(P_{L})=\operatorname{End}(P_{\overline{\mathbb{Q}}})\). Since \(\operatorname{Gal}(\mathbb{Q}(\omega)/\mathbb{Q})\) acts nontrivially on the subring \(\mathbb{Z}[\omega]\subset\mathcal{O}\subset\operatorname{End}(P_{\mathbb{Q}( \omega)})\), we have \(\mathbb{Q}(\omega)\subset L\). A result of Silverberg [45, Theorem 4.2] shows that \(L\) is unramified at all places of good reduction of \(P\). Since \(3\) is ramified in \(\mathbb{Q}(\omega)\subset L\), it follows that \(P\) has bad reduction at \(3\). By [29, Proposition 4.1.3(b)] and Lemma 6.8, the reduction of \(P\) at \(3\) is totally additive; in other words the identity component of the special fibre of the Neron model of \(P\) over \(\mathbb{Z}_{3}\) is unipotent. A result of Lorenzini [33, Corollary 3.25] then shows that \(7\nmid|P(\mathbb{Q}_{3})_{\operatorname{tors}}|\), a contradiction.
**Remark 8.3**.: The elliptic curve \(X\colon y^{2}=x^{3}+48(\omega+5)\) has a \(\mathbb{Q}(\omega)\)-rational point of order \(7\). It follows that the decomposable \(\mu_{3}\)-abelian surface \(X\times X\) has (two independent) \(\mathbb{Q}(\omega)\)-rational points of order \(7\). This shows that the argument above is sharp in a certain sense.
### Eliminating small groups
We rule out certain small groups of order \(2^{i}3^{j}\) from appearing as subgroups of \(P(\mathbb{Q})\), using arguments that are specific to our family.
**Proposition 8.4**.: \(\mathbb{Z}/9\mathbb{Z}\not\subset P(\mathbb{Q})\) _and \(P(\mathbb{Q})[3]\subset(\mathbb{Z}/3\mathbb{Z})^{2}\)._
Proof.: Using the notation and remarks made before Proposition 8.2, the prime-to-\(2\) torsion subgroup of \(P(\mathbb{Q})\) injects into \(P_{\mathbb{F}_{2}}(\mathbb{F}_{2})\) and \(P_{\mathbb{F}_{2}}\) is a supersingular abelian surface. Consulting the LMFDB [32], we see that the \(3\)-part of \(P_{\mathbb{F}_{2}}(\mathbb{F}_{2})\) is of order at most \(9\), and there is a unique isogeny class of supersingular abelian surfaces \(B/\mathbb{F}_{2}\) with \(9\mid|B(\mathbb{F}_{2})|\). This isogeny class is the square of a supersingular elliptic curve \(E\) over \(\mathbb{F}_{2}\) and \(E(\mathbb{F}_{2})\simeq\mathbb{Z}/3\mathbb{Z}\). By [29, Lemma 7.2.1] (and its proof) and the fact that \(\mathbb{Z}[\sqrt{-2}]\) is a PID, every member of this isogeny class is in fact isomorphic to \(E^{2}\)
so \(B(\mathbb{F}_{2})\simeq(\mathbb{Z}/3\mathbb{Z})^{2}\). We conclude that the \(3\)-part of \(P_{\mathbb{F}_{2}}(\mathbb{F}_{2})\), hence also the \(3\)-part of \(P(\mathbb{Q})_{\rm tors}\), is a subgroup of \((\mathbb{Z}/3\mathbb{Z})^{2}\).
**Proposition 8.5**.: \((\mathbb{Z}/2\mathbb{Z})^{2}\times\mathbb{Z}/3\mathbb{Z}\not\subset P(\mathbb{Q})\)_._
Proof.: Suppose that \((\mathbb{Z}/2\mathbb{Z})^{2}\times(\mathbb{Z}/3\mathbb{Z})\subset P(\mathbb{ Q})\). By Proposition 7.6, we have \(j=-\left(\frac{4w\left(w^{3}-1\right)}{(8w^{3}+1)}\right)^{3}\) for some \(w\in\mathbb{Q}\). Since \(\mathbb{Z}/3\mathbb{Z}\subset P(\mathbb{Q})\), the exact sequence (7.3) shows that \(P[\mathfrak{p}](\mathbb{Q})\neq 0\) or \(P_{-27}[\mathfrak{p}](\mathbb{Q})\neq 0\). Proposition 7.13 then shows one of \(j\) or \(1/j\) is of the form \(-4t/(t-1)^{2}\) for some \(t\in\mathbb{Q}\). Replacing \(j\) by \(1/j\) if necessary, using Proposition 7.6, Lemma 7.7 and bigonal duality, we may assume \(j=-4t/(t-1)^{2}\) for some \(t\in\mathbb{Q}\). This is equivalent to
\[(4j-2)^{2}-4=\frac{1024w^{3}\left(w^{3}-1\right)^{3}\left(8w^{6}+20w^{3}-1 \right)^{2}}{\left(8w^{3}+1\right)^{6}} \tag{8.1}\]
being a square in \(\mathbb{Q}\). Since \(w=0,1,-1/2\) leads to \(j=0,\infty\), we may assume \(w\) is distinct from these values. Then this expression is a square if and only if \(z^{2}=w(w^{3}-1)\) for some \(z\in\mathbb{Q}\). This equation defines the affine part \(C^{\circ}\) of a genus \(1\) curve \(C/\mathbb{Q}\) which is a double cover of \(\mathbb{P}^{1}\). The point \((w,z)=(0,0)\) endows \(C\) with the structure of an elliptic curve with Weierstrass model \(y^{2}=x^{3}+1\). (This can be seen using the invariant theory of binary quartics [2, §2.1].) This elliptic curve has Mordell-Weil group \(\mathbb{Z}/6\mathbb{Z}\), so \(C^{\circ}(\mathbb{Q})=\{(0,0),(1,0),(-1/2,\pm 3/4)\}\). We conclude that \(w\in\{0,1,-1/2\}\), but we already observed these all correspond to \(j=0,\infty\).
### Eliminating points of order \(4\)
We rule out the existence of order \(4\) points in \(P(\mathbb{Q})\) using the description of \(A[2]\) via bitangents from §7.5.
**Proposition 8.6**.: \(\mathbb{Z}/4\mathbb{Z}\not\subset P(\mathbb{Q})\)_._
Proof.: For the sake of contradiction, let \(x\in P(\mathbb{Q})\) be a point of order \(4\). Let \(A=P^{\vee}\) be the dual Prym. We have \(\hat{\lambda}\circ\lambda=[2]\), so either \(\lambda(x)\in A[2]\) or \(y=\lambda(x)\in A(\mathbb{Q})\) has order \(4\) and \(\hat{\lambda}(y)\in P[2]\). After possibly replacing \(P\) by \(A\) and applying bigonal duality (Proposition 2.19), we may therefore assume that \(\lambda(x)\in A[2](\mathbb{Q})\). Since \(x\) has order \(4\), we have \(\lambda(x)\not\in A[\hat{\lambda}]\). Using Proposition 7.19 and its notation, there exists a unique \(\tau\)-orbit of odd theta characteristics \(\{D,D^{\prime}\}\) such that \(p(D-2\infty)=\lambda(x)\). This orbit is defined over \(\mathbb{Q}\) but \(D\) and \(D^{\prime}\) may only be defined over a quadratic extension.
Recall that \(\lambda\) is the composition \(P\hookrightarrow J\xrightarrow{p}A\) and that \(p\) has kernel \(\pi^{*}(E)\), where \(\pi\colon C\to E\) is the double cover. Therefore \(x-(D-2\infty)\in\pi^{*}(E)\). Since \(\pi^{*}\) is injective, we may write
\[x-(D-2\infty)\sim\pi^{*}(y-O_{E}) \tag{8.2}\]
for some unique \(y\in E(\overline{\mathbb{Q}})\). The left hand side has order \(4\), so \(y\) has order \(4\) too.
We claim that the subgroup \(\{0,y,2y,3y\}\) generated by \(y\) is \(\operatorname{Gal}_{\mathbb{Q}}\)-stable. Indeed, let \(\sigma\in\operatorname{Gal}_{\mathbb{Q}}\), so \(\sigma(D)\in\{D,\tau(D)\}\). If \(\sigma(D)=D\), then \(\sigma(y)=y\) by (8.2). Suppose \(\sigma(D)=\tau(D)\). Using that \(\tau(x)=-x\) and applying \(\tau\) to (8.2), we see that \(-x-(\tau(D)-2\infty)=\tau(\pi^{*}(y-O_{E}))=\pi^{*}(y-O_{E})\). Applying \(\sigma\) to (8.2) shows that \(x-(\tau(D)-2\infty)=\pi^{*}(\sigma(y)-O_{E})\). Adding the two equations from the last two sentences shows that \(\sigma(y)=-y=3y\), so \(\{0,y,2y,3y\}\) is indeed \(\operatorname{Gal}_{\mathbb{Q}}\)-stable.
But \(E\) is an elliptic curve with a faithful \(\mu_{3}\)-action, hence has CM by \(\mathbb{Z}[\omega]\), and such curves have no rational cyclic \(4\)-isogenies using CM theory [9, Theorem 6.18(c)], contradiction!
### Proof of Theorem 1.1
The groups appearing in the theorem are realized infinitely often in Table 2, so it suffices to prove that \(G:=P(\mathbb{Q})_{\mathrm{tors}}\) is isomorphic to one of these. Proposition 8.2 shows that the order of \(G\) is of the form \(2^{i}3^{j}\). Propositions 8.4 and 8.6 show that \(G\) is \(6\)-torsion, of the form \((\mathbb{Z}/2\mathbb{Z})^{m}\times(\mathbb{Z}/3\mathbb{Z})^{n}\) for some \(m\geq 0\) and \(n\leq 2\). Equation (7.1) and Lemma 7.2 show that \(m\leq 2\) as well. Proposition 8.5 shows that if \(m=2\) then \(n=0\). Therefore \(G\) must be one of the groups appearing in Theorem 1.1.
## 9. Classifying rational torsion in Pryms of \(\mathrm{GL}_{2}\)-type
Combining our knowledge of rational torsion subgroups and rational endomorphism rings of Pryms \(P=P_{a,b}\), we classify the finite \(\mathrm{End}(P)\)-modules that arise as \(P(\mathbb{Q})_{\mathrm{tors}}\) for Pryms \(P\) of \(\mathrm{GL}_{2}\)-type.
Define the rational functions
\[a_{2}(t):=\frac{-4(t^{2}+1)}{t^{2}(t^{2}-1)^{2}}\quad\text{and}\quad b_{2}(t): =\frac{16}{t^{2}(t^{2}-1)^{4}}.\]
**Proposition 9.1**.: _For all but finitely many \(t\in\mathbb{Q}\), the Prym \(P=P_{a_{2}(t),b_{2}(t)}\) is geometrically simple and satisfies \(\mathrm{End}(P)\simeq\mathbb{Z}[\sqrt{2}]\) and \((\mathbb{Z}/3\mathbb{Z})^{2}\subset P[\mathfrak{p}](\mathbb{Q})\). Conversely, if \(P_{a,b}/\mathbb{Q}\) is a Prym with these three properties then \((a,b)=(a_{2}(t)\lambda^{6},b_{2}(t)\lambda^{12})\) for some \(t,\lambda\in\mathbb{Q}\)._
Proof.: First note that by Lemma 6.15, the surface \(P_{a_{2}(t),b_{2}(t)}\) will be non-CM (and hence geometrically simple) for all but finitely many specializations of \(t\), so for such \(t\) the condition \(\mathbb{Z}[\sqrt{2}]\simeq\mathrm{End}(P)\) is equivalent to \(\mathbb{Z}[\sqrt{2}]\subset\mathrm{End}(P)\) by Corollary 6.13. By Proposition 7.14 and Corollary 6.10, \((\mathbb{Z}/3\mathbb{Z})^{2}\subset P[\mathfrak{p}]\) and \(\mathbb{Z}[\sqrt{2}]\subset\mathrm{End}(P)\) if and only if \((a,b)=(-(t^{2}+1)d^{2},t^{2}d^{4})\) for some \(t,d\in k\) and \(16b(a^{2}-4b)\in\mathbb{Q}^{\times 6}\). The latter is equivalent to \(d=16t^{2}(t^{2}-1)^{2}\lambda^{3}\) for some \(\lambda\in\mathbb{Q}^{\times}\), proving the proposition.
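Both defining properties of this family are straightforward to confirm symbolically; a SymPy sketch (illustrative only, with an explicit factorization of \(f\) written out by hand) is:

```python
from sympy import symbols, simplify

t, x = symbols('t x')
a2 = -4*(t**2 + 1)/(t**2*(t**2 - 1)**2)
b2 = 16/(t**2*(t**2 - 1)**4)
d = 2/(t*(t**2 - 1))
f = x**4 + a2*x**2 + b2
print(simplify(f - (x**2 - d**2)*(x**2 - t**2*d**2)))          # 0, so f splits completely
print(simplify(16*b2*(a2**2 - 4*b2) - (4/(t*(t**2 - 1)))**6))  # 0, a sixth power
```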
According to [29, Theorem 1.4], this is the largest torsion subgroup that can arise among maximal PQM abelian surfaces over \(\mathbb{Q}\) of \(\mathrm{GL}_{2}\)-type.
Note that we cannot simultaneously have \(\mathrm{End}(P)\simeq\mathbb{Z}[\sqrt{2}]\) and \(P[3](\mathbb{Q})\simeq\mathbb{Z}/3\mathbb{Z}\) since \(3\) is inert in \(\mathbb{Z}[\sqrt{2}]\). However, using our family of \(\mathbb{Z}[\sqrt{6}]\)-RM Pryms, we give examples of Pryms of \(\mathrm{GL}_{2}\)-type with \(P[3](\mathbb{Q})\simeq\mathbb{Z}/3\mathbb{Z}\). For this we define
\[a_{6}(t)=36\frac{3t^{2}-1}{t^{2}(3t^{2}+1)^{2}}\quad\text{and}\quad b_{6}(t) =-\frac{3888}{t^{2}(3t^{2}+1)^{4}}\]
**Proposition 9.2**.: _For all but finitely many values of \(t\in\mathbb{Q}\) the surface \(P_{a_{6}(t),b_{6}(t)}\) is geometrically simple and satisfies \(\mathrm{End}(P)\simeq\mathbb{Z}[\sqrt{6}]\) and \(\mathbb{Z}/3\mathbb{Z}\subset P[\mathfrak{p}](\mathbb{Q})\). Conversely, if \(P_{a,b}/\mathbb{Q}\) is a Prym with these three properties then either \(P_{a,b}\) or \(A_{a,b}\) is isomorphic to \(P_{a_{6}(t),b_{6}(t)}\) for some \(t\in\mathbb{Q}\)._
Proof.: The proof is as before, this time combining Corollary 6.11 with the two cases of Proposition 7.13. Notice that the properties of having \(\mathbb{Z}[\sqrt{6}]\)-multiplication and a non-trivial rational \(\mathfrak{p}\)-torsion point are preserved by duality, and the two families of Proposition 7.13 are interchanged by duality as well. Thus, it is enough to consider the first family, which leads to the functions \(a_{6}(t)\) and \(b_{6}(t)\).
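The corresponding checks for this family (a rational root of \(f\), giving the \(\mathfrak{p}\)-torsion point, and the sextic condition of Corollary 6.11) can be done the same way; a SymPy sketch (illustrative only, with the rational root written out explicitly) is:

```python
from sympy import symbols, simplify

t, x = symbols('t x')
a6 = 36*(3*t**2 - 1)/(t**2*(3*t**2 + 1)**2)
b6 = -3888/(t**2*(3*t**2 + 1)**4)
x0 = 6/(t*(3*t**2 + 1))
f = x**4 + a6*x**2 + b6
print(simplify(f.subs(x, x0)))                                      # 0, a rational root of f
print(simplify(-432*b6*(a6**2 - 4*b6) - (36/(t*(3*t**2 + 1)))**6))  # 0, a sixth power
```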
**Question 2**.: The minimal conductor of a geometrically simple abelian surface \(A/\mathbb{Q}\) of \(\mathrm{GL}_{2}\)-type with potential quaternionic multiplication is \(3^{10}\), corresponding to a Galois orbit of weight two eigenforms \(f\in S_{2}(\Gamma_{0}(243))\) with coefficients in \(\mathbb{Z}[\sqrt{6}]\)[19, Table 1]. One can show that the corresponding optimal quotient \(A\) of \(J_{0}(243)\) has a rational point of order \(3\), a \((1,2)\)-polarization, and endomorphism field \(L=\mathbb{Q}(\omega)\). The same is also true for its dual. Using the results of this paper, it is not hard to show that either \(A\) or its dual is isomorphic to \(P_{a_{6}(t),b_{6}(t)}\) for some value of \(t\). Which value of \(t\) is it?
Proof of Theorem 1.5.: We have \(\operatorname{End}(P)\simeq\mathbb{Z}[\sqrt{D}]\) for some \(D\in\{2,6\}\) by Corollary 6.13. First assume \(D=2\), so that \(\mathfrak{a}_{2}=(\sqrt{2})\) and \(\mathfrak{a}_{3}=(3)\). Corollary 6.10 gives a parameterization (in terms of \(t\) and \(d\)) of those \(P\) with \(\operatorname{End}(P)\simeq\mathbb{Z}[\sqrt{2}]\). Since \(A\simeq P\) in this case (again by Corollary 6.10), we have \(P_{a,b}[2](\mathbb{Q})\neq 0\) if and only if \(b\) is a cube. Thus, choosing \(t\) to be a cube guarantees that \(\mathbb{Z}[\sqrt{2}]/\mathfrak{a}_{2}\simeq\mathbb{Z}/2\mathbb{Z}\subset P[2](\mathbb{Q})\). Proposition 9.1 gives a one-parameter family of examples with \(\mathbb{Z}[\sqrt{2}]/\mathfrak{a}_{3}\simeq\mathbb{F}_{9}\subset P(\mathbb{Q})\). By Theorem 1.1, it remains to rule out \((\mathbb{Z}/2\mathbb{Z})^{2}\) and \(\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/6\mathbb{Z}\) as subgroups of \(P(\mathbb{Q})\). If \((\mathbb{Z}/2\mathbb{Z})^{2}\subset P(\mathbb{Q})\) then \(P=P_{a,b}\) must be a specialization of the family in Proposition 7.6, and moreover \(16b(w,d)(a(w,d)^{2}-4b(w,d))\) must be a sixth power. The latter is automatically a cube, so it is enough to show that it cannot be a square. Writing it out explicitly, we must show that the affine curve \(Y\colon y^{2}=w(w^{3}-8)(w^{3}+1)\) has no rational points with \(y\neq 0\). With the help of Magma [8] we find that the smooth projectivization \(\overline{Y}\) of \(Y\) is a double cover of the elliptic curve \(y^{2}=x^{3}+6x-7\), which has four rational points. Checking pre-images, we find that \(Y(\mathbb{Q})\) consists of the three rational points with \(y=0\), as desired. Finally, we must rule out \(\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/6\mathbb{Z}\subset P(\mathbb{Q})\). If \(P\) has this property, then since \(P[\mathfrak{p}](\mathbb{Q})\) and \(P_{-27}[\mathfrak{p}](\mathbb{Q})\) are \(\mathbb{F}_{9}\)-vector spaces, one of them is isomorphic to \(\mathbb{F}_{9}\) by (7.3). Thus, exchanging \(P\) with \(P_{-27}\) if necessary (which is allowed since \(P[2](\mathbb{Q})\simeq P_{-27}[2](\mathbb{Q})\)), we may assume \(P\) is a specialization of the family in Proposition 9.1, and hence it is enough to show that \(b_{2}(t)\) is never a cube. It is then enough to show that there are no \(\mathbb{Q}\)-rational points on the curve \(Y^{\prime}\colon 4t^{2}(t^{2}-1)=y^{3}\) with \(y\neq 0\) and \(t\notin\{0,\pm 1\}\). Note the double cover \(Y^{\prime}\to X\), where \(X\colon 4t(t-1)=y^{3}\) is an elliptic curve (minus the origin) such that \(X(\mathbb{Q})=\{(0,0),(1,0)\}\). It follows that \(Y^{\prime}\) has no interesting rational points, finishing the proof in the case \(D=2\).
Next consider \(D=6\), so that \(\mathfrak{a}_{2}=(2-\sqrt{6})\) and \(\mathfrak{a}_{3}=(3+\sqrt{6})\). Corollary 6.11 then gives a parameterization (in terms of \(t\) and \(d\)) of those \(P\) with \(\operatorname{End}(P)\simeq\mathbb{Z}[\sqrt{6}]\). Since \(A\simeq P_{-27}\) in this case, we have \(P_{a,b}[2](\mathbb{Q})\neq 0\) if and only if \(b\) is a cube. Choosing \(t\) so that \(3t^{2}\) is a cube guarantees that \(\mathbb{Z}[\sqrt{6}]/\mathfrak{a}_{2}\simeq\mathbb{Z}/2\mathbb{Z}\subset P[2](\mathbb{Q})\). Proposition 9.2 gives a one-parameter family of examples with \(\mathbb{Z}[\sqrt{6}]/\mathfrak{a}_{3}\simeq\mathbb{F}_{3}\subset P(\mathbb{Q})\). By Theorem 1.1, it remains to rule out \((\mathbb{Z}/2\mathbb{Z})^{2}\) and \(\mathbb{Z}/6\mathbb{Z}\) as subgroups of \(P(\mathbb{Q})\). The argument for \((\mathbb{Z}/2\mathbb{Z})^{2}\) is exactly as before so we omit it.
It remains to rule out \(\mathbb{Z}/6\mathbb{Z}\subset P(\mathbb{Q})\). Let \(I\subset\mathcal{O}\) be the maximal two-sided ideal in \(\mathcal{O}\) above \(3\) (which is unique since \(\mathcal{O}\) is ramified at \(3\)), so that \(\mathcal{O}/I\simeq\mathbb{F}_{9}\). Note that the completion \(I\otimes\mathbb{Z}_{3}\subset\mathcal{O}\otimes\mathbb{Z}_{3}\) is generated by any element of \(\mathcal{O}\otimes\mathbb{Z}_{3}\) of minimal positive valuation. Since \(\mathfrak{a}_{3}\subset\mathbb{Z}[\sqrt{6}]\) is generated by the element \(3+\sqrt{6}\) of norm \(3\), and similarly \(\mathfrak{p}\subset\mathbb{Z}[\omega]\) is generated by the element \(1-\zeta\) of norm \(3\), we have \(A[\mathfrak{a}_{3}]=A[I]=A[\mathfrak{p}]\), where \(A[I]\) is the subgroup of \(A\) killed by \(\iota(I)\), where \(\iota\colon\mathcal{O}\to\operatorname{End}(P_{\bar{\mathbb{Q}}})\) is a choice of embedding satisfying the conclusion of Theorem 6.5. There is also an exact sequence
\[0\to P[\mathfrak{a}_{3}]\to P[3]\to P[\mathfrak{a}_{3}]\to 0\]
where the surjection is multiplication by \(3+\sqrt{6}\). Thus, if \(P[3](\mathbb{Q})\neq 0\), we must also have \(P[\mathfrak{p}](\mathbb{Q})=P[\mathfrak{a}_{3}](\mathbb{Q})\neq 0\). It follows that \(P\) is a specialization of the family in Proposition 9.2. So it suffices to show that neither \(b_{6}(t)\) nor \(16(a_{6}(t)^{2}-4b_{6}(t))\) is ever a cube. These two quantities turn out to be inverses of each other modulo cubes, so it is enough to show \(b_{6}(t)\) is never a cube. More precisely, it is enough to show that the curve \(Y^{\prime}\colon y^{3}=12t^{2}(3t^{2}+1)\) has no rational points with \(y\neq 0\). Using the double cover \(Y^{\prime}\to X\) where \(X\colon y^{3}=12t(3t+1)\), which is (the affine part of) an elliptic curve \(\overline{X}\) with \(\overline{X}(\mathbb{Q})\simeq\mathbb{Z}/6\mathbb{Z}\), we check that \(Y^{\prime}(\mathbb{Q})=\{(0,0)\}\), which completes the proof.
|
2306.12054
|
A Reliable and Interpretable Framework of Multi-view Learning for Liver
Fibrosis Staging
|
Staging of liver fibrosis is important in the diagnosis and treatment
planning of patients suffering from liver diseases. Current deep learning-based
methods using abdominal magnetic resonance imaging (MRI) usually take a
sub-region of the liver as an input, which nevertheless could miss critical
information. To explore richer representations, we formulate this task as a
multi-view learning problem and employ multiple sub-regions of the liver.
Previously, features or predictions are usually combined in an implicit manner,
and uncertainty-aware methods have been proposed. However, these methods could
be challenged to capture cross-view representations, which can be important in
the accurate prediction of staging. Therefore, we propose a reliable multi-view
learning method with interpretable combination rules, which can model global
representations to improve the accuracy of predictions. Specifically, the
proposed method estimates uncertainties based on subjective logic to improve
reliability, and an explicit combination rule is applied based on
Dempster-Shafer's evidence theory with good power of interpretability.
Moreover, a data-efficient transformer is introduced to capture representations
in the global view. Results evaluated on enhanced MRI data show that our method
delivers superior performance over existing multi-view learning methods.
|
Zheyao Gao, Yuanye Liu, Fuping Wu, NanNan Shi, Yuxin Shi, Xiahai Zhuang
|
2023-06-21T06:53:51Z
|
http://arxiv.org/abs/2306.12054v1
|
# A Reliable and Interpretable Framework of Multi-view Learning for Liver Fibrosis Staging
###### Abstract
Staging of liver fibrosis is important in the diagnosis and treatment planning of patients suffering from liver diseases. Current deep learning-based methods using abdominal magnetic resonance imaging (MRI) usually take a sub-region of the liver as an input, which nevertheless could miss critical information. To explore richer representations, we formulate this task as a multi-view learning problem and employ multiple sub-regions of the liver. Previously, features or predictions are usually combined in an implicit manner, and uncertainty-aware methods have been proposed. However, these methods could be challenged to capture cross-view representations, which can be important in the accurate prediction of staging. Therefore, we propose a reliable multi-view learning method with interpretable combination rules, which can model global representations to improve the accuracy of predictions. Specifically, the proposed method estimates uncertainties based on subjective logic to improve reliability, and an explicit combination rule is applied based on Dempster-Shafer's evidence theory with good power of interpretability. Moreover, a data-efficient transformer is introduced to capture representations in the global view. Results evaluated on enhanced MRI data show that our method delivers superior performance over existing multi-view learning methods.
Keywords:Liver fibrosis Multi-view learning Uncertainty.
## 1 Introduction
Viral or metabolic chronic liver diseases that cause liver fibrosis impose great challenges on global health. Accurate staging for the severity of liver fibrosis is essential in the diagnosis of various liver diseases. Current deep learning-based methods [26, 27] mainly use abdominal MRI and computed tomography (CT) data for liver fibrosis staging. Usually, a square sub-region of the liver instead of the whole image is cropped as input features, since the shape of the liver
is irregular and unrelated anatomies in the abdominal image could disturb the training of deep learning models. To automatically extract the region of interest (ROI), a recent work [8] proposes to use slide windows to crop multiple image patches around the centroid of the liver for data augmentation. However, it only uses one patch as input at each time, which only captures a sub-view of the liver. To exploit informative features across the whole liver, we formulate this task as a multi-view learning problem and consider each patch as a view. The pipeline for view extraction is shown in Fig. 1(a). A square region of interest (ROI) is cropped based on the segmentation of the foreground. Then nine sub-views of the liver are extracted in the ROI through overlapped sliding windows.
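To make this pipeline concrete, the following is a minimal numpy sketch of the view extraction described above; the function name, the synthetic image and mask, and the specific sizes (a 160-pixel ROI, 96-pixel windows, stride 32, which yield nine overlapping views) are illustrative choices rather than the authors' released code.

```python
import numpy as np

def extract_views(image, liver_mask, roi_size=160, window=96, stride=32):
    """Crop a square ROI around the liver centroid and slide overlapping
    windows over it to obtain the multi-view patches (illustrative sketch;
    liver_mask is assumed to be a binary foreground segmentation)."""
    ys, xs = np.nonzero(liver_mask)
    cy, cx = int(ys.mean()), int(xs.mean())           # liver centroid

    half = roi_size // 2                               # square ROI, clipped to bounds
    y0 = int(np.clip(cy - half, 0, image.shape[0] - roi_size))
    x0 = int(np.clip(cx - half, 0, image.shape[1] - roi_size))
    roi = image[y0:y0 + roi_size, x0:x0 + roi_size]

    views = []                                         # overlapping sliding windows
    for dy in range(0, roi_size - window + 1, stride):
        for dx in range(0, roi_size - window + 1, stride):
            views.append(roi[dy:dy + window, dx:dx + window])
    return roi, views                                  # the ROI itself serves as the global view

img = np.random.randn(256, 256).astype(np.float32)    # synthetic image and mask
mask = np.zeros((256, 256), dtype=bool)
mask[80:200, 60:210] = True
roi, views = extract_views(img, mask)
print(roi.shape, len(views), views[0].shape)           # (160, 160) 9 (96, 96)
```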
The aim of multi-view learning is to exploit complementary information from multiple features [25]. The central problem is how to integrate features from multiple views properly. In addition to the naive method that concatenates features at the input level [5], feature-level fusion strategies seek a common representation between different views through canonical correlation analysis [12, 23] or maximizing the mutual information between different views using contrastive learning [1, 22]. In terms of decision-level fusion, the widely used methods are decision averaging [18], decision voting [14], and attention-based decision fusion [9]. However, in the methods above, the weighting of multi-view features is either equal or learned implicitly through model training, which undermines the interpretability of the decision-making process. Besides, they are not capable of quantifying uncertainties, which could be non-trustworthy in healthcare applications.
To enhance the interpretability and reliability of multi-view learning methods, recent works have proposed uncertainty-aware decision-level fusion strategies. Typically, they first estimate uncertainties through Bayesian methods such as Monte-Carlo dropout [20], variational inference [19], ensemble methods [4], and evidential learning [17]. Then, the predictions from each view are aggregated through explicit uncertainty-aware combination rules [7, 21], as logic rules are commonly acknowledged to be interpretable in a complex model [28].
Figure 1: (a) The pipeline to extract sub-views of the liver. First, the foreground is extracted using intensity-based segmentation. Based on the segmentation, a square region of interest (ROI) centered at the centroid of the liver is cropped. Then overlapped sliding windows are used in the ROI to obtain nine sub-views of the liver. (b) The road map of this work.
However, the predictions before the combination are made based on each independent view. Cross-view features are not captured to support the final prediction. In our task, global features could also be informative in the staging of liver fibrosis.
In this work, we propose an uncertainty-aware multi-view learning method with an interpretable fusion strategy of liver fibrosis staging, which captures both global features across views and local features in each independent view. The road map for this work is shown in Fig. 1(b). The uncertainty of each view is estimated through the evidential network and subjective logic to improve reliability. Based on the uncertainties, we apply an explicit combination rule according to Dempster-Shafer's evidence theory to obtain the final prediction, which improves explainability. Moreover, we incorporate an additional global view to model the cross-view representation through the data-efficient transformer.
Our contributions are threefold. First, we are the first to formulate liver fibrosis staging as a multi-view learning problem and propose an uncertainty-aware framework with an interpretable fusion strategy based on Dempster-Shafer evidence theory. Second, we propose to incorporate a global representation into the multi-view learning framework through the data-efficient transformer network. Third, we evaluate the proposed framework on enhanced liver MRI data. The results show that our method outperforms existing multi-view learning methods and yields lower calibration errors than other uncertainty estimation methods.
## 2 Methods
The aim of our method is to derive a distribution of class probabilities with uncertainty based on multiple views of a liver image. As shown in Fig. 2, our
Figure 2: The left side shows the main framework. Multi-view images are first encoded as evidence vectors by evidential networks. For each view, an opinion with uncertainty \(u\) is derived from evidence, under the guidance of subjective logic. Finally, the opinions are combined based on an explicit rule to derive the overall opinion, which can be converted to the distribution of classification probabilities. The right side illustrates the SPT and LSA modules in the data-efficient transformer that serves as the evidential network for the global view.
framework mainly consists of three parts, _i.e._, the evidential networks, subjective logic, and the combination rule. The evidential networks encode the local views and the whole ROI (taken as the global view) into evidence vectors \(\mathbf{e}\). For local views, the networks are implemented with a convolutional structure, while for the global view a data-efficient vision transformer with shifted patch tokenization (SPT) and locality self-attention (LSA) is applied. Subjective logic serves as the principle that transforms the vector \(\mathbf{e}\) into the parameter \(\mathbf{\alpha}\) of the Dirichlet distribution of classification predictions and the opinion \(\mathbf{D}\) with uncertainty \(u\). Then, Dempster's combination rule is applied to form the final opinion with overall uncertainty, which can be transformed into the final prediction. The details of subjective logic, Dempster's combination rule, the data-efficient transformer, and the training paradigm are discussed in the following sections.
### Subjective Logic for Uncertainty Estimation
Subjective logic, as a generalization of the Bayesian theory, is a principled method of probabilistic reasoning under uncertainty [10]. It serves as the guideline of the estimation of both uncertainty and distribution of predicted probabilities in our framework. Given an image \(x_{k}\) from view \(k\), \(k\in\{1,2,\cdots,K\}\), the evidence vector \(\mathbf{e}^{k}=[e_{1}^{k},e_{2}^{k},...,e_{C}^{k}]\) with non-negative elements for \(C\) classes is estimated through the evidential network, which is implemented using a classification network with softplus activation for the output.
According to subjective logic, the Dirichlet distribution of class probabilities \(Dir(\mathbf{p}^{k}|\mathbf{\alpha}^{k})\) is determined by the evidence. For simplicity, we follow [17] and derive the parameter of the distribution by \(\mathbf{\alpha}^{k}=\mathbf{e}^{k}+1\). Then the Dirichlet distribution is mapped to an opinion \(\mathbf{D}^{k}=\{\{b_{c}^{k}\}_{c=1}^{C},u^{k}\}\), subject to
\[u^{k}+\sum_{c=1}^{C}b_{c}^{k}=1, \tag{1}\]
where \(b_{c}^{k}=\frac{\alpha_{c}^{k}-1}{S^{k}}\) is the belief mass for class \(c\), \(S^{k}=\sum_{c=1}^{C}\alpha_{c}^{k}\) is the Dirichlet strength, and \(u^{k}=\frac{C}{S^{k}}\) indicates the uncertainty.
The predicted probabilities \(\tilde{\mathbf{p}}^{k}\in\mathbb{R}^{C}\) of all classes are the expectation of Dirichlet distribution, _i.e._, \(\tilde{\mathbf{p}}^{k}=\mathbb{E}_{Dir(\mathbf{p}^{k}|\mathbf{\alpha}^{k})}[\mathbf{p}^{k}]\). Therefore, the uncertainty \(u^{k}\) and predicted probabilities \(\tilde{\mathbf{p}}^{k}\) can be derived in an end-to-end manner.
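As a concrete illustration of this mapping, the short sketch below (ours, not the authors' code) converts an evidence vector into the Dirichlet parameters, belief masses, uncertainty, and expected class probabilities defined above; the evidence values are made up.

```python
import numpy as np

def opinion_from_evidence(e):
    """Map a non-negative evidence vector e (length C) to a subjective-logic
    opinion: belief masses b, uncertainty u, and expected probabilities p."""
    e = np.asarray(e, dtype=float)
    C = e.size
    alpha = e + 1.0            # Dirichlet parameters alpha_c = e_c + 1
    S = alpha.sum()            # Dirichlet strength
    b = (alpha - 1.0) / S      # belief masses
    u = C / S                  # uncertainty; u + sum(b) = 1 by construction
    p = alpha / S              # expected probabilities under Dir(p | alpha)
    return b, u, p

# Example: strong evidence for class 0 out of C = 4 classes.
b, u, p = opinion_from_evidence([9.0, 1.0, 0.5, 0.5])
print(b, u, p, b.sum() + u)    # the last value is 1.0
```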
### Combination Rule
Based on opinions derived from each view, Dempster's combination rule [11] is applied to obtain the overall opinion with uncertainty, which could be converted to the distribution of the final prediction. Specifically, given opinions \(\mathbf{D}^{1}=\{\{b_{c}^{1}\}_{c=1}^{C},u^{1}\}\) and \(\mathbf{D}^{2}=\{\{b_{c}^{2}\}_{c=1}^{C},u^{2}\}\), the combined opinion \(\mathbf{D}=\{\{b_{c}\}_{c=1}^{C},u\}=\mathbf{D}^{1}\oplus\mathbf{D}^{2}\) is derived by the following rule,
\[b_{c}=\frac{1}{N}(b_{c}^{1}b_{c}^{2}+b_{c}^{1}u^{2}+b_{c}^{2}u^{1}),u=\frac{1 }{N}u^{1}u^{2}, \tag{2}\]
where \(N=1-\sum_{i\neq j}b_{i}^{1}b_{j}^{2}\) is the normalization factor. According to Eq. (2), the combination rule indicates that the combined belief \(b_{c}\) depends more on the opinion which is confident (with small \(u\)). In terms of uncertainty, the combined \(u\) is small when at least one opinion is confident.
For opinions from \(K\) local views and one global view, the combined opinion could be derived by applying the above rule for \(K\) times, _i.e._, \(\mathbf{D}=\mathbf{D}^{1}\oplus\cdots\oplus\mathbf{D}^{K}\oplus\mathbf{D}^{Global}\).
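The combination rule of Eq. (2) is simple to implement; the sketch below (our own helper names and toy opinions) applies it pairwise and folds it over a list of opinions, as is done for the \(K\) local opinions and the global one.

```python
import numpy as np

def combine(op1, op2):
    """Combine two opinions (b, u) with the rule of Eq. (2)."""
    b1, u1 = op1
    b2, u2 = op2
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)   # sum over i != j of b1_i * b2_j
    N = 1.0 - conflict                                       # normalization factor
    b = (b1 * b2 + b1 * u2 + b2 * u1) / N
    u = (u1 * u2) / N
    return b, u

def combine_all(opinions):
    """Fold the pairwise rule over all local opinions and the global opinion."""
    out = opinions[0]
    for op in opinions[1:]:
        out = combine(out, op)
    return out

# Example: a confident opinion dominates an uncertain one.
confident = (np.array([0.80, 0.10, 0.05, 0.00]), 0.05)
uncertain = (np.array([0.10, 0.10, 0.10, 0.10]), 0.60)
b, u = combine_all([confident, uncertain])
print(b, u, b.sum() + u)    # combined opinion still satisfies sum(b) + u = 1
```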
### Global representation modeling
To capture the global representation, we apply a data-efficient transformer as the evidential network for the global view. We follow [13] and improve the performance of the transformer on small datasets by increasing locality inductive bias, _i.e._, the assumption about relations between adjacent pixels. The standard vision transformer (ViT) [3] without such assumptions typically requires more training data than convolutional networks [15]. Therefore, we adopt the SPT and LSA strategy to improve the locality inductive bias.
As shown in Fig. 2, SPT is different from the standard tokenization in that the input image is shifted in four diagonal directions by half the patch size, and the shifted images are concatenated with the original images in the channel dimension to further utilize spatial relations between neighboring pixels. Then, the concatenated images are partitioned into patches and linearly projected as visual tokens in the same way as ViT.
LSA modifies self-attention in ViT by sharpening the distribution of the attention map to pay more attention to important visual tokens. As shown in Fig. 2, diagonal masking and temperature scaling are performed before applying softmax to the attention map. Given the input feature \(\mathbf{X}\), the LSA module is formalized as,
\[L(\mathbf{X})=\text{softmax}(\mathcal{M}(\mathbf{q}\mathbf{k}^{T})/\tau)\mathbf{v}, \tag{3}\]
where \(\mathbf{q,k,v}\) are the query, key, and value vectors obtained by linear projections of \(\mathbf{X}\). \(\mathcal{M}\) is the diagonal masking operator that sets the diagonal elements of \(\mathbf{q}\mathbf{k}^{T}\) to a small number (_e.g._,\(-\infty\)). \(\tau\in\mathbb{R}\) is the learnable scaling factor.
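The following PyTorch sketch illustrates the LSA attention of Eq. (3), i.e., diagonal masking followed by a learnable temperature; it is a single-head toy module written for illustration (module and parameter names are ours), not the authors' implementation, which also includes SPT and the usual multi-head machinery.

```python
import torch
import torch.nn as nn

class LocalitySelfAttention(nn.Module):
    """Single-head self-attention with diagonal masking and a learnable
    temperature tau, following Eq. (3). Illustrative only (no dropout)."""

    def __init__(self, dim):
        super().__init__()
        self.to_qkv = nn.Linear(dim, 3 * dim, bias=False)
        # Learnable temperature, initialised to sqrt(dim) (the standard ViT scaling).
        self.tau = nn.Parameter(torch.tensor(float(dim) ** 0.5))

    def forward(self, x):                         # x: (batch, tokens, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        attn = q @ k.transpose(-2, -1)            # (batch, tokens, tokens)
        n = attn.shape[-1]                        # diagonal masking M(.)
        mask = torch.eye(n, dtype=torch.bool, device=attn.device)
        attn = attn.masked_fill(mask, float('-inf'))
        attn = torch.softmax(attn / self.tau, dim=-1)
        return attn @ v

x = torch.randn(2, 10, 64)                        # toy sequence of 10 tokens
print(LocalitySelfAttention(64)(x).shape)         # torch.Size([2, 10, 64])
```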
### Training Paradigm
Theoretically, the proposed framework could be trained in an end-to-end manner. For each view \(k\), we use the integrated cross-entropy loss as in [17],
\[\mathcal{L}^{k}_{ice}=\mathbb{E}_{\mathbf{p}^{k}\sim Dir(\mathbf{p}^{k}|\mathbf{\alpha}^{ k})}[\mathcal{L}_{CE}(\mathbf{p}^{k},\mathbf{y}^{k})]=\sum_{c=1}^{C}y_{c}^{k}(\psi(S^{k})- \psi(\alpha_{c}^{k})), \tag{4}\]
where \(\psi\) is the digamma function and \(\mathbf{y}^{k}\) is the one-hot label. We also apply a regularization term to increase the uncertainty of misclassified samples,
\[\mathcal{L}^{k}=\mathcal{L}^{k}_{ice}+\lambda KL[Dir(\mathbf{p}^{k}|\tilde{\mathbf{ \alpha}}^{k})||Dir(\mathbf{p}^{k}|\mathbf{1})], \tag{5}\]
where \(\lambda\) is the balance factor which gradually increases during training and \(\tilde{\mathbf{\alpha}}^{k}=\mathbf{y}^{k}+(1-\mathbf{y}^{k})\odot\mathbf{\alpha}^{k}\). The overall loss is the summation of losses from all views and the loss for the combined opinion,
\[\mathcal{L}_{Overall}=\mathcal{L}_{Combined}+\mathcal{L}_{Global}+\sum_{k=1}^{ K}\mathcal{L}^{k}, \tag{6}\]
where \(\mathcal{L}_{Combined}\) and \(\mathcal{L}_{Global}\) are losses of the combined and global opinions, implemented in the same way as \(\mathcal{L}^{k}\). In practice, we pre-train the evidential networks before training with Eq. (6). For local views, we use the model weights pre-trained on ImageNet, and the transformer is pre-trained on the global view images.
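For concreteness, the sketch below implements the per-view objective of Eqs. (4)-(5), using the standard closed form of the KL divergence between a Dirichlet distribution and \(Dir(\mathbf{1})\); the function name and toy inputs are ours, and the annealed factor \(\lambda\) is passed in explicitly.

```python
import torch

def evidential_loss(evidence, y_onehot, lam):
    """Integrated cross-entropy (Eq. 4) plus the KL regularizer of Eq. (5).
    evidence: (batch, C) non-negative outputs; y_onehot: (batch, C) labels;
    lam: balance factor annealed from 0 to 1 during training."""
    alpha = evidence + 1.0
    S = alpha.sum(dim=1, keepdim=True)
    # Integrated cross-entropy: sum_c y_c (psi(S) - psi(alpha_c)).
    ice = (y_onehot * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=1)

    # alpha_tilde = y + (1 - y) * alpha removes the evidence of the true class.
    alpha_t = y_onehot + (1.0 - y_onehot) * alpha
    S_t = alpha_t.sum(dim=1, keepdim=True)
    C = alpha.shape[1]
    # Closed form of KL( Dir(alpha_tilde) || Dir(1) ).
    kl = (torch.lgamma(S_t).squeeze(1)
          - torch.lgamma(alpha_t).sum(dim=1)
          - torch.lgamma(torch.tensor(float(C)))
          + ((alpha_t - 1.0)
             * (torch.digamma(alpha_t) - torch.digamma(S_t))).sum(dim=1))
    return (ice + lam * kl).mean()

ev = torch.tensor([[9.0, 1.0, 0.5, 0.5], [0.2, 0.1, 5.0, 0.3]])   # toy batch
y = torch.tensor([[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]])
print(evidential_loss(ev, y, lam=0.5))
```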
## 3 Experiments
### Dataset
The proposed method was evaluated on Gd-EOB-DTPA-enhanced [27] hepatobiliary phase MRI data, including 342 patients acquired from two scanners, _i.e._, Siemens 1.5T and Siemens 3.0T. The gold standard was obtained through the pathological analysis of the liver biopsy or liver resection within 3 months before and after MRI scans. Among all patients, 88 individuals were identified with fibrosis stage S1, 41 with S2, 40 with S3, and 174 with the most advanced stage S4. Following [27], the slices with the largest liver area in images were selected. The data were then preprocessed with z-score normalization, resampled to a resolution of \(1.5\times 1.5mm^{2}\), and cropped to \(256\times 256\) pixels. For multi-view extraction, the size of the ROI, window, and stride were 160, 96, and 32, respectively.
For all experiments, a four-fold cross-validation strategy was employed, and results of two tasks with clinical significance [27] were evaluated, _i.e._, staging cirrhosis (S4 vs S1-3) and identifying substantial fibrosis (S1 vs S2-4). To keep a balanced number of samples for each class, we over-sampled the S1 data and under-sampled S4 data in the experiments of staging substantial fibrosis.
### Implementation details
Augmentations such as random rescale, flip, and cutout [2] were applied during training. We chose ResNet34 as the evidential network for local views. For configurations of the transformer, please refer to supplementary materials. The framework was trained using the Adam optimizer with an initial learning rate of \(1e-4\) for 500 epochs, which was decayed using a polynomial scheduler. The balance factor \(\lambda\) was set to increase linearly from 0 to 1 during training. The transformer network was pre-trained for 200 epochs using the same setting. The framework was implemented using PyTorch and was run on one Nvidia RTX 3090 GPU.
### Results
**Comparison with multi-view learning methods** To assess the effectiveness of the proposed multi-view learning framework for liver fibrosis staging, we compared it with five multi-view learning methods, including Concat [5], DCCAE [24], CMC [22], PredSum [18], and Attention [9]. Concat is a commonly used method that concatenates multi-view images at the input level. DCCAE and CMC are feature-level strategies. PredSum and Attention are based on decision-level fusion. Additionally, SingleView [8] was adopted as the baseline method for liver fibrosis staging, which uses a single patch as input.
As shown in Table 1, our method outperformed the SingleView method by \(10.3\%\) and \(12\%\) in AUC on the two tasks, respectively, indicating that the proposed method could exploit more informative features than the method using a single view. Our method also set a new state of the art when compared with the other multi-view learning methods. This could be because our method was able to capture both global and local features, and the uncertainty-aware fusion strategy could be more robust than the methods with implicit fusion strategies.
**Comparison with uncertainty-aware methods.** To demonstrate reliability, we compared the proposed method with four uncertainty-aware baselines, which estimate uncertainty using Monte-Carlo dropout (Dropout) [20], variational inference (VI) [19], ensembles [4], and softmax entropy [16], respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Cirrhosis(S4 vs S1-3)} & \multicolumn{2}{c|}{Substantial Fibrosis(S1 vs S2-4)} \\ \cline{2-5} & ACC & AUC & ACC & AUC \\ \hline SingleView [8] & \(77.1\pm 3.17\) & \(78.7\pm 4.17\) & \(78.2\pm 7.18\) & \(75.0\pm 11.5\) \\ \hline Concat [5] & \(80.0\pm 2.49\) & \(81.8\pm 3.17\) & \(80.5\pm 2.52\) & \(83.3\pm 3.65\) \\ \hline DCCAE [24] & \(80.6\pm 3.17\) & \(82.7\pm 4.03\) & \(83.1\pm 5.30\) & \(84.5\pm 4.77\) \\ \hline CMC [22] & \(80.6\pm 1.95\) & \(83.5\pm 3.67\) & \(83.4\pm 3.22\) & \(85.3\pm 4.06\) \\ \hline PredSum [18] & \(78.8\pm 4.16\) & \(78.2\pm 4.94\) & \(81.1\pm 2.65\) & \(84.9\pm 3.21\) \\ \hline Attention [9] & \(76.2\pm 0.98\) & \(78.9\pm 3.72\) & \(81.4\pm 4.27\) & \(84.4\pm 5.34\) \\ \hline Ours & \(\mathbf{84.4\pm 1.74}\) & \(\mathbf{89.0\pm 0.03}\) & \(\mathbf{85.5\pm 1.91}\) & \(\mathbf{88.4\pm 1.84}\) \\ \hline \end{tabular}
\end{table}
Table 1: Comparison with multi-view learning methods. Results are evaluated in accuracy (ACC) and area under the receiver operating characteristic curve (AUC) for both tasks.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Cirrhosis(S4 vs S1-3)} & \multicolumn{3}{c|}{Substantial Fibrosis(S1 vs S2-4)} \\ \cline{2-7} & ACC & AUC & ECE & ACC & AUC & ECE \\ \hline Softmax & \(77.1\pm 3.17\) & \(78.7\pm 4.17\) & \(0.256\pm 0.040\) & \(78.2\pm 7.18\) & \(83.3\pm 3.65\) & \(0.237\pm 0.065\) \\ \hline Dropout [20] & \(77.1\pm 4.89\) & \(79.8\pm 4.50\) & \(0.183\pm 0.063\) & \(80.2\pm 5.00\) & \(83.8\pm 6.12\) & \(0.171\pm 0.067\) \\ \hline VI [19] & \(77.6\pm 2.20\) & \(79.5\pm 4.50\) & \(0.229\pm 0.020\) & \(81.1\pm 2.08\) & \(82.2\pm 6.12\) & \(0.191\pm 0.023\) \\ \hline Ensemble [4] & \(78.1\pm 1.91\) & \(80.8\pm 3.13\) & \(0.181\pm 0.040\) & \(79.3\pm 5.11\) & \(80.4\pm 3.90\) & \(0.193\pm 0.031\) \\ \hline Ours & \(\mathbf{84.4\pm 1.74}\) & \(\mathbf{89.0\pm 0.03}\) & \(\mathbf{0.154\pm 0.028}\) & \(\mathbf{85.5\pm 1.91}\) & \(\mathbf{88.4\pm 1.84}\) & \(\mathbf{0.156\pm 0.019}\) \\ \hline \end{tabular}
\end{table}
Table 2: Comparison with uncertainty-aware methods. The expected calibration error (ECE) is evaluated in addition to ACC and AUC. Methods with lower ECE are more reliable.
Following [6], we evaluated the expected calibration error (ECE), which measures the gap between model confidence and expected accuracy.
Table 2 shows that our method achieved better results in ACC and AUC for both tasks than the other uncertainty-aware multi-view learning methods. This indicates that the uncertainty estimates in our framework better reflect the reliability of each view, so the proposed rule-based combination yields a more accurate final prediction. Our method also achieved the lowest ECE, indicating that the correspondence between the model confidence and the overall results was more accurate.
**Ablation study.** We performed this ablation study to investigate the roles of local views and global view, as well as to validate the effectiveness of the data-efficient transformer.
Table 3 shows that using the global view solely achieved the worst performance in the staging of cirrhosis. This means that it could be difficult to extract useful features without complementary information from local views. This is consistent with Fig. 3(a), where the uncertainty derived from the global view is high, even if there are many signs of fibrosis. In Fig. 3(b), by contrast, the uncertainty of the global view is low, which indicates that it is easier to make decisions from the global view when there is no visible sign of fibrosis. Therefore, we concluded that the global view was more valuable in identifying substantial fibrosis. Compared with the method that only used local views, our method gained more improvement in the substantial fibrosis identification task, which further confirms the aforementioned conclusion.
Figure 3: Typical samples of stage 4 (a) and stage 1 (b). Visible signs of liver fibrosis are highlighted by circles. Yellow circles indicate the nodular surface contour and green circles denote numerous regenerative nodules. Uncertainties (U) of the local and global views estimated by our model are shown. Notably, local views with lower uncertainty contain more signs of fibrosis. Please refer to the supplementary materials for higher-resolution images.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Cirrhosis(S4 vs S1-3)} & \multicolumn{3}{c|}{Substantial Fibrosis(S1 vs S2-4)} \\ \cline{2-7} & ACC & AUC & ECE & ACC & AUC & ECE \\ \hline Global View solely & \(76.8\pm 2.81\) & \(79.4\pm 4.76\) & \(0.192\pm 0.071\) & \(82.4\pm 3.45\) & \(84.9\pm 5.42\) & \(0.192\pm 0.071\) \\ \hline Local Views solely & \(84.1\pm 6.47\) & \(88.0\pm 8.39\) & \(\mathbf{0.148\pm 0.086}\) & \(82.0\pm 6.07\) & \(86.9\pm 6.68\) & \(0.180\pm 0.060\) \\ \hline Both views by CNN & \(82.9\pm 3.17\) & \(87.8\pm 3.09\) & \(0.171\pm 0.029\) & \(82.0\pm 3.54\) & \(87.1\pm 3.47\) & \(0.174\pm 0.039\) \\ \hline Ours & \(\mathbf{84.4\pm 1.74}\) & \(\mathbf{89.0\pm 0.03}\) & \(0.154\pm 0.028\) & \(\mathbf{85.5\pm 1.91}\) & \(\mathbf{88.4\pm 1.84}\) & \(\mathbf{0.156\pm 0.019}\) \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study for the roles of local and global views, and effectiveness of the data-efficient transformer.
Our method also performed better than the method that applied a convolutional neural network (CNN) for the global view. This demonstrates that the proposed data-efficient transformer was more suitable than a CNN for modeling the global representation.
## 4 Conclusion
In this work, we have proposed a reliable and interpretable multi-view learning framework for liver fibrosis staging. Specifically, uncertainty is estimated through subjective logic to improve reliability, and an explicit fusion strategy is applied which promotes interpretability. Furthermore, we use a data-efficient transformer to model the global representation, which improves the performance.
|
2303.06827
|
Kernel Density Bayesian Inverse Reinforcement Learning
|
Inverse reinforcement learning~(IRL) is a powerful framework to infer an
agent's reward function by observing its behavior, but IRL algorithms that
learn point estimates of the reward function can be misleading because there
may be several functions that describe an agent's behavior equally well. A
Bayesian approach to IRL models a distribution over candidate reward functions,
alleviating the shortcomings of learning a point estimate. However, several
Bayesian IRL algorithms use a $Q$-value function in place of the likelihood
function. The resulting posterior is computationally intensive to calculate,
has few theoretical guarantees, and the $Q$-value function is often a poor
approximation for the likelihood. We introduce kernel density Bayesian IRL
(KD-BIRL), which uses conditional kernel density estimation to directly
approximate the likelihood, providing an efficient framework that, with a
modified reward function parameterization, is applicable to environments with
complex and infinite state spaces. We demonstrate KD-BIRL's benefits through a
series of experiments in Gridworld environments and a simulated sepsis
treatment task.
|
Aishwarya Mandyam, Didong Li, Diana Cai, Andrew Jones, Barbara E. Engelhardt
|
2023-03-13T03:00:03Z
|
http://arxiv.org/abs/2303.06827v2
|
# Kernel Density Bayesian Inverse Reinforcement Learning
###### Abstract
Inverse reinforcement learning (IRL) is a powerful framework to infer an agent's reward function by observing its behavior, but IRL algorithms that learn point estimates of the reward function can be misleading because there may be several functions that describe an agent's behavior equally well. A Bayesian approach to IRL models a distribution over candidate reward functions, alleviating the shortcomings of learning a point estimate. However, several Bayesian IRL algorithms use a \(Q\)-value function in place of the likelihood function. The resulting posterior is computationally intensive to calculate, has few theoretical guarantees, and the \(Q\)-value function is often a poor approximation for the likelihood. We introduce kernel density Bayesian IRL (KD-BIRL), which uses conditional kernel density estimation to directly approximate the likelihood, providing an efficient framework that, with a modified reward function parameterization, is applicable to environments with complex and infinite state spaces. We demonstrate KD-BIRL's benefits through a series of experiments in Gridworld environments and a simulated sepsis treatment task.
## 1 Introduction
Reinforcement learning (RL) methods find policies that maximize an agent's long-term expected reward within a Markov decision process (MDP). In many observational data settings, we observe a sequence of states and actions for an agent carrying out a policy driven by an unknown reward function. It can be useful to learn this reward function to identify the factors driving the agent's behavior. For example, in a hospital setting, we see a patient's treatment schedule and measurements of physiological state. To make sense of the underlying factors influencing treatment decisions, we can identify the clinician's reward function (i.e., objectives), and examine how this function drives treatment decisions given a patient's state. It is particularly difficult to infer the reward function in this setting because the vector of observed covariates for a patient at a given time is often noisy and partially missing, and there may be several candidate reward functions that can explain the doctor's behavior.
Inverse reinforcement learning (IRL) methods infer an agent's reward function given observations of the agent's behavior. Early IRL algorithms were used to identify point estimates of the reward function that best explained an agent's behavior [2, 31], and were applied to problems in path planning [30], urban navigation [49] and robotics [24, 35]. A point estimate can also aid in imitation learning, where the inferred reward function is used to fit RL policies that replicate desired behavior.
Despite the success of early IRL approaches, there are limitations to inferring a point estimate. First, the IRL problem is often non-identifiable [2, 36, 49, 50], meaning there may be multiple (and possibly infinite) reward functions that explain a set of behaviors equally well. Second, for finite demonstration data, point estimates fail to capture the uncertainty and noise in the data-generating process. Thus, it is advantageous to take a Bayesian approach, which treats the reward function as inherently random and communicates a degree of uncertainty that relies on the dataset distribution. A Bayesian approach to IRL computes a posterior distribution that places mass on reward functions proportional to how well they explain the observed behavior [5, 13, 14, 27, 28, 34].
However, existing Bayesian IRL methods can be computationally demanding. In Bayesian modeling, likelihood specification has a large impact on the resulting posterior distribution. The formulation of the likelihood function (i.e., the function describing the probability of observing a state-action pair given a reward function) is unknown in the IRL setting. Existing approaches replace it with an optimal \(Q\)-value function, denoted by \(Q^{\star}\), which best approximates the long-term expected reward for a given state-action tuple [34]. The \(Q\)-value function must be learned using \(Q\)-learning [42], a "forward RL" algorithm, or one that solves an environment's MDP. The original algorithm pioneered by Ramachandran and Amir [34] and the majority of its successors use Markov chain Monte Carlo (MCMC) sampling to compute a posterior over the reward function, and every iteration of MCMC requires forward RL for each sampled reward function. This is computationally expensive, especially with infinite or high-dimensional state spaces. Additionally, a posterior that uses \(Q^{\star}\) as a likelihood is equivalent to a Gibbs posterior [6, 47] and lacks desirable theoretical properties [6].
We address these challenges with kernel density Bayesian inverse reinforcement learning (KD-BIRL), a method that (1) estimates the likelihood function directly, leading to theoretical guarantees for the consistency of the resulting posterior distribution, and (2) disassociates the number of times forward RL is required from the number of iterations of MCMC sampling, thus reducing computational complexity. The contributions of our work are as follows:
1. We propose KD-BIRL, a Bayesian IRL method that uses conditional kernel density estimation to directly approximate the likelihood function (Section 3).
2. We justify our method theoretically by proving posterior consistency, and demonstrating that the posterior contracts to the equivalence class of the expert reward function (Section 4).
3. We show that KD-BIRL's posterior estimates efficiently and accurately capture agent priorities in Gridworld environments (Section 5).
4. We demonstrate that, with a feature-based reward function, KD-BIRL can successfully infer rewards in complex state spaces such as a sepsis management task (Section 5).
## 2 Preliminaries
### Background: Inverse reinforcement learning (IRL)
The goal of IRL methods is to infer the reward function of an agent, given its behavior. An RL agent interacts with and responds to an environment that can be defined using an MDP. An MDP is represented by \((\mathcal{S},\mathcal{A},P,R)\), where \(\mathcal{S}\) is the state space; \(\mathcal{A}\) is the set of actions; \(P(s_{t+1}\,|\,s_{t},a_{t})\) defines state-transition probabilities from time \(t\) to \(t+1\); and \(R:\mathcal{S}\rightarrow\mathbb{R}\) is a reward function, where \(R\in\mathcal{R}\) and \(\mathcal{R}\) denotes the space of reward functions. The input to an IRL algorithm is a set of expert demonstrations, \(\{(s_{t}^{e},a_{t}^{e})\}_{t=1}^{n}\), where each demonstration is a \(2\)-tuple \((s_{t}^{e},a_{t}^{e})\) representing an agent's state and chosen action at time \(t\). These demonstrations are assumed to arise from an agent acting according to a policy \(\pi^{\star}:\mathcal{S}\rightarrow\mathcal{A}\) that is optimal for a fixed but unknown reward function \(R^{\star}\). Given these demonstrations, IRL algorithms seek a reward function under which \(\pi^{\star}\) is optimal.
Bayesian approaches to IRL treat the reward function \(R\) as inherently random. By specifying a prior distribution over \(R\) and a likelihood function for the observed data, these methods then infer a posterior distribution over \(R\) given \(n\) expert demonstrations of an agent \(\{(s_{i}^{e},a_{i}^{e})\}_{i=1}^{n}\). Using Bayes rule, the posterior density is equivalent to the product of the prior distribution on the reward, \(p(R)\), and the likelihood of the expert demonstrations given the reward function, with a normalizing constant that corresponds to the probability of the expert demonstrations:
\[p\left(R\,|\,\,\{(s_{i}^{e},a_{i}^{e})\}_{i=1}^{n}\right)=\frac{p(R)\prod_{i= 1}^{n}p(s_{i}^{e},a_{i}^{e}|R)}{p(\{(s_{i}^{e},a_{i}^{e})\}_{i=1}^{n})}. \tag{1}\]
In the initial formulation of Bayesian IRL (BIRL) [34], the authors propose using a \(Q\)-value function to calculate the likelihood in Equation (1). The \(Q\)-value function for a given policy \(\pi\) at time \(t\) is \(Q^{\pi}(s_{t},a_{t})=r_{t}+\gamma\mathbb{E}_{s^{\prime}\sim P}[V^{\pi}(s^{ \prime})]\), where \(s_{t}\), \(a_{t}\), and \(r_{t}\) are the state, action, and reward at time \(t\), and \(\gamma\in[0,1]\) is a discount factor. This approach uses an
optimal \(Q\)-value function, \(Q^{\star}\), as a component of the likelihood within a Gibbs posterior framework. The "likelihood" takes the form
\[p(s,a\,|\,R)\propto e^{\alpha Q^{\star}(s,a,R)}, \tag{2}\]
where \(Q^{\star}(s,a,R)\) is the optimal \(Q\)-value function for reward function \(R\), and \(\alpha>0\) is an inverse temperature parameter that represents confidence in the agent's ability to select optimal actions.
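For intuition, the short sketch below evaluates this \(Q\)-value-based likelihood from a tabular \(Q^{\star}\), normalizing over actions as is commonly done in BIRL implementations; the \(Q\)-table and function name are illustrative.

```python
import numpy as np

def birl_action_likelihood(Q, alpha=1.0):
    """Boltzmann action likelihood p(a | s, R) proportional to exp(alpha * Q*(s, a, R)).
    Q: (num_states, num_actions) optimal Q-values under a candidate reward R."""
    z = alpha * Q
    z = z - z.max(axis=1, keepdims=True)          # numerical stability
    expz = np.exp(z)
    return expz / expz.sum(axis=1, keepdims=True)

# Illustrative Q-table for 3 states x 2 actions under some candidate reward.
Q_star = np.array([[1.0, 0.2], [0.0, 0.5], [2.0, 1.9]])
print(birl_action_likelihood(Q_star))             # each row sums to 1
# Each MCMC iteration of BIRL must re-solve for Q_star under the newly
# proposed reward before this likelihood can be evaluated on the demonstrations.
```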
There are several potential challenges with learning the aforementioned BIRL posterior. First, the optimal \(Q^{\star}\) is found using \(Q\)-learning, a forward RL algorithm, and \(Q^{\star}\) is typically expensive to estimate for a new \(R\). BIRL uses MCMC, which requires learning \(Q^{\star}\) on every iteration of sampling for a new \(R\). In addition, because Equation 2 is a loss-based function (rather than a true likelihood), the resulting function is not a classical Bayesian posterior [40] and does not have theoretical guarantees regarding posterior contraction (more details in Appendix Section 2.3). Additionally, \(Q\)-value estimates can be incorrect for states that are very rarely visited, as often happens in infinite state spaces, leading to incorrect likelihood estimates that can affect the accuracy of the BIRL posterior.
### Related Work
Several extensions to the original BIRL algorithm [34] have been proposed. The first set of methods identifies nonparametric reward functions [14, 15, 25, 32, 45]. These algorithms use a variety of strategies such as Gaussian processes [25, 32], Indian buffet process (IBP) priors [15], and Dirichlet process mixture models [14] to learn reward functions for MDPs with large state spaces that may include sub-goals. Other methods reduce computational complexity by using either more informative priors [38], different sampling procedures (e.g., Metropolis-Hastings [14] or expectation-maximization [48]), variational inference to approximate the posterior [13], or by learning several reward functions that each describe a subset of the state space [27, 28]. However, all of these approaches use a \(Q\)-value function in place of the likelihood and hence still suffer from consistency and computational issues; this construction is both inefficient and limits desirable Bayesian behaviors with regard to posterior sampling and uncertainty quantification.
To address these computational and consistency issues, it is necessary to either directly estimate the likelihood, reduce the number of times forward RL is performed, or modify the reward function parameterization. Recent work proposes a variational Bayes framework, Approximate Variational Reward Imitation Learning (AVRIL) [13], to approximate the full posterior. This method improves upon existing work by avoiding re-estimating \(Q^{\star}\) for every sampled reward function, allowing it to bypass some of the computational inefficiencies of Ramachandran and Amir [34]'s initial formulation. However, AVRIL still requires the use of a \(Q\)-value function, resulting in a misspecified optimization objective. One method avoids using a \(Q\)-value function entirely for the likelihood [29] and instead approximates it using real-time dynamic programming or action comparison. Other work proposes a feature-based reward function, which parameterizes the reward as a linear combination of a set of weights and a low-dimensional feature encoding of the state [2, 17]. This approach can be beneficial because the posterior inference is over a lower dimensional reward vector. More recent work builds on this approach and proposes a method that enables imitation learning in complex control problems [9]. All of these techniques are best suited for environments with a closed-loop controller that provides instant feedback.
## 3 Methods
### Conditional kernel density estimation
As discussed earlier, directly estimating the likelihood function can lead to theoretical guarantees of posterior consistency. To estimate the likelihood \(p(s,a\,|\,R)\), we first observe that it can be viewed as the conditional density of the state-action pair given the reward function. Thus, any appropriate conditional density estimator could be applied; examples include the conditional kernel density estimator (CKDE, [44]) and Gaussian processes (GPs). We adopt the CKDE because it is nonparametric, has a closed form, and is straightforward to implement [19, 20]. Motivated by the conditional probability equation \(p(y|x)=\frac{p(x,y)}{p(x)}\) (where \(x\) and \(y\) are two generic random variables), the CKDE estimates the conditional density \(p(y|x)\) by approximating the joint distribution \(p(x,y)\) and marginal distribution \(p(x)\) separately via kernel density estimation (KDE). Given pairs of observations \(\{(x_{j},y_{j})\}_{j=1}^{m}\), the KDE approximations for the joint and marginal distributions are
\[\widehat{p}(x,y)=\frac{1}{m}\sum_{j=1}^{m}K\left(\frac{x-x_{j}}{h}\right)K^{ \prime}\left(\frac{y-y_{j}}{h^{\prime}}\right),\widehat{p}(x)=\frac{1}{m} \sum_{j=1}^{m}K\left(\frac{x-x_{j}}{h}\right), \tag{3}\]
where \(K\) and \(K^{\prime}\) are kernel functions with bandwidths \(h,h^{\prime}>0\), respectively. To approximate the conditional density, the CKDE simply takes the ratio of these two KDE approximations:
\[\widehat{p}(y|x)=\frac{\widehat{p}(x,y)}{\widehat{p}(x)}=\sum_{j=1}^{m}\frac{K \left(\frac{x-x_{j}}{h}\right)K^{\prime}\left(\frac{y-y_{j}}{h^{\prime}}\right) }{\sum_{\ell=1}^{m}K\left(\frac{x-x_{\ell}}{h}\right)}. \tag{4}\]
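A minimal numpy sketch of Eqs. (3)-(4), using the unnormalized Gaussian kernel adopted later in the paper; the bandwidths and toy data are our own choices, and the returned value is proportional to the conditional density (constant factors are omitted, as in Eq. (4)).

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-u ** 2)

def ckde(y, x, ys, xs, h=0.5, hp=0.5):
    """Conditional kernel density estimate p_hat(y | x) from samples (xs, ys),
    following Eq. (4): the ratio of a joint KDE and a marginal KDE."""
    kx = gaussian_kernel((x - xs) / h)     # kernel weights on the conditioning variable
    ky = gaussian_kernel((y - ys) / hp)    # kernel weights on the target variable
    return np.sum(kx * ky) / np.sum(kx)

# Toy data with y = x + noise; the conditional density should peak near y = x.
rng = np.random.default_rng(0)
xs = rng.uniform(-2, 2, size=500)
ys = xs + 0.3 * rng.normal(size=500)
print(ckde(1.0, 1.0, ys, xs), ckde(-1.0, 1.0, ys, xs))   # first value is much larger
```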
### Kernel Density Bayesian IRL
We propose kernel density Bayesian inverse reinforcement learning (KD-BIRL), which uses a CKDE approximation \(\widehat{p}_{m}(s,a\,|\,R)\) to estimate the likelihood \(p(s,a\,|\,R)\). While the standard form of the CKDE (Equation (4)) uses the difference between two samples (e.g., \(x-x_{j}\)) as input to the kernel functions, this difference can be replaced by any suitable distance metric [10]. To estimate the joint and marginal distributions, \(p(s,a,R)\) and \(p(R)\), we must specify two distance functions: one for comparing state-action tuples and one for comparing reward functions. We denote these as \(d_{s}:(\mathcal{S}\times\mathcal{A})\times(\mathcal{S}\times\mathcal{A}) \rightarrow\mathbb{R}_{+}\) and \(d_{r}:\mathcal{R}\times\mathcal{R}\rightarrow\mathbb{R}_{+}\), respectively, and we discuss specific choices for them later. The CKDE approximation is then
\[\widehat{p}_{m}(s,a\,|\,R)=\frac{\widehat{p}_{m}(s,a,R)}{\widehat{p}_{m}(R)}=\sum_{j=1}^{m}\frac{K\left(\frac{d_{s}((s,a),(s_{j},a_{j}))}{h}\right)K^{\prime}\left(\frac{d_{r}(R,R_{j})}{h^{\prime}}\right)}{\sum_{\ell=1}^{m}K^{\prime}\left(\frac{d_{r}(R,R_{\ell})}{h^{\prime}}\right)}, \tag{5}\]
where \(h,h^{\prime}>0\) are the bandwidth hyperparameters.
Note that fitting a CKDE for the likelihood requires estimating the density across a range of reward functions and state-action pairs. To better enable this, we construct an additional set of demonstrations and reward functions - which we call the _training dataset_\(\{(s_{j},a_{j},R_{j})\}_{j=1}^{m}\) - to augment the observed expert demonstrations \(\{(s_{i}^{e},a_{i}^{e})\}_{i=1}^{n}\) (from an agent acting according to the data generating reward \(R^{\star}\)). The training dataset contains demonstrations from agents whose policies optimize for reward functions that are likely distinct from those of the expert. Each sample in the training dataset is a state-action pair associated with a reward function. There will be many state-action pairs that correspond to the same reward function; therefore, \(R_{j}\) is not unique. In a simulated setting where the training demonstrations are not available already, we choose \(k\) training set reward functions, learn \(k\) optimal policies that optimize for each of these functions, and generate \(\lfloor m/k\rfloor\) demonstrations from each policy. We sample the reward functions \(R_{1},\dots,R_{k}\sim u\), where \(u\) is a distribution on the space of reward functions \(\mathcal{R}\). The resulting density estimate is more accurate when \(R_{1},\dots,R_{k}\) are uniformly distributed across \(\mathcal{R}\). As such, in our experiments with simulated environments, we use a uniform distribution for \(u\).
Using the CKDE in Equation (5), we can now estimate the posterior density function of \(R\) given \(n\) expert demonstrations, \(m\) training demonstrations, and prior \(p(R)\):
\[\widehat{p}_{m}^{n}(R|\{s_{i}^{e},a_{i}^{e}\}_{i=1}^{n})\propto p(R)\prod_{i=1}^{n}\widehat{p}_{m}(s_{i}^{e},a_{i}^{e}\,|\,R)=p(R)\prod_{i=1}^{n}\sum_{j=1}^{m}\frac{K\left(\frac{d_{s}((s_{i}^{e},a_{i}^{e}),(s_{j},a_{j}))}{h}\right)K^{\prime}\left(\frac{d_{r}(R,R_{j})}{h^{\prime}}\right)}{\sum_{\ell=1}^{m}K^{\prime}\left(\frac{d_{r}(R,R_{\ell})}{h^{\prime}}\right)}. \tag{6}\]
The choice of the prior, \(p(R)\), and the distance metrics \(d_{s},d_{r}\) can be altered depending on information known about the reward function or state space in advance [3]. For example, if the reward function is assumed to be a linear function of the state, the cosine distance is more appropriate for \(d_{r}\). Several non-uniform priors may be appropriate for \(p(R)\) depending on the characteristics of the MDP, including Gaussian [32], Beta [34], and Chinese restaurant process (CRP) [27]. In KD-BIRL, the kernel \(K(x)=\exp(-\|x\|^{2})\) is chosen to be Gaussian. We choose a Gaussian kernel because it can approximate bounded and continuous functions well. The bandwidth hyperparameters can be chosen using rule-of-thumb procedures [41].
To infer the posterior estimate in Equation (6), we sample rewards using a Hamiltonian Monte Carlo algorithm [43] (additional details in Appendix Section 9). Note a key computational gain of our approach over BIRL, which is also a sampling-based algorithm: we only use forward RL to generate the training dataset (in the case of simulated environments or when it is not already present), and avoid it in each iteration of MCMC. This is possible because Equation (6) does not depend on \(Q^{\star}\).
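To make the sampling target concrete, the sketch below (ours) evaluates the unnormalized log-posterior of Eq. (6) for a candidate reward vector under a uniform prior, Euclidean \(d_{s}\) and \(d_{r}\), and Gaussian kernels; an HMC or MCMC sampler would then propose rewards and score them with this function.

```python
import numpy as np

def kd_birl_log_posterior(R, expert_sa, train_sa, train_R, h=1.0, hp=1.0):
    """Unnormalized log-posterior of Eq. (6) for a candidate reward vector R.
    expert_sa: (n, d) expert state-action features; train_sa: (m, d) training
    state-action features; train_R: (m, q) reward associated with each sample."""
    K = lambda u: np.exp(-u ** 2)                    # Gaussian kernel exp(-u^2)

    d_r = np.linalg.norm(train_R - R, axis=1)        # d_r(R, R_j) for all j
    w_r = K(d_r / hp)
    denom = w_r.sum()

    log_post = 0.0                                   # log p(R) is constant (uniform prior)
    for sa in expert_sa:
        d_s = np.linalg.norm(train_sa - sa, axis=1)  # d_s((s^e, a^e), (s_j, a_j))
        lik = np.sum(K(d_s / h) * w_r) / denom       # Eq. (5) at the expert pair
        log_post += np.log(lik + 1e-300)
    return log_post

# Toy example with random training data; a sampler would call this repeatedly.
rng = np.random.default_rng(1)
train_sa = rng.normal(size=(200, 3))
train_R = rng.uniform(size=(200, 4))
expert_sa = rng.normal(size=(20, 3))
print(kd_birl_log_posterior(rng.uniform(size=4), expert_sa, train_sa, train_R))
```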
#### 3.2.1 Feature-based reward function
While KD-BIRL can be applied as-is in many environments, the CKDE is known to scale poorly to high-dimensional functions. Thus, before KD-BIRL can work in environments with large state spaces where the corresponding reward
function has a large number of parameters, it is necessary to re-parameterize the reward function. In our existing formulation, the reward function is parameterized as a vector where each index corresponds to the reward in one of the states. Under this formulation, in a \(10\times 10\) Gridworld environment, the reward function would be represented as a vector of length \(100\). In practice, the CKDE increases in computational cost with respect to both the length of the vector and the number of samples in the expert and training datasets [21], and it would not be suited to learn 100 parameters.
We propose a formulation of KD-BIRL that uses a _feature-based reward function_ [35]. This method of parameterizing a reward is one of three broad categories of IRL formulations [1]. The feature-based reward function \(R(s,a)=w^{\top}\phi(s,a)\), where \(w\in\mathbb{R}^{q}\) and \(\phi:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{q}\), is advantageous because it does not scale with the dimensionality of the state \(s\) and does not rely on the state space being discrete like our earlier approach. Here, \(\phi\) is a known function that maps a state-action tuple to a feature vector of length \(q\). Intuitively, this feature vector is a low-dimensional representation of the original state that facilitates reward inference. In this setup, the goal is to find \(w^{\star}\) such that:
\[w^{\star\top}E\left[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_{t},a_{t})|\pi^{\star }\right]\geq w^{\star\top}E\left[\sum_{t=0}^{\infty}\gamma^{t}\phi(s_{t},a_{t} )|\pi\right]\]
where \(\gamma\) is a discount factor, \(\pi\) is a policy, and \(s_{t},a_{t}\) is the state and action at time \(t\). In a Bayesian setting, the resulting posterior is over \(w\) rather than \(R\).
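As a worked illustration of this inequality, the sketch below estimates discounted feature expectations from sampled trajectories and compares \(w^{\star\top}\mu\) for an expert-like policy and an alternative one; the feature map, weight vector, and trajectories are toy choices of ours.

```python
import numpy as np

def discounted_feature_expectation(trajectories, phi, gamma=0.95):
    """Empirical estimate of E[sum_t gamma^t phi(s_t, a_t) | pi] from sampled
    trajectories, each given as a list of (state, action) tuples."""
    mus = []
    for traj in trajectories:
        mu = sum((gamma ** t) * phi(s, a) for t, (s, a) in enumerate(traj))
        mus.append(mu)
    return np.mean(mus, axis=0)

def phi(s, a):                         # toy feature map of length q = 3
    return np.array([s[0], s[1], float(a)])

w_star = np.array([1.0, -0.5, 0.2])    # toy data-generating weights

# Two short toy trajectories; under w_star the expert's feature expectation
# scores at least as high as the alternative policy's, as the inequality requires.
expert_traj = [[(np.array([1.0, 0.0]), 1), (np.array([0.9, 0.1]), 1)]]
other_traj = [[(np.array([0.1, 1.0]), 0), (np.array([0.0, 1.1]), 0)]]
mu_exp = discounted_feature_expectation(expert_traj, phi)
mu_oth = discounted_feature_expectation(other_traj, phi)
print(w_star @ mu_exp, w_star @ mu_oth)   # first value is larger here
```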
Now we generate \(n\) expert and \(m\) training demonstrations. Recall that the CKDE requires as input the reward function parameters corresponding to each training dataset sample. A given training dataset sample here is \(\{(s_{j},a_{j},w_{j})\}\) where \(w_{j}\) is the weight vector of length \(q\) associated with the reward function that was used by the agent to generate the sample \(s_{j},a_{j}\). \(w_{j}\) can be repeated and is not unique. A sample from the expert demonstration dataset is still \(\{(s^{e}_{i},a^{e}_{i})\}_{i=1}^{n}\), where the data-generating weights \(w^{\star}\) are used to generate these demonstrations. The procedure for learning the CKDE and the resulting posterior inference then stays the same. The CKDE formulation is now:
\[\widehat{p}_{m}(s,a\,|\,w)=\frac{\widehat{p}_{m}(s,a,w)}{\widehat{p}_{m}(w)}=\sum_{j=1}^{m}\frac{K\left(\frac{d_{s}((s,a),(s_{j},a_{j}))}{h}\right)K^{\prime}\left(\frac{d_{r}(w,w_{j})}{h^{\prime}}\right)}{\sum_{\ell=1}^{m}K^{\prime}\left(\frac{d_{r}(w,w_{\ell})}{h^{\prime}}\right)}, \tag{7}\]
where \(d_{r}\) measures the similarity between weight vectors, and \(d_{s}\) is the distance between state-action tuples. The posterior is then:
\[\widehat{p}_{m}^{n}(w|\{s^{e}_{i},a^{e}_{i}\}_{i=1}^{n})\propto p(w)\prod_{i=1}^{n}\widehat{p}_{m}(s^{e}_{i},a^{e}_{i}\,|\,w)=p(w)\prod_{i=1}^{n}\sum_{j=1}^{m}\frac{K\left(\frac{d_{s}((s^{e}_{i},a^{e}_{i}),(s_{j},a_{j}))}{h}\right)K^{\prime}\left(\frac{d_{r}(w,w_{j})}{h^{\prime}}\right)}{\sum_{\ell=1}^{m}K^{\prime}\left(\frac{d_{r}(w,w_{\ell})}{h^{\prime}}\right)}. \tag{8}\]
## 4 Theoretical guarantees of KD-BIRL
KD-BIRL estimates the density function of a true Bayesian posterior distribution (Equation (6)), so we can reason about the posterior's asymptotic behavior. In particular, we want to ascertain that this posterior estimate contracts as it receives more samples. Because the IRL problem is non-identifiable, the "correct" reward function as defined by existing methods [2, 36, 49, 50], may not be unique. In this work, we assume that any two reward functions that lead an agent to behave in the same way are equivalent. Said another way, if a set of observations is equally likely under two reward functions, the functions are considered equal: \(R_{1}\simeq R_{2}\) if \(\|p(\cdot|R_{1})-p(\cdot|R_{2})\|_{L_{1}}=0\). We can then define the _equivalence class_\([R^{\star}]\) for \(R^{\star}\) as \([R^{\star}]=\{R\in\mathcal{R}:R\simeq R^{\star}\}\). An ideal posterior distribution places higher mass on reward functions in the equivalence class \([R^{\star}]\).
We first focus on the likelihood estimation step of our approach and show that, when the size of the training dataset \(m\) approaches \(\infty\) and the \(m\) samples arise from sufficiently different reward functions that cover the space of \(\mathcal{R}\), the likelihood estimated using a CKDE (Equation (5)) converges to the true likelihood \(p(s,a\,|\,R)\).
**Lemma 4.1**.: _Let \(h_{m},h^{\prime}_{m}>0\) be the bandwidths chosen for the CKDE. Assume that both \(p(s,a\,|\,R)\) and \(p(R)\) are square-integrable and twice differentiable with a square-integrable and continuous second order derivative, and that \(mh_{m}^{p/2}\rightarrow\infty\) and \(m(h^{\prime}_{m})^{p/2}\rightarrow\infty\) as \(m\rightarrow\infty\). Then,_
\[\widehat{p}_{m}(s,a|R)\xrightarrow[m\rightarrow\infty]{P}p(s,a\,|\,R),\;\forall (s,a,R)\in\mathcal{S}\times\mathcal{A}\times\mathcal{R}.\]
Lemma 4.1 verifies that we can estimate the likelihood using a CKDE, opening the door to Bayesian inference. We now show that as \(n\), the size of expert demonstrations, and \(m\), the size of the training dataset, approach \(\infty\), the posterior distribution generated using KD-BIRL contracts to the equivalence class of the expert demonstration generating reward \([R^{\star}]\).
**Theorem 4.2**.: _Assume the prior for \(R\), denoted by \(\Pi\), satisfies \(\Pi(\{R:\mathrm{KL}(R^{\star},R)<\epsilon\})>0\) for any \(\epsilon>0\), where \(\mathrm{KL}\) is the Kullback-Leibler divergence. Assume \(\mathcal{R}\subseteq\mathbb{R}^{d}\) is a compact set. Then, the posterior measure corresponding to the posterior density function \(\widehat{p}_{m}^{n}\) defined in Equation (6), denoted by \(\Pi_{m}^{n}\), is consistent w.r.t. the \(L_{1}\) distance; that is,_
\[\Pi_{m}^{n}(\{R:\|p(\cdot|R)-p(\cdot|R^{\star})\|_{L_{1}}<\epsilon\})\xrightarrow[n\to\infty]{m\to\infty}1.\]
Theorem 4.2 implies that the posterior \(\Pi_{m}^{n}\) assigns almost all mass to the neighborhood of \([R^{\star}]\). This means that the reward function the KD-BIRL posterior contracts to with a large enough sample size is practically equivalent to the data-generating reward function \(R^{\star}\). Note that this is not a statement regarding the posterior contraction rate, just a certification of contraction. Proofs for both Theorem 4.2 and Lemma 4.1 are in Appendix Section 4.
## 5 Experiments
Here, we evaluate the accuracy and computational efficiency of KD-BIRL. We compare KD-BIRL to AVRIL [13], a recent method that simultaneously learns an imitator policy and performs reward inference on the expert demonstrations, and the original Bayesian IRL algorithm (BIRL) [34]. We demonstrate results using a Gridworld environment [7] and a sepsis management clinical environment [4].
To quantitatively evaluate the reward functions learned by IRL methods, previous studies have used Expected Value Difference (EVD) [8, 14, 15, 25]. EVD is defined as \(|V^{\star}(r^{A})-V^{\pi^{\star}(r^{L})}(r^{A})|\) where \(V^{\pi}=\sum_{s}p_{0}(s)V^{\pi}(s)\) is the value of policy \(\pi\) under initial state distribution \(p_{0}\), \(r^{A}\) is the true data-generating reward, \(r^{L}\) is the learned reward, and \(V^{\star}\) is the value function associated with the optimal policy \(\pi^{\star}\). Intuitively, the EVD measures the difference between the reward obtained by an agent whose policy is optimal for the true reward and the reward obtained by an agent whose policy is optimal for the learned reward. We use EVD because it allows us to compare KD-BIRL to related methods without needing to directly compare two reward function samples based on their functional form. The lower the EVD, the better our learned reward recapitulates the expert reward (see Appendix Section 10 for more details).
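As an illustration of how EVD can be computed in a small tabular MDP, the sketch below uses value iteration on a toy three-state chain (not the paper's Gridworld); when the learned reward induces the same optimal behavior as the true reward, the EVD is approximately zero, matching the equivalence-class view of Section 4.

```python
import numpy as np

def value_iteration(P, r, gamma=0.95, iters=1000):
    """Optimal values and a greedy policy for a tabular MDP with state rewards r.
    P has shape (A, S, S): P[a, s, s'] is the transition probability."""
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = r[None, :] + gamma * np.einsum('ast,t->as', P, V)   # (A, S)
        V = Q.max(axis=0)
    return V, Q.argmax(axis=0)

def policy_value(P, r, policy, gamma=0.95, iters=1000):
    """Value of a fixed deterministic policy under reward r."""
    S = r.size
    V = np.zeros(S)
    for _ in range(iters):
        V = r + gamma * np.einsum('st,t->s', P[policy, np.arange(S)], V)
    return V

def evd(P, r_true, r_learned, p0, gamma=0.95):
    """Difference between the value of the optimal policy for the true reward
    and the value of the policy optimal for the learned reward, both evaluated
    under the true reward and averaged over the initial state distribution p0."""
    V_star, _ = value_iteration(P, r_true, gamma)
    _, pi_learned = value_iteration(P, r_learned, gamma)
    V_pi = policy_value(P, r_true, pi_learned, gamma)
    return abs(p0 @ V_star - p0 @ V_pi)

# Toy 3-state, 2-action chain MDP (action 0: stay, action 1: move right).
P = np.zeros((2, 3, 3))
P[0] = np.eye(3)
P[1] = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 1]], dtype=float)
r_true = np.array([0.0, 0.0, 1.0])
r_learned = np.array([0.0, 0.1, 0.9])   # induces the same greedy behavior
print(evd(P, r_true, r_learned, p0=np.array([1.0, 0.0, 0.0])))   # ~0
```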
### Gridworld environment
We begin our analysis in a Gridworld environment. The MDP here is defined by the grid's \(g\times g\) discrete state space \(\mathcal{S}\), where a given state is represented as a one-hot encoded vector \(e_{i}\in\mathbb{R}^{g\times g}\), with the \(i\)'th index equal to 1 and corresponding to the state the agent is in, and \(g\) is the size of the grid; the action space contains 5 possible actions \(\{\texttt{NOACTION},\texttt{UP},\texttt{RIGHT},\texttt{LEFT},\texttt{DOWN}\}\), each represented as a one-hot encoded vector; and the true reward function \(R^{\star}\), which is unobserved by the IRL algorithms, is a vector of length \(g\times g\). We structure the reward function such that each state has an independent scalar reward parameter. We specify the domain of each of these parameters to be the unit interval; thus, each feasible reward function can be represented by a vector \(R\in[0,1]^{g\times g}\).
To fit KD-BIRL we use Stan [43], which uses a Hamiltonian Monte Carlo algorithm. To fit the BIRL and AVRIL posteriors, we first generate the same number of expert demonstration trajectories as used for KD-BIRL. BIRL and AVRIL use an inverse temperature hyperparameter, \(\alpha\); we set \(\alpha=1\) for all methods. AVRIL uses two additional hyperparameters \(\gamma,\delta\), which we set to 1. Unless otherwise specified, KD-BIRL uses a uniform prior for the reward \(r_{s}\sim\text{Unif}(0,1)\) for \(s=1,\ldots,g\times g\) and Euclidean distance for \(d_{s},d_{r}\).
#### 5.1.1 Visualizing KD-BIRL's posterior distribution
First, we visualize KD-BIRL's posterior distribution in comparison to those recovered by AVRIL and BIRL. To do this, we show density plots of samples from all three distributions marginalized at each state in the \(2\times 2\) Gridworld environment. All methods use a data-generating reward function \(R^{\star}=[0,0,0,1]\). We find that the posterior samples from KD-BIRL and BIRL are more concentrated around \(R^{\star}\) than those from AVRIL (Figure 1).
#### 5.1.2 KD-BIRL requires fewer instances of Q-learning
Next, we quantify the computational complexity associated with performing reward inference in a \(4\times 4\) Gridworld in which only the state \([3,3]\) contains a reward of 1. As discussed earlier, much of the computational cost associated with learning a posterior distribution in existing methods arises from repeated instances of forward RL. BIRL requires forward RL during every iteration of MCMC sampling; several thousand iterations are required for the sampler to converge. AVRIL uses one instance of forward RL to learn an approximate posterior. KD-BIRL also minimizes the use of forward RL, only using it during dataset generation, in the case that these observations are not already available.
Here, we vary the number of iterations of forward RL and plot the EVDs for reward samples from the resulting posterior distributions for the three methods. Our results indicate that, for a small number of instances of forward RL, KD-BIRL reward samples better replicate the behavior of the expert demonstrations than those of BIRL (partly because so few instances correspond to too few iterations of MCMC sampling for BIRL); and although AVRIL requires fewer instances of forward RL, this comes at the expense of accuracy in the posterior distribution, as highlighted by its stagnant EVD (Figure 2).
### Limitations of the CKDE
It is well known that CKDE has difficulty scaling to high-dimensional probability density functions [21]. Regardless, we want to identify the limits of the CKDE used in the original KD-BIRL setup without a feature-based reward function. To do so, we use a \(5\times 5\) Gridworld environment. In Figure 3, despite the fact that the number of reward parameters is larger than what we expect the CKDE to successfully model, KD-BIRL is able to estimate a posterior whose mean is in the equivalence class of \(R^{\star}\). That is, the posterior mean and \(R^{\star}\) encourage the same behavior in an agent, which implies that the expert demonstrations are equally likely under both. However, there are states (\([2,3],[4,3]\) in Figure 3, Panel \(2\)) in the \(5\times 5\) Gridworld in which the mean estimated reward is notably incorrect, which indicates that the CKDE struggles to learn 25 independent reward parameters successfully.
Figure 1: **Marginalized posterior distribution in a \(2\times 2\) Gridworld** where the data generating reward function \(R^{\star}=[0,0,0,1]\). The dashed vertical lines display the true reward in each state. Each algorithm’s mean estimated reward is shown using vertical colored lines. The x-axis corresponds to the numeric value of the sampled reward marginalized by the given state. KD-BIRL’s and BIRL’s marginal posterior distributions are more concentrated around the true reward \(R^{\star}\) compared to those of AVRIL.
Figure 2: **KD-BIRL requires fewer instances of forward RL to generate samples with lower EVD** in a \(4\times 4\) Gridworld where only the state [3, 3] receives a reward of 1.0. The BIRL reward samples continue to have high EVDs even with many instances of RL (each corresponding to an iteration of MCMC), and AVRIL only performs RL once, but the resulting EVDs stagnate.
### Feature-based rewards
We now study three methods of reward function featurization that can enable KD-BIRL to perform reward inference in environments with large state spaces.
#### 5.3.1 Using known features in a 10x10 Gridworld
As discussed in Section 3.2.1, without using feature-based rewards, the original KD-BIRL algorithm would not be able to perform inference in the \(10\times 10\) Gridworld because the reward vector length \((100)\) is too high. In the \(10\times 10\) Gridworld, the MDP is identical to the earlier Gridworld settings, except the state space is the series of one-hot encoded vectors of length 100. In this setting, we select \(\phi(s)=[x,y]\) to be a simple function that ignores the action and maps the state vector of length 100 to the spatial coordinates of the agent. In this way, we treat the coordinates of the agent as a "feature vector". Then, we choose weights \(w^{\star}\) such that \(R^{\star}\) is a linear combination of the features and \(w^{\star}\). Figure 4 visualizes the resulting posterior distributions for two choices of \(w^{\star}\). We use a Normal prior for \(p(w)\) with mean 0 and variance \(1\) for \(w^{\star}=[-1,1]\), and a Normal prior with mean \(0.5\) and variance \(0.5\) for \(w^{\star}=[1,1]\). We find that KD-BIRL accurately recovers the relative magnitude and sign of the individual components of \(w^{\star}\) for both chosen reward functions.
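A minimal sketch of this featurization is given below: the one-hot state vector is mapped to the agent's coordinates and the reward is the inner product with the weight vector. The row-major ordering of the one-hot encoding is an assumption made for illustration.

```python
# Sketch of the coordinate feature map phi(s) = [x, y] and the linear reward
# R(s) = w . phi(s) for the 10x10 Gridworld. The row-major ordering of the
# one-hot state vector is an assumption made for this example.
import numpy as np

GRID = 10

def phi(state_onehot):
    """Map a length-100 one-hot state vector to the agent's (x, y) coordinates."""
    idx = int(np.argmax(state_onehot))
    return np.array([idx % GRID, idx // GRID], dtype=float)

def linear_reward(state_onehot, w):
    return float(w @ phi(state_onehot))

w_star = np.array([-1.0, 1.0])     # rewards large y, penalizes large x
s = np.zeros(GRID * GRID)
s[37] = 1.0                        # agent at x = 7, y = 3 under this ordering
print(phi(s), linear_reward(s, w_star))
```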
Figure 4: **Feature-based reward in a \(10\times 10\) Gridworld** for \(w^{\star}=[1,1]\) (top) and \(w^{\star}=[-1,1]\) (bottom). The first images in each row visualize \(w^{\star}\) projected onto the Gridworld, the second visualize the mean of the KD-BIRL posterior projected onto the Gridworld, and the third images show the joint density plots of the two weights, with the first on the x-axis and the second on the y-axis. KD-BIRL accurately infers the relative magnitude and sign of the individual weights.
Figure 3: **Pushing the CKDE’s limits in a \(5\times 5\) Gridworld environment.** The first panel shows \(R^{\star}\), the second panel shows the KD-BIRL posterior mean, and the third and fourth panels show the training and expert demonstrations state occupancy respectively. The KD-BIRL posterior mean is within the equivalence class of \(R^{\star}\), because the darkest square is in the upper right hand corner, even though there are some dark squares in other portions of the grid.
#### 5.3.2 Manually curated features in a sepsis treatment environment
Now we perform inference in a sepsis treatment simulator based on de-identified data from MIMIC-III [16, 22], a database of electronic health records (EHR). Sepsis arises when the body responds to infection in a way that is harmful to its own tissues and organs [11]. In the original simulator [4], the state is a vector of length \(46\), where each element contains information about a given physiological covariate. There are 25 possible actions, each corresponding to a different combination of treatments. The transition function was learned using deep RL models [33] (see Appendix Section 12 for details).
Sepsis treatment depends heavily on organ failure metrics, and fast increases in these metrics warrant urgent care [33]. Since we have observed that in the Gridworld environment, KD-BIRL can successfully model a reward that is a function of a small state space, we choose \(\phi\) to be a function that ignores the action and extracts three Sequential Organ Failure Assessment (SOFA) [23] covariates present in the state: _sofa_, _quick sofa_, and _quick sofa systolic blood pressure score_. The result is a feature-based reward function with manually selected features based on prior knowledge. Our reward is now a linear combination of the difference between the state features at time \(t\) and \(t+1\),
\[R(s_{t})=\begin{bmatrix}a\\ b\\ c\end{bmatrix}^{\top}\begin{bmatrix}s(cov_{1})_{t}-s(cov_{1})_{t+1}\\ s(cov_{2})_{t}-s(cov_{2})_{t+1}\\ s(cov_{3})_{t}-s(cov_{3})_{t+1}\end{bmatrix},\]
where \([a,b,c]\) are the weights, and \(s(cov_{1}),s(cov_{2}),s(cov_{3})\) are the three organ failure features in state \(s\). We choose the true (unobserved) weights to be \([a=0.8,b=0.6,c=0.4]\). We compare our method to AVRIL and avoid fitting BIRL due to computational constraints. Our results indicate that KD-BIRL generates reward samples with lower and more concentrated EVDs than the AVRIL trajectories (Figure 5). This indicates that KD-BIRL estimates a posterior distribution that is concentrated around the equivalence class of \(R^{\star}\).
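The following sketch illustrates this reward; the indices of the three SOFA covariates within the 46-dimensional state vector are placeholders, since the actual positions are not specified here.

```python
# Sketch of the manually curated feature-based reward: a weighted sum of the
# decrease in the three SOFA-related covariates between consecutive states.
# The covariate indices are hypothetical placeholders.
import numpy as np

SOFA_IDX = [10, 11, 12]                # hypothetical indices of the three SOFA features
W_TRUE = np.array([0.8, 0.6, 0.4])     # the weights [a, b, c] from the text

def sofa_reward(state_t, state_t1, w=W_TRUE):
    """R(s_t) = w . (features(s_t) - features(s_{t+1})); a drop in organ-failure
    scores yields a positive reward."""
    diff = state_t[SOFA_IDX] - state_t1[SOFA_IDX]
    return float(w @ diff)

rng = np.random.default_rng(2)
s_t, s_t1 = rng.normal(size=46), rng.normal(size=46)   # toy consecutive states
print(sofa_reward(s_t, s_t1))
```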
#### 5.3.3 Using a VAE to identify features in the sepsis environment
Finally, we explore the use of a variational auto-encoder (VAE) to learn \(\phi\) in the sepsis environment. More specifically, we use a VAE to learn a low-dimensional representation of state-action tuples, and aim to learn the set of weights that modifies this representation to form the reward function. To do this, we first learn \(\phi\) on a set of state-action tuples independent of the _training_ or _expert_ demonstrations. The input dimension to the VAE is 47 (46 state features + 1 action), and the low dimensional representation has 3 features. The VAE uses 4 linear layers for the encoder and decoder, and optimizes for a downsampled representation with low reconstruction error using Adam.
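A minimal PyTorch sketch of such a state-action VAE is shown below, matching the stated input dimension (47) and latent dimension (3); the hidden-layer widths, layer counts, and the one-step training loop are simplifying assumptions rather than the exact architecture used.

```python
# Minimal PyTorch sketch of a state-action VAE with a 47-dimensional input
# (46 state features + 1 action) and a 3-dimensional latent code. Hidden
# sizes and training details are assumptions of this sketch.
import torch
import torch.nn as nn

class StateActionVAE(nn.Module):
    def __init__(self, in_dim=47, hidden=32, latent=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    recon_err = ((recon - x) ** 2).sum(dim=1).mean()          # reconstruction error
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon_err + kl

# One optimization step on a toy batch; phi(s, a) is then the encoder mean mu,
# and the reward is a linear function of it.
model = StateActionVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 47)
recon, mu, logvar = model(x)
loss = vae_loss(x, recon, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()
```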
Once \(\phi\) is known, it can be used to generate the required datasets. To do this, we first select a set of weights \(w^{\star}\) for the expert demonstrations, and generate state-action tuples that optimize \(R(s,a)\) where \(R(s,a)=\phi(s,a)\times w^{\star}\). We repeat this procedure for several sets of uniformly selected weights \(w_{0},\cdots,w_{c}\) to generate the training dataset. Finally, we fit KD-BIRL and evaluate the learned weights using EVD as before. We report results in Table 1, and find that across a variety of \(w^{\star}\) values, KD-BIRL's posterior samples achieve EVD values that are comparable to, if not much lower than, those of AVRIL. This, coupled with the additional theoretical guarantees, makes KD-BIRL a good choice for performing IRL in complex environments.
Figure 5: **Evaluating manually curated feature-based reward in the sepsis environment**. We plot the EVD for 100 reward samples from KD-BIRL and 100 trajectories from AVRIL. KD-BIRL’s EVDs are more concentrated around 0 than those of AVRIL, indicating that the reward samples from KD-BIRL’s posterior better replicate the demonstrations from the data-generating reward.
## 6 Discussion and Conclusion
In this work, we present kernel density Bayesian inverse reinforcement learning (KD-BIRL), an efficient IRL algorithm that improves upon existing methods by estimating a posterior distribution on the reward function while avoiding \(Q\)-learning for every iteration of MCMC sampling, and by providing theoretical guarantees of posterior consistency. We show that KD-BIRL generates concentrated posteriors and is more computationally efficient than existing methods in a Gridworld environment. Additionally, we demonstrate that with a feature-based reward function, KD-BIRL can perform inference in a complex healthcare environment, and the resulting posterior outperforms a leading method. Taken together, our results suggest that, in complex environments, KD-BIRL can enable an accurate probabilistic description of clinician objectives that is not possible with current methods.
Several future directions remain. This work is best-suited for on-policy (i.e., simulation) environments, and additional work is necessary to apply it directly to off-policy environments such as retrospective clinical decision-making settings. In particular, we would need behavior demonstrations from multiple agents in order to define a training dataset. Thus, it will be necessary to be able to associate actions with an agent (i.e., clinician). Additionally, the particular choices of distance metrics and hyperparameters used in the CKDE depend on the environment and reward function parameterization; additional experimentation is required to adapt this to different environments. Furthermore, a limitation of the conventional CKDE is that it performs poorly in high dimensions. One solution is to consider a modified version of the CKDE to speed it up [19]. Another solution is to replace the CKDE with another nonparametric conditional density estimator [18, 37]. Because KD-BIRL is a framework that estimates the likelihood as a conditional density, it can be easily modified to accommodate other choices for the CKDE. Finally, in this work, we discuss efforts to re-parameterize the reward function, and it is of interest to apply this work in additional environments with continuous or infinite state spaces, such as real-world EHR.
## Acknowledgments and Disclosure of Funding
We thank Alex Chan for providing code associated with the AVRIL method. This work was funded by the Helmsley Trust grant AWD1006624, NIH NCI 5U2CCA233195, NIH NHLBI R01 HL133218, and NSF CAREER AWD1005627. BEE is on the SAB of Creyon Bio, Arrepath, and Freenome. A. Mandyam was supported in part by a Stanford Engineering Fellowship. D. Cai was supported in part by a Google Ph.D. Fellowship in Machine Learning.
|
2306.05528
|
A brief review of contrastive learning applied to astrophysics
|
Reliable tools to extract patterns from high-dimensionality spaces are
becoming more necessary as astronomical datasets increase both in volume and
complexity. Contrastive Learning is a self-supervised machine learning
algorithm that extracts informative measurements from multi-dimensional
datasets, which has become increasingly popular in the computer vision and
Machine Learning communities in recent years. To do so, it maximizes the
agreement between the information extracted from augmented versions of the same
input data, making the final representation invariant to the applied
transformations. Contrastive Learning is particularly useful in astronomy for
removing known instrumental effects and for performing supervised
classifications and regressions with a limited amount of available labels,
showing a promising avenue towards \emph{Foundation Models}. This short review
paper briefly summarizes the main concepts behind contrastive learning and
reviews the first promising applications to astronomy. We include some
practical recommendations on which applications are particularly attractive for
contrastive learning.
|
Marc Huertas-Company, Regina Sarmiento, Johan Knapen
|
2023-06-08T19:56:32Z
|
http://arxiv.org/abs/2306.05528v1
|
# A brief review of contrastive learning applied to astrophysics
###### Abstract
Reliable tools to extract patterns from high-dimensionality spaces are becoming more necessary as astronomical datasets increase both in volume and complexity. Contrastive Learning is a self-supervised machine learning algorithm that extracts informative measurements from multi-dimensional datasets, which has become increasingly popular in the computer vision and Machine Learning communities in recent years. To do so, it maximizes the agreement between the information extracted from augmented versions of the same input data, making the final representation invariant to the applied transformations. Contrastive Learning is particularly useful in astronomy for removing known instrumental effects and for performing supervised classifications and regressions with a limited amount of available labels, showing a promising avenue towards _Foundation Models_. This short review paper briefly summarizes the main concepts behind contrastive learning and reviews the first promising applications to astronomy. We include some practical recommendations on which applications are particularly attractive for contrastive learning.
keywords: methods: data analysis - methods: statistical - methods: miscellaneous - techniques: miscellaneous
## 1 Introduction
As astronomical data become larger in volume and higher in dimension, new tools are needed to visualize and extract the relevant information contained in these datasets. Although dating back to the 1950s and 1960s (see, e.g., Biehl 2022 for a comprehensive and historical overview), the field of machine learning (ML) has, over the past decade in particular, proven versatile as a statistical tool for data analysis and the deduction and prediction of trends from massive data sets (see, e.g., Huertas-Company and Lanusse 2023; Smith and Geach 2022 for recent reviews on deep learning applied to astronomy and astrophysics). Whereas supervised ML is widely used in astronomy for classification and other tasks, it is in many situations limited by the availability of labeled samples. Since most data is generally unlabeled, self- and unsupervised ML are potentially powerful tools for uncovering correlations hidden in complex data sets. Applications in astronomy are still relatively limited, however, mainly because it is generally difficult to interpret the results which can also be biased by non-physical properties of the data. In this paper, we review the use and promise of contrastive learning (CL) in astrophysics. CL is a self-supervised representation learning technique that aims to combine the power of unsupervised ML while avoiding some of its most obvious dangers. This review work assumes that the reader is familiar with basic concepts of ML and, in particular, with modern deep learning techniques.
### A brief history of representation learning
_Representation learning_ (e.g., Bengio et al. 2013) refers to the general idea of automatically learning a mapping between raw high-dimensional data and a feature space--typically but not always of smaller dimension --that efficiently captures the relevant and most informative correlations in the data. The concept of representation learning is tightly connected to those of _dimensionality reduction_ and _feature extraction_, although some subtle differences exist. Feature extraction, which is the process of extracting meaningful information from data, can be manual or automatic while representation learning generally refers to techniques with no direct human supervision. _Dimensionality reduction_ methods (e.g., Van Der Maaten et al. 2009) also find a lower-dimension representation of the data but do not necessarily offer a mapping that can be used to evaluate new data points, as opposed to representation learning. The textbooks by Bishop (2006) and Murphy (2022) provide excellent introductions to these basic concepts.
The origins of representation learning go back to principal component analysis (PCA, Pearson 1901), where a high-dimensional space can be represented by a reduced number of orthogonal eigenvectors. With the goal of mapping high-dimensional data onto a lower-dimensional space, algorithms that preserve distances like multidimensional scaling (MDS, Young 1987) were developed. Like PCA, MDS is a robust linear approach to extract features but is based on pairwise distances. While PCA finds linearly uncorrelated parameters that capture the maximum variance in the input data, MDS finds a linear decomposition that best reproduces the pairwise distances of the input space. However, these methods have poor performance when the data have a nonlinear distribution as they provide a linear decomposition of the data.
This motivated the development of methods that bypass the non-linearity of the data with an additional transformation (Kernel PCA, Scholkopf et al., 1997) or that also (or only) consider local distances (neighbours in the input space will also be neighbours in the feature space) like isomap (Tenenbaum et al., 2000), locally linear embedding (Roweis and Saul, 2000), Laplacian eigenmaps (Belkin and Niyogi, 2003), Hessian eigenmaps (Donoho and Grimes, 2003) or t-SNE (van der Maaten and Hinton, 2008). A limitation of these latter approaches is that they are unable to predict the projection of a new input object into the lower dimension space which makes them suboptimal for representation learning. More recent tools overcome this (e.g., UMAP, McInnes et al., 2018).
### Representation learning in the deep learning era
With the rapid advances in neural network architectures over the past decade, deep learning-based methods for representation learning have become common. The general idea is to use a neural network to approximate some properties of a dataset (e.g., Hinton and Salakhutdinov, 2006). In the process, the neural network is expected to learn some meaningful representations of the data. There are generally two broad types of approaches which are referred to as _generative_ and _discriminative_. They differ based on the target function the neural network is used to approximate.
Given a dataset \(X\) of high dimension - typically images or spectra in astronomy - and possibly some labels \(Y\) associated with it - a class or a physical quantity - generative models aim at estimating the probability distribution \(p(x)\) of the data \(\{x\in X\}\) - or \(p(x|y)\) if some labels \(\{y\in Y\}\) are available - by using a latent variable \(z\) of generally lower dimension than \(x\). For example, \(p(x)\) can be the joint probability distribution of the pixel values of a set of images\({}^{1}\). Once trained, \(p(x)\) can be sampled to generate new data points. However, for the purpose of representation learning, the interesting part is that in the process of learning \(p(x)\), the network encapsulates some information about the dataset in the latent variable \(z\), which can be, to some extent, interpreted as a representation of \(x\).
Footnote 1: In practice, a generative model learns \(p\left(x|z\right)\) which can then be used to approximate \(p\left(x\right)\).
Discriminative methods, on the other hand, estimate a conditional probability distribution \(p\left(y|x\right)\). As opposed to generative approaches, which can be applied with or without labels, they require a label by construction. The neural network is indeed used to approximate a non-linear mapping between \(x\) and some label \(y\). As for the generative case, in the process of learning the mapping a (lower) dimensionality projection of the data is represented in the layers of the network, which can be used as a representation space.
Discriminative methods are usually trained with supervised approaches since they require labels. In fact, any modern neural network trained for classification or regression can be used as a representation learning framework (see, for example, Walmsley et al., 2022, for an application to galaxy morphology). Generative approaches can be trained both in supervised and unsupervised mode. The most popular approaches in the astronomy literature over the past years are Variational AutoEncoders (Kingma and Welling, 2013; Rezende et al., 2014; Doersch, 2016; Higgins et al., 2017; Chen et al., 2018) or Generative Adversarial Networks (Goodfellow et al., 2014; Radford et al., 2016; Salimans et al., 2016; Arjovsky et al., 2017; Karras et al., 2019; Brock et al., 2018). Other generative models such as Neural Flows (Rezende and Mohamed, 2015; Dinh et al., 2017; Kingma et al., 2016; Papamakarios et al., 2017; Grathwohl et al., 2019; Chen et al., 2018) or Diffusion Models (Sohl-Dickstein and Weiss, 2015; Song et al., 2019; Ho et al., 2020; Grathwohl et al., 2021; Chen and Bach, 2021; Liu et al., 2021) are rapidly increasing in popularity, although they do not necessarily require dimensionality reduction.
Both approaches have pros and cons. Discriminative models are usually easy to train but require labels, which limits their applicability. Additionally, the representations learned by these methods are limited by the generality of the labels used for training. For instance, a network trained to identify foreground stars in galaxy images may not produce relevant features for studying galaxy morphology. However, Walmsley et al. (2022) showed that a model trained with a combination of labels describing galaxy morphology generalizes well to new tasks. Generative models attempt to overcome some of these issues, but at an important computational cost. Properly modeling \(p(x)\) solely for obtaining representations may be considered overkill and a waste of resources and time.
### Self-supervised learning
Self-supervised approaches try to get the best of both worlds - discriminative and generative - by adapting discriminative approaches to the case where no labels are available. They do so by creating a pretext task \(\hat{y}\), which the neural network is trained to predict. By using this trick, the neural network can be trained under a discriminative setting without the need for labels. Pretext tasks can be of different types; for example, one can predict the rotation angle of a given image. Table 1 concisely summarizes the different approaches to representation learning with neural networks and the role of self-supervised learning. In recent years, however, arguably the most successful self-supervised approaches are the so-called contrastive models, whose origins trace back to DrLIM (Chopra et al., 2005; Hadsell et al., 2006). In contrastive models, the pretext task is set to be a measurement of similarity between data points - see Section 2 for a detailed explanation. In particular, contrastive models started to attract attention a few years ago when the representations learned through contrastive learning, used in a supervised classification problem, achieved better accuracy than pure supervised training (Chen et al., 2020). Since then, there have been numerous applications and proposed improvements in the ML literature. For example, Le-Khac et al. (2020) give a nice overview of self-supervised and contrastive learning from a pure ML perspective.
Although contrastive learning is a relatively young method, there already exist a number of applications in astrophysics. This short work reviews the main ideas behind self-supervised contrastive learning techniques and how this has been applied to astrophysics so far. We also discuss the potential for future applications from a very practical point of view.
The paper proceeds as follows. In Section 2 we briefly describe the main technical aspects of a contrastive learning framework. Section 3 makes a census of the applications of contrastive learning in astronomy so far and Section 4 discusses some practical considerations regarding contrastive learning. The final Section offers a brief conclusion.
## 2 What is contrastive learning?
Contrastive learning (CL) is a self-supervised framework to learn meaningful representations from a dataset \(X\). It consists of a trainable function \(f:X\rightarrow\mathbb{R}^{n}\), parametrized by a neural network (NN), that is optimized so that the representations \(\mathbf{z}=f(\mathbf{x})\) - with \(\mathbf{z}\in\mathbb{R}^{n}\) and \(\mathbf{x}\in X\) - become invariant to different views of the same object.
The different views of the same object are identified as positive pairs (there can be more than one pair per object), while any other combination will represent a negative pair. The key difference with other representation learning methods is that the positive pairs (or neighbours) in CL do not necessarily depend on a distance definition in the input space.
The way the positive pairs are defined determines the pretext task that the NN is optimized to solve. Since the similarity is measured in the representation space, positive pairs can be different data types that represent the same objects (e.g., images and spectra of the same source in astrophysics), a cutout of the input data, or any other transformed copy of it, such that the two versions of each individual input are identified as positive pairs. This is particularly beneficial when complex noise and/or selection effects exist in the input data. In such cases, the individual data points that share semantic information may not be well represented by a linear combination of neighbors in the input space.
Modern CL has been predominantly applied to imaging data, where views are easily defined. For example, Figure 1 illustrates some common image augmentations applied to a galaxy image from the Sloan Digital Sky Survey (SDSS) DR7 (Abazajian et al., 2009): cutouts, blurring, rotation (and flip), jitter, color shifts, crop, and the addition of noise.
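As a rough illustration, the following sketch generates a positive pair by applying two independent draws of a random augmentation pipeline to the same image; the specific transformations and their parameter values are arbitrary choices for this example and would in practice be tuned to the nuisance effects one wishes to marginalize over.

```python
# Illustrative generation of a positive pair for a galaxy image: two independent
# draws of a random augmentation pipeline applied to the same cutout.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=180),
    transforms.RandomResizedCrop(size=64, scale=(0.7, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.Lambda(lambda x: x + 0.02 * torch.randn_like(x)),  # noise addition
])

def positive_pair(image):
    """Two independently augmented views of the same image form a positive pair."""
    return augment(image), augment(image)

x = torch.rand(3, 64, 64)          # a toy 3-band galaxy cutout
view_1, view_2 = positive_pair(x)
```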
In the next subsections we present the most commonly used loss functions applied to CL approaches and briefly review the most common NN architectures usually employed.
### Contrastive loss
All CL frameworks share some version of the so-called contrastive loss function, which is generally optimized such that each positive pair is projected together in the representation space \(\mathbf{z}\in\mathbb{R}^{n}\) and the negative pairs are repelled, although some of the latest proposed versions do not use negative pairs, as we discuss below.
The loss is calculated in the latent space of representations \(\mathbf{z}\in\mathbb{R}^{n}\), which is the image of the trainable function \(f\), where a pairwise distance metric \(\langle\cdot,\cdot\rangle\) is defined. Given a set of representations \(\mathbf{z}\in\mathbb{R}^{n}\), where each \(\mathbf{z}\) has one identified positive pair \(\mathbf{z}^{+}\) or more, the contrastive loss is a function that is minimized when the distance between positive pairs is minimized. The main difficulty is avoiding the trivial solution where all positive representations collapse into the same representation value. Several approaches have been proposed in the literature, which we briefly review now.
#### 2.1.1 Spring loss
Chopra et al. (2005) and Hadsell et al. (2006) define a loss function that takes the pairwise Euclidean distance as metric. This resembles a spring system, where positive pairs of \(\mathbf{z}_{i}\) are attracted, and the negative pairs that are within an \(m\)-distance of \(\mathbf{z}_{i}\) are repelled. The introduction of negative pairs prevents the collapse into a trivial solution. This loss \(L^{S}\) is defined pairwise for \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\) as
\[L^{S}_{i,j}=Y\frac{1}{2}\langle\mathbf{z}_{i},\mathbf{z}_{j}\rangle^{2}+\frac{1}{2}(1-Y)\max\left(0,m-\langle\mathbf{z}_{i},\mathbf{z}_{j}\rangle\right), \tag{1}\]
where \(Y\) is equal to one if \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\) are identified as a positive pair, or zero if not. Therefore, either the right or the left term in Eq. 1 is canceled, respectively.
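A minimal sketch of Eq. 1 for a single pair of representations could look as follows; the margin value and the representation dimension are arbitrary choices for the example.

```python
# Sketch of the pairwise spring loss of Eq. 1 for one pair of representations,
# following the form quoted above (Y = 1 for a positive pair, 0 otherwise).
import torch

def spring_loss(z_i, z_j, Y, m=1.0):
    d = torch.norm(z_i - z_j)                   # Euclidean distance between representations
    attract = 0.5 * d ** 2                      # pulls a positive pair together
    repel = 0.5 * torch.clamp(m - d, min=0.0)   # pushes a close negative pair apart
    return Y * attract + (1 - Y) * repel

z_i, z_j = torch.randn(8), torch.randn(8)
print(spring_loss(z_i, z_j, Y=1), spring_loss(z_i, z_j, Y=0))
```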
#### 2.1.2 Triplet loss
The triplet loss \(L^{T}\) (Weinberger and Saul, 2009, who use a Mahalanobis distance metric; Chechik et al., 2010, instead use a bilinear model for the similarity measure) is similar to the spring loss, but each term is calculated considering three representations: the anchor (\(i\)), a positive pair (\(i^{+}\)), and a negative pair (\(n\)), as
\[L^{T}_{i,i^{+},n}=\max\left(0,\langle\mathbf{z}_{i},\mathbf{z}^{+}_{i}\rangle-\langle\mathbf{z}_{i},\mathbf{z}_{n}\rangle+d\right). \tag{2}\]
Here, \(d\) is a hyperparameter determining the distance between positive and negative pairs. This results in a more relaxed condition on the positive pairs than the Spring loss, as the latter attracts the positive pair to the same point in the representation space while repelling the negatives. In contrast, the Triplet loss only seeks to repel the negative representations a distance \(d\) longer than the positive pairs, which favors repelling hard negatives.
#### 2.1.3 Normalized cross entropy
Modern contrastive models are based on the noise-contrastive estimation (infoNCE) or normalized temperature-scaled cross entropy (NT-Xent) losses (Oord et al., 2018; Misra and van der Maaten, 2019; Chen et al., 2020). The loss is generally defined element-wise as
\[L^{P}_{i}=-\log\frac{\exp(\langle\mathbf{z}_{i},\mathbf{z}_{i}^{+}\rangle/h)}{\sum_{k\neq i}\exp(\langle\mathbf{z}_{i},\mathbf{z}_{k}\rangle/h)}, \tag{3}\]
for the \(i\)-th object in the set, where \(k\) iterates over all the representations \(\mathbf{z}\) in the batch, and \(\langle\cdot,\cdot\rangle\) is a pairwise similarity measure, generally the cosine distance or a bilinear model. \(h\) is a normalizing factor - _temperature_ - that plays a similar role to \(m\) in the previous approach, as it determines the concentration of the representation space by weighing the pairwise distances. This loss can be understood as a multiclass loss, where each representation's correct class corresponds to its positive pair. The probability that a given object is assigned to its class over the other "wrong" classes is parameterized by the softmax function, and a cross-entropy loss is optimized.
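A minimal sketch of this loss for a batch of representations is given below; it assumes the common convention in which consecutive rows \((2k,2k+1)\) are the two views of the same object and uses the cosine similarity, both of which are choices of this example rather than requirements of Eq. 3.

```python
# Minimal sketch of the NT-Xent loss of Eq. 3 for a batch of 2N representations,
# assuming rows (2k, 2k+1) are the two views of object k.
import torch
import torch.nn.functional as F

def nt_xent(z, h=0.1):
    z = F.normalize(z, dim=1)              # cosine similarity becomes a dot product
    sim = z @ z.t() / h                    # pairwise similarities scaled by the temperature
    sim.fill_diagonal_(float('-inf'))      # exclude self-similarity from the denominator
    pos = torch.arange(z.shape[0]) ^ 1     # index of each row's positive pair: 0<->1, 2<->3, ...
    return F.cross_entropy(sim, pos)       # multiclass loss with the positive as the correct class

z = torch.randn(16, 128)                   # 8 objects x 2 views, 128-dimensional codes
print(nt_xent(z).item())
```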
As CL frameworks typically benefit from a large number of negative samples, the \(L^{P}\) losses are more efficient (faster convergence) than the \(L^{S}\) ones as every object accounts for the negative pairs of the object \(\binom{N}{2}\times N\), while in the \(L^{S}\) and \(L^{T}\), only up to \(\binom{N}{2}\) negative pairs are considered in total.
Some works have proposed losses based on mutual information (MI) maximization (Hjelm et al., 2019), where MI measures the dependence that a random variable \(W\) has with another random variable \(V\) and is defined as
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline \hline
**Approach** & \multicolumn{2}{c|}{**Discriminative**} & \multicolumn{2}{c|}{**Generative**} \\ \cline{2-5}
**Data type** & Target Function & Method & Target Function & Method \\ \hline
**Labels** & \(p(y|x)\) & All supervised networks with bottleneck & \(p(x|y)\) & Conditional generative models \\
**No Labels** & \(p(\hat{y}|x)\) & Self-supervised learning & \(p(x)\) & Generative models, Autoencoders \\ \hline \end{tabular}
\end{table}
Table 1: Approaches to representation learning with neural networks
\[I(w,v)=\mathbb{E}_{p(w,v)}\left[\log\frac{p(w,v)}{p(w)p(v)}\right]\,. \tag{4}\]
While the MI-based approaches were proposed independently, the infoNCE optimization is analogous to the MI one, as the first is equivalent to maximizing a lower bound on MI (Oord et al., 2018).
#### 2.1.4 Contrastive loss without negative pairs
More recently, Grill et al. (2020) proposed a loss that is calculated with the cosine distance between the positive pairs only and therefore departs from the standard CL setup:
\[L_{i}^{B}=2-2\frac{\langle z_{i},z_{i}^{+}\rangle}{||z_{i}||_{2}\,||z_{i}^{+}||_{2}} \tag{5}\]
To avoid collapsing to the trivial solution, they use a twin network where one branch (target) is prevented from updating its weights through back-propagation (Chen & He, 2020). Instead, the weights of the target branch are a function of those in the symmetric branch (online), which has the same architecture (see Section 2.2 for more details).
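The following sketch illustrates the two ingredients described above, a stop-gradient on the target branch and target weights that slowly follow the online weights, together with the loss of Eq. 5; the toy linear encoders and the omission of the additional predictor head used in practice are simplifications of this example.

```python
# Sketch of the positive-pairs-only loss of Eq. 5 with a stop-gradient on the
# target branch and a target network updated as a moving average of the online
# weights. The linear encoders are toy stand-ins for real CNNs.
import torch
import torch.nn as nn
import torch.nn.functional as F

online = nn.Linear(32, 16)
target = nn.Linear(32, 16)
target.load_state_dict(online.state_dict())

def byol_loss(z_online, z_target):
    z_online = F.normalize(z_online, dim=1)
    z_target = F.normalize(z_target.detach(), dim=1)   # stop-gradient on the target branch
    return (2 - 2 * (z_online * z_target).sum(dim=1)).mean()

@torch.no_grad()
def ema_update(online, target, tau=0.99):
    # Target weights follow the online weights slowly (moving average).
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1 - tau)

view1, view2 = torch.randn(8, 32), torch.randn(8, 32)   # two views of 8 objects
loss = byol_loss(online(view1), target(view2))
loss.backward()                                          # only the online branch gets gradients
ema_update(online, target)
```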
In addition to the common augmentations discussed above, one of the most attractive properties of CL is the possibility of defining custom, domain-specific augmentations that allow one to marginalize over known nuisance effects. We will discuss this in more detail in Sections 3 and 4.
### Contrastive learning architectures
The representations \(z\) are computed using NNs. Over the past years, a number of frameworks have been developed, varying the structure and hyper-parameters of the NNs to obtain better performances.
Given that CL is primarily designed for datasets without labels, the evaluation and comparison of different approaches is not always straightforward. Architectures are therefore generally tested on standardized datasets for which labels do exist. The underlying idea to validate a new approach is that if extracted representations are meaningful, then a simple supervised classifier should be able to correctly classify the data. This way, the performance of the algorithms can be tested on supervised tasks, also known as downstream tasks. Therefore, when an approach is presented as more accurate than another in the literature, it generally refers to the accuracy in the downstream supervised task.
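As an illustration of this downstream (linear) evaluation protocol, the sketch below freezes a set of representations and fits a simple linear classifier on a small labeled subset; the representations and labels are random stand-ins for real encoder outputs and annotations.

```python
# Sketch of the downstream (linear) evaluation protocol: freeze the
# self-supervised representations and fit a linear classifier on a small
# labeled subset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
reps = rng.normal(size=(5000, 128))                        # frozen CL representations
labels = (reps[:, 0] + 0.1 * rng.normal(size=5000) > 0).astype(int)

# Only a small labeled subset is used for training, mimicking the low-label regime.
X_train, X_test, y_train, y_test = train_test_split(
    reps, labels, train_size=256, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("downstream accuracy:", clf.score(X_test, y_test))
```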
Although it is outside the scope of this brief overview to cover all existing implementations of CL, in the following we summarize the major architectural designs that have resulted in significant performance improvements in downstream tasks. A schematic view of these different architectures is presented in Figure 2.
#### 2.2.1 CMC (Contrastive Multiview Coding)
Hadsell et al. (2006) stated that the representations benefit from a large number of negative examples. However, this comes at the computational cost of calculating an increasing number of loss terms and more representations in each learning step. This motivated following works to include a memory bank (Misra & van der Maaten, 2019; Wu et al., 2018; Tian et al., 2019) that would store the representations calculated in previous iterations to increase the number of negative examples. In each iteration the loss is calculated considering the newly calculated representations in the mini-batch together with \(m\) randomly sampled representations from the memory bank. This approach benefits from an adjustable number of negative examples without needing to compute more representations than those in the mini-batch.
#### 2.2.2 MoCo (Momentum Contrastive)
Other implementations replace the memory bank with a parallel network that acts as a momentum encoder (MoCo, He et al., 2019), whose weights are updated as a moving average of the main branch's weights. The momentum encoder calculates updated representations, which are later queued in a dictionary (a subset of the training set). The oldest representations in the dictionary are replaced by new ones as the network is trained. In contrast to the memory bank approach, the negative examples are sampled from the dictionary which is progressively updated. This prevents the network from using outdated representations from previous epochs as it is restricted to the representations of the immediately previous mini-batches as negative examples. He et al. (2019) found that this framework benefits from a slowly evolving momentum encoder to provide consistent representations over mini-batches and hypothesized that this is due to the rapidly changing main branch encoder.
#### 2.2.3 SimCLR (Simple Contrastive Learning)
Chen et al. (2020) propose a single Convolutional Neural Network (CNN) and no memory bank, so the negative examples are limited by the batch size. This sets a requirement on a minimum amount of memory for training. They show that this setup obtains results comparable to the MoCo framework when using large batch sizes (1024). Chen et al. (2020) also show that more parameters in the network, and more and stronger augmentations, result in more accurate learned representations. Another important result of this work is that adding a set of fully connected layers before the loss calculation yields better representations in previous layers. This feature is then included in subsequent works (Chen et al., 2020, Grill et al., 2020).
Figure 1: Common transformations used on galaxy imaging to produce positive pairs. From left to right: original \(gri\)-band composite, horizontal flip, rotation, blurring, noise addition, colour jitter, resizing and position shift.
#### 2.2.4 BYOL (Bootstrap Your Own Latent)
In the case of Grill et al. (2020) and Chen & He (2020), a set of twin networks is implemented. The networks' architectures are identical on both branches except that one branch has an additional projection function. This is the branch where the back-propagation is implemented. The differences between the two implementations shed some light on understanding how these setups work. While BYOL's target branch is progressively updated with the twin network's weights to stabilize the output, SimSiam shares the weights in both branches. BYOL incorporates a set of fully connected layers after the CNN in both branches, although Chen & He (2020) claim that these layers do not improve the representations learned. However, both approaches agree on the need to limit the back-propagation to the online branch to avoid collapsing to a trivial solution, as the loss is calculated only with positive pairs (see subsection 2.1).
#### 2.2.5 CLIP (Contrastive Language-Image Pre-training)
Recent architectures have started to use different types of networks for feature extraction, which are optimized for different data types. This allows for the combination of, for example, text and images. For example, Radford et al. (2021) train language and image encoders with a contrastive learning model to obtain representations of both inputs. They show that introducing descriptive text of an image in the CL setting produces better generalization and improves zero-shot learning.
## 3 Applications of contrastive learning in astronomy
Table 2 summarizes applications of deep CL to astrophysics so far. The first publications are from 2021, which illustrates the fact that the success of CL is relatively recent. Interestingly, the majority of the applications are in the field of galaxy formation. A possible explanation could be that there are large imaging datasets publicly and easily accessible.
Overall, CL is used either for pure data exploration - in which case it is usually followed by some sort of clustering - or to perform a downstream supervised classification or regression / inference using the representations obtained.
### Inference from representations
#### 3.1.1 Classification with limited amount of labels
Hayat et al. (2021) first applied the CL framework MoCo (see Figure 2 and Subsection 2.2) to a set of images from the SDSS I and II. They used the learned representations to perform two downstream tasks: galaxy morphology classification and photometric redshift estimation, as well as for data visualization. The work illustrated two main advantages of self-supervised CL. First, the authors showed the importance of using domain knowledge to obtain more robust representations. For example, they introduced custom augmentation to marginalize over reddening. As discussed in Section 2, CL easily allows for new augmentations to be included, making it a very flexible framework for representation learning. Second, Hayat et al. (2021) showed that when the representations are used for galaxy classification and photometric redshift estimation, one can achieve similar accuracies as with a pure supervised approach but with an order of magnitude fewer objects (Figure 3). This confirms that contrastive learning is helpful for reducing the volume of labeled datasets in astrophysics, as was demonstrated with natural images.
A similar conclusion is reached by Slijepcevic et al. (2022) in an application to radio galaxy classification. By using the BYOL framework (see Figure 2 and subsection 2.2) they show that a classification based on the representations achieves comparable accuracy to a complete supervised approach. Landouar et al. (2022) explore the accuracy of CL for classifying time series of solar magnetic field measurements. They show again that CL is an efficient way of obtaining high classification accuracies when only limited labeled data is available (see also Mercea et al. (2023) for an extensive analysis of CL representations to classify seismic emissions in the solar surface).
#### 3.1.2 Domain adaptation
Wei et al. (2022) also apply CL followed by a morphological classification downstream task, reaching similar conclusions, i.e., accuracies comparable to supervised classifications are reached but with a small number of labels. Interestingly, they also show that the representations extracted with self-supervised learning generalize well to multiple imaging datasets. The concept of generalization to multiple tasks (i.e., domain adaptation and generalization) is an important property of CL. Domain adaptation with NNs generally refers to techniques used to improve the performance of a NN model trained on a source domain when applied to a different but related unlabeled target domain, by reducing the discrepancy between the two domains (see Li et al., 2021 for a review of the topic). Walmsley et al. (2022) explore this in more detail for galaxy morphology applications. They use, in particular, what they call a _hybrid_ contrastive-supervised approach to perform galaxy classifications from a set of visually classified galaxy images. This is performed by adding a supervised term to the BYOL framework, allowing classifications to be performed while extracting representations that remain invariant to perturbations. They show that the representations learned by the hybrid approach can be efficiently fine-tuned with a few labels to perform supervised tasks for which no labels were provided. They apply this to find new ring galaxy candidates (Figure 4). In a more recent work, Ciprijanovic et al. (2023) also explore the concept of a _universal domain adaptation_ approach for galaxy morphology. Instead of using a pure CL framework like the ones discussed in this review, they propose a custom approach based on an Adaptive Clustering loss (Saito et al., 2020; Li et al., 2021) on the latent representation. They test their approach by transferring a trained model from SDSS to the Dark Energy Camera Legacy Survey (DECaLS; Dey et al., 2019).
#### 3.1.3 Probabilistic inference
In addition to classification, CL representations can be employed as summary statistics for probabilistic inference. This is the case for example of the work by Shen et al. (2022) in which the CL representations are coupled to a Neural Density Estimator (Neural Flow) to estimate an approximate posterior distribution of physical properties of black hole mergers given the gravitational wave emission. In this case, using CL as conditioning for the Neural Flow has the advantage of improving the inference robustness to noise since the augmentations enable a marginalization over S/N.
### Data visualization and clustering
Another set of applications is oriented towards data visualization and exploration. In these cases, a specific downstream task is not sought, but the main purpose of the representations is to explore and find patterns in the data. In that respect, it is closer to a purely unsupervised application.
Sarmiento et al. (2021), for example, use SimCLR (see Figure 2 and Subsection 2.2) to extract meaningful representations of inferred stellar population and kinematic maps for \(\sim 10,000\) galaxies in the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA; Bundy et al., 2015; Abdurro'uf et al., 2022) survey. They show that the contrastive framework naturally orders galaxies based on their physical properties without supervision and _rediscovers_ some well-known relations with a purely data-driven approach. Interestingly, they also find that, for that particular case, other representation learning strategies such as PCA fail to extract physical properties and focus more on instrumental effects. The flexibility of contrastive learning to include custom domain-driven augmentations is key to obtaining more physical representations (Figure 5). More recently, Vega-Ferrero et al. (2023) have applied a similar approach to images of galaxies from the Cosmic Evolution Early Release Science survey (CEERS; Finkelstein et al., 2022). By introducing an augmentation that goes from a noiseless to a noisy version of images, they show that the CL representations offer a morphological description that becomes robust to noise, a property lacking in most previous morphology classifications.
Guo et al. (2022) use a similar contrastive-learning (BYOL) followed by clustering applied to galaxy images in the far-infrared from the Wide-field Infrared Survey Explorer (WISE) survey (Wright et al., 2010). They demonstrate that the method successfully organizes objects based on similarity and discuss the properties for the different obtained clusters. The work does not, however, discuss the advantages of CL over other representation learning settings in this particular case.
### Similarity search and anomaly detection
Two straightforward applications of robust representations are similarity search and anomaly detection. Given that CL representations, by construction, are similar for objects with similar properties - marginalized over the nuisance properties encoded in the augmentations - one can query the representations to look for similar data or for isolated objects. Stein et al. (2021) demonstrate efficient similarity search using CL representations, as illustrated in Figure 6 where the model is queried to identify galaxy images with similar properties.
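A minimal sketch of such a similarity search, ranking a bank of representations by cosine similarity to a query object, is given below; the representations are random stand-ins for real encoder outputs.

```python
# Sketch of a similarity search in representation space: rank a bank of CL
# representations by cosine similarity to a query and return the closest objects.
import numpy as np

rng = np.random.default_rng(3)
bank = rng.normal(size=(10000, 128))                  # representations of a survey
bank /= np.linalg.norm(bank, axis=1, keepdims=True)   # unit-normalize once

def most_similar(query, bank, k=5):
    query = query / np.linalg.norm(query)
    scores = bank @ query                             # cosine similarities
    return np.argsort(-scores)[:k]                    # indices of the k most similar objects

print(most_similar(bank[42], bank, k=5))              # the query itself ranks first
```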
### Other applications
Recent work by Doorenbos et al. (2022) uses CL for a different purpose: generating galaxy spectra from images. This illustrates how meaningful representations can be used for a wide range of applications and shows a possible evolution for the near future. The authors use a conditional diffusion model (e.g., Chen et al., 2021) to generate possible candidates of spectra given an image of a galaxy. Diffusion models are a class of generative models that learn the dynamics of how a distribution evolves over time by iteratively applying a diffusion process to a simple initial distribution, making them well-suited for tasks such as density estimation, generative modeling, and image synthesis (see the introduction section for more references on this type of generative models).
Figure 2: A reduced number of frameworks are shown in a simplified scheme to exemplify the variations of the Contrastive Learning framework zoo. On the left, Contrastive Multiview Coding (CMC, Tian et al., 2019) stores the representations computed in previous mini-batches in a memory bank to increase the number of negative examples. Although CMC proposes a twin network for comparing distinct versions of the input data, we represent in the figure only one branch to exemplify the memory bank setup. Second from the left, Momentum Contrastive (MoCo, He et al., 2019) has a parallel network that provides a queue of updated representations of negative examples. In the centre, Simple Contrastive Learning of visual Representations (SimCLR, Chen et al., 2020) uses a unique CNN with a projection head (\(h\)) to compute the loss and extracts the final representations from the layer before \(h\) after training. Second from the right, Boostrap Your Own Latent (BYOL, Grill et al., 2020) uses a parallel network to compute with the stop gradient operation on one branch to compute a positive pairs-only contrastive loss. On the right, Contrastive Language Image Pre-training (CLIP, Radford et al., 2021) trains simultaneously language and visual encoders to generate representations that match both domains.
Then they use CL to find common representations of real spectra and their corresponding images. Among all candidates resulting from the sampling of the diffusion model, the best candidate is selected by identifying the spectrum producing the closest CL representation to the target image (Figure 7). In addition to the specific application, which is potentially interesting for data exploration, this work illustrates the unique capability of CL to obtain simultaneous representations of heterogeneous data such as spectra and images.
## 4 Discussion
CL has emerged as an efficient approach for representation learning over the past years with some successful applications in astronomy as described in the previous section. We discuss in the following some practical considerations for non-expert readers interested in using CL methods for other astrophysical problems.
### What is contrastive learning useful for?
CL has been shown to be primarily a promising way to address the issue of the lack of labeled data for supervised classification and inference.
Over the past few years, deep learning applications have become more frequent in astronomy, with supervised approaches still being the most common (e.g., Huertas-Company and Lanusse, 2023). One of the main bottlenecks of such applications is the availability of labeled data. The vast majority of astronomical data is unlabeled, limiting the applicability of ML for classification or inference. In this context, the community needs methods that can work on small datasets when only a limited number of labels is available or that can generalize well across domains and, therefore, be applied to different datasets (see e.g., Ciprijanovic et al., 2023). From the few works exploring CL in astronomy, it appears as a promising approach to at least partially solve some of these issues. For instance, Hayat et al. (2021) have shown that CL can reduce the number of required labels for supervised classification and regression by an order of magnitude. Along the same lines, Walmsley et al. (2022) have shown that adding a CL component seems to be a promising avenue towards general "foundation" models for morphology classification, i.e., models that can be applied to a variety of different datasets.
Figure 3: Application of contrastive learning (MoCo) to galaxy morphology classification by Hayat et al. (2021). The three columns show the classification performance in a supervised setting (left), a linear classifier directly on the self-supervised representations (center), and when fine-tuning the self-supervised encoder for a few epochs (right). \(\eta\) measures the outlier fraction. While a pure supervised setting fails at providing meaningful morphology estimations with 256 examples (top left) a supervised model trained on the representations obtains significantly more accurate results (top middle).
In addition to being a feature extraction method for supervised downstream tasks, CL can also be used alone as a visualization/exploratory tool (see e.g., Sarmiento et al., 2021; Stein et al., 2021). Although there are various techniques for representation learning (see the introduction) and for transferring trained models (see e.g., Dominguez Sanchez et al., 2019; Ciprijanovic et al., 2020), CL has unique properties that make it particularly useful for astrophysics.
Arguably, the main advantage of CL is that, unlike other approaches, it does not have a predefined encoded metric, allowing it to be adapted for every application using domain knowledge. The similarity pretext task that CL solves is determined by the augmentations, which defines the positive pairs. In that respect, although there are some standard augmentations that can be used for imaging applications (see Figure 1), CL has room for domain-specific augmentations. This allows one to introduce domain knowledge in the representations without explicitly defining a metric. This is particularly relevant for astrophysics, in which there usually exist selection and instrumental effects that are known and fairly easy to model, but can potentially bias other representation learning algorithms with a fixed metric. CL enables a relatively straightforward marginalization over these nuisance parameters. Several works have already shown the advantages of using domain-specific augmentation. Mercea et al. (2023) discuss the use of different transformations on regression power maps derived from solar photospheric dopplergrams. They use time-based mixing (stacking of subsequent maps), a low-pass filter followed by solarization, and random erasing of signals. Sarmiento et al. (2021) use specific observational effects from the MaNGA survey, while Abul Hayat et al. (2021) include reddening as part of the augmentation sets. Generally speaking, any dataset affected by known nuisance effects that can be modeled might potentially be a good candidate for application of CL.
Another important consequence of the flexibility of CL for defining the similarity metric is that it opens the door for obtaining representations of different data types. There are a number of recent successful applications of CL in the ML community combining speech and images, for example (e.g., Ramesh et al., 2021). In astronomy, it is potentially an interesting avenue to extract simultaneous information from images and spectra, a task that is difficult to achieve with other approaches. The first work to have explored this, to our knowledge, is the one by Doorenbos et al. (2022), with encouraging results that open new, interesting research avenues.
### What is contrastive learning not useful for?
It is important to emphasize that CL is not particularly efficient as a dimensionality reduction algorithm. In fact, several works have shown that increasing the dimensionality of the representation space generally results in an improved performance (e.g., Chen et al., 2020; Sarmiento et al., 2021). As a result, the obtained representations are still of fairly high dimension and not easy to visualize. This is why several authors have coupled the resulting representations with a pure dimensionality reduction algorithm for visualization purposes (see e.g. Hayat et al., 2021).
| Article | Astronomy field | Data | Based on | Downstream task |
| --- | --- | --- | --- | --- |
| Hayat et al. (2021) | Galaxies | Photometric Imaging (SDSS) | MoCo (He et al., 2019) | Regression (photo-z, morphology) |
| Sarmiento et al. (2021) | Galaxies | Integral Field Spectroscopy (MaNGA) | SimCLR (Chen et al., 2020a) | Clustering |
| Stein et al. (2021) | Galaxies | Photometric Imaging (DESI Legacy Imaging Survey) | MoCo (He et al., 2019) | Similarity search |
| Walmsley et al. (2022b) | Galaxies | Photometric Imaging | BYOL loss (Grill et al., 2020) | Transfer Learning |
| Slijepcevic et al. (2022) | Galaxies | | BYOL (Grill et al., 2020) | Morphology classification of radio |
| Wei et al. (2022) | Galaxies | Photometric Imaging (SDSS) | SimCLR (Chen et al., 2020a) | Morphology classification |
| Shen et al. (2022) | Gravitational Waves | Surrogate waveform simulations + LIGO | SimCLR, NT-Xent loss (Chen et al., 2020a) | Regression (masses, spin, quasinormal modes) |
| Doorenbos et al. (2022) | Galaxies | Photometric Imaging and Spectra (SDSS) | SimCLR, NT-Xent loss (Chen et al., 2020a) | Predict spectra from images (combined with multimodal conditional diffusion models) |
| LAMouar et al. (2022) | Solar | | Triplet loss | |
| Guo et al. (2022) | Galaxies | Photometric Imaging (WISE) | BYOL (Grill et al., 2020) | Clustering |
| Mercea et al. (2023) | Solar | Photospheric Dopplergram maps (Michelson Doppler Imager) | SimCLR & SupCon (Chen et al., 2020a) | Classification (sunquake detection) |
| Vega-Ferrero et al. (2023) | Galaxies | Photometric Imaging (JWST and TNG50 simulations) | SimCLR (Chen et al., 2020a) | Domain adaptation and Clustering |
| Ciprijanovic et al. (2023) | Galaxies | Photometric Imaging (SDSS and DECALS) | Adaptive Clustering | Domain adaptation and Clustering |

Table 2: CL applied to astrophysics
pure dimensionality reduction algorithm for visualization purposes (see e.g., Hayat et al., 2021). Therefore, CL is not the approach to use for inferring the true dimensionality of a dataset. In fact, unlike PCA, no constraints are imposed on the obtained representations, and the different dimensions of the representation are in general highly correlated.
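For example, a common pattern (following the works above in spirit, not in detail) is to project the high-dimensional representations to two dimensions purely for visual exploration. The representations below are a random placeholder for the output of a trained contrastive encoder.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder for representations extracted by a trained contrastive encoder.
representations = np.random.randn(2000, 128)

# 2-D projection used only for visualization, not for inference.
embedding_2d = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(representations)

plt.scatter(embedding_2d[:, 0], embedding_2d[:, 1], s=2)
plt.title("2-D projection of 128-D contrastive representations")
plt.show()
```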
This also makes the extracted features more challenging to interpret than those of other representation learning algorithms. Both the high dimensionality and the correlation between features significantly complicate the task of physically interpreting the extracted representations. CL is therefore probably not the preferred approach for gaining physical insight by exploring the representations. However, CL frameworks can be coupled with other methods to increase interpretability. One could also potentially add regularization terms to the loss function to penalize strong correlations in the latent space.
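A quick way to inspect (and, if desired, penalize) this redundancy is sketched below. The decorrelation term is a generic off-diagonal covariance penalty written here for illustration; it is not a loss used in the cited works, and the representations are again a random placeholder.

```python
import numpy as np

representations = np.random.randn(2000, 128)   # placeholder CL representations

# Mean absolute off-diagonal correlation between representation dimensions.
corr = np.corrcoef(representations, rowvar=False)
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print("mean |correlation| between dimensions:", np.abs(off_diag).mean())

# A possible regularizer: penalize off-diagonal entries of the covariance
# of the standardized representations within a training batch.
z = (representations - representations.mean(0)) / representations.std(0)
cov = (z.T @ z) / len(z)
decorrelation_penalty = np.sum(cov**2) - np.sum(np.diag(cov)**2)
print("decorrelation penalty:", decorrelation_penalty)
```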
## 5 Summary and Conclusions
This work briefly reviews contrastive methods for representation learning and their application to astrophysics. CL is a self-supervised learning method that obtains meaningful representations of high-dimensional datasets by solving a similarity pretext task. Compared to other representation learning techniques, the similarity metric is not fixed in advance but is defined through augmentations of the input dataset. By defining _positive pairs_ as the set of all augmented versions of the same data point, the representations become invariant to such transformations.
We review a dozen existing works using CL in astrophysics, the vast majority in the field of galaxy formation. Applications are roughly divided into two main families: downstream regression, inference, and classification; and data exploration. From these works, we conclude that CL is a powerful method for training supervised algorithms with a limited number of labels. The flexibility of the augmentations, which allows domain-specific perturbations, makes CL a suitable method for marginalizing over known instrumental or other nuisance effects that can be modeled in astronomy. An interesting avenue for the future, which is also enabled by CL and has just started to be explored in astrophysics, is the combination of different data types such as imaging and spectra.
## Acknowledgements
The authors acknowledge financial support from the State Research Agency (AEIMCINN) of the Spanish Ministry of Science and Innovation under the grant "Galaxy Evolution with Artificial Intelligence" with reference PGC2018-100852-A-I00 and under the grant "The structure and evolution of galaxies and their central regions" with reference PID2019-105602GB-I00/10.13039/501100011033, from the ACIIISI, Consejeria de Economia, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) under grants with reference PROID2020010057 and PROID2021010044, and from IAC projects P/300724 and P/301802, financed by the Ministry of Science and Innovation, through the State Budget and by the Canary Islands Department of Economy, Knowledge and Employment, through the Regional Budget of the Autonomous Community.
## Data Availability
No data are used in this review work.
|
2302.07810
|
Cartesian Gray-Monoidal Double Categories
|
In this paper we present cartesian structure for symmetric Gray-monoidal
double categories. To do this we first introduce locally cubical Gray
categories, which are three-dimensional categorical structures analogous to
classical, locally globular, Gray categories. The motivating example comprises
double categories themselves, together with their functors, transformations,
and modifications. A one-object locally cubical Gray category is a
Gray-monoidal double category. Braiding, syllepsis, and symmetry for these is
introduced in a manner analogous to that for 2-categories. Adding cartesian
structure requires the introduction of doubly-lax functors of double categories
to manage the order of copies. The resulting theory is algebraically rather
complex, largely due to the bureaucracy of linearizing higher-dimensional
boundary constraints. Fortunately, it has a relatively simple and compelling
representation in the graphical calculus of surface diagrams, which we present.
|
Edward Morehouse
|
2023-02-15T17:48:58Z
|
http://arxiv.org/abs/2302.07810v2
|
# Cartesian Gray-Monoidal Double Categories
###### Abstract
In this paper we present cartesian structure for symmetric Gray-monoidal double categories. To do this we first introduce locally cubical Gray categories, which are three-dimensional categorical structures analogous to classical, locally globular, Gray categories. The motivating example comprises double categories themselves, together with their functors, transformations, and modifications. A one-object locally cubical Gray category is a Gray-monoidal double category. Braiding, syllepsis, and symmetry for these is introduced in a manner analogous to that for \(2\)-categories. Adding cartesian structure requires the introduction of doubly-lax functors of double categories to manage the order of copies. The resulting theory is algebraically rather complex, largely due to the bureaucracy of linearizing higher-dimensional boundary constraints. Fortunately, it has a relatively simple and compelling representation in the graphical calculus of surface diagrams, which we present.
## 1 Introduction
A monoidal extension of a categorical structure allows us to combine together multiple things as a single thing. We can regard this as adding new structure to do the combining in a coherent way. Alternatively, we can view a monoidal extension of a categorical structure as a one-object instance of another categorical structure with an additional dimension, where the monoidal product of things in the original structure corresponds to composition in the loop space of endomorphisms of the extended one.
In the classical case such monoidal structure is _dimension-maximizing_, in the sense that tensoring an \(m\)-cell with an \(n\)-cell yields a \(\max(m\,,\,n)\)-cell. The setting of premonoidal categories gives a glimpse of another possibility. There, the tensor product of \(0\)-cells yields a \(0\)-cell, but to make a \(1\)-cell we tensor either a \(1\)-cell with a \(0\)-cell or a \(0\)-cell with a \(1\)-cell. We can think of such a monoidal structure as _dimension-summing_. In a premonoidal category we don't obtain a piece of structure by tensoring two \(1\)-cells \(f:\mathrm{A}\to\mathrm{B}\) and \(p:\mathrm{X}\to\mathrm{Y}\). However, we may chose to obtain a property, in the form of an _interchange law_, \((f\otimes\mathrm{X})\cdot(\mathrm{B}\otimes p)=(\mathrm{A}\otimes p)\cdot(f \otimes\mathrm{Y})\), which acts as a relation on the \(1\)-dimensional boundary of a forbidden \(2\)-cell.
One dimension up, there are more possibilities. With a \(2\)-category we may obtain a \(0\)-cell by tensoring two \(0\)-cells (\(0=0+0\)), a \(1\)-cell by tensoring a \(1\)-cell and \(0\)-cell or a \(0\)-cell and \(1\)-cell (\(1=1+0=0+1\)), and a \(2\)-cell by tensoring a \(2\)-cell and \(0\)-cell or a \(0\)-cell and \(2\)-cell (\(2=2+0=0+2\)). But we also have another possibility: a \(2\)-cell obtained by tensoring two \(1\)-cells (\(2=1+1\)). This categorifies the property of interchange
into the structure of an interchanger, \(\chi_{(f,p)}\) : \((\operatorname{A}\otimes\operatorname{X}\to\operatorname{B}\otimes \operatorname{Y})\left((f\otimes\operatorname{X})\cdot(\operatorname{B}\otimes p )\to(\operatorname{A}\otimes p)\cdot(f\otimes\operatorname{Y})\right)\), or its inverse.
Gray-monoidal structure for \(2\)-categories refines classical monoidal structure by requiring a choice of ordering on the tensor of \(2\)-cells. We can't tensor two \(2\)-cells \(\varphi:\,(\operatorname{A}\to\operatorname{B})\left(f\to g\right)\) and \(\psi:\,(\operatorname{X}\to\operatorname{Y})\left(p\to q\right)\) directly because the dimension would be too high. But we can tensor the first \(2\)-cell with the \(0\)-cell domain of the second \(2\)-cell, \(\varphi\otimes\operatorname{X}:\,(\operatorname{A}\otimes\operatorname{X} \to\operatorname{B}\otimes\operatorname{X})\left(f\otimes\operatorname{X} \to g\otimes\operatorname{X}\right)\), and tensor the \(0\)-cell codomain of first \(2\)-cell with the second \(2\)-cell, \(\operatorname{B}\otimes\psi:\,(\operatorname{B}\otimes\operatorname{X}\to \operatorname{B}\otimes\operatorname{Y})\left(\operatorname{B}\otimes p\to \operatorname{B}\otimes q\right)\), or vice-versa. This yields a horizontally consecutive pair of \(2\)-cells that we can compose, in this case along \(\operatorname{B}\otimes\operatorname{X}\). A choice of ordering for the tensor of \(2\)-cells determines an ordering on the \(1\)-cells in the boundary of the resulting composite, which can be manipulated by vertical composition with interchangers.
This dimension-summing monoidal structure on \(2\)-categories was studied by Gray, after whom it has been named. The Gray-monoidal structure on \(2\)-categories can be used to define \(3\)-dimensional, locally globular, _Gray categories_. Gray categories are significant in that they are the algebraic structure comprising \(2\)-categories themselves, together with a hierarchy of their morphisms [1]. Moreover, every fully weak tricategory is equivalent to one [1]. Once we have Gray categories we can recognize Gray-monoidal \(2\)-categories as their one-object instances. Like ordinary monoidal \(2\)-categories, Gray-monoidal \(2\)-categories can be given a symmetric braiding structure, which was developed by Kapranov and Voevodsky [12], Baez and Neuchl [1], Day and Street [2], and Crans [1].
Double categories are \(2\)-dimensional categories of cubical shape, in the sense that there are two independent dimensions in which \(1\)-cells may extend, and a \(2\)-cell is a square bounded by a pair of each sort of \(1\)-cell. We may impose Gray-monoidal structure on these as well. However, now we have four sorts of \(1\)-cell interchanger instead of just one. Gray-monoidal structure on double categories has been investigated by Bohm [1]. We use this structure to define a locally cubical analogue of Gray categories, where the homs are double categories rather than \(2\)-categories.
These _locally cubical Gray categories_ are significant in that they are the algebraic structure comprising double categories themselves, together with a hierarchy of their morphisms. Moreover, they generalize classical, locally globular, Gray categories in the sense that the latter may be seen as instances of the former that are discrete in one dimension. One-object locally cubical Gray categories are _Gray-monoidal double categories_, which we equip with a symmetric braiding structure in a manner similar to that for \(2\)-categories.
A symmetric monoidal structure is _cartesian_ if it allows us to uniformly duplicate and delete things, as was observed in the case of \(1\)-categories by Fox [1]. It may seem counterintuitive that we can have cartesian structure in the context of the dimension-summing Gray-monoidal product. But in fact, in this setting we can still duplicate things provided that we impose and maintain an order on the copies. Consequently, the relationship between one copy and two is no longer strictly functorial, nor is it invariant under swapping the copies, as in the case of the ordinary monoidal product. We propose a notion of (vertical) cartesian structure for Gray-monoidal double categories that is compatible with these constraints.
The plan of the paper is as follows. In section 2 we review the standard definitions of double categories and their hierarchy of morphisms. This serves to introduce the constructions from the double-categorical literature that we need along with the graphical
calculus of surface diagrams that we will use to represent and reason about them. We observe that the hom determined by a given pair of double categories itself has the structure of a double category, and moreover that elements of consecutive homs can be composed.
In section 3 we give an algebraic presentation of the structure formed by double categories, functors, transformations, and modifications, which we call a _locally cubical Gray category_. In fact, there is a family of such structures parameterized by the variance of homogeneous interchangers. Locally cubical Gray categories are 3-dimensional categorical structures that are cubical in two dimensions and globular in the third. They constitute the cubical generalization of classical, locally globular, Gray categories.
In section 4 we introduce one-object instances of locally cubical Gray categories, which we call _Gray-monoidal double categories_, and find them to be essentially Bohm's _double categorical Gray-monoids_. We extend the monoidal structure with braiding, syllepsis, and symmetry in a manner similar to that in the globular case, as developed by Kapranov and Voevodsky, Baez and Neuchl, Day and Street, and Crans.
In section 5 we make our symmetric Gray-monoidal double categories cartesian by equipping them with duplication and deletion structure. In order to make duplication compatible with composition we need the notion of a doubly-lax functor of double categories, whose comparison cells collate copies. We also find that we need multiple duplicator maps in order to account for the different orders in which copies can occur.
## 2 Double Categories
A double category is a 2-dimensional categorical structure of cubical shape. Strict double categories were introduced by Ehresmann [1]. The weak form considered here has been studied extensively by Grandis and Pare in a sequence of articles beginning with [1], many of the results of which are collected in the book [1]. The 2-dimensional representation of constructions in double categories using string diagrams was explored by Myers [21].
We can characterize a double category as a weak category internal to the 2-category of (suitably small) categories, functors, and natural transformations.
**Definition 2.1** (double category): A (weak) _double category_ \(\mathbb{D}\) consists of ordinary categories \(\mathbb{D}_{0}\) and \(\mathbb{D}_{1}\) together with boundary functors \(\mathrm{L},\mathrm{R}:\mathbb{D}_{1}\to\mathbb{D}_{0}\), an identity functor \(\mathrm{U}:\mathbb{D}_{0}\to\mathbb{D}_{1}\), and a composition functor \(-\odot-:\mathbb{D}_{1}\times_{\mathbb{D}_{0}}\mathbb{D}_{1}\to\mathbb{D}_{1}\),
where \((\pi_{0}\,,\pi_{1})\) is a pullback of \((\mathrm{R}\,,\mathrm{L})\), and such that
**identity boundaries:**\(\mathrm{U}\cdot\mathrm{L}=\mathrm{id}\,\mathbb{D}_{0}=\mathrm{U}\cdot\mathrm{R}\)
**composition boundaries:**\(-\odot-\cdot\mathrm{L}=\pi_{0}\cdot\mathrm{L}\) and \(-\odot-\cdot\mathrm{R}=\pi_{1}\cdot\mathrm{R}\)
together with coherent natural isomorphisms with the following components
**unitors:**\(\lambda(\mathrm{M}):\mathrm{U}(\mathrm{LM})\odot\mathrm{M}\to\mathrm{M}\) and \(\rho(\mathrm{M}):\mathrm{M}\odot\mathrm{U}(\mathrm{RM})\to\mathrm{M}\)
**associator:**: \(\kappa(\mathrm{M}\,,\,\mathrm{N}\,,\,\mathrm{P}):(\mathrm{M}\odot\mathrm{N})\odot \mathrm{P}\rightarrow\mathrm{M}\odot(\mathrm{N}\odot\mathrm{P})\)
We call objects of \(\mathbb{D}_{0}\)_objects_ or \(0\)_-cells_ of the double category \(\mathbb{D}\), morphisms of \(\mathbb{D}_{0}\) its _arrows_ or _vertical \(1\)-cells_, objects of \(\mathbb{D}_{1}\) its _proarrows_ or _horizontal \(1\)-cells_, and morphisms of \(\mathbb{D}_{1}\) its _squares_ or \(2\)_-cells_.
The functors \(\mathrm{L}\) and \(\mathrm{R}\) pick out the "left" and "right" boundary objects of a proarrow, and arrows of a square, respectively. For a proarrow \(\mathrm{M}:\mathbb{D}_{1}\), we write "\(\mathrm{M}:\mathrm{A}\rightarrow\mathrm{B}\)" to indicate that \(\mathrm{L}(\mathrm{M})=\mathrm{A}\) and \(\mathrm{R}(\mathrm{M})=\mathrm{B}\). The functor \(\mathrm{U}\) gives the _identity_ proarrow on an object, and square on an arrow, and \(-\odot-\) gives the _composite_ of consecutive proarrows, and of squares in the proarrow dimension. For composition in the (strict) arrow dimension we use our generic composition notation "\(-\cdot-\)" with units "id". We write all compositions in left-to-right order.
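For instance, for consecutive proarrows \(\mathrm{M}:\mathrm{A}\rightarrow\mathrm{B}\) and \(\mathrm{N}:\mathrm{B}\rightarrow\mathrm{C}\) (consecutive in the sense that \(\mathrm{R}(\mathrm{M})=\mathrm{L}(\mathrm{N})\)), the composition-boundary equations give
\[\mathrm{L}(\mathrm{M}\odot\mathrm{N})=\mathrm{L}(\mathrm{M})=\mathrm{A}\quad\text{and}\quad\mathrm{R}(\mathrm{M}\odot\mathrm{N})=\mathrm{R}(\mathrm{N})=\mathrm{C},\]
so the composite is a proarrow \(\mathrm{M}\odot\mathrm{N}:\mathrm{A}\rightarrow\mathrm{C}\), as expected.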
The coherence of the associator and unitors can be characterized in the same way as for bicategories, namely by Mac Lane's associator coherence "pentagon equation" relating terms of type \(((\mathrm{L}\odot\mathrm{M})\odot\mathrm{N})\odot\mathrm{P}\rightarrow\mathrm{ L}\odot(\mathrm{M}\odot(\mathrm{N}\odot\mathrm{P}))\) and middle unit coherence "triangle equation" relating those of type \((\mathrm{M}\odot\mathrm{U})\odot\mathrm{N}\rightarrow\mathrm{M}\odot\mathrm{ N}\)[10].
If the unitor natural isomorphisms are identities then the double category is called _unitary_. If the associator is an identity as well then it is _strict_. In the following we assume our double categories to be at least strict for identity proarrows, in the sense that \(\mathrm{UA}\odot\mathrm{UA}=\mathrm{UA}\) and \(\lambda(\mathrm{UA})=\mathrm{id}(\mathrm{UA})=\rho(\mathrm{UA})\) (by coherence the transitive equality is true in any double category). By the triangle equation this implies \(\kappa(\mathrm{UA}\,,\,\mathrm{UA}\,,\,\mathrm{UA})=\mathrm{id}(\mathrm{UA})\) as well. Such double categories are sometimes called _preunitary_ [1].
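Spelling this out: instantiating the triangle equation, in its usual form \(\kappa(\mathrm{M}\,,\mathrm{UB}\,,\mathrm{N})\cdot(\mathrm{id}\,\mathrm{M}\odot\lambda\mathrm{N})=\rho\mathrm{M}\odot\mathrm{id}\,\mathrm{N}\), at \(\mathrm{M}=\mathrm{N}=\mathrm{UA}\) and using \(\mathrm{UA}\odot\mathrm{UA}=\mathrm{UA}\) together with \(\lambda(\mathrm{UA})=\mathrm{id}(\mathrm{UA})=\rho(\mathrm{UA})\) gives
\[\kappa(\mathrm{UA}\,,\mathrm{UA}\,,\mathrm{UA})\cdot\mathrm{id}(\mathrm{UA})=\mathrm{id}(\mathrm{UA})\odot\mathrm{id}(\mathrm{UA})=\mathrm{id}(\mathrm{UA}),\]
whence \(\kappa(\mathrm{UA}\,,\mathrm{UA}\,,\mathrm{UA})=\mathrm{id}(\mathrm{UA})\).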
We write "\(\mathrm{M}_{f}\Diamond_{\mathrm{N}}^{g}\)" for the configuration of morphisms given by arrows \(f:\mathrm{A}\rightarrow\mathrm{C}\) and \(g:\mathrm{B}\rightarrow\mathrm{D}\) and proarrows \(\mathrm{M}:\mathrm{A}\rightarrow\mathrm{B}\) and \(\mathrm{N}:\mathrm{C}\rightarrow\mathrm{D}\). A square with this boundary, \(\alpha:\mathrm{M}_{f}\Diamond_{\mathrm{N}}^{g}\), can be depicted as the following string diagram, where our convention is to draw the proarrow dimension horizontally and the arrow dimension vertically. The point representing \(\alpha\) has been "fattened up" to a bead to facilitate labeling.
(2.1)
Composition of squares is depicted as pasting along a compatible shared boundary morphism in the appropriate dimension,
(2.2)
By the functoriality of \(\odot\) we have the equations
\[(\alpha\cdot\gamma)\odot(\beta\cdot\delta)=(\alpha\odot\beta)\cdot(\gamma \odot\delta)\quad\text{and}\quad\mathrm{id}\,\mathrm{M}\odot\mathrm{id}\, \mathrm{N}=\mathrm{id}(\mathrm{M}\odot\mathrm{N}),\]
the former of which is a 2-dimensional associative law known as _middle-four exchange_. These imply that each of the following diagrams has a unique interpretation.
[string diagrams: \(2\times 2\) pastings of squares with proarrow boundaries \(\mathrm{M},\mathrm{N}\) and \(\mathrm{M}^{\prime\prime},\mathrm{N}^{\prime\prime}\), readable in either order of composition]
By the functoriality of \(\text{U}\) we have the equations
\[\text{U}(f\cdot g)=\text{U}f\cdot\text{U}g\quad\text{and}\quad\text{U}(\text {id}\,\text{A})=\text{id}(\text{UA}),\]
the latter of which provides a well defined notion of (double) _identity square_ on an object, which we write as "\(\text{id}^{2}\text{A}\)". These imply that each of the following diagrams has a unique interpretation.
[string diagrams: pastings of identity squares \(\mathrm{U}f\), \(\mathrm{U}g\) and the double identity square on \(\mathrm{A}\), readable in either order of composition]
Note the convention of suppressing the drawing of composition unit cells. In order to declutter notation we may also use _dimensional promotion_ to elide an "id" or "U" from a subterm when its dimension is evident from the context.
Unitor naturality implies that squares can "slide past" them, in the sense that for any \(\alpha:\,{}_{f}^{\text{M}}\!\circ\!\!
is strictly unital and associative because it is ordinary 1-categorical composition in \(\mathbb{D}_{1}\). Composition of arrow disks in the proarrow dimension is also strictly unital and associative by preunitarity.
When we say that a globular square is "invertible" we mean that it has an inverse in the dimension in which it has nontrivial boundary. For example, to say that \(\alpha:f\twoheadrightarrow g\) is invertible means that there is \(\alpha^{-1}:g\twoheadrightarrow f\) with \(\alpha\odot\alpha^{-1}=\operatorname{U}f\) and \(\alpha^{-1}\odot\alpha=\operatorname{U}g\). Double identity squares are, of course, invertible in both dimensions.
The sub-double category determined by arrow disks can be identified with a (strict) 2-category. Indeed, there is a functor \(\operatorname{\textsc{Arr}}:\operatorname{\textsc{DblCat}}\to 2 \operatorname{\textsc{Cat}}\) that does this1. It has a left adjoint that fully embeds a 2-category as a double category where all proarrows are trivial, making \(2\operatorname{\textsc{Cat}}\) a coreflective subcategory of \(\operatorname{\textsc{DblCat}}\)[11]. We refer to both of these as the _arrow \(2\)-category_ of a double category. Similarly, each double category has a _proarrow bicategory_.
Footnote 1: We haven’t defined morphisms of double categories yet, but we will do so shortly.
Strict double categories are congenial to diagrammatics because they let us depict a compatible configuration of squares without explicit bracketing. There is a coherence theorem for double categories [10] which implies that given a diagram in a strict double category, any elaboration of its proarrow boundary to terms of a weak double category can be extended to the interior by elaboration with unitors and associators; and moreover, that all ways of doing so result in diagrams representing the same composite square. Except for the sake of emphasis, in diagrams we will omit explicit bracketing of proarrow boundaries, as well as explicit coherator proarrow disks. When presenting an equation that holds up to coherators, we will write "\(\cong\)" rather than "\(=\)" as a reminder that coherators may be inserted to unify the proarrow boundaries.
A double category that plays an important theoretical role is the _walking square double category_, whose only non-identity cells of each dimension are depicted in diagram (2.1). There is also a _singleton double category_\(\mathbb{1}\) comprising just one object, one arrow, one proarrow, and one square. Given a pair of double categories \(\mathbb{C}\) and \(\mathbb{D}\), we define the _cartesian ordered pair double category_\(\mathbb{C}\times\mathbb{D}\) with \((\mathbb{C}\times\mathbb{D})_{i}=\mathbb{C}_{i}\times\mathbb{D}_{i}\) for \(i\in\{0,1\}\) and the composition structure given factor-wise.
A functor between double categories is an internal functor between internal categories.
**Definition 2.2** (strict functor of double categories):
A (strict) _functor_ of double categories, \(\operatorname{F}:\mathbb{C}\to\mathbb{D}\), consists of a pair of functors between categories \(\operatorname{F}_{0}:\mathbb{C}_{0}\to\mathbb{D}_{0}\) and \(\operatorname{F}_{1}:\mathbb{C}_{1}\to\mathbb{D}_{1}\) that are compatible with the structural boundary functors \(\operatorname{L}\) and \(\operatorname{R}\) in the sense that \(\operatorname{F}_{1}\cdot\operatorname{L}^{\mathbb{D}}=\operatorname{L}^{ \mathbb{C}}\cdot\operatorname{F}_{0}\) and \(\operatorname{F}_{1}\cdot\operatorname{R}^{\mathbb{D}}=\operatorname{R}^{ \mathbb{C}}\cdot\operatorname{F}_{0}\),
which strictly preserve the proarrow-dimension composition structure in the sense
that \(\mathrm{F}_{0}\cdot\mathrm{U}^{\mathbb{D}}=\mathrm{U}^{\mathbb{C}}\cdot\mathrm{F}_{1}\) and \(\mathrm{F}_{1}\times_{\mathrm{F}_{0}}\mathrm{F}_{1}\cdot\bigodot^{\mathbb{D}}= \bigodot^{\mathbb{C}}\cdot\mathrm{F}_{1}\),
and which strictly preserve the unitors and associator as well, in the sense that
\[\mathrm{F}_{1}(\lambda^{\mathbb{C}}\mathrm{M})=\lambda^{\mathbb{D}}(\mathrm{F }_{1}\mathrm{M}),\quad\mathrm{F}_{1}(\rho^{\mathbb{C}}\mathrm{M})=\rho^{ \mathbb{D}}(\mathrm{F}_{1}\mathrm{M}),\quad\mathrm{F}_{1}(\kappa^{\mathbb{C}} (\mathrm{M},\!\mathrm{N},\!\mathrm{P}))=\kappa^{\mathbb{D}}(\mathrm{F}_{1} \mathrm{M},\!\mathrm{F}_{1}\mathrm{N},\!\mathrm{F}_{1}\mathrm{P}).\]
Except for the sake of emphasis we will omit the subscripts on a functor's constituent maps. For dimensional uniformity we will occasionally refer to the functor image of something as a "component".
For a functor of double categories \(\mathrm{F}:\mathbb{C}\to\mathbb{D}\), the F-image of a composable diagram in \(\mathbb{C}\) looks like the same diagram in \(\mathbb{D}\), but with an "F" added to each label. It will be useful to regard such a string diagram in \(\mathbb{D}\) as the projection in a dimension orthogonal to both the arrow and proarrow dimensions of the surface diagram formed by juxtaposing a surface representing the functor \(\mathrm{F}\) with a diagram of global elements2 of \(\mathbb{C}\). The globular version of this surface diagram calculus is presented in [10].
Footnote 2: We have not yet defined the higher-dimensional cells involved, but will do so momentarily.
[surface diagram: the functor \(\mathrm{F}\), drawn as a surface, juxtaposed with a string diagram of global elements of \(\mathbb{C}\), whose projection is the \(\mathrm{F}\)-image string diagram in \(\mathbb{D}\)]

**Definition 2.3** (arrow-dimension oplax transformation):
For functors of double categories \(\mathrm{F},\mathrm{G}:\mathbb{C}\to\mathbb{D}\), an _arrow-dimension oplax transformation_ \(\alpha:\mathrm{F}\to\mathrm{G}\) consists of the following data.
**object-component arrows:**: for each object of the domain double category \(\mathrm{A}:\mathbb{C}\) an arrow of the codomain double category \(\alpha\mathrm{A}:\mathbb{D}\left(\mathrm{FA}\to\mathrm{GA}\right)\),
**proarrow-component squares:**: for each proarrow of the domain double category \(\mathrm{M}:\mathbb{C}\left(\mathrm{A}\mapsto\mathrm{B}\right)\) a square of the codomain double category \(\alpha\mathrm{M}:\mathbb{D}\left({}^{\mathrm{FM}}_{\alpha\mathrm{A}}\Diamond^{ \alpha\mathrm{B}}_{\mathrm{GM}}\right)\),
**arrow-component disks:**: for each arrow of the domain double category \(f:\mathbb{C}\left(\mathrm{A}\to\mathrm{A}^{\prime}\right)\) an arrow disk of the codomain double category \(\alpha f:\mathbb{D}\left(\mathrm{F}f\cdot\alpha\mathrm{A}^{\prime}\mapsto \alpha\mathrm{A}\cdot\mathrm{G}f\right)\).
This data is required to satisfy the following relations.
**preservation of proarrow composition:**: for an object \(\mathrm{A}\) and consecutive proarrows \(\mathrm{M}:\mathrm{A}\mapsto\mathrm{B}\) and \(\mathrm{N}:\mathrm{B}\mapsto\mathrm{C}\) of the domain double category we have
\[\alpha(\mathrm{U}\,\mathrm{A})=\mathrm{U}(\alpha\mathrm{A})\quad\text{and}\quad\alpha(\mathrm{M}\odot\mathrm{N})=\alpha\mathrm{M}\odot\alpha\mathrm{N}, \tag{2.2}\]
**compatibility with arrow composition:**: for an object \(\mathrm{A}\) and consecutive arrows \(f:\mathrm{A}\to\mathrm{A}^{\prime}\) and \(f^{\prime}:\mathrm{A}^{\prime}\to\mathrm{A}^{\prime\prime}\) of the domain double category we have
\[\alpha(\mathrm{id}\,\mathrm{A})=\mathrm{U}(\alpha\mathrm{A})\quad\text{and}\quad\alpha(f\cdot f^{\prime})=(\mathrm{U}(\mathrm{F}f)\cdot\alpha f^{\prime})\odot(\alpha f\cdot\mathrm{U}(\mathrm{G}f^{\prime})), \tag{2.3}\]
**naturality for squares:**: for a square \(\varphi:\underset{f}{\mathrm{M}}\Diamond^{g}_{\mathrm{N}}\) of the domain double category we have
\[(\mathrm{F}\varphi\cdot\alpha\mathrm{N})\odot\alpha g\cong\alpha f\odot(\alpha\mathrm{M}\cdot\mathrm{G}\varphi). \tag{2.4}\]
In surface diagrams we draw an arrow-dimension transformation as a line that vertically separates the surfaces representing its boundary functors.
Proarrow-component squares arise as projections of the juxtaposition of the arrow-dimension transformation with the global element corresponding to a proarrow.
Arrow-component disks arise as projections of the juxtaposition of the arrow-dimension transformation with the global element corresponding to an arrow.
The oplax variance of an arrow-dimension transformation corresponds to an "upward" slope of its line relative to lines representing arrows in the domain double category.
In string diagrams we usually don't label the points depicting component squares, and instead represent them as crossings of their boundary lines, with the line corresponding to the transformation drawn as crossing "behind" the one corresponding to the arrow or proarrow as a mnemonic for the fact that it comes "later".
Preservation of binary proarrow composition (2.2) says that the two possible ways to read the surface diagram on the left are equal, giving the equation between their projection string diagrams on the right.
Compatibility with binary arrow composition (2.3) says that the two possible ways to read the surface diagram on the left are equal, giving the equation between their projection string diagrams on the right.
Note that the proarrow boundaries are equal by preunitarity.
Preservation of nullary proarrow composition (2.2) and compatibility with nullary arrow composition (2.3) say that the three possible ways to read the surface diagram on the left are equal, giving the equation between their projection string diagrams on the right.
Naturality for squares (2.4) says that the projection string diagrams of the two boundary-preserving perturbations of the surface diagram on the left are equal, as shown on the right.
We can unify the proarrow boundaries of these string diagrams by conjugating them by unitors. Note that the surface diagram on the left does not represent a structure in \(\mathbb{D}\) because the definition of arrow-dimension oplax transformation does not specify a square-component _anything_. Instead, it represents a relation between the structures represented by its admissible boundary-preserving perturbations. The criteria for admissibility are discussed in [12]. Essentially, it means that in the projection string
diagram lines intersect one another only pairwise and transversely (i.e. each intersection point has a neighborhood in which it forms a crossing), points don't intersect one another at all, and lines intersect only those points on their own boundary.
In the case that the square \(\varphi\) has trivial proarrow boundary we obtain the following naturality relation for arrow disks.
(2.5)
In the case that the square \(\varphi\) has trivial arrow boundary we obtain the following naturality relation for proarrow disks, which is a strict equality by unitor naturality.
(2.6)
An _arrow-dimension lax transformation_ has arrow-component disks arising from a "downward" slope of the line representing the transformation relative to any lines representing arrows in the domain double category. An _arrow-dimension pseudo transformation_ is one that is both lax and oplax with invertible arrow-component disks. For an arrow-dimension pseudo transformation \(\alpha:\mathrm{F}\to\mathrm{G}\) we will consider the oplax variance the "forward" one, and for arrow \(f:\mathrm{A}\to\mathrm{B}\) write "\(\alpha f\)" for the component disk with oplax orientation and "\((\alpha f)^{-1}\)" for the one with lax orientation. The invertibility of these disks gives us the following equations, which are strict by preunitarity.
\[\alpha f\odot(\alpha f)^{-1}=\mathrm{U}(\mathrm{F}f\cdot\alpha\mathrm{B}) \qquad,\qquad(\alpha f)^{-1}\odot\alpha f=\mathrm{U}(\alpha\mathrm{A}\cdot \mathrm{G}f) \tag{2.7}\]
They identify the projection string diagrams of the boundary-preserving perturbations of each of the following surface diagrams.
(2.8)
An arrow-dimension transformation is _strict_ if it has identity arrow-component disks.
**Remark 2.4** (double categorical transformations as transformation pairs):
An arrow-dimension transformation \(\alpha:\mathrm{F}\to\mathrm{G}\) of the (op)lax/pseudo/strict variance decomposes into a pair of transformations \(\alpha_{0}:\mathrm{F}_{0}\to\mathrm{G}_{0}\) and \(\alpha_{1}:\mathrm{F}_{1}\to\mathrm{G}_{1}\), where \(\alpha_{1}\) is an ordinary natural transformation but \(\alpha_{0}\) is a transformation between 2-functors of the corresponding variance, in the sense that for each arrow \(f:\mathrm{A}\to\mathrm{B}\) the naturality disk \(\alpha_{0}f\) bounded by \(\mathrm{F}f\cdot\alpha\mathrm{B}\) and \(\alpha\mathrm{A}\cdot\mathrm{G}f\) is oriented in one way or the other, is an isomorphism, or is an identity.
Arrow-dimension transformations compose as suggested by their diagrammatics. For proarrow \(\mathrm{M}:\mathrm{A}\leftrightarrow\mathrm{B}\) we have
\[(\alpha\cdot\beta)\mathrm{M}=\alpha\mathrm{M}\cdot\beta\mathrm{M}\quad\text{and}\quad(\mathrm{id}\,\mathrm{F})\mathrm{M}=\mathrm{id}\,(\mathrm{FM}) \tag{2.8}\]
or
[string diagrams depicting the component squares of the composite transformation \(\alpha\cdot\beta\)]

There is an analogous notion of transformation in the proarrow dimension, with the roles of arrows and proarrows exchanged.

**Definition 2.5** (proarrow-dimension oplax transformation):
For functors of double categories \(\mathrm{F},\mathrm{G}:\mathbb{C}\to\mathbb{D}\), a _proarrow-dimension oplax transformation_ \(\gamma:\mathrm{F}\dashrightarrow\mathrm{G}\) consists of the following data.

**object-component proarrows:**: for each object of the domain double category \(\mathrm{A}:\mathbb{C}\) a proarrow of the codomain double category \(\gamma\mathrm{A}:\mathbb{D}\left(\mathrm{FA}\mapsto\mathrm{GA}\right)\),

**arrow-component squares:**: for each arrow of the domain double category \(f:\mathbb{C}\left(\mathrm{A}\to\mathrm{A}^{\prime}\right)\) a square of the codomain double category \(\gamma f:\mathbb{D}\left({}^{\gamma\mathrm{A}}_{\mathrm{F}f}\Diamond^{\mathrm{G}f}_{\gamma\mathrm{A}^{\prime}}\right)\),
**proarrow-component disks:**: for each proarrow of the domain double category \(\mathrm{M}:\mathbb{C}\left(\mathrm{A}\,\leftrightsquigarrow\mathrm{B}\right)\) a proarrow disk of the codomain double category \(\gamma\mathrm{M}:\mathbb{D}\left(\mathrm{FM}\,\odot\,\gamma\mathrm{B}\to \gamma\mathrm{A}\,\odot\,\mathrm{GM}\right)\).
This data is required to satisfy the following relations.
**preservation of arrow composition:**: for an object \(\mathrm{A}\) and consecutive arrows \(f:\mathrm{A}\to\mathrm{A}^{\prime}\) and \(g:\mathrm{A}^{\prime}\to\mathrm{A}^{\prime\prime}\) of the domain double category we have
\[\gamma(\mathrm{id}\,\mathrm{A})=\mathrm{id}(\gamma\mathrm{A})\quad\text{and} \quad\gamma(f\cdot g)=\gamma f\cdot\gamma g, \tag{2.10}\]
**compatibility with proarrow composition:**: for an object \(\mathrm{A}\) and consecutive proarrows \(\mathrm{M}:\mathrm{A}\lnot\mathrm{B}\) and \(\mathrm{N}:\mathrm{B}\to\mathrm{C}\) of the domain double category we have
\[\begin{array}{l}\gamma(\mathrm{U}\,\mathrm{A})\cong\mathrm{id}(\gamma \mathrm{A})\quad\text{and}\\ \gamma(\mathrm{M}\,\odot\,\mathrm{N})\cong(\mathrm{id}(\mathrm{FM})\,\odot\, \gamma\mathrm{N})\cdot\kappa^{-1}(\mathrm{FM}\,,\gamma\mathrm{B}\,,\mathrm{ GN})\cdot(\gamma\mathrm{M}\,\odot\,\mathrm{id}(\mathrm{GN})),\end{array} \tag{2.11}\]
**naturality for squares:**: for a square \(\varphi:{}_{f}^{\mathrm{M}}\Diamond_{\mathrm{N}}^{g}\) of the domain double category we have
\[(\mathrm{F}\varphi\odot\gamma g)\cdot\gamma\mathrm{N}=\gamma\mathrm{M}\cdot(\gamma f\odot\mathrm{G}\varphi). \tag{2.12}\]

Preservation of binary arrow composition (2.10) says that the two possible ways to read the surface diagram on the left are equal, giving the equation between their
projection string diagrams on the right.
Compatibility with binary proarrow composition (2.11) says that the two possible ways to read the surface diagram on the left are equal, giving the equation between their projection string diagrams on the right.
We can unify the proarrow boundaries of these string diagrams as \((\operatorname{FM}\odot\operatorname{FN})\odot\gamma\operatorname{C}\to\gamma\operatorname{A}\odot(\operatorname{GM}\odot\operatorname{GN})\) by conjugating the latter by associators. Moreover, that diagram implicitly contains an associator disk \(\kappa^{-1}(\operatorname{FM}\,,\gamma\operatorname{B}\,,\operatorname{GN})\). Such bureaucracy is the price we must pay for weak composition structure. Fortunately, the diagrammatics keeps it out of our way when we don't care about it; and when we do, we need only elaborate our diagrams with coherator cells.
Preservation of nullary arrow composition (2.10) and compatibility with nullary proarrow composition (2.11) say that the three possible ways to read the surface diagram on the left are equal, giving the relation between their projection string diagrams on the right.
We can unify the proarrow boundaries of these string diagrams by conjugating the last one by unitors.
Naturality for squares (2.12) says that the projection string diagrams of the two boundary-preserving perturbations of the surface diagram on the left are equal, as shown on the right.
There are again special cases for globular squares corresponding to equations (2.5) and (2.6). The composition structure of proarrow-dimension transformations is obvious from their diagrammatics. We write composites of proarrow-dimension transformations using the same notation as for composites of proarrows, namely \(-\odot-\) and U.
Compatible pairs of transformations in each dimension together determine square-shaped boundaries. Cells that inhabit these boundaries are known as cubical modifications.
**Definition 2.6** (modification):
For parallel functors of double categories \(\mathrm{F},\mathrm{G},\mathrm{F}^{\prime},\mathrm{G}^{\prime}:\mathbb{C}\to\mathbb{D}\), arrow-dimension oplax transformations \(\alpha:\mathrm{F}\to\mathrm{F}^{\prime}\) and \(\beta:\mathrm{G}\to\mathrm{G}^{\prime}\), and proarrow-dimension oplax transformations \(\gamma:\mathrm{F}\to\mathrm{G}\) and \(\delta:\mathrm{F}^{\prime}\to\mathrm{G}^{\prime}\), a (cubical) _modification_ \(\mu:\;(\mathbb{C}\to\mathbb{D})\left({}^{\gamma}_{\alpha}\Diamond^{\beta}_{\delta}\right)\) consists of the following data.
**object-component squares:**: for each object of the domain double category \(\mathrm{A}:\mathbb{C}\) a square of the codomain double category \(\mu\mathrm{A}:\mathbb{D}\left({}^{\gamma\mathrm{A}}_{\alpha\mathrm{A}}\Diamond^{\beta\mathrm{A}}_{\delta\mathrm{A}}\right)\).
This data is required to satisfy the following relations.
**naturality for arrows:**: for an arrow \(f:\mathrm{A}\to\mathrm{A}^{\prime}\) of the domain double category we have
\[\alpha f\odot(\mu\mathrm{A}\cdot\delta f)\cong(\gamma f\cdot\mu\mathrm{A}^{ \prime})\odot\beta f, \tag{2.13}\]
**naturality for proarrows:**: for a proarrow \(\mathrm{M}:\mathrm{A}\to\mathrm{B}\) of the domain double category we have
\[\gamma\mathrm{M}\cdot(\mu\mathrm{A}\odot\beta\mathrm{M})=(\alpha\mathrm{M} \odot\mu\mathrm{B})\cdot\delta\mathrm{M}. \tag{2.14}\]
Naturality for arrows (2.13) says that the projection string diagrams of the two boundary-preserving perturbations of the surface diagram on the left are equal, as shown on the right.
We can similarly define modifications for any of the other combinations of transformation variance, so long as the variance in each dimension is consistent. Modifications compose as suggested by their diagrammatics.
A modification is _globular_ if its object-component squares are globular. A globular modification is _invertible_ if its object-component disks are invertible, and is an _identity_ if they are double identity squares.
For globular modification \(\mu:\alpha\dashrightarrow\beta\) we obtain the following naturality relation for arrows.
(2.15)
Similarly, we have the following naturality relation for proarrows.
(2.16)
There are, of course, analogous relations for globular modifications whose components are proarrow disks.
For each pair of double categories \(\mathbb{C}\) and \(\mathbb{D}\) the functors, arrow- and proarrow-dimension transformations, and modifications bounded by them comprise the objects, arrows, proarrows, and squares of a hom double category [10]. Moreover, the components of these structures provide a form of composition for elements of consecutive homs. For example, we can compose an arrow-dimension transformation \(\alpha:\,(\mathbb{C}\rightarrow\mathbb{D})\,(\mathrm{F}\rightarrow\mathrm{F}^{\prime})\) with a proarrow-dimension transformation \(\gamma:\,(\mathbb{D}\rightarrow\mathbb{E})\,(\mathrm{G}\dashrightarrow\mathrm{G}^{\prime})\) to obtain a modification \(\gamma\alpha\) whose object-component squares are the arrow-component squares of \(\gamma\) acting on the object-component arrows of \(\alpha\).
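Concretely, at an object \(\mathrm{A}:\mathbb{C}\) the component square is
\[(\gamma\alpha)\mathrm{A}\;=\;\gamma(\alpha\mathrm{A})\;:\;{}^{\gamma(\mathrm{FA})}_{\mathrm{G}(\alpha\mathrm{A})}\Diamond^{\mathrm{G}^{\prime}(\alpha\mathrm{A})}_{\gamma(\mathrm{F}^{\prime}\mathrm{A})},\]
so the proarrow boundary of \(\gamma\alpha\) is given by \(\gamma\) whiskered by \(\mathrm{F}\) and \(\mathrm{F}^{\prime}\), and its arrow boundary by \(\alpha\) whiskered by \(\mathrm{G}\) and \(\mathrm{G}^{\prime}\).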
## 3 Locally Cubical Gray Categories
A classical Gray category may be thought of as a category enriched in 2-dimensional _globular_ categories under the Gray tensor product. The motivating example is the algebraic structure formed by 2-categories, together with their functors, (oplax and/or lax) transformations, and (globular) modifications. This arises from the fact that the category of 2-categories is closed monoidal under the Gray tensor product, as was shown by Gray [10].
Analogously, a locally cubical Gray category may be thought of as a category enriched in 2-dimensional _cubical_ categories under a suitable Gray tensor product. Our motivating example is the algebraic structure formed by double categories, together with their functors, (oplax and/or lax) transformations, and (cubical) modifications. It has
been shown by Bohm that the category of double categories is also closed monoidal under a cubical version of the Gray tensor product [1].
In the globular setting, \(2\)-dimensional categories may have transformations of several possible variances, but in only one dimension. In contrast, double categories have transformations in two independent dimensions, reflecting the two types of morphisms within double categories themselves. In locally cubical Gray categories this gives rise to four distinct types of interchanger, which we can think of as vertical-vertical, vertical-horizontal, horizontal-vertical, and horizontal-horizontal, and where we have a choice of variance in each dimension independently. The homogeneous cubical interchangers behave like the globular interchangers of classical Gray-categories, while the heterogeneous ones give rise to non-globular natural squares.
This determines a family of \(3\)-dimensional categorical structures that are cubical in two dimensions and globular in the third. Rather than trying to enumerate all of the structures of different variances and strictnesses that result, we instead present the one corresponding to the choices made in section 2; namely, _preunitary weak_ double categories with _strict_ functors and transformations that are _oplax_ in both dimensions. Other variants may be constructed analogously.
We break the following definition into parts in order to introduce notation and diagrammatics as we go, and to try to explain the (relatively simple) geometric intuition behind the (rather cumbersome) formalism.
**Definition 3.1** (locally cubical Gray category: \(n\)-cells)
A _locally cubical Gray category_\(\mathbb{C}\) has the following cell data.
\(0\)**-cells:**: a collection of objects known as \(0\)-cells,
\(1\)**-cells:**: for each pair of \(0\)-cells A and B a collection of \(1\)-cells, \(\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\),
**vertical \(2\)-cells:**: for each parallel pair of \(1\)-cells \(f,f^{\prime}:\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\) a collection of vertical \(2\)-cells, \(\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\left(f\to f^{ \prime}\right)\),
**horizontal \(2\)-cells:**: for each parallel pair of \(1\)-cells \(f,g:\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\) a collection of horizontal \(2\)-cells, \(\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\left(f\nrightarrow g\right)\),
**\(3\)-cells:**: for each parallel quadruple of \(1\)-cells \(f,g,f^{\prime},g^{\prime}:\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\), vertical \(2\)-cells \(\alpha:\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\left(f\to f^{\prime}\right)\) and \(\beta:\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\left(g\to g^{\prime}\right)\), and horizontal \(2\)-cells \(\gamma:\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\left(f\nrightarrow g\right)\) and \(\delta:\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\left(f^{\prime}\nrightarrow g^{\prime}\right)\), a collection of \(3\)-cells, \(\mathbb{C}\left(\text{A}\rightarrow\text{B}\right)\left({}^{\gamma}_{\alpha}\Diamond^{\beta}_{\delta}\right)\).
For brevity we may omit any prefix of a boundary specification that can be inferred from context or is irrelevant. This lets us describe a \(3\)-cell \(\varphi:\genfrac{}{}{0.0pt}{}{\gamma}{\alpha}\Diamond^{\beta}_{\delta}\), leaving the lower-dimensional structure implicit.
In surface diagrams we represent \(0\)-cells as volumes, \(1\)-cells as planes separating their boundary volumes in the "principal" or "transverse" dimension, vertical \(2\)-cells as lines vertically separating their boundary \(1\)-cells, horizontal \(2\)-cells as lines horizontally separating their boundary \(1\)-cells, and \(3\)-cells as points horizontally separating their boundary vertical \(2\)-cells and vertically separating their boundary horizontal \(2\)-cells. We typically "fatten up" these points into beads to facilitate labeling. Thus, we may
depict the 3-cell \(\varphi\) above as follows.
**Definition 3.2** (locally cubical Gray category: local structure)
For a given pair of 0-cells A and B, the 1-cells, vertical 2-cells, horizontal 2-cells, and 3-cells with 0-cell boundary A \(\to\) B constitute the objects, arrows, proarrows, and squares of a(n, in our case, preunitary weak) double category.
We use the notation for double categories, both linear and graphical, established in section 2 to describe this local structure.
**Definition 3.3** (locally cubical Gray category: whiskerings)
We may compose 1-cells as follows.
**1-cell nullary composition:**: for each 0-cell A we have an identity 1-cell \(\operatorname{Id}\operatorname{A}:\operatorname{A}\to\operatorname{A}\),

**1-cell binary composition:**: for consecutive 1-cells \(f:\operatorname{A}\to\operatorname{B}\) and \(g:\operatorname{B}\to\operatorname{C}\) we have a composite 1-cell \(f\,\circledcirc\,g:\operatorname{A}\to\operatorname{C}\).

We may compose a 2-cell or 3-cell having 0-cell boundary \(\operatorname{A}\to\operatorname{B}\) with a 1-cell \(a:\operatorname{A}^{\prime}\to\operatorname{A}\) or with a 1-cell \(b:\operatorname{B}\to\operatorname{B}^{\prime}\) as follows.

**vertical \(2\)-cell whiskering:**: for a vertical 2-cell \(\alpha:f\to f^{\prime}\) we have vertical 2-cells \(a\,\circledcirc\,\alpha:a\,\circledcirc\,f\to a\,\circledcirc\,f^{\prime}\) and \(\alpha\,\circledcirc\,b:f\,\circledcirc\,b\to f^{\prime}\,\circledcirc\,b\),

**horizontal \(2\)-cell whiskering:**: for a horizontal 2-cell \(\gamma:f\nrightarrow g\) we have horizontal 2-cells \(a\,\circledcirc\,\gamma:a\,\circledcirc\,f\nrightarrow a\,\circledcirc\,g\) and \(\gamma\,\circledcirc\,b:f\,\circledcirc\,b\nrightarrow g\,\circledcirc\,b\),
**3-cell whiskering:**: for a 3-cell \(\varphi:{}^{\gamma}_{\alpha}\Diamond^{\beta}_{\delta}\) we have 3-cells \(a\,\circledcirc\,\varphi:{}^{a\circledcirc\gamma}_{a\circledcirc\alpha}\Diamond^{a\circledcirc\beta}_{a\circledcirc\delta}\) and \(\varphi\,\circledcirc\,b:{}^{\gamma\circledcirc b}_{\alpha\circledcirc b}\Diamond^{\beta\circledcirc b}_{\delta\circledcirc b}\).

**Definition 3.4** (locally cubical Gray category: interchangers)

For consecutive 0-cells A, B, and C there are four families of interchanger 3-cells, as follows.
**vertical-horizontal interchanger:**: for vertical 2-cell \(\alpha:\,(\mathrm{A}\to\mathrm{B})\,(f\to f^{\prime})\) and horizontal 2-cell \(\delta:\,(\mathrm{B}\to\mathrm{C})\,(g\nrightarrow g^{\prime})\) we have a 3-cell \(\chi_{(\alpha,\delta)}:{}^{f\circledcirc\delta}_{\alpha\circledcirc g}\Diamond^{\alpha\circledcirc g^{\prime}}_{f^{\prime}\circledcirc\delta}\),

**horizontal-vertical interchanger:**: for horizontal 2-cell \(\gamma:\,(\mathrm{A}\to\mathrm{B})\,(f\nrightarrow f^{\prime})\) and vertical 2-cell \(\beta:\,(\mathrm{B}\to\mathrm{C})\,(g\to g^{\prime})\) we have a 3-cell \(\chi_{(\gamma,\beta)}:{}^{\gamma\circledcirc g}_{f\circledcirc\beta}\Diamond^{f^{\prime}\circledcirc\beta}_{\gamma\circledcirc g^{\prime}}\),

**vertical-vertical interchanger:**: for vertical 2-cells \(\alpha:\,(\mathrm{A}\to\mathrm{B})\,(f\to f^{\prime})\) and \(\beta:\,(\mathrm{B}\to\mathrm{C})\,(g\to g^{\prime})\) we have a 3-cell \(\chi_{(\alpha,\beta)}:{}^{\operatorname{U}(f\circledcirc g)}_{(\alpha\circledcirc g)\cdot(f^{\prime}\circledcirc\beta)}\Diamond^{(f\circledcirc\beta)\cdot(\alpha\circledcirc g^{\prime})}_{\operatorname{U}(f^{\prime}\circledcirc g^{\prime})}\),

**horizontal-horizontal interchanger:**: for horizontal 2-cells \(\gamma:\,(\mathrm{A}\to\mathrm{B})\,(f\nrightarrow f^{\prime})\) and \(\delta:\,(\mathrm{B}\to\mathrm{C})\,(g\nrightarrow g^{\prime})\) we have a 3-cell \(\chi_{(\gamma,\delta)}:{}^{(\gamma\circledcirc g)\odot(f^{\prime}\circledcirc\delta)}_{\operatorname{id}(f\circledcirc g)}\Diamond^{\operatorname{id}(f^{\prime}\circledcirc g^{\prime})}_{(f\circledcirc\delta)\odot(\gamma\circledcirc g^{\prime})}\).
In surface diagrams we depict the _heterogeneous interchanger_ 3-cells like this:
and the _homogeneous interchanger_ 3-cells like this:
It is the orientations of the homogeneous interchangers that determines the variance of a locally cubical Gray category. Here, we have oriented them so that both vertical and horizontal 2-cells are "eager" relative to those that precede them in the transverse dimension. This corresponds to the oplax variance. If instead 2-cells were "lazy" with respect to their predecessors then the locally cubical Gray category would be lax in that dimension, and if they were "indifferent" then it would be pseudo.
We may regard whiskerings and interchangers as aspects of a single dimension-summing composition operation along 0-cells, but with shifted indices in the sense that composing an \((m+1)\)-cell with an \((n+1)\)-cell yields an \((m+n+1)\)-cell. When either \(m=0\) or \(n=0\) this is a whiskering, and when \(m=n=1\) it is an interchanger. This lets us represent both whiskerings and interchangers uniformly in surface diagrams as juxtapositions of string diagrams embedded on consecutive surfaces.
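For concreteness, the index arithmetic of this dimension-summing operation can be tabulated; the table below only summarises the operations of Definitions 3.3 and 3.4, writing \(\circledcirc\) for the transverse operation as above.

\[\begin{array}{lll}1\text{-cell}\circledcirc 2\text{-cell}=2\text{-cell}&(m=0,\;n=1)&\text{whiskering, e.g. }a\circledcirc\alpha\text{ or }\gamma\circledcirc b\\ 1\text{-cell}\circledcirc 3\text{-cell}=3\text{-cell}&(m=0,\;n=2)&\text{whiskering, e.g. }a\circledcirc\varphi\\ 2\text{-cell}\circledcirc 2\text{-cell}=3\text{-cell}&(m=n=1)&\text{interchanger, e.g. }\chi_{(\alpha,\beta)}\end{array}\]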
**Definition 3.5** (locally cubical Gray category: whiskering laws): For a 3-cell \(\varphi:{}^{\gamma}_{\alpha}\Diamond^{\beta}_{\delta}\) with 0-cell boundary \(\mathrm{A}\to\mathrm{B}\),
**nullary composite whiskering:**: identity 1-cells are neutral for whiskering:
\[\operatorname{Id}\mathrm{A}\,\circledcirc\,\varphi\;=\;\varphi\;=\;\varphi\,\circledcirc\,\operatorname{Id}\mathrm{B} \tag{3.1}\]

**binary composite whiskering:**: whiskering by a composite 1-cell is iterated whiskering: for further 1-cells \(a^{\prime}:\mathrm{A}^{\prime\prime}\to\mathrm{A}^{\prime}\) and \(b^{\prime}:\mathrm{B}^{\prime}\to\mathrm{B}^{\prime\prime}\),

\[(a^{\prime}\,\circledcirc\,a)\,\circledcirc\,\varphi\;=\;a^{\prime}\,\circledcirc\,(a\,\circledcirc\,\varphi)\qquad\text{and}\qquad\varphi\,\circledcirc\,(b\,\circledcirc\,b^{\prime})\;=\;(\varphi\,\circledcirc\,b)\,\circledcirc\,b^{\prime} \tag{3.2}\]
two-sided whiskering:**: whiskering on both sides together is associative:
(3.3)
**whiskering vertical functoriality:**: whiskering is functorial with respect to vertical composition:
(3.4)
**whiskering horizontal functoriality:**: whiskering is functorial with respect to horizontal composition:
(3.5)
**whiskering horizontal**: whiskering preserves the horizontal composition:
(3.6)
Note that these equations between 3-cells imply equations between their respective lower-dimensional boundary cells. Because whiskering a 2-cell or a 3-cell with any number of 1-cells on either side is unambiguous we adopt an unbracketed notation for this, letting us write the unique 3-cells in each of equations (3.2) and (3.3) as \(a^{\prime}\circledcirc a\circledcirc\varphi\), \(\varphi\circledcirc b\circledcirc b^{\prime}\), and \(a\circledcirc\varphi\circledcirc b\).
The whiskering laws can be understood as saying that transverse composition with 1-cells is strictly associative and unital, as well as functorial for the local double categorical composition structure.
**Definition 3.6** (locally cubical Gray category: interchanger laws): For 1-cells and,
**interchanger extremal whiskering:**: for a 2-cell with 0-cell boundary, each either horizontal or vertical, and 1-cells and and,
(3.7)
**interchanger medial whiskering:**: for a 2-cell with 0-cell boundary, each either horizontal or vertical, and 1-cell,
(3.8)
**vertical-horizontal composite interchangers:**: for vertical 2-cells \(\alpha:f\to f^{\prime}\) and \(\beta:f^{\prime}\to f^{\prime\prime}\), and horizontal 2-cells \(\gamma:g\nrightarrow g^{\prime}\) and \(\delta:g^{\prime}\nrightarrow g^{\prime\prime}\),
(3.9)
**horizontal-vertical composite interchangers:**: for horizontal 2-cells \(\gamma:f\nrightarrow f^{\prime}\) and \(\delta:f^{\prime}\nrightarrow f^{\prime\prime}\), and vertical 2-cells \(\alpha:g\to g^{\prime}\) and \(\beta:g^{\prime}\to g^{\prime\prime}\),

\[\begin{array}{l}\chi_{(\operatorname{U}f,\alpha)}=\operatorname{U}(f\circledcirc\alpha)\quad,\quad\chi_{(\gamma\odot\delta,\alpha)}=\chi_{(\gamma,\alpha)}\odot\chi_{(\delta,\alpha)}\quad,\\ \chi_{(\gamma,\operatorname{id}g)}=\operatorname{id}(\gamma\circledcirc g)\quad,\quad\chi_{(\gamma,\alpha\cdot\beta)}=\chi_{(\gamma,\alpha)}\cdot\chi_{(\gamma,\beta)}\end{array} \tag{3.10}\]
**vertical-vertical composite interchangers:**: for vertical 2-cells \(\alpha:f\to f^{\prime}\), \(\alpha^{\prime}:f^{\prime}\to f^{\prime\prime}\), \(\beta:g\to g^{\prime}\), and \(\beta^{\prime}:g^{\prime}\to g^{\prime\prime}\),

\[\begin{array}{l}\chi_{(\operatorname{id}f,\beta)}=\operatorname{U}(f\circledcirc\beta)\quad,\quad\chi_{(\alpha\cdot\alpha^{\prime},\beta)}=(\operatorname{U}(\alpha\circledcirc g)\cdot\chi_{(\alpha^{\prime},\beta)})\odot(\chi_{(\alpha,\beta)}\cdot\operatorname{U}(\alpha^{\prime}\circledcirc g^{\prime}))\\ \chi_{(\alpha,\operatorname{id}g)}=\operatorname{U}(\alpha\circledcirc g)\quad,\quad\chi_{(\alpha,\beta\cdot\beta^{\prime})}=(\chi_{(\alpha,\beta)}\cdot\operatorname{U}(f^{\prime}\circledcirc\beta^{\prime}))\odot(\operatorname{U}(f\circledcirc\beta)\cdot\chi_{(\alpha,\beta^{\prime})})\end{array} \tag{3.11}\]
**horizontal-horizontal composite interchangers:**: for horizontal 2-cells \(\gamma:f\nrightarrow f^{\prime}\), \(\gamma^{\prime}:f^{\prime}\nrightarrow f^{\prime\prime}\), \(\delta:g\nrightarrow g^{\prime}\), and \(\delta^{\prime}:g^{\prime}\nrightarrow g^{\prime\prime}\),

\[\begin{array}{l}\chi_{(\operatorname{U}f,\delta)}=\lambda\cdot\rho^{-1}\quad,\quad\chi_{(\gamma\odot\gamma^{\prime},\delta)}=\kappa\cdot(\operatorname{id}(\gamma\circledcirc g)\odot\chi_{(\gamma^{\prime},\delta)})\cdot\kappa^{-1}\cdot(\chi_{(\gamma,\delta)}\odot\operatorname{id}(\gamma^{\prime}\circledcirc g^{\prime}))\cdot\kappa\\ \chi_{(\gamma,\operatorname{U}g)}=\rho\cdot\lambda^{-1}\quad,\quad\chi_{(\gamma,\delta\odot\delta^{\prime})}=\kappa^{-1}\cdot(\chi_{(\gamma,\delta)}\odot\operatorname{id}(f^{\prime}\circledcirc\delta^{\prime}))\cdot\kappa\cdot(\operatorname{id}(f\circledcirc\delta)\odot\chi_{(\gamma,\delta^{\prime})})\cdot\kappa^{-1}\end{array} \tag{3.12}\]
where we have suppressed the horizontal composition coherator indices for readability.
**interchanger naturality:**: for a 3-cell \(\varphi:{}^{\gamma}_{\alpha}\Diamond^{\alpha^{\prime}}_{\gamma^{\prime}}\) with 0-cell boundary \(\operatorname{A}\to\operatorname{B}\), a horizontal 2-cell \(\delta:\,(\operatorname{B}\to\operatorname{C})\,(g\nrightarrow g^{\prime})\), and a vertical 2-cell \(\beta:\,(\operatorname{B}\to\operatorname{C})\,(g\to g^{\prime})\),

\[\begin{array}{l}((\varphi\circledcirc g)\odot\chi_{(\alpha^{\prime},\delta)})\cdot\chi_{(\gamma^{\prime},\delta)}\;=\;\chi_{(\gamma,\delta)}\cdot(\chi_{(\alpha,\delta)}\odot(\varphi\circledcirc g^{\prime}))\quad\text{and}\\ \rho^{-1}\cdot[((\varphi\circledcirc g)\cdot\chi_{(\gamma^{\prime},\beta)})\odot\chi_{(\alpha^{\prime},\beta)}]\cdot\rho\;=\;\lambda^{-1}\cdot[\chi_{(\alpha,\beta)}\odot(\chi_{(\gamma,\beta)}\cdot(\varphi\circledcirc g^{\prime}))]\cdot\lambda\end{array} \tag{3.13}\]

and for a 3-cell \(\psi:{}^{\delta}_{\beta}\Diamond^{\beta^{\prime}}_{\delta^{\prime}}\) with 0-cell boundary \(\operatorname{B}\to\operatorname{C}\), a horizontal 2-cell \(\gamma:\,(\operatorname{A}\to\operatorname{B})\,(f\nrightarrow f^{\prime})\), and a vertical 2-cell \(\alpha:\,(\operatorname{A}\to\operatorname{B})\,(f\to f^{\prime})\),

\[\begin{array}{l}\chi_{(\gamma,\delta)}\cdot((f\circledcirc\psi)\odot\chi_{(\gamma,\beta^{\prime})})\;=\;(\chi_{(\gamma,\beta)}\odot(f^{\prime}\circledcirc\psi))\cdot\chi_{(\gamma,\delta^{\prime})}\quad\text{and}\\ \lambda^{-1}\cdot[\chi_{(\alpha,\beta)}\odot((f\circledcirc\psi)\cdot\chi_{(\alpha,\delta^{\prime})})]\cdot\lambda\;=\;\rho^{-1}\cdot[(\chi_{(\alpha,\delta)}\cdot(f^{\prime}\circledcirc\psi))\odot\chi_{(\alpha,\beta^{\prime})}]\cdot\rho\end{array} \tag{3.14}\]
The algebraic presentation of these relations is rather cumbersome. This is largely a consequence of using a 1-dimensional notation to encode a 3-dimensional theory. In surface diagrams they become more perspicuous. The composite whiskering laws (3.1), (3.3), and (3.2), assert that each of the following diagrams represents a unique 3-cell.
The heterogeneous binary composite interchanger laws in equations (3.9) and (3.10) assert that each of the following diagrams, along with its \((\,\odot\,\,,\,\,\cdot\,\,)\)-reflection, represents a unique 3-cell.
The homogeneous binary composite interchanger laws in equations (3.11) and (3.12) assert that each of the following diagrams, along with its \((\,\odot\,\,,\,\,\cdot\,\,)\)-reflection elaborated by associators, represents a unique 3-cell.
Finally, the interchanger naturality laws (3.13) and (3.14) assert the equations obtained by boundary-preserving perturbation of each of the following surface diagrams.
This structure was chosen so that double categories together with their morphisms constitute a model.
**Proposition 3.7**: There is a locally cubical Gray category, \(\mbox{DblCat}_{\rm G}\), having double categories as 0-cells, strict functors as 1-cells, arrow-dimension oplax transformations as vertical
2-cells, proarrow-dimension oplax transformations as horizontal 2-cells, and cubical modifications as 3-cells.
Proof.: A functor of double categories \(\mathrm{F}:\mathbb{C}\to\mathbb{D}\) is given by whiskering on the right, \(-\circledcirc\mathrm{F}\).
**Definition 4.1** (Gray-monoidal double category): A _Gray-monoidal double category_ is the loop space of a one-object locally cubical Gray category.
Taking the loop space has the effect of shifting everything down by a dimension, moving the local structure of the hom double category to center stage. The only \(0\)-cell, \(\star\), goes away. Its identity \(1\)-cell, \(\operatorname{Id}\star\), becomes the tensor unit object, written "I". Other \(1\)-cells, \(\operatorname{A},\operatorname{B}:\star\to\star\), become objects as well, and their composition, \(\operatorname{A}\circledcirc\operatorname{B}\), becomes their tensor product, written "\(\operatorname{A}\otimes\operatorname{B}\)". Whiskering a \(2\)- or \(3\)-cell by a \(1\)-cell, \(-\circledcirc\operatorname{A}\) or \(\operatorname{A}\circledcirc-\), becomes tensoring with an object, \(-\otimes\operatorname{A}\) or \(\operatorname{A}\otimes-\). Interchanger \(3\)-cells for vertical and horizontal \(2\)-cells become interchanger squares for arrows and proarrows. In short, the dimension-summing transverse combination operation has its indices un-shifted so that combining an \(m\)-cell with an \(n\)-cell yields an \((m+n)\)-cell, manifesting as a tensor when one of \(m\) or \(n\) is \(0\), and as an interchanger when they are both \(1\). The surface diagram calculus remains unchanged, except that we never need to label the volumes.
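The dictionary of the previous paragraph can be summarised as follows; the table adds nothing beyond what was just said.

\[\begin{array}{l|l}\text{one-object locally cubical Gray category}&\text{Gray-monoidal double category}\\\hline\text{the unique }0\text{-cell }\star&\text{(discarded)}\\ \operatorname{Id}\star&\text{tensor unit }\mathrm{I}\\ 1\text{-cells }\mathrm{A},\mathrm{B}:\star\to\star&\text{objects }\mathrm{A},\mathrm{B}\\ \mathrm{A}\circledcirc\mathrm{B}&\mathrm{A}\otimes\mathrm{B}\\ \text{vertical / horizontal }2\text{-cells}&\text{arrows / proarrows}\\ 3\text{-cells}&\text{squares}\\ -\circledcirc\mathrm{A},\;\mathrm{A}\circledcirc-&-\otimes\mathrm{A},\;\mathrm{A}\otimes-\\ \text{interchanger }3\text{-cells}&\text{interchanger squares}\end{array}\]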
We can think of a Gray-monoidal double category as a double category with additional structure. In order to do this we first introduce the tensor product of double categories. In the globular setting, Gray constructed a tensor product functor for \(2\)-categories \(-\otimes-:2\operatorname{Cat}\times 2\operatorname{Cat}\to 2 \operatorname{Cat}\) and showed it to be left adjoint to the internal hom in the sense that \(-\otimes\operatorname{\mathbb{A}}\dashv\operatorname{\mathbb{A}}\to-\)[10]. In the cubical setting, Bohm characterized the corresponding functor for double categories \(-\otimes-:\operatorname{DblCat}\times\operatorname{DblCat}\to\operatorname{ DblCat}\)[1] by applying results about representability [1] to the fact that \(\operatorname{DblCat}\) is locally presentable [12] with the walking square double category as a strong generator.
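Spelled out, the adjunction \(-\otimes\mathbb{A}\dashv\mathbb{A}\to-\) amounts to a bijection of hom-sets, which we record for the double-categorical case (the \(2\)-categorical statement reads the same with \(2\operatorname{Cat}\) in place of \(\operatorname{DblCat}\)):

\[\operatorname{DblCat}(\mathbb{C}\otimes\mathbb{A}\,,\mathbb{D})\;\cong\;\operatorname{DblCat}(\mathbb{C}\,,\mathbb{A}\to\mathbb{D})\qquad\text{naturally in }\mathbb{C}\text{ and }\mathbb{D}.\]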
Recall that for double categories \(\mathbb{C}\) and \(\mathbb{D}\) we can form the cartesian ordered pair double category \(\mathbb{C}\times\mathbb{D}\). This gives the object map for a cartesian product functor for double categories \(-\times-:\operatorname{DblCat}\times\operatorname{DblCat}\to\operatorname{DblCat}\). If we instead use Bohm's tensor product functor \(-\otimes-:\operatorname{DblCat}\times\operatorname{DblCat}\to\operatorname{ DblCat}\) then we get a different ordered pair double category.
**Definition 4.2** (Gray ordered pair double category): We may regard double categories \(\mathbb{C}\) and \(\mathbb{D}\) as consecutive hom double categories of a locally cubical Gray category, say \(\mathbb{C}=\operatorname{X}\to\operatorname{Y}\) and \(\mathbb{D}=\operatorname{Y}\to\operatorname{Z}\). The _Gray ordered pair double category_\(\mathbb{C}\otimes\mathbb{D}\) is the sub-double category of the transitive hom \(\operatorname{X}\to\operatorname{Z}\) consisting of those elements that factor through \(\operatorname{Y}\).
Explicitly, a generating \((\mathbb{C}\otimes\mathbb{D})\)-object of type \((0\,,0)\), which is the only type, is the ordered pair of a \(\mathbb{C}\)-object and a \(\mathbb{D}\)-object. This corresponds to the composition of \(1\)-cells in a locally cubical Gray category \(\mathbb{G}\). A generating \((\mathbb{C}\otimes\mathbb{D})\)-arrow of type \((1,0)\) is the ordered pair of a \(\mathbb{C}\)-arrow and a \(\mathbb{D}\)-object, and one of type \((0\,,1)\) is the ordered pair of a \(\mathbb{C}\)-object and a \(\mathbb{D}\)-arrow. These correspond to the two types of whiskerings of vertical \(2\)-cells by \(1\)-cells in \(\mathbb{G}\). Similarly, we have generating \((\mathbb{C}\otimes\mathbb{D})\)-proarrows of types \((1\,,0)\) and \((0\,,1)\), corresponding to horizontal \(2\)-cell whiskerings in \(\mathbb{G}\). A generating \((\mathbb{C}\otimes\mathbb{D})\)-square is one of the following possible types. Type \((2\,,0)\)- and \((0\,,2)\)-squares are ordered pairs of a square and an object. These correspond to whiskerings of \(3\)-cells by \(1\)-cells in \(\mathbb{G}\). Then there are four subtypes of \((1\,,\,1)\)-type squares, which we call \((v\,,v)\), \((v\,,h)\), \((h\,,v)\), and \((h\,,h)\), consisting of ordered pairs of arrows or proarrows. These correspond to the four types of cubical interchangers, \(\{\text{vertical},\text{horizontal}\}\times\{\text{vertical},\text{horizontal}\}\). Note that the variances of the homogeneous \((1\,,\,1)\)-type disks are parameters corresponding to the variances of the homogeneous interchangers.
The composition structure and relations are precisely those of a locally cubical Gray category. For example, by whiskering vertical functoriality (3.4) we can "merge" consecutive \((1\,,0)\)- or \((0\,,\,1)\)-type arrows
\[(f\,,\text{X})\cdot(g\,,\text{X})=(f\cdot g\,,\text{X})\text{ and }(\text{A}\,,p)\cdot(\text{A}\,,q)=(\text{A}\,,p\cdot q), \tag{4.1}\]
and an ordered pair containing an identity arrow is itself an identity arrow
\[(\text{id}\,\text{A}\,,\text{X})=\text{id}(\text{A}\,,\text{X})=(\text{A}\,, \text{id}\,\text{X}). \tag{4.2}\]
By whiskering horizontal functoriality (3.5) we have the same results for proarrows
\[(\text{M}\,,\text{X})\odot(\text{N}\,,\text{X})=(\text{M}\odot\text{N}\,, \text{X})\text{ and }(\text{A}\,,\text{S})\odot(\text{A}\,,\text{T})=(\text{A}\,,\text{S}\odot \text{T}), \tag{4.3}\]
with identity proarrow
\[(\text{U}\,\text{A}\,,\text{X})=\text{U}(\text{A}\,,\text{X})=(\text{A}\,, \text{U}\,\text{X}). \tag{4.4}\]
Compound morphisms are formed by composing along compatible boundaries. For example, given arrows \(f:\mathbb{C}\,(\text{A}\to\text{B})\) and \(p:\mathbb{D}\,(\text{X}\to\text{Y})\) we can form the arrow \((f\,,\text{X})\cdot(\text{B}\,,p):(\text{A}\,,\text{X})\to(\text{B}\,,\text{ Y})\) by composition along \((\text{B}\,,\text{X})\).
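An analogous example for proarrows, formed in the same way by composing along compatible boundaries: given proarrows \(\mathrm{M}:\mathbb{C}\,(\mathrm{A}\nrightarrow\mathrm{B})\) and \(\mathrm{S}:\mathbb{D}\,(\mathrm{X}\nrightarrow\mathrm{Y})\) we can form the proarrow

\[(\mathrm{M}\,,\mathrm{X})\odot(\mathrm{B}\,,\mathrm{S}):(\mathrm{A}\,,\mathrm{X})\nrightarrow(\mathrm{B}\,,\mathrm{Y})\]

by composition along \((\mathrm{B}\,,\mathrm{X})\).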
We represent squares of the double category \(\mathbb{C}\otimes\mathbb{D}\) as surface diagrams containing two surfaces, one for each of \(\mathbb{C}\) and \(\mathbb{D}\). For example, the following diagram represents the square \(((\alpha\,,\text{X})\odot(g\,,\text{V}))\cdot((\text{N}\,,p)\odot(\text{B}^{ \prime}\,,\varphi))\), which by middle-four exchange is equal to \(((\alpha\,,\text{X})\cdot(\text{N}\,,p))\odot((g\,,\text{V})\cdot(\text{B}^{ \prime}\,,\varphi))\).
A Gray-monoidal double category \(\mathbb{C}\) is a double category with additional structure in the following sense. We can define a functor \(\otimes_{\mathbb{C}}:\mathbb{C}\otimes\mathbb{C}\to\mathbb{C}\), which sends an ordered pair to the corresponding whiskering or interchanger. We can also define a functor \(\mathrm{I}_{\mathbb{C}}:\mathbb{I}\to\mathbb{C}\), which picks out the unit for \(\otimes\). A double category is Gray-monoidal when these functors determine a monoid. Intuitively, this lets us combine any number of cells into a single cell in this Gray, dimension-summing way. We use the same surface diagram representation for a Gray-monoidal double category \(\mathbb{C}\) as for the iterated Gray ordered pair double category \(\mathbb{C}\otimes\ldots\otimes\mathbb{C}\).
Using the Gray ordered pair double category we can define a functor analogous to the swap functor for the cartesian ordered pair double category.
**Definition 4.3** (swap functor): For double categories \(\mathbb{C}\) and \(\mathbb{D}\) the _swap functor_\(\text{S}_{(\mathbb{C},\mathbb{D})}:\mathbb{C}\otimes\mathbb{D}\to\mathbb{D} \otimes\mathbb{C}\) transposes the factors of each ordered pair, sending \((x\,,y)\) to \((y\,,x)\).
For \(i+j=k\), the swap functor turns \((i\,,\,j)\)-type \(k\)-cells into \((j\,,\,i)\)-type \(k\)-cells. Moreover, it swaps the heterogeneous \((1\,,\,1)\)-type \(2\)-cells \((v\,,h)\) and \((h\,,\,v)\), and sends oplax homogeneous \((1\,,\,1)\)-type \(2\)-cells \((v\,,\,v)\) and \((h\,,\,h)\) to lax ones, and vice-versa. Thus for \(v,h\in\{\mathrm{oplax},\mathrm{lax}\}\), if \(\mathbb{C}\otimes\mathbb{D}\) has interchanger variance \((v\,,\,h)\) then \(\mathbb{D}\otimes\mathbb{C}\) should have the complement interchanger variance \((\tilde{v}\,,\tilde{h})\). Swapping is an involution, in the sense that
\[\mathrm{S}_{(\mathbb{C},\mathbb{D})}\cdot\mathrm{S}_{(\mathbb{D},\mathbb{C})} \ =\ \mathrm{id}(\mathbb{C}\otimes\mathbb{D}). \tag{4.5}\]
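In symbols, the action of the swap functor on the generating cell types described above can be recorded as follows; this is only a restatement of the preceding paragraph.

\[\mathrm{S}_{(\mathbb{C},\mathbb{D})}:\;(i\,,j)\text{-type}\;\longmapsto\;(j\,,i)\text{-type},\qquad(v\,,h)\leftrightarrow(h\,,v),\qquad(x\,,x)_{\mathrm{oplax}}\leftrightarrow(x\,,x)_{\mathrm{lax}}\ \text{ for }x\in\{v,h\}.\]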
We represent swapping in surface diagrams as permuting the order of the surfaces. For example, the mapping of an oplax \((h\,,\,h)\)-type \(2\)-cell to a lax one is depicted as follows.
**Definition 4.4** (braiding):
Let \(\mathbb{C}\) be a Gray-monoidal double category with pseudo interchangers in both dimensions. A(n arrow-dimension) _braiding_ on \(\mathbb{C}\) is an arrow-dimension pseudo transformation
\[\sigma:\,(\mathbb{C}\otimes\mathbb{C}\to\mathbb{C})\,(\otimes_{\mathbb{C}} \to\mathrm{S}_{(\mathbb{C},\mathbb{C})}\cdot\,\otimes_{\mathbb{C}})\]
satisfying the following relations.
**nullary tensor braiding coherence:**: Braiding with the tensor unit is trivial in the sense that
\[\sigma(\mathrm{A}\,,\mathrm{I})\ =\ \mathrm{id}\,\mathrm{A}\ =\ \sigma(\mathrm{I}\,,\mathrm{A}) \tag{4.6}\]
and these equations constitute the object-components of identity modifications
\[\sigma(-\,,\mathrm{I})\mapsto\mathrm{id}-\quad\mathrm{and}\quad\sigma( \mathrm{I}\,,-)\mapsto\mathrm{id}- \tag{4.7}\]
**binary tensor braiding coherence:**: Braiding with a tensor product is given by successive braidings in the sense that
\[\begin{array}{rcl}\sigma(\mathrm{A}\,,\mathrm{B}\otimes\mathrm{C})&=&( \sigma(\mathrm{A}\,,\mathrm{B})\otimes\mathrm{C})\cdot(\mathrm{B}\otimes \sigma(\mathrm{A}\,,\mathrm{C}))\\ \sigma(\mathrm{A}\otimes\mathrm{B}\,,\mathrm{C})&=&(\mathrm{A}\otimes\sigma( \mathrm{B}\,,\mathrm{C}))\cdot(\sigma(\mathrm{A}\,,\mathrm{C})\otimes\mathrm{B })\end{array} \tag{4.8}\]
and these equations constitute the object-components of identity modifications
\[\begin{array}{rcl}\sigma(\overset{1}{-}\,,\overset{2}{-}\otimes\overset{3}{- })\mapsto(\sigma(\overset{1}{-}\,,\overset{2}{-})\otimes\overset{3}{-}) \cdot(\overset{2}{-}\otimes\sigma(\overset{1}{-}\,,\overset{3}{-}))\quad \mathrm{and}\\ \sigma(\overset{1}{-}\otimes\overset{2}{-}\,,\overset{3}{-})\mapsto( \overset{1}{-}\otimes\sigma(\overset{2}{-}\,,\overset{3}{-}))\cdot(\sigma( \overset{1}{-}\,,\overset{3}{-})\otimes\overset{2}{-})\end{array} \tag{4.9}\]
**Yang-Baxterator braiding coherence:**: Reversing the order of three objects using three braidings is coherent in the sense that
\[\sigma(\sigma(\mathrm{A}\,,\mathrm{B})\,,\mathrm{C})\ =\ \sigma(\mathrm{A}\,,\sigma(\mathrm{B}\,,\mathrm{C}))^{-1} \tag{4.10}\]

where, recalling our convention, \(\sigma(\sigma(\mathrm{A}\,,\mathrm{B})\,,\mathrm{C})\) is an arrow-component disk with oplax variance and \(\sigma(\mathrm{A}\,,\sigma(\mathrm{B}\,,\mathrm{C}))^{-1}\) is one with lax variance. We refer to these as "Yang-Baxterators", and describe them in detail below.
A Gray-monoidal double category equipped with a braiding is a _braided Gray-monoidal double category._
We now unfold this definition and introduce the corresponding graphical notation. First we recall what it means for \(\sigma\) to be an arrow-dimension oplax transformation.
Braiding object-component arrows. For objects \(\mathrm{A},\mathrm{X}:\mathbb{C}\) we have a braiding component arrow \(\sigma(\mathrm{A}\;,\mathrm{X}):\mathbb{C}\,(\mathrm{A}\otimes\mathrm{X}\rightarrow\mathrm{X}\otimes\mathrm{A})\). We represent this in a string diagram in the \((\otimes\;,\;\cdot)\)-plane as a crossing of the wires representing \(\mathrm{A}\) and \(\mathrm{X}\).
Braiding proarrow-component squares. For a proarrow \(\mathrm{M}:\mathrm{A}\nrightarrow\mathrm{B}\) and object \(\mathrm{X}\) we have a \((1,0)\)-type proarrow \((\mathrm{M},\mathrm{X}):(\mathrm{A}\;,\mathrm{X})\nrightarrow(\mathrm{B}\,,\mathrm{X})\) and a \((0,1)\)-type proarrow \((\mathrm{X}\,,\mathrm{M}):(\mathrm{X}\,,\mathrm{A})\nrightarrow(\mathrm{X}\,,\mathrm{B})\). These give us the following component squares, which we depict as shown.
(4.11)
Braiding arrow-component disks. For an arrow \(f:\mathrm{A}\rightarrow\mathrm{B}\) and object \(\mathrm{X}\) we have a \((1\,,0)\)-type arrow \((f\,,\mathrm{X}):(\mathrm{A}\;,\mathrm{X})\rightarrow(\mathrm{B}\,,\mathrm{X})\) and a \((0\,,1)\)-type arrow \((\mathrm{X}\,,f):(\mathrm{X}\,,\mathrm{A})\rightarrow(\mathrm{X}\,,\mathrm{B})\). These give us the following component disks, which we depict as shown.
(4.12)
Braiding composite \(1\)-cells. The nullary clause of preservation of proarrow composition (2.2) and the nullary clause of compatibility with arrow composition (2.3)
together with equation (4.4) say that for objects A and X we have the following equations.
\[\sigma(\mathrm{UA}\,,\mathrm{X})=\sigma(\mathrm{A}\,,\mathrm{UX})=\mathrm{U}( \sigma(\mathrm{A}\,,\mathrm{X}))=\sigma(\mathrm{id}\mathrm{A}\,,\mathrm{X})= \sigma(\mathrm{A}\,,\mathrm{id}\mathrm{X}) \tag{4.13}\]
This provides a unique interpretation to the following surface diagram.
Similarly, the binary clauses of (2.2) and (2.3) provide unique interpretations for all instances of this diagram obtained by "painting" onto it two consecutive arrows or proarrows, each independently of type \((1\,,\,0)\) or \((0\,,\,1)\), with non-intersecting \(\,\otimes\,\)-dimension projections. For example, given arrows \(f:\mathrm{A}\to\mathrm{B}\) and \(p:\mathrm{X}\to\mathrm{Y}\) the equation (2.3) instance
\[\sigma((f\,,\mathrm{X})\cdot(\mathrm{B}\,,p))=(\mathrm{U}(f\otimes\mathrm{X}) \cdot\sigma(\mathrm{B}\,,p))\odot(\sigma(f\,,\mathrm{X})\cdot\mathrm{U}(p \otimes\mathrm{B}))\]
provides a unique interpretation to the following surface diagram on the left. Likewise, for arrow \(g:\mathrm{B}\to\mathrm{C}\) equations (2.3) and (4.1) together imply
\[\sigma(f\cdot g\,,\mathrm{X})=(\mathrm{U}(f\otimes\mathrm{X})\cdot\sigma(g\,, \mathrm{X}))\odot(\sigma(f\,,\mathrm{X})\cdot\mathrm{U}(\mathrm{X}\otimes g)) \tag{4.14}\]
providing a unique interpretation to the one on the right.
Braiding naturality for squares. For square \(\varphi:{}^{\mathrm{M}}_{f}\!\!\!\circ^{g}_{\mathrm{N}}\) we have a \((2,0)\)-type square \((\varphi\,,\mathrm{X})\) and a \((0\,,\,2)\)-type square \((\mathrm{X}\,,\varphi)\). In this case naturality for squares (2.4) says
\[\begin{array}{rcl}((\varphi\otimes\mathrm{X})\cdot\sigma(\mathrm{N}\,, \mathrm{X}))\odot\sigma(g\,,\mathrm{X})&\cong&\sigma(f\,,\mathrm{X})\odot( \sigma(\mathrm{M}\,,\mathrm{X})\cdot(\mathrm{X}\otimes\varphi))\\ ((\mathrm{X}\otimes\varphi)\cdot\sigma(\mathrm{X}\,,\mathrm{N}))\odot\sigma( \mathrm{X}\,,g)&\cong&\sigma(\mathrm{X}\,,f)\odot(\sigma(\mathrm{X}\,,\mathrm{ M})\cdot(\varphi\otimes\mathrm{X}))\end{array} \tag{4.15}\]
This identifies the two boundary-preserving perturbations of each of the following surface diagrams that move the square \(\varphi\) up or down away from the braiding.
For arrow \(f:{\rm A}\to{\rm B}\) and proarrow \({\rm S}:{\rm X}\nrightarrow{\rm Y}\) we have a \((v\,,\,h)\)-type interchanger square \((f\,,{\rm S})\) and a \((h\,,v)\)-type interchanger square \(({\rm S}\,,f)\). In this case naturality for squares (2.4) says
\[\begin{array}{rcl}(\chi_{(f,{\rm S})}\cdot\sigma({\rm B}\,,{\rm S}))\odot \sigma(f\,,{\rm Y})&\cong&\sigma(f\,,{\rm X})\odot(\sigma({\rm A}\,,{\rm S}) \cdot\chi_{({\rm S},f)})\\ (\chi_{({\rm S},f)}\cdot\sigma({\rm S}\,,{\rm B}))\odot\sigma({\rm Y}\,,f)& \cong&\sigma({\rm X}\,,f)\odot(\sigma({\rm S}\,,{\rm A})\cdot\chi_{(f,{\rm S}) })\end{array}\]
This identifies the two boundary-preserving perturbations of each of the following surface diagrams that move the interchanger of \(f\) and \({\rm S}\) up or down away from the braiding.
For proarrows \({\rm M}:{\rm A}\nrightarrow{\rm B}\) and \({\rm S}:{\rm X}\nrightarrow{\rm Y}\) we have an \((h\,,h)\)-type oplax interchanger disk \(({\rm M}\,,{\rm S})\) and an \((h\,,h)\)-type lax interchanger disk \(({\rm S}\,,{\rm M})\). In this case the instance of naturality for squares in equation (2.6) says
\[\chi_{({\rm M},{\rm S})}\cdot(\sigma({\rm A}\,,{\rm S})\odot\sigma({\rm M}\,,{ \rm Y}))\quad=\quad(\sigma({\rm M}\,,{\rm X})\odot\sigma({\rm B}\,,{\rm S})) \cdot\chi_{({\rm S},{\rm M})}{}^{-1}\]
where, according to our convention on variance, \(\chi_{({\rm M},{\rm S})}\) is an interchanger disk with oplax orientation and \(\chi_{({\rm S},{\rm M})}{}^{-1}\) is one with lax orientation. This identifies the two boundary-preserving perturbations of the following surface diagram that move the interchanger of M and S up or down away from the braiding.
Note that we need a braided Gray-monoidal double category to have pseudo interchangers for proarrows because the swap functor inverts the interchanger variance below the braiding.
For arrows \(f:{\rm A}\to{\rm B}\) and \(p:{\rm X}\to{\rm Y}\) we have a \((v\,,v)\)-type oplax interchanger disk \((f\,,p)\) and a \((v\,,v)\)-type lax interchanger disk \((p\,,f)\). In this case the instance of naturality for squares in equation (2.5) says
\[\begin{array}{l}[\chi_{(f,p)}\cdot{\rm U}(\sigma({\rm B}\,,{\rm Y}))]\odot[{ \rm U}({\rm A}\otimes p)\cdot\sigma(f\,,{\rm Y})]\odot[\sigma({\rm A}\,,p) \cdot{\rm U}({\rm Y}\otimes f)]\\ =\\ [{\rm U}(f\otimes{\rm X})\cdot\sigma({\rm B}\,,p)]\odot[\sigma(f\,,{\rm X}) \cdot{\rm U}(p\otimes{\rm B})]\odot[{\rm U}(\sigma({\rm A}\,,{\rm X}))\cdot \chi_{(p,f)}{}^{-1}]\end{array}\]
This identifies the two boundary-preserving perturbations of the following surface
diagram that move the interchanger of \(f\) and \(p\) up or down away from the braiding.
Note that we need a braided Gray-monoidal double category to have pseudo interchangers for arrows because the swap functor inverts the interchanger variance below the braiding.
Braiding pseudo naturality. The requirement that \(\sigma\) be an arrow-dimension pseudo transformation means that we also have arrow-component disks with the lax variance
\[\sigma(f\,,\mathrm{X})^{-1}:\sigma(\mathrm{A}\,,\mathrm{X})\cdot( \mathrm{X}\otimes f) \mapsto(f\otimes\mathrm{X})\cdot\sigma(\mathrm{B}\,,\mathrm{X})\quad \text{and}\] \[\sigma(\mathrm{X}\,,f)^{-1}:\sigma(\mathrm{X}\,,\mathrm{A})\cdot( f\otimes\mathrm{X}) \mapsto(\mathrm{X}\otimes f)\cdot\sigma(\mathrm{X}\,,\mathrm{B})\]
which are proarrow-dimension inverse (2.7) to \(\sigma(f\,,\mathrm{X})\) and \(\sigma(\mathrm{X}\,,f)\) respectively. For example, the relation is depicted below.
\((m\,,n)\)-braiding coherence. The nullary and binary tensor braiding laws (4.6) and (4.8) assert that each of the following string diagrams in the \((\otimes\,,\,\cdot\,)\)-plane represents a unique arrow.
\[\text{(string diagrams for the nullary and binary tensor braidings of equations (4.6) and (4.8), with wires labelled }\mathrm{A},\mathrm{B},\mathrm{C}) \tag{4.16}\]
By induction, the braiding of an \(m\)-ary tensor of objects with an \(n\)-ary tensor of objects is well defined. We refer to these as "\((m\,,\,n)\)-braidings".
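For example, one expansion of a \((2\,,2)\)-braiding, obtained by applying the second clause of (4.8) first, then the first clause, and then functoriality of tensoring with an object, is the following; a different order of expansion yields a composite differing from this one only by interchanger squares.

\[\begin{array}{rcl}\sigma(\mathrm{A}\otimes\mathrm{B}\,,\mathrm{C}\otimes\mathrm{D})&=&(\mathrm{A}\otimes\sigma(\mathrm{B}\,,\mathrm{C}\otimes\mathrm{D}))\cdot(\sigma(\mathrm{A}\,,\mathrm{C}\otimes\mathrm{D})\otimes\mathrm{B})\\ &=&(\mathrm{A}\otimes\sigma(\mathrm{B}\,,\mathrm{C})\otimes\mathrm{D})\cdot(\mathrm{A}\otimes\mathrm{C}\otimes\sigma(\mathrm{B}\,,\mathrm{D}))\cdot(\sigma(\mathrm{A}\,,\mathrm{C})\otimes\mathrm{D}\otimes\mathrm{B})\cdot(\mathrm{C}\otimes\sigma(\mathrm{A}\,,\mathrm{D})\otimes\mathrm{B})\end{array}\]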
The requirement that equations (4.6) and (4.8) be the object-component disks of identity modifications (4.7) and (4.9) is a succinct way to make them natural for arrows (2.15) and for proarrows (2.16) in all indices. This implies that if we "paint" an arrow or proarrow onto any surface of one of the following diagrams, the result has a unique interpretation.
These provide unique interpretations for the following surface diagrams.
By equation (4.8) these also have the proarrow-dimension boundary depicted in diagram (4.20), right-to-left in the first case and left-to-right in the second.
Yang-Baxterator braiding coherence asserts that \(\sigma(\sigma(\mathrm{A}\:,\mathrm{B})\:,\mathrm{C})^{-1}\) and \(\sigma(\mathrm{A}\:,\sigma(\mathrm{B}\:,\mathrm{C}))^{-1}\) are respectively \(\sigma(\mathrm{A}\:,\sigma(\mathrm{B}\:,\mathrm{C}))\) and \(\sigma(\sigma(\mathrm{A}\:,\mathrm{B})\:,\mathrm{C})\); that is, that \(\sigma(\sigma(\mathrm{A}\:,\mathrm{B})\:,\mathrm{C})\) and \(\sigma(\mathrm{A}\:,\sigma(\mathrm{B}\:,\mathrm{C}))\) are proarrow-dimension inverse to one another (4.10).
A natural question to ask is what happens if we have two successive braidings on the same pair of objects. In a one-dimensional monoidal category the only thing we can do is impose a relation, known as _symmetry_. But as with the Yang-Baxterators, given more dimensions we can categorify this property to a structure known as a syllepsis.
**Definition 4.5** (syllepsis):
A _syllepsis_ for a braided Gray-monoidal double category \(\mathbb{C}\) is an invertible globular modification
\[\upsilon:\:(\:\otimes_{\mathbb{C}}\rightarrow\:\otimes_{\mathbb{C}})\:(\mathrm{ id}(\:\otimes_{\mathbb{C}})\twoheadrightarrow\sigma\cdot(\mathrm{S}\:\cdot\:\sigma))\]
That is, for objects \(\mathrm{A}\) and \(\mathrm{B}\), syllepsis \(\upsilon\) has object-component disk
\[\upsilon(\mathrm{A}\:,\mathrm{B}):\mathrm{id}(\mathrm{A}\otimes\mathrm{B})\twoheadrightarrow\sigma(\mathrm{A}\:,\mathrm{B})\cdot\sigma(\mathrm{B}\:,\mathrm{A})\]
It is required to satisfy the following relations.
**nullary tensor syllepsis coherence:**: Syllepsis with the tensor unit is trivial in the sense that
\[\upsilon(\mathrm{A}\:,\mathrm{I})\ =\ \mathrm{id}^{2}\:\mathrm{A}\ =\ \upsilon( \mathrm{I}\:,\mathrm{A}) \tag{4.21}\]
**binary tensor syllepsis coherence:**: Syllepsis with a tensor product is given by nested syllepses in the sense that
\[\begin{array}{l}\upsilon(\mathrm{A}\:,\mathrm{B}\otimes\:\mathrm{C})\:=\:( \upsilon(\mathrm{A}\:,\mathrm{B})\otimes\:\mathrm{C})\:\odot\:[\mathrm{U}( \sigma(\mathrm{A}\:,\mathrm{B})\otimes\:\mathrm{C})\cdot(\mathrm{B}\otimes \upsilon(\mathrm{A}\:,\mathrm{C}))\cdot\mathrm{U}(\sigma(\mathrm{B}\:,\mathrm{ A})\otimes\:\mathrm{C})]\\ \upsilon(\mathrm{A}\otimes\mathrm{B}\:,\mathrm{C})\:=\:(\mathrm{A}\otimes \upsilon(\mathrm{B}\:,\mathrm{C}))\:\odot\:[\mathrm{U}(\mathrm{A}\otimes \sigma(\mathrm{B}\:,\mathrm{C}))\cdot(\upsilon(\mathrm{A}\:,\mathrm{C}) \otimes\mathrm{B})\cdot\mathrm{U}(\mathrm{A}\otimes\sigma(\mathrm{C}\:, \mathrm{B}))]\end{array} \tag{4.22}\]
A braided Gray-monoidal double category with a syllepsis is a _sylleptic Gray-monoidal double category._
A syllepsis relates the "un-braiding" to a consecutive pair of braidings. We draw
\(\upsilon(\mathrm{A}\,,\mathrm{B})\) in surface diagrams as shown.
The nullary tensor syllepsis coherence laws (4.21) are dependent upon the nullary tensor braiding coherence laws (4.6), and ensure that the following surface diagram continues to have a unique interpretation.
The binary tensor syllepsis coherence laws (4.22) are dependent upon the binary tensor braiding coherence laws (4.8), and provide unique interpretations to each of the following surface diagrams.
To better understand these it may help to render the proarrow disks on the right of equations (4.22) as rewrites yielding the following sequence of "slice" string diagrams in the \((\,\otimes\,,\,\cdot\,)\)-plane. As is customary in rewriting, we label the transitions for only the parts of the diagram that are changed. This reduces notational clutter by suppressing identity morphisms.
**Definition 4.6** (symmetry)
A syllepsis \(\upsilon\) is called a _symmetry_ if \((\sigma\,,\mathrm{S}\cdot\sigma\,,\upsilon\,,\mathrm{S}\cdot\upsilon^{-1})\) form an adjoint equivalence; that is, if for objects \(\mathrm{A}\) and \(\mathrm{B}\) we have an adjunction \(\sigma(\mathrm{A}\,,\mathrm{B})\dashv\sigma(\mathrm{B}\,,\mathrm{A})\) with unit \(\upsilon(\mathrm{A}\,,\mathrm{B})\) and counit \(\upsilon(\mathrm{B}\,,\mathrm{A})^{-1}\).
A sylleptic Gray-monoidal double category with a symmetry is a _symmetric Gray-monoidal double category_.
We represent the adjunction laws in surface diagrams as follows.
In the globular literature symmetry is characterized by the following equation [10].
\[\upsilon(\mathrm{A}\,,\mathrm{B})\cdot\mathrm{U}(\sigma(\mathrm{A}\,,\mathrm{B }))\quad=\quad\mathrm{U}(\sigma(\mathrm{A}\,,\mathrm{B}))\cdot\upsilon( \mathrm{B}\,,\mathrm{A}) \tag{4.23}\]
Our definition is equivalent.
**Proposition 4.7**
A syllepsis is a symmetry just in case equation (4.23) holds.
Proof.: Assuming the adjoint equivalence, we have
where in the first step we rewrite the identity arrow disk \(\mathrm{U}(\sigma(\mathrm{B}\,,\mathrm{A})\cdot\sigma(\mathrm{A}\,,\mathrm{B}))\) to the composite of inverses \(\upsilon(\mathrm{B}\,,\mathrm{A})^{-1}\odot\upsilon(\mathrm{B}\,,\mathrm{A})\), and in the second we apply the left adjunction law.
Conversely, assuming equation (4.23), we have
where in the first step we rewrite by equation (4.23) forwards, and in the second we rewrite the composite of inverses \(\upsilon(\mathrm{B}\,,\,\mathrm{A})\odot\upsilon(\mathrm{B}\,,\,\mathrm{A})^{-1}\) to the identity square \(\mathrm{id}^{2}(\mathrm{B\otimes A})\). Similarly, rewriting by equation (4.23) backwards we get the other adjunction law.
Related Constructions. The development of symmetric Gray-monoidal double categories presented here is inspired by Bohm's _double category analogue of Gray monoids_ [1], by Garner and Gurski's presentation of _monoidal double categories_ [10], and by Shulman's development of _symmetric monoidal double categories_ [23].
Another thread in influence, coming from the globular perspective, is the idea of a braided _Gray-monoidal \(2\)-category_ (or "Gray monoid" or "semistrict monoidal \(2\)-category"), which was introduced by Kapranov and Voevodsky [11]. This was further developed by Baez and Neuchl [1], who added Yang-Baxterator braiding coherence, by Day and Street [1], who added syllepsis and symmetry, and by Crans [12], who added nullary tensor braiding coherence.
Our Gray-monoidal double categories are closest to Bohm's double categorical Gray monoids. Our construction differs in being derived from locally cubical Gray categories, and thus being parametric in the variance of the homogeneous interchangers. We also go on to define braided, sylleptic, and symmetric structure.
Garner and Gurski define monoidal double categories as one-object locally cubical bicategories (section 3), making their monoidal structure weak, but not Gray. This is different from our construction in that the monoidal structure on a double category \(\mathbb{C}\) is given by a functor from the cartesian ordered pair double category \(\mathbb{C}\times\mathbb{C}\to\mathbb{C}\), whereas for us it is given by a functor from the Gray ordered pair double category \(\mathbb{C}\otimes\mathbb{C}\to\mathbb{C}\). Intuitively, in their setting you can do several things at once, whereas in ours you can do only one thing at a time, though you may change the order using suitably oriented interchangers if those things are independent.
Shulman gives a direct presentation by generators and relations of monoidal double categories, and goes on to define symmetric braided structure for them. Shulman's braiding is a strict arrow-dimension transformation, while ours is pseudo. We defend our choice on topological grounds: the boundary arrows of a braiding arrow-component disk, depicted in diagram (4.12), are topologically distinct, so we prefer not to identify them.
Our construction differs from those on Gray-monoidal \(2\)-categories mentioned above most notably in that the two-dimensional categories involved are cubical rather than globular. Another important difference is that in the definition of braiding we impose the tensor braiding coherence equations (4.8). By doing so we are, in a sense, leaving food on the table because these equations strictly identify terms whose dimension is not maximal. In the globular literature cited above the modifications corresponding to (4.9) are only invertible, not identities. This results in an additional layer of tensor braiding coherence, which we avoid.
While the other approach is certainly more general, we defend our choice on the following grounds. First, maximal weakness need not be our goal. If we don't consider them meaningful then compounding coherentors only serves to add bureaucracy to the constructions we are trying to perform. The choice to use a strictly associative and unital Gray tensor product already precludes fully weak constructions, so we are not the only ones leaving food on the table.
More affirmatively, our choice is guided by topology. The diagrammatic representations of the terms in each of equations (4.8) are topologically identical, as depicted in diagram (4.16). This allows us to treat \((m\,,n)\)-braidings strictly compositionally. Baez and Neuchl use a similar topological argument to justify the addition of Yang-Baxterator coherence to the braiding coherence laws originally proposed by Kapranov and Voevodsky, although in that case the dimension of the structures involved is maximal so no food is wasted.
## 5 Cartesian Structure
A presentation of cartesian products in \(1\)-categories is typically given by the universal construction of a terminal cone over a functor from a discrete category. Such a presentation is "behavioral" in the sense that it distinguishes product cones by a universal property in relation to all possible cones. Of course, this presentation is equivalent to one determining an isomorphism of Set-products of homs and homs to product objects. Sadly, when we move away from the setting of \(1\)-categories, such happy coincidences no longer obtain [12].
An alternative presentation is that of Fox, who observed that in the case of symmetric monoidal \(1\)-categories, cartesian structure can be characterized by a pair of natural transformations that are compatible with the monoidal structure and whose components endow objects with the structure of cocommutative comonoids [13]. The comonoid comultiplication acts as a _duplicator_ and its counit as a _deletor_. This "structural" characterization of cartesian products has the benefit that it can be given a presentation by generators and relations.
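For orientation, in the degenerate case of the cartesian \(1\)-category of sets this structure is just the diagonal together with the unique map to the one-point set: writing the composition diagrammatically, for an element \(a\in\mathrm{A}\) we have
\[\delta_{\mathrm{A}}(a)=(a\,,a),\qquad\varepsilon_{\mathrm{A}}(a)=*,\qquad\bigl(\delta_{\mathrm{A}}\cdot(\varepsilon_{\mathrm{A}}\otimes\mathrm{A})\bigr)(a)=(*\,,a)\cong a,\]
a minimal worked instance of the counit law, which the structure defined below weakens to invertible counitor disks.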
It is such a "Fox cartesian structure" for Gray-monoidal double categories that we present here. In the Gray-monoidal setting our duplicators must produce their copies in a given order, which is permuted by the interchangers and braidings. Of course, composition of arrows and of proarrows needs to be compatible with this ordering. One consequence of this is that the map making two copies of a thing from one is not a strict functor (definition 2.2).
A definition of lax functor for double categories was proposed by Grandis and Pare [14], but their maps are lax in only the proarrow dimension. In order to copy composites of both arrows and proarrows, we need maps of double categories that are lax in both dimensions.
**Definition 5.1** (lax functor of double categories)
A (doubly) _lax functor_ of double categories, \(\mathrm{F}:\mathbb{C}\to\mathbb{D}\), consists of a collection of
boundary-preserving maps of objects, arrows, proarrows, and squares,
together with a natural and coherent _comparison structure_ in each dimension, \(\theta\) and \(\theta^{\odot}\), as follows. We typically omit the dimension of a comparison structure as well as the arity of a comparitor disk when they are clear from the context.
arrow comparison structureFor each object \(\mathrm{A}\) there is a nullary comparitor arrow disk \(\theta_{0}(\mathrm{A}):\mathrm{id}(\mathrm{FA})\to\mathrm{F}(\mathrm{id}\,\mathrm{A})\), and for consecutive arrows \(f:\mathrm{A}\to\mathrm{B}\) and \(g:\mathrm{B}\to\mathrm{C}\) there is a binary comparitor arrow disk \(\theta_{2}(f\,,g):\mathrm{F}f\cdot\mathrm{F}g\to\mathrm{F}(f\cdot g)\).
(5.1)
These are coherent in the sense that
(5.2)
and natural in the sense that
(5.3)
and
(5.4)
(5.5)
proarrow comparison structureFor each object \(\mathrm{A}\) there is a nullary comparitor proarrow disk \(\theta_{0}^{\odot}(\mathrm{A}):\mathrm{U}(\mathrm{FA})\to\mathrm{F}(\mathrm{UA})\), and for consecutive proarrows \(\mathrm{M}:\mathrm{A}\to\mathrm{B}\) and \(\mathrm{N}:\mathrm{B}\to\mathrm{C}\) there is a binary comparitor proarrow disk \(\theta_{2}^{\odot}(\mathrm{M}\,,\mathrm{N}):\mathrm{FM}\odot\mathrm{FN}\to\mathrm{F}(\mathrm{M}\odot\mathrm{N})\).
(5.6)
These are coherent in the sense that
(5.7)
and

(5.8)

and natural in the sense that

(5.9)

and

(5.10)

A lax functor is _pseudo_ when all of its comparitor disks are invertible, and _unitary_ when its nullary comparitor disks are identities, in both dimensions.
We define the unitary lax _Gray diagonal functor_\(\Delta_{\mathbb{C}}:\mathbb{C}\to\mathbb{C}\otimes\mathbb{C}\) so that it sends a square \(\alpha:\,{}^{\mathrm{M}}_{f}\!\!\circ_{\mathrm{N}}^{g}\) to \(((\mathrm{A}\,,\alpha)\odot(\mathrm{M}\,,g))\cdot((f\,,\mathrm{N})\odot(\alpha \,,\mathrm{D}))\) as shown.
(5.11)
For consecutive arrows \(f:\mathrm{A}\to\mathrm{B}\) and \(g:\mathrm{B}\to\mathrm{C}\) it has comparitor disk \(\theta_{2}(f\,,g)=\mathrm{U}(\mathrm{A}\,,f)\cdot(f\,,g)\cdot\mathrm{U}(g\,, \mathrm{C})\), and for consecutive proarrows \(\mathrm{M}:\mathrm{A}\twoheadrightarrow\mathrm{B}\) and \(\mathrm{N}:\mathrm{B}\twoheadrightarrow\mathrm{C}\) it has comparitor disk \(\theta_{2}(\mathrm{M}\,,\mathrm{N})=\mathrm{id}(\mathrm{A}\,,\mathrm{M}) \odot(\mathrm{M}\,,\mathrm{N})\odot\mathrm{id}(\mathrm{N}\,,\mathrm{C})\) (suppressing the associator terms), as shown below. These act to _collate_ copies of composite arrows and proarrows, respectively.
(5.12)
If \(\mathbb{C}\otimes\mathbb{C}\) has lax interchangers in both dimensions then we can define another Gray diagonal functor \(\Delta_{\mathbb{C}}{}^{\prime}:\mathbb{C}\to\mathbb{C}\otimes\mathbb{C} \coloneqq\Delta_{\mathbb{C}}\cdot\mathrm{S}_{(\mathbb{C},\mathbb{C})}\), which swaps the order of the factors. Once we have lax functors we need transformations for them as well.
**Definition 5.3** (arrow-dimension oplax transformation of lax functors):
An _arrow-dimension oplax transformation of lax functors_ of double categories \(\alpha:(\mathbb{C}\to\mathbb{D})\,(\mathrm{F}\to\mathrm{G})\) has object-component arrows, proarrow-component squares, and arrow-component disks, just like those of an arrow-dimension oplax transformation of strict functors (definition 2.3). It satisfies the naturality condition for squares (2.4) as well. However, the proarrow composition preservation condition (2.2) no longer makes sense because
\[\alpha(\mathrm{U}\,\mathrm{A}):{}^{\mathrm{F}(\mathrm{UA})}\!\!\circ_{\mathrm{ G}(\mathrm{UA})}^{\alpha\mathrm{A}}\quad\text{and}\quad\alpha(\mathrm{M}\odot \mathrm{N}):{}^{\mathrm{F}(\mathrm{M}\!\!\odot\!\mathrm{N})}\!\!\circ_{\mathrm{ G}(\mathrm{M}\!\!\odot\!\mathrm{N})}^{\alpha\mathrm{C}},\]
whereas
\[\mathrm{U}(\alpha\mathrm{A}):{}^{\mathrm{U}(\mathrm{FA})}\!\!\circ_{\alpha \mathrm{A}}^{\alpha\mathrm{A}}\quad\text{and}\quad\alpha\mathrm{M}\odot \alpha\mathrm{N}:{}^{\mathrm{FM}\!\!\odot\!\mathrm{FN}}\!\!\circ_{\alpha \mathrm{A}}^{\alpha\mathrm{C}}\!\!\!\circ_{\mathrm{GM}\!\!\odot\!\mathrm{GN}} ^{\alpha\mathrm{C}}.\]
Likewise, the arrow composition compatibility condition (2.3) no longer makes sense because
\[\alpha(\mathrm{id}\,\mathrm{A}):\mathrm{F}(\mathrm{id}\,\mathrm{A})\cdot \alpha\mathrm{A}\twoheadrightarrow\alpha\mathrm{A}\cdot\mathrm{G}(\mathrm{id} \,\mathrm{A})\quad\text{and}\quad\alpha(f\cdot g):\mathrm{F}(f\cdot g)\cdot \alpha\mathrm{C}\twoheadrightarrow\alpha\mathrm{A}\cdot\mathrm{G}(f\cdot g),\]
whereas3
Footnote 3: Here we repeat the boundary of \(\mathrm{U}(\alpha\mathrm{A})\) using disk notation to facilitate comparison with that of \(\alpha(\mathrm{id}\,\mathrm{A})\).
\[\mathrm{U}(\alpha\mathrm{A}):\alpha\mathrm{A}\twoheadrightarrow\alpha\mathrm{A }\quad\text{and}\quad(\mathrm{U}(\mathrm{F}f)\cdot\alpha g)\odot(\alpha f\cdot \mathrm{U}(\mathrm{G}g)):\mathrm{F}f\cdot\mathrm{F}g\cdot\alpha\mathrm{C} \twoheadrightarrow\alpha\mathrm{A}\cdot\mathrm{G}f\cdot\mathrm{G}g.\]
We replace these with the following comparitor compatibility relations.
proarrow comparitor compatibility: \(\theta_{0}^{\rm F}({\rm A})\cdot\alpha({\rm UA})={\rm U}(\alpha{\rm A})\cdot \theta_{0}^{\rm G}({\rm A})\),
(5.13)
and \(\theta_{2}^{\rm F}({\rm M}\,,{\rm N})\cdot\alpha({\rm M}\odot{\rm N})=(\alpha{ \rm M}\odot\alpha{\rm N})\cdot\theta_{2}^{\rm G}({\rm M}\,,{\rm N})\).
(5.14)
**arrow comparitor compatibility: \((\theta_{0}^{\rm F}({\rm A})\cdot{\rm U}(\alpha{\rm A}))\odot\alpha({\rm id }\,{\rm A})={\rm U}(\alpha{\rm A})\cdot\theta_{0}^{\rm G}({\rm A})\),**
(5.15)
and \((\theta_{2}^{\rm F}(f\,,g)\cdot{\rm U}(\alpha{\rm C}))\odot\alpha(f\cdot g)=({ \rm U}({\rm F}f)\cdot\alpha g)\odot(\alpha f\cdot{\rm U}({\rm G}g))\odot({\rm U }(\alpha{\rm A})\cdot\theta_{2}^{\rm G}(f\,,g))\).
(5.16)
Proarrow-dimension transformations of lax functors are defined similarly. A cubical modification of transformations of lax functors is defined just as those involving strict functors in definition 2.6, without additional coherences involving the object-component squares of the modification and the comparitor disks of the lax functors.
Given all of this we can define duplication for symmetric Gray-monoidal double categories. Note that because a braided Gray-monoidal double category has invertible interchangers in both dimensions the unitary lax Gray diagonal functors \(\Delta\) and \(\Delta^{\prime}\) are necessarily pseudo (i.e., have invertible comparitors).
**Definition 5.4** (duplication structure)
A(n arrow-dimension) _duplication structure_ for a symmetric Gray-monoidal double category \(\mathbb{C}\) consists of the following structure.
**duplicators:**: arrow-dimension _duplicator_ oplax transformations of unitary pseudo functors
\[\delta:\,(\mathbb{C}\to\mathbb{C})\,({\rm id}\,\mathbb{C}\to\Delta_{\mathbb{C} }\cdot\,\otimes\,_{\mathbb{C}})\quad\mbox{and}\quad\delta^{\prime}:\,(\mathbb{ C}\to\mathbb{C})\,({\rm id}\,\mathbb{C}\to\Delta_{\mathbb{C}}{}^{\prime}\cdot\, \otimes\,_{\mathbb{C}})\]
**coassociators:**: invertible globular _coassociator_ modifications for the duplicators with object-component disks
\[s{\rm A}:\delta{\rm A}\cdot(\delta{\rm A}\otimes{\rm A})\dashrightarrow\delta{ \rm A}\cdot({\rm A}\otimes\delta{\rm A})\quad\mbox{and}\quad s^{\prime}{\rm A }:\delta^{\prime}{\rm A}\cdot(\delta^{\prime}{\rm A}\otimes{\rm A})\dashrightarrow \delta^{\prime}{\rm A}\cdot({\rm A}\otimes\delta^{\prime}{\rm A})\]
**cocommutors:**: invertible globular _cocommutor_ modifications for the duplicators with object-component disks
\[c{\rm A}:\delta{\rm A}\dashrightarrow\delta^{\prime}{\rm A}\cdot\sigma({ \rm A}\,,{\rm A})\quad\mbox{and}\quad c^{\prime}{\rm A}:\delta^{\prime}{\rm A }\dashrightarrow\delta{\rm A}\cdot\sigma({\rm A}\,,{\rm A})\]
These must satisfy the following relations.
**nullary tensor duplicator coherence:**: Duplicating the tensor unit is trivial in the sense that
\[\delta\mathrm{I}=\mathrm{id}\,\mathrm{I}=\delta^{\prime}\mathrm{I} \tag{5.17}\]
**binary tensor duplicator coherence:**: Duplicating a tensor product is given by duplicating the factors sequentially and using the braiding to permute the result into the required order.
\[\begin{array}{l}\delta(\mathrm{A}\otimes\mathrm{X})=(\mathrm{A}\otimes \delta\mathrm{X})\cdot(\delta\mathrm{A}\otimes\mathrm{X}\otimes\mathrm{X}) \cdot(\mathrm{A}\otimes\sigma(\mathrm{A}\,,\mathrm{X})\otimes\mathrm{X})\text{ and}\\ \delta^{\prime}(\mathrm{A}\otimes\mathrm{X})=(\delta^{\prime}\mathrm{A} \otimes\mathrm{X})\cdot(\mathrm{A}\otimes\mathrm{A}\otimes\delta^{\prime} \mathrm{X})\cdot(\mathrm{A}\otimes\sigma(\mathrm{A}\,,\mathrm{X})\otimes \mathrm{X})\end{array} \tag{5.18}\]
**homogeneous coassociator coherence:**: The two arrow disks with each of the following boundaries built using only coassociator components and lax interchangers for duplicators are identified, as described by equation (5.20) below.
\[\begin{array}{l}\delta\mathrm{A}\cdot(\delta\mathrm{A}\otimes\mathrm{A}) \cdot(\delta\mathrm{A}\otimes\mathrm{A}\otimes\mathrm{A})\leftrightarrow \delta\mathrm{A}\cdot(\mathrm{A}\otimes\delta\mathrm{A})\cdot(\mathrm{A} \otimes\mathrm{A}\otimes\delta\mathrm{A})\text{ and}\\ \delta^{\prime}\mathrm{A}\cdot(\delta^{\prime}\mathrm{A}\otimes\mathrm{A}) \cdot(\delta^{\prime}\mathrm{A}\otimes\mathrm{A}\otimes\mathrm{A}) \leftrightarrow\delta^{\prime}\mathrm{A}\cdot(\mathrm{A}\otimes\delta^{\prime} \mathrm{A})\cdot(\mathrm{A}\otimes\mathrm{A}\otimes\delta^{\prime}\mathrm{A}) \end{array}\]
**coassociator cocommutor coherence:**: The coassociators are interdefinable using the cocommutors, as described by equation (5.21) below.
**cocommutor syllepsis coherence:**: The cocommutors are interdefinable using the syllepsis, as described by equation (5.22) below.
We now unravel this definition and introduce the corresponding graphical syntax.
duplicators:For each object \(\mathrm{A}\) we have a component arrow \(\delta\mathrm{A}:\mathrm{A}\rightarrow\mathrm{A}\otimes\mathrm{A}\), for each proarrow \(\mathrm{M}:\mathrm{A}\rightarrow\mathrm{B}\) we have a component square \(\delta\mathrm{M}:\stackrel{{\mathrm{M}}}{{{}_{\delta\mathrm{A}} \wedge}}\stackrel{{\mathrm{\delta B}}}{{{}_{(\mathrm{A}\otimes \mathrm{M})\odot(\mathrm{M}\otimes\mathrm{B})}}}\), and for each arrow \(f:\mathrm{A}\rightarrow\mathrm{B}\) we have a component disk \(\delta f:f\cdot\delta\mathrm{B}\rightarrow\delta\mathrm{A}\cdot(\mathrm{A} \otimes f)\cdot(f\otimes\mathrm{B})\).
We represent the component arrow \(\delta\mathrm{A}\) in a string diagram in the \((\otimes\,,\ \cdot)\)-plane as a splitting of the wire representing \(\mathrm{A}\) into two wires. We represent the component squares \(\delta\mathrm{M}\) and \(\delta f\) in surface diagrams as a splitting of the surface containing \(\mathrm{M}\) or \(f\) into two surfaces along a "seam", with the proarrow or arrow in the second surface preceding that in the first surface in the relevant composition order, as prescribed by \(\Delta\) in diagram (5.11).
(5.19)
The proarrow-component squares and arrow-component disks of \(\delta^{\prime}\) produce their copies in the order opposite that for \(\delta\). In string diagrams we use a black dot \(\bullet\) to represent \(\delta\), and a white dot \(\circ\) to represent \(\delta^{\prime}\). When referring to either duplicator generically we omit the distinguishing dot.
We defined \(\delta\) to be an oplax transformation of unitary pseudo functors. Its domain, \(\operatorname{id}\mathbb{C}\), is a strict functor; that is, regarded as a lax functor it has identity comparitor disks. But its codomain, \(\Delta_{\mathbb{C}}\cdot\,\otimes\,_{\mathbb{C}}\), has nontrivial comparitor disks that collate the copies using interchangers as shown in diagram (5.12). The proarrow comparitor compatibility relation (5.14) says that for consecutive proarrows \(\operatorname{M}:\operatorname{A}\twoheadrightarrow\operatorname{B}\) and \(\operatorname{N}:\operatorname{B}\twoheadrightarrow\operatorname{C}\) we have (up to suppressed proarrow associators)
\[\delta(\operatorname{M}\odot\operatorname{N})=(\delta\operatorname{M}\odot \delta\operatorname{N})\cdot(\operatorname{id}(\operatorname{A}\otimes \operatorname{M})\odot\chi_{(\operatorname{M},\operatorname{N})}\odot \operatorname{id}(\operatorname{N}\otimes\mathbb{C})),\]
and the arrow comparitor compatibility relation (5.16) says that for consecutive arrows \(f:\operatorname{A}\to\operatorname{B}\) and \(g:\operatorname{B}\to\operatorname{C}\) we have
\[\delta(f\cdot g)=[\operatorname{U}f\cdot\delta g]\odot[\delta f\cdot \operatorname{U}(\operatorname{B}\otimes g)\cdot\operatorname{U}(g\otimes \operatorname{C})]\odot[\operatorname{U}(\delta\operatorname{A})\cdot \operatorname{U}(\operatorname{A}\otimes f)\cdot\chi_{(f,g)}\cdot\operatorname {U}(g\otimes\operatorname{C})].\]
These are represented by the following surface diagrams,
which have the following projection string diagrams.
The naturality relation for squares (2.4) says that for a square \(\alpha:\,{}^{\mathrm{M}}_{f}\!\!\circ_{\mathrm{N}}^{g}\) the duplicator components at its boundary commute with \(\alpha\), as represented by the following surface diagrams.
coassociators:The invertible object-component disks for coassociators \(s\) and \(s^{\prime}\) relate the two possible ways to make three ordered copies using their respective duplicates.
**Lemma 5.5** (coassociator braiding coherence): Coassociators are coherent for braiding in the sense that we have the following equation between invertible arrow disks.
We have analogous braiding coherences involving \(\delta^{\prime}\) and \(s^{\prime}\), as well as coherences where we duplicate X rather than A.
Proof.: The last three squares on the left compose to \(\sigma(\delta\mathrm{A}\cdot\left(\mathrm{A}\otimes\delta\mathrm{A}\right), \mathrm{X})\) and the first three squares on the right compose to \(\sigma(\delta\mathrm{A}\cdot\left(\delta\mathrm{A}\otimes\mathrm{A}\right), \mathrm{X})\) by (4.14) and (4.17). The result then follows from braiding naturality for \(\left(2\,,0\right)\)-type squares (4.15).
cocommutors:The invertible object-component disks for the cocommutor \(c\) relate duplication by \(\delta\) to duplication by \(\delta^{\prime}\) followed by a braiding.
The components for \(c^{\prime}\) swap the roles of \(\delta\) and \(\delta^{\prime}\). The naturality requirement for an arrow \(f:\mathrm{A}\to\mathrm{B}\) (2.15),
and for a proarrow \(\mathrm{M}:\mathrm{A}\to\mathrm{B}\) (2.16),
reveal the need for distinct duplicators that differ in the order of their copies. If we had only a single duplicator then the naturality equations could not hold because the boundaries wouldn't agree.
**Lemma 5.6** (cocommutor braiding coherence):
Cocommutors are coherent for braiding in the sense that we have the following equation between invertible arrow disks.
We have analogous relations obtained by swapping \((\delta,c)\) and \((\delta^{\prime},c^{\prime})\), and by braiding on the other side.
Proof.: The last two squares on the left compose to \(\sigma(\delta^{\prime}\mathrm{A}\cdot\sigma(\mathrm{A}\,,\mathrm{A})\,,\, \mathrm{X})\) by (4.14). The result then follows from braiding naturality for \((2\,,\,0)\)-type squares (4.15).
tensor duplicator coherence:Equation (5.17) for nullary tensor duplicator coherence is well-bounded because \(\mathrm{I}\) is a strict unit for \(\otimes\) so \(\mathrm{I}\otimes\mathrm{I}=\mathrm{I}\). It asserts that the empty diagram continues to have a unique interpretation in the presence of duplication.
Equation (5.18) for binary tensor duplicator coherence says what it means to duplicate a tensor product of objects. In particular, we duplicate the objects independently and use the braiding to collate the copies.
As with equation (4.8) for tensor braiding coherence we are leaving some food on the table here by asking for strict equality rather than just an invertible arrow disk. This choice seems consistent with our decision to use strictly associative and unital Gray-monoidal structure, but may be worth reconsidering in future.
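To see what equation (5.18) asserts in the most degenerate case, consider the cartesian \(1\)-category of sets, where the braiding is the plain swap. Duplicating the factors of a pair one at a time and then collating the copies with the swap gives
\[(a\,,x)\;\overset{\mathrm{A}\otimes\delta\mathrm{X}}{\longmapsto}\;(a\,,x\,,x)\;\overset{\delta\mathrm{A}\otimes\mathrm{X}\otimes\mathrm{X}}{\longmapsto}\;(a\,,a\,,x\,,x)\;\overset{\mathrm{A}\otimes\sigma(\mathrm{A}\,,\mathrm{X})\otimes\mathrm{X}}{\longmapsto}\;(a\,,x\,,a\,,x),\]
which is exactly \(\delta(\mathrm{A}\otimes\mathrm{X})(a\,,x)\) up to the evident rebracketing; equation (5.18) imposes the analogous identity at the level of the Gray-monoidal double category itself.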
Because we have object-component arrows for \(\delta\) we also have for each object \(\mathrm{A}\) an arrow-component disk \(\delta(\delta\mathrm{A}):\delta\mathrm{A}\cdot\delta(\mathrm{A}\otimes \mathrm{A})\)\(\mapsto\delta\mathrm{A}\cdot(\mathrm{A}\otimes\delta\mathrm{A})\cdot( \delta\mathrm{A}\otimes\mathrm{A}\otimes\mathrm{A})\) depicted on the left, with proarrow-dimension boundaries shown on the right.
Similarly, we have disks \(\delta^{\prime}(\delta^{\prime}\mathrm{A})\), \(\delta(\delta^{\prime}\mathrm{A})\), and \(\delta^{\prime}(\delta\mathrm{A})\).
homogeneous coassociator coherence:This is the dual of Mac Lane's associator coherence "pentagon equation" [14], but made a hexagon by the explicit interchanger.
(5.20)
Likewise for \(\delta^{\prime}\).
coassociator cocommutor coherence:The following composite of invertible arrow disks is equal to the coassociator \(s\)A.
(5.21)
Dually, we can express \(s^{\prime}\)A in terms of \(s\)A by swapping the roles of \(\delta\) and \(\delta^{\prime}\).
**cocommutor syllepsis coherence:** The following composite of invertible arrow disks is equal to the cocommutor \(c\)A.
(5.22)
Dually, we can express \(c^{\prime}\)A in terms of \(c\)A by swapping the roles of \(\delta\) and \(\delta^{\prime}\).
Using cocommutors we can define "heterogeneous coassociators". We can do so either by using \(s\):
(5.23)
or by using \(s^{\prime}\):
(5.24)
**Lemma 5.7**
The above heterogeneous coassociators are equal.
Proof.: Substitute the term for \(s^{-1}\) from (5.21) into the first term above and cancel the resulting consecutive inverses. The first two terms of the result are a pair of consecutive cocommutors; rewrite them to a syllepsis by (5.22). The final \(\upsilon^{-1}\) term is independent of all the intermediate terms; permute it to directly follow the \(\upsilon\) term. Finally, cancel this inverse pair, and what remains is the second term above.
A duplication structure for a symmetric Gray-monoidal double category allows us to make more than one ordered copy of an object, arrow, proarrow, or square. We would like to be able to make fewer than one copy as well. For this we will need deletion.
**Definition 5.8** (deletion structure):
A(n arrow-dimension) _deletion structure_ for a symmetric Gray-monoidal double category \(\mathbb{C}\) with a duplication structure consists of the following structure.
**deletor:**: an arrow-dimension _deletor_ oplax transformation of strict functors
\[\varepsilon:\,(\mathbb{C}\to\mathbb{C})\,(\mathrm{id}\,\mathbb{C}\to\mathrm{!} _{\mathbb{C}}\cdot\mathrm{I}_{\mathbb{C}})\]
**counitors:**: invertible globular left and right _counitor_ modifications for the duplicators with object-component disks
\[\begin{array}{l}l\mathrm{A}:\delta\mathrm{A}\cdot(\varepsilon\mathrm{A} \otimes\mathrm{A})\nrightarrow\mathrm{id}\,\mathrm{A},\quad\quad r\mathrm{A}: \delta\mathrm{A}\cdot(\mathrm{A}\otimes\varepsilon\mathrm{A})\nrightarrow \mathrm{id}\,\mathrm{A},\\ l^{\prime}\mathrm{A}:\delta^{\prime}\mathrm{A}\cdot(\varepsilon\mathrm{A} \otimes\mathrm{A})\nrightarrow\mathrm{id}\,\mathrm{A},\quad r^{\prime} \mathrm{A}:\delta^{\prime}\mathrm{A}\cdot(\mathrm{A}\otimes\varepsilon\mathrm{ A})\nrightarrow\mathrm{id}\,\mathrm{A}\end{array}\]
These must satisfy the following relations.
**nullary tensor deletor coherence:**: Deleting the tensor unit is trivial in the sense that
\[\varepsilon\mathrm{I}=\mathrm{id}\,\mathrm{I} \tag{5.23}\]
**binary tensor deletor coherence:**: Deleting a tensor product is given by deleting the factors sequentially
\[\varepsilon(\mathrm{A}\otimes\mathrm{X})=(\varepsilon\mathrm{A}\otimes \mathrm{X})\cdot\varepsilon\mathrm{X} \tag{5.24}\]
**counitor coassociator coherence:**: The two arrow disks with each of the following boundaries built using only counitor and coassociator components are identified, as described by equation (5.26) below.
\[\begin{array}{l}\delta\mathrm{A}\cdot(\delta\mathrm{A}\otimes\mathrm{A}) \cdot(\mathrm{A}\otimes\varepsilon\mathrm{A}\otimes\mathrm{A})\nrightarrow \delta\mathrm{A}\cdot(\mathrm{A}\otimes\delta\mathrm{A})\cdot(\mathrm{A} \otimes\varepsilon\mathrm{A}\otimes\mathrm{A})\text{ and }\\ \delta^{\prime}\mathrm{A}\cdot(\delta^{\prime}\mathrm{A}\otimes\mathrm{A})\cdot( \mathrm{A}\otimes\varepsilon\mathrm{A}\otimes\mathrm{A})\nrightarrow\delta^{ \prime}\mathrm{A}\cdot(\mathrm{A}\otimes\delta^{\prime}\mathrm{A})\cdot( \mathrm{A}\otimes\varepsilon\mathrm{A}\otimes\mathrm{A})\end{array}\]
**counitor cocommutor coherence:**: The counitors are interdefinable using the cocommutors, as described by equation (5.27) below.
We now unravel this definition and introduce the corresponding graphical syntax.
deletor:For each object \(A\) we have a component arrow \(\varepsilon A:A\to I\), for each proarrow \(M:A\nrightarrow B\) we have a component square \(\varepsilon M:\,\stackrel{{ M}}{{{}_{\varepsilon A}\wedge}}\stackrel{{\varepsilon B}}{{{}_{\mathrm{U}\,I}}}\), and for each arrow \(f:A\to B\) we have a component disk \(\varepsilon f:f\cdot\varepsilon B\to\varepsilon A\).

**Lemma 5.9** (counitor braiding coherence): Counitors are coherent for braiding in the sense that we have the following equation between invertible arrow disks.
Proof.: The first three squares on the left compose to \(\sigma(\delta\mathrm{A}\cdot\left(\varepsilon\mathrm{A}\otimes\mathrm{A}\right), \mathrm{X})\) by (4.14) and (4.17). Braiding an identity is itself an identity by (4.13), so the result follows from braiding naturality for \((2\,,0)\)-type squares (4.15).
tensor deletor coherenceEquation (5.23) for nullary tensor deletor coherence implies that the empty diagram continues to have a unique interpretation in the presence of deletion.
Equation (5.24) for binary tensor deletor coherence says what it means to delete a tensor product of objects, as shown on the left. The choice of ordering is arbitrary and is related to the other possibility by the isomorphism \(\chi_{(\varepsilon\mathrm{A},\varepsilon\mathrm{X})}\), as shown on the right.
As with equation (5.18) for binary tensor duplicator coherence, we could have chosen to make this an invertible arrow disk rather than a strict equality.
counitor coassociator coherence:This is the dual of Mac Lane's middle unit coherence "triangle equation" [14]. It says that the following cycle of invertible modification components is the identity.
(5.26)
Likewise for \(\delta^{\prime}\).
counitor cocommutor coherence:The following composite of invertible arrow disks is equal to the counitor \(l\mathrm{A}\).
(5.27)
Note that this factorization of \(l\mathrm{A}\) through \(r^{\prime}\mathrm{A}\) is equivalent to a factorization of \(r^{\prime}\mathrm{A}\) through \(l\mathrm{A}\) because all of the arrow disks involved are invertible. Dually, we get factorizations of the counitors \(l^{\prime}\mathrm{A}\) and \(r\mathrm{A}\) through one another by swapping the roles of \(\delta\) and \(\delta^{\prime}\).
We have now described duplication and deletion structure for symmetric Gray-monoidal double categories that is natural for squares by virtue of being defined in terms of arrow-dimension oplax transformations. We feel justified in calling this structure "cartesian", at least in the "Fox" sense described above.
**Definition 5.10** (cartesian Gray-monoidal double category): A _cartesian Gray-monoidal double category_ is a symmetric Gray-monoidal double category equipped with duplication and deletion structure.
We currently do not have a universal construction characterization of our proposed notion of cartesian structure for Gray-monoidal double categories. One obvious strategy is to define duplication structure using some sort of right adjoints to the Gray diagonal functors \(\Delta_{\mathbb{C}}\,,\Delta_{\mathbb{C}}{}^{\prime}:\mathbb{C}\to\mathbb{C}\otimes\mathbb{C}\) and deletion structure using a right adjoint to the unique functor \(!_{\mathbb{C}}:\mathbb{C}\to\mathbb{1}\) so that the duplicators and deletor emerge as the units of these adjunctions. We have not yet managed to do this.
Related ConstructionsThere is substantial work on cartesian (in the sense of finite product) structure for \(2\)-dimensional categories using the ordinary, as opposed to the Gray, monoidal product. In the globular setting, Carboni, Kelly, Walters, and Wood define finite products for bicategories, in the sense of bilimits as natural equivalence of hom categories, and prove that they give rise to a canonical symmetric monoidal structure, establishing a partial \(2\)-dimensional Fox theorem for bicategories [12]. It is the cubical and Gray version of this finite product structure (with the topologically-motivated strictifications previously discussed) that we have sought to capture here.
Carboni et al. reserve the term "cartesian bicategory" for a bicategory where only the locally full sub-bicategory of maps is required to have bicategorical finite products, while all hom categories must have \(1\)-categorical finite products locally. The term "maps" refers to \(1\)-cells that have right adjoints. These can be regarded as the arrows of a double category with companion and conjoint proarrows for all arrows (also known as a "fibrant double category" or "proarrow equipment"). Shulman has observed that in many cases a cartesian bicategory can be regarded as the proarrow bicategory of such a double category [25].
This perspective is further developed by Aleiferi [1], who defines a notion of _cartesian double category_ in which the cartesian structure is given by right adjoints to the cartesian diagonal functor \(\Delta_{\mathbb{C}}:\mathbb{C}\to\mathbb{C}\times\mathbb{C}\) and the unique functor to the
singleton double category \(!_{\mathbb{C}}:\mathbb{C}\to\mathbb{1}\) in the 2-category of double categories, functors that are strict for arrows and pseudo for proarrows, and strict arrow-dimension transformations.
Aleiferi's concise adjoint-theoretic presentation of cartesian structure for double categories is compelling. It is too strict for our purposes, relying on the ordinary, rather than the Gray, monoidal product, and using strict, rather than oplax, arrow-dimension transformations. Still, it serves as a model for our goal of a characterization by universal construction.
## 6 Conclusion
In this paper we have characterized the algebraic structure comprising double categories together with their functors, transformations, and modifications as a locally cubical Gray category, have embedded the classical Gray categories into the locally cubical ones, have identified their one-object instances as Gray-monoidal double categories, have added braided, sylleptic, and symmetric structure to these, and have proposed a notion of cartesian structure. Each of these constructions has been accompanied by a graphical representation in the calculus of surface diagrams.
We have compared our constructions with those from the literature, where the existing globular constructions tend to be weaker than ours, while the cubical ones tend to be stricter. While our design choices have a pleasing correspondence to homotopy, the absence of a universal construction characterization leaves us uncertain whether we have managed to capture the "right" notion of cartesian Gray-monoidal double category, and further investigation is warranted.
|
2310.08932
|
TIDE: Temporally Incremental Disparity Estimation via Pattern Flow in
Structured Light System
|
We introduced Temporally Incremental Disparity Estimation Network (TIDE-Net),
a learning-based technique for disparity computation in mono-camera structured
light systems. In our hardware setting, a static pattern is projected onto a
dynamic scene and captured by a monocular camera. Different from most former
disparity estimation methods that operate in a frame-wise manner, our network
acquires disparity maps in a temporally incremental way. Specifically, we
exploit the deformation of projected patterns (named pattern flow ) on captured
image sequences, to model the temporal information. Notably, this newly
proposed pattern flow formulation reflects the disparity changes along the
epipolar line, which is a special form of optical flow. Tailored for pattern
flow, the TIDE-Net, a recurrent architecture, is proposed and implemented. For
each incoming frame, our model fuses correlation volumes (from current frame)
and disparity (from former frame) warped by pattern flow. From fused features,
the final stage of TIDE-Net estimates the residual disparity rather than the
full disparity, as conducted by many previous methods. Interestingly, this
design brings clear empirical advantages in terms of efficiency and
generalization ability. Using only synthetic data for training, our extensive
evaluation results (w.r.t. both accuracy and efficiency metrics) show superior
performance than several SOTA models on unseen real data. The code is available
on https://github.com/CodePointer/TIDENet.
|
Rukun Qiao, Hiroshi Kawasaki, Hongbin Zha
|
2023-10-13T07:55:33Z
|
http://arxiv.org/abs/2310.08932v1
|
# TIDE: Temporally Incremental Disparity Estimation via Pattern Flow in Structured Light System
###### Abstract
We introduced Temporally Incremental Disparity Estimation Network (TIDE-Net), a learning-based technique for disparity computation in mono-camera structured light systems. In our hardware setting, a static pattern is projected onto a dynamic scene and captured by a monocular camera. Different from most former disparity estimation methods that operate in a frame-wise manner, our network acquires disparity maps in a temporally incremental way. Specifically, we exploit the deformation of projected patterns (named _pattern flow_) on captured image sequences, to model the temporal information. Notably, this newly proposed pattern flow formulation reflects the disparity changes along the epipolar line, which is a special form of optical flow. Tailored for pattern flow, the TIDE-Net, a recurrent architecture, is proposed and implemented. For each incoming frame, our model fuses correlation volumes (from the current frame) and disparity (from the former frame) warped by pattern flow. From fused features, the final stage of TIDE-Net estimates the residual disparity rather than the full disparity, as conducted by many previous methods. Interestingly, this design brings clear empirical advantages in terms of efficiency and generalization ability. Using only synthetic data for training, our extensive evaluation results (w.r.t. both accuracy and efficiency metrics) show performance superior to several SOTA models on unseen real data. The code will be available on [https://github.com/CodePointer/TIDENet](https://github.com/CodePointer/TIDENet) soon.
Range sensing, RGB-D perception, Structured light systems, Active sensor, Deep learning methods.
## I Introduction
Various variants of disparity estimation methods built upon monocular structured light systems [1, 2] have been proposed in the literature. They have drawn wide attention from both academia and industry due to their promise in important application areas like augmented reality, sport analysis, and medical robots. However, dynamic scene acquisition is still an unsolved challenging problem. While object scanning in static scenes can leverage rich information provided by multiple patterns, this setting is not suitable for dynamic scenes due to the difficulty of feature matching. As such, for dynamic scenes, single-pattern structured light systems are preferred, which is referred to as _one-shot scan_ in the literature. Since input sequences in the one-shot scan setting tend to be sparse and unstable, recently several learning-based methods have been proposed and are widely recognized as a promising alternative solution to address these two challenges [3, 4, 5].
Although these deep neural network (DNN) models have achieved promising accuracy, there are two obstacles preventing us from training robust models with large-scale real-world datasets: 1. getting the ground truth of dense correspondences of dynamic scenes is virtually impossible; 2. even if we could get these ground truths, the data would be biased towards the specific pattern and/or device in the hardware design. Therefore, training on the synthetic data and testing on real-world scenarios is a more practical choice. In such a situation, the generalization ability of the model is the most significant factor to consider.
In this paper, the focus is to exploit the temporal coherence in the image sequence to improve the generalization ability, while guaranteeing efficiency. To this end, we propose a neural network named TIDE-Net (**T**emporally **I**ncremental **D**isparity **E**stimation). Rather than estimating a disparity map for each frame from scratch, TIDE-Net conducts residual estimation based upon the former frame. Notably, our method can model even earlier frames, through hidden layers in a recurrent architecture. This incremental updating scheme keeps the network compact while maintaining accuracy.
Utilizing temporal coherence between frames in a video has been studied intensively for correspondence search [5, 6]. However, the naive practice of aggregating features extracted from frames by concatenation ignores the pixel displacement caused by scene motion, and thus, it usually results in low accuracy [7]. To address this issue, we propose to leverage a novel formulation named _pattern flow_, which is the pattern deformation in the observed image and a constrained version of generic optical flow. Although a former study has mentioned the pattern flow concept [8], they treat it as a feature instead of explicitly using the pattern flow for correspondence search between a captured image and a projected pattern.
In contrast, we analyzed the pattern flow and found that it has a unique feature absent in the generic optical flow formulation: the disparity changes between frames can be computed explicitly given the estimated pattern flow. Once the pattern flow is estimated, we can warp both the disparity and the hidden layer from the former frame pixel-wise. Thus, features and information can be aligned along the temporal dimension thanks to this operation. Experimental results show that our method requires less computational cost than SOTA methods while achieving better accuracy and generalization ability.
In summary, our contributions are threefold:
* We propose an incremental disparity estimation framework based on TIDE-Net which fully utilizes the local and sequential nature of images from dynamic scenes. By focusing on the incremental non-linear portion, the parameter size can be reduced while guaranteeing accuracy and efficiency.
* We propose a novel algorithm to estimate pattern flow, which represents correspondences of projected pattern between adjacent frames of structured light systems, and the information is fed into TIDE-Net.
* Comprehensive experiments show that, although TIDE-Net is trained only on synthetic data and evaluated on real data without any adaptation, our method achieves better accuracy and lower computational cost than several SOTA models, demonstrating efficient domain-invariant generalization ability.
## II Related Work
The active light systems can achieve robust and accurate depth information even if there is no texture or high-frequency shape on the object surface. Thus, they are considered a practical solution for 3D scan [1, 2]. For reconstruction on dynamic scenes, many spatial coding techniques have been proposed based on pattern decoding and global matching [8, 9, 10, 11]. The pattern can be binary dots [12], strips [13], grids [14], etc. These methods focus on encoding coordinate features into one projected pattern, and then extracting features from observed images by decoding on the spatial domain. The projected pattern needs to be recognizable in the images; thus, the computed disparity map is always sparse.
As for the utilization of temporal information, some methods embed spatial features in the multiple different patterns projected periodically. To compensate for moving parts in the scene for a one-shot scan, several methods have been proposed, such as detecting individual motion [6, 15], directly computing disparity of the scene by the blurs of line-based patterns [8], or utilizing DNN [5]. In the method [5], they propose a geometric loss between image pairs for their model training to improve the spatio-temporal consistency. The loss requires objects to have a rigid motion, and the relative position \((R,t)\) is needed for model training. Our research does not have rigid motion assumptions, and no registration is needed for training.
Stereo matching and optical flow focusing on the correspondences between image pairs are also related to our problem since a pattern projector is optically the same as a camera. Many networks designed for those problems can be applied to structured light systems with several additional modifications [16, 17, 18]. We exploit ideas from the stereo methods to our rectified mono-camera setup, where the reference pattern is warped to the observed image by correlation layer for disparity estimation inside the network structure.
The domain generalization ability of networks is also an important issue and has been drawing wide attention recently. Some methods apply self-supervised training [5, 19] or fine-tuning when transferring to a new data domain. However, such solutions require a large amount of data from the target domain with or without ground truth. Some methods apply online adaptation techniques to update the network during the inference process [20, 21], but the back-propagation process reduces efficiency. [22] proposed DSMNet equipped with a novel normalization layer and graph-based filtering layer to improve the generalization ability for passive stereo matching. In contrast, we do not design a specific framework in our network. We utilize the temporal coherence from the input sequence to guarantee the accuracy and efficiency of our network.
## III Methods
In this part, we will illustrate our method in detail. We first go through the definition of incremental disparity computing from an image sequence in structured light systems. Then, we explain the definition and analysis of pattern flow. Finally, we introduce the entire framework combined with the incremental warping technique.
### _Problem Definition_
Our research aims at 3D depth estimation from sequential frames where scenes are illuminated by a static pattern projector and captured by a single camera. For such systems, the pattern projector can be regarded as a second camera considering its image plane as the reference pattern. Therefore, we can define disparity for the camera as in stereo vision. We assume that the camera and the projector are calibrated and rectified by pre-processing. Generally, scenes are captured by the camera, and captured images are fed into
Fig. 1: _Pattern flow_ from adjacent image pairs. We denote red and blue colors for different moving directions, while intensity stands for the value. The forehead is moving back and the jaw is moving forward. Top-right is a sketch map for the geometric property of the pattern flow in structured light systems.
the algorithm to estimate disparity maps for each frame. Here, we denote \(\mathbf{I}^{t}\) as the input camera image and \(\mathbf{D}^{t}\) as the output disparity, where \(t\) represents the frame number. The designed pattern is denoted as \(\mathbf{P}\). In our system, a pseudo-random dot pattern [23] is used, which guarantees patch uniqueness in each row and is repeated periodically in the vertical direction.
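As an aside, the row-wise patch uniqueness property is easy to illustrate with a toy generator. The sketch below is only an illustration: the patch width, dot density, and rejection-sampling strategy are made-up parameters and do not reproduce the pattern of [23].

```python
import numpy as np

def toy_row_unique_dot_pattern(height=64, width=512, patch=21, density=0.4, seed=0):
    """Toy stand-in for a pseudo-random dot pattern: binary dots, resampled row by
    row until every horizontal patch of the given width is unique within its row."""
    rng = np.random.default_rng(seed)
    rows = []
    for _ in range(height):
        while True:
            row = (rng.random(width) < density).astype(np.uint8)
            patches = {tuple(row[i:i + patch]) for i in range(width - patch + 1)}
            if len(patches) == width - patch + 1:   # every patch in this row is distinct
                rows.append(row)
                break
    return np.stack(rows)   # tile vertically if a taller periodic pattern is needed

pattern_img = toy_row_unique_dot_pattern()
```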
We assume online disparity estimation for our system, that is, for each input frame \(\mathbf{I}^{t}\), all the history information before time \(t\) is available for calculation, denoted as \(H^{t-1}\). Thus, the disparity of the current frame can be estimated through incremental processing:
\[\mathbf{D}^{t}=\mathbf{D}^{t-1}+g\left(\mathbf{I}^{t},\mathbf{P},H^{t-1} \right), \tag{1}\]
where we assume \(t>0\). Our incremental process, denoted as function \(g\), focuses on the residual estimation instead of the full disparity computation for each frame. This usually helps to improve accuracy and efficiency. Unlike methods that utilize temporal information as an additional dimension for temporal-window-based batch matching [24], our method focuses on step-by-step updating. We utilize a deep neural network (DNN) model with a temporal module to find the function \(g\), where the hidden layer is passed through the sequence as the history information \(H\). This is a straightforward idea and has been developed in stereo matching problems.
However, simply concatenating multiple frames usually ignores the motion of objects in the scene, resulting in overfitting to specific motion features of the synthetic dataset. To make the network learn motion-invariant consistent features explicitly, we pre-warp the history information from previous frames to retrieve accurate correspondences between adjacent frames. We propose to use pattern flow between frames to accomplish the warping. With our proposed method, the pattern flow can be robustly calculated locally with a simple but effective algorithm. From the ablation study in Sec. IV, we can find that the pattern flow improves the performance compared to simply passing the hidden layer directly.
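The overall data flow of this incremental scheme can be sketched as follows. All callables and their interfaces are hypothetical stand-ins introduced only for illustration (they do not reproduce the actual TIDE-Net modules), and the way the warped disparity and the flow are combined anticipates the pattern flow analysis of Sec. III-B.

```python
def incremental_disparity(frames, pattern, init_net, tide_net, flow_fn, warp_fn):
    """Schematic driver for the incremental scheme of Eq. (1).

    Assumed interfaces (illustrative only):
      init_net(I0, P)          -> (D0, H0)   frame-wise bootstrap for the first frame
      flow_fn(I_prev, I_cur)   -> F          pattern flow along the epipolar line
      warp_fn(X, F)            -> X'         warp a per-pixel map X horizontally by F
      tide_net(I, P, D_w, H_w) -> (res, H)   residual disparity and new hidden state
    """
    disp, hidden = init_net(frames[0], pattern)
    outputs = [disp]
    prev = frames[0]
    for img in frames[1:]:
        flow = flow_fn(prev, img)             # pattern flow between t-1 and t
        disp_w = warp_fn(disp, flow) + flow   # propagate disparity (see Sec. III-B)
        hidden_w = warp_fn(hidden, flow)      # align the history information H
        residual, hidden = tide_net(img, pattern, disp_w, hidden_w)
        disp = disp_w + residual              # residual update, not from scratch
        outputs.append(disp)
        prev = img
    return outputs
```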
### _Pattern Flow in Structured Light Systems_
In a structured light system, the projector emits a reference pattern into 3D space. The projected pattern is deformed by the shape of the object's surface and observed by the camera. If objects in the scene are moving, the observed pattern deforms over time, because the shape (depth) of the scene also changes, as shown in Fig. 1(left), where two adjacent frames visualize the motion of the projected pattern. We define this flow caused by the projected pattern as _pattern flow_, denoted as \(\mathbf{F}^{t}\), which represents the motion of the projected pattern in the image. The concept was mentioned by Furukawa _et al._, who use strip pattern flow to estimate depth for fast-moving objects [8]. In our research, we define pattern flow at the pixel level to propagate information precisely.
We first derive the geometric relationship between the disparity map and the pattern flow. Consider a light ray projected from \(\mathbf{P}(x_{p},y)\) on the reference pattern to the 3D position \(\mathbf{p}(X,Y,Z)\) at time \(t\). In the next frame, the illuminated position moves to a new place due to the scene motion, denoted as \(\mathbf{p}(X+\Delta X,Y,Z+\Delta Z)\). Notice that these two positions are not guaranteed to be the same physical point, but they lie on the same light ray from the projector. Therefore \(\mathbf{p}\) can only move within the epipolar plane, and we do not need to consider the vertical direction in the rectified image. We
Fig. 3: The architecture of the TIDE-Net.
Fig. 2: The general framework of our method. The initialization process is only applied for the first frame, then we incrementally estimate the disparity map for every frame using TIDE-Net.
denote \(f\) as the focal length, \(b\) as the baseline, and \((x,y)\) as the observed pixel in camera space. Then we have
\[\begin{cases}x=\dfrac{f}{Z}X\\ x_{p}=\dfrac{f}{Z}(X-b)\end{cases}. \tag{2}\]
By taking the derivative of Eq.2, we have
\[\begin{cases}\dfrac{\mathrm{d}x}{\mathrm{d}t}=\dfrac{f}{Z}\dfrac{\mathrm{d}X} {\mathrm{d}t}-\dfrac{fX}{Z^{2}}\dfrac{\mathrm{d}Z}{\mathrm{d}t}\\ \dfrac{\mathrm{d}x_{p}}{\mathrm{d}t}=\dfrac{f}{Z}\dfrac{\mathrm{d}X}{\mathrm{d }t}-\dfrac{f(X-b)}{Z^{2}}\dfrac{\mathrm{d}Z}{\mathrm{d}t}\end{cases}. \tag{3}\]
Since the two projected positions belong to the same projected ray, we have \(\frac{\mathrm{d}x_{p}}{\mathrm{d}t}=0\). The pattern flow can thus be written as a horizontal vector \(\mathbf{F}(x,y)=(u,0)\), where \(u=\frac{\mathrm{d}x}{\mathrm{d}t}\). Eq. 3 then simplifies to
\[u=-\dfrac{fb}{Z^{2}}\dfrac{\mathrm{d}Z}{\mathrm{d}t}. \tag{4}\]
On the other hand, let \(d\) be the disparity value at pixel \(\mathbf{D}(x,y)\). According to the epipolar geometry we have \(d=\frac{fb}{Z}\). Taking the derivative of \(d\) gives
\[\frac{\mathrm{d}d}{\mathrm{d}t}=-\dfrac{fb}{Z^{2}}\dfrac{\mathrm{d}Z}{ \mathrm{d}t}. \tag{5}\]
Thus, from Eqs. 4 and 5 we have:
\[u=\frac{\mathrm{d}d}{\mathrm{d}t}, \tag{6}\]
**which means that the estimated** _pattern flow_ **equals the change of disparity between frames.** The geometric relationship is shown in Fig. 1(top-right) for better understanding. With the help of pattern flow, the disparity can be propagated between frames according to Eq. 6. In our framework, we propagate both the disparity map and the hidden state to remove the motion factor between frames.
In structured light systems, the observed intensity is determined jointly by the projected ray and the object texture. In our practice, however, the projected light is dominant in the IR image in most cases. Therefore, a local and fast Lucas-Kanade method along the epipolar line can be applied for pattern flow calculation.
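For illustration, a minimal sketch of per-pixel Lucas-Kanade restricted to the horizontal (epipolar) direction is given below; the window size and the use of a box filter are illustrative choices rather than our exact implementation settings.

```python
import numpy as np

def pattern_flow_lk(img_prev, img_curr, win=7, eps=1e-6):
    """Per-pixel 1D Lucas-Kanade along the (rectified) epipolar direction.

    img_prev, img_curr: float arrays of shape (H, W).
    Returns u of shape (H, W): horizontal pattern flow in pixels.
    """
    Ix = np.gradient(img_prev, axis=1)     # spatial gradient along the epipolar (x) axis
    It = img_curr - img_prev               # temporal difference

    def box(a):                            # local window aggregation (separable box filter)
        k = np.ones(win) / win
        a = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, a)
        return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, a)

    A = box(Ix * Ix)                       # windowed sum of Ix^2
    b = box(Ix * It)                       # windowed sum of Ix*It
    return -b / (A + eps)                  # 1D least squares: u = -sum(Ix It)/sum(Ix^2)
```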
Notice that although we apply a classical optical flow method for its computation, pattern flow differs from classical optical flow from passive sensors in several ways:
* Firstly, optical flow is the motion of physical points, while pattern flow is the displacement of projected light rays, which means it only occurs along the 1D epipolar line rather than over the 2D image.
* Secondly, pattern flow under active illumination is easier to calculate, because the pattern always provides enough features for local matching. No semi-global optimization is needed for dense pattern flow calculation.
* Lastly, although pattern flow has the aforementioned advantages over optical flow, it does not guarantee that the tracked points in two frames belong to the same physical point. Pattern flow therefore lacks physical correspondence information and cannot be used for motion estimation. However, it can be used to propagate the previous disparity.
The differences between pattern flow and optical flow are listed in Table I for better illustration.
### _TIDE Network_
Given an observed image sequence clip \(\mathbf{I}^{\mathrm{t-k:t}}\) from the camera and the pre-designed pattern \(\mathbf{P}\), we estimate the disparity sequence with an iterative updating process, frame by frame. Fig. 2 shows an overview of our approach, and the network architecture is shown in Fig. 3. We now describe all the components in detail.
#### III-C1 Feature extraction
We concatenate the input image \(\mathbf{I}^{\mathrm{t}}\) and its locally contrast-normalized version, obtained with local contrast normalization (LCN) [16, 5], as the input of the feature extraction. For each frame inside the temporal window, two encoders with the same architecture are applied to the input image \(\mathbf{I}^{\mathrm{t}}\) and the pattern \(\mathbf{P}\). We use feature extraction layers similar to those in [18] but with fewer parameters. A correlation pyramid is then built, which will be indexed by the update block.
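For reference, a minimal sketch of local contrast normalization is shown below; the window size and \(\epsilon\) are illustrative choices, not necessarily those of [16, 5].

```python
import torch
import torch.nn.functional as F

def local_contrast_norm(img, win=11, eps=1e-5):
    """img: (N, 1, H, W) tensor; returns (img - local mean) / (local std + eps)."""
    pad = win // 2
    mean = F.avg_pool2d(img, win, stride=1, padding=pad, count_include_pad=False)
    sq_mean = F.avg_pool2d(img * img, win, stride=1, padding=pad, count_include_pad=False)
    std = (sq_mean - mean * mean).clamp(min=0).sqrt()
    return (img - mean) / (std + eps)
```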
#### III-C2 Update block
We use a recurrent update block similar to that of [18] for disparity refinement, where a ConvGRU block estimates the disparity residual. The input of the ConvGRU block is the concatenation of three parts: the warped disparity \(\mathbf{\hat{D}}^{t-1}\), the feature from the context encoder, and the correlated features retrieved from the correlation pyramid at the predicted disparity. The ConvGRU block outputs disparity at 1/8 resolution. We then upsample the disparity map to full resolution using an upsampling mask as interpolation weights. Although the network in [18] achieves accurate optical flow estimation, it operates on image pairs and therefore does not exploit temporal information; its inference speed also suffers from multiple recurrent iterations. We thus improve the framework in two ways: 1. we pass the hidden state across frames to provide temporal consistency; 2. instead of running multiple iterations per frame, we run only one iteration per frame, forcing the network to draw information from the hidden state. With these improvements, the computing time is reduced while accuracy is maintained.
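The following is a schematic sketch of such a single-iteration update step: a standard ConvGRU cell with a disparity-residual head and an upsampling-mask head. The channel sizes are illustrative and the correlation lookup is abstracted as a pre-computed input feature map; this is not the exact architecture of [18] or of our released model.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Standard convolutional GRU cell operating at 1/8 resolution."""
    def __init__(self, hidden_dim=96, input_dim=128):
        super().__init__()
        self.convz = nn.Conv2d(hidden_dim + input_dim, hidden_dim, 3, padding=1)
        self.convr = nn.Conv2d(hidden_dim + input_dim, hidden_dim, 3, padding=1)
        self.convq = nn.Conv2d(hidden_dim + input_dim, hidden_dim, 3, padding=1)

    def forward(self, h, x):
        hx = torch.cat([h, x], dim=1)
        z = torch.sigmoid(self.convz(hx))                       # update gate
        r = torch.sigmoid(self.convr(hx))                       # reset gate
        q = torch.tanh(self.convq(torch.cat([r * h, x], dim=1)))
        return (1 - z) * h + z * q

class UpdateBlock(nn.Module):
    """One update iteration: new hidden state, disparity residual, upsampling mask."""
    def __init__(self, hidden_dim=96, input_dim=128):
        super().__init__()
        self.gru = ConvGRUCell(hidden_dim, input_dim)
        self.disp_head = nn.Conv2d(hidden_dim, 1, 3, padding=1)
        self.mask_head = nn.Conv2d(hidden_dim, 8 * 8 * 9, 1)    # weights for 8x convex upsampling

    def forward(self, hidden, context_feat, corr_feat, disp):
        x = torch.cat([context_feat, corr_feat, disp], dim=1)   # concatenated GRU input
        hidden = self.gru(hidden, x)
        return hidden, self.disp_head(hidden), self.mask_head(hidden)
```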
#### III-C3 Warping by pattern flow
As mentioned at the end of Sec. III-A, simply passing the previous hidden state and disparity map may introduce bias caused by object motion. Therefore, we first calculate the pattern flow \(\mathbf{F}^{t}\) between \(\mathbf{I}^{t}\) and \(\mathbf{I}^{t-1}\), and then warp the hidden state \(h^{t-1}\) and the disparity map \(\mathbf{D}^{t-1}\) (the "w" operator between frames in Fig. 2). We calculate the pattern flow by applying the Lucas-Kanade method to every pixel at \(1/8\) resolution, which is the input resolution of the update block. Thanks to the feature-rich projected pattern, the pattern flow can be calculated without any semi-global optimization, and a dense, accurate pattern flow map is obtained in a very short time.
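A minimal sketch of the warping operator ("w" in Fig. 2) is given below. It assumes a purely horizontal flow at the working resolution and a sign convention in which \(u\) is the displacement from frame \(t-1\) to frame \(t\) expressed at current-frame pixels, so the previous maps are sampled by backward warping; the convention is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def warp_by_pattern_flow(prev, flow_u):
    """Backward-warp a previous map to the current frame along the epipolar (x) axis.

    prev:   (N, C, H, W) tensor (previous disparity map or hidden state).
    flow_u: (N, 1, H, W) horizontal pattern flow in pixels (assumed sign convention).
    """
    n, _, h, w = prev.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=prev.dtype, device=prev.device),
        torch.arange(w, dtype=prev.dtype, device=prev.device),
        indexing="ij",
    )
    x_src = xs.unsqueeze(0) - flow_u[:, 0]            # sampling location in the previous frame
    y_src = ys.unsqueeze(0).expand(n, -1, -1)
    # Normalize to [-1, 1] for grid_sample.
    grid = torch.stack([2 * x_src / (w - 1) - 1, 2 * y_src / (h - 1) - 1], dim=-1)
    return F.grid_sample(prev, grid, mode="bilinear", padding_mode="border", align_corners=True)
```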
#### III-C4 Initialization for the first frame
Since our method is incremental, we need an initial value for the first frame. In practice, we found that a very rough disparity estimate is enough for TIDE-Net to recover an accurate disparity within several frames. Thus, we train a small U-Net [5] to provide the initial output for the very first frame. This initialization is only needed for the very first frame of the image sequence.
### _Loss Functions_
We apply an L1 loss for supervised training on the synthetic dataset, given the disparity ground truth:
\[L=\sum_{i=t-N+1}^{t}\left|\mathbf{D}^{i}-\mathbf{D}_{GT}^{i}\right| \tag{7}\]
where \(N\) denotes the temporal window size used for training.
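A minimal sketch of this loss, using a per-pixel mean within each frame as an illustrative normalization, is:

```python
import torch

def sequence_l1_loss(pred_disps, gt_disps):
    """pred_disps, gt_disps: lists of (N, 1, H, W) disparity tensors over the temporal window."""
    return sum(torch.abs(p - g).mean() for p, g in zip(pred_disps, gt_disps))
```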
## IV Experiments
In this section, we systematically evaluate our method against previous techniques. We first introduce the experiment platform used for data collection, then give the details of the training process. Finally, we present evaluation results on several unseen datasets to illustrate our method's generalization ability and the contribution of temporal information.
### _Experiment Platform_
We use the Intel RealSense D435i [23], an active stereo system equipped with two infrared cameras and one emitter, to collect real data for evaluation; only one camera is used. We also manually increase the baseline to 218 mm by using a second D435i device, as shown in Fig. 4, because the original baseline is not sensitive enough to object motion. We combine one camera from the left D435i and the emitter from the other as the final mono-camera structured light system. The device collects and stores images at 24 fps.
The pattern used in our experiments is the pseudo-random dot pattern originally employed in the RealSense D435i. Since the pre-designed pattern is not provided officially, we calibrate it by moving the device forward and backward towards a white wall. We then rectify the camera images and the calibrated pattern to obtain the evaluation data. The pattern is projected into the scene and images are captured with the left IR camera. All methods take the IR image sequence as network input.
### _Training Schedule and Implementation Details_
Once the devices used for data collection are calibrated, we can generate a synthetic dataset for model training and then evaluate on the real collected data. To generate image sequences of dynamic scenes with non-rigid motion, we randomly sample 4 objects from ShapeNet [25], each with an individual random rigid motion. The camera and projector face these objects, with a white plane as background. We then use the structured light renderer provided in [5] for infrared image generation, with the pre-calibrated parameters used for real data collection. With these settings, we generate 2048 sequences of 32 frames each, plus another 512 sequences for testing, shown in Fig. 5 (Original Dataset).
For model training, we use the Adam optimizer with a learning rate of \(10^{-3}\). We train our model on one 2080Ti GPU with a batch size of 1. The temporal window size is set to 8, which is the maximum that fits in GPU memory.
### _Qualitative Result on Real Data_
For the non-rigid evaluation, we take a white wall as background and collect images of a non-rigidly moving human face. We select several state-of-the-art methods in stereo matching and optical flow for comparison, with small modifications for structured light systems: 1. ActiveStereoNet [16] for active stereo; 2. FADNet [26] for passive stereo matching; 3. the RAFT [18] framework for optical flow; and 4. the CTD [5] framework, which uses temporal information for structured light systems. Besides these deep-learning methods, we also include Spacetime stereo [24]. All comparison baselines take a single frame as input during evaluation, while ours takes multiple frames for incremental estimation. With the help of temporal information, our method performs particularly well on occluded parts, where it benefits from predictions carried over from history, as shown in Fig. 5 (Real-world Non-rigid Motion).
### _Quantitative Result_
To better illustrate our model's performance, we test it on several datasets with ground truth for quantitative analysis. We compute two metrics on every estimated disparity map: the percentage of pixels with disparity error larger than \(t\in\{1.0,2.0,5.0\}\), denoted as \(o(t)\); and the average L1 error, denoted as \(avg\). We compute these metrics on the whole disparity sequence except the very first frame and report the average as the final performance.
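A minimal sketch of these metrics (with \(o(t)\) expressed as a fraction rather than a percentage) is given below; the optional mask corresponds to the manually excluded non-projected area.

```python
import numpy as np

def disparity_metrics(pred, gt, mask=None, thresholds=(1.0, 2.0, 5.0)):
    """pred, gt: (H, W) disparity maps; mask: optional boolean array of valid pixels."""
    err = np.abs(pred - gt)
    if mask is not None:
        err = err[mask]
    out = {f"o({t})": float(np.mean(err > t)) for t in thresholds}
    out["avg"] = float(err.mean())
    return out

def sequence_metrics(preds, gts, masks=None):
    """Average per-frame metrics over a sequence, skipping the very first frame."""
    per_frame = [disparity_metrics(preds[i], gts[i], None if masks is None else masks[i])
                 for i in range(1, len(preds))]
    return {k: float(np.mean([m[k] for m in per_frame])) for k in per_frame[0]}
```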
**Original Dataset.** We first evaluate our method on the data domain in which the models are trained. The same generation parameters as in Sec. IV-B are applied, and the results are shown in Tab. II. All methods achieve accurate results after supervised training. Our method does not outperform the others on this in-domain data, but it still achieves good accuracy.
**Real-world Rigid Motion.** After the evaluation on the Original Dataset, we test performance on unseen real-world data. For ground-truth generation, we first scan accurate 3D models of several objects offline with a precise
Fig. 4: The experiment platform for data collection.
Fig. 5: Experiment results for visualization. All models are trained on the original synthetic data (first row) and then evaluated on the others. We show the disparity map (upper row) and residual error map (lower row) for comparison. The non-projected area is masked out manually and excluded from the quantitative analysis.
laser scanner [27]. We then place the objects on a rotating plate for data collection. To register the video with the scanned models, we use ArUco markers [28] pasted on the plate to obtain the accurate pose of the objects in every frame. We apply a hard mask to remove the background, since we only want to evaluate the reconstruction of the objects. Fig. 5 shows the estimated disparity maps, and Tab. II reports the quantitative comparison. Our method outperforms all the compared methods, with better boundary performance.
**Synthetic Indoor Scenes.** We also evaluate our model on another synthetic dataset to test its generalization ability. We use depth maps from ICL-NUIM [29], a synthetic dataset for VO/SLAM problems in which the camera moves freely inside a room, and take four living-room sequences for evaluation. We first convert all RGB images to grayscale, then create a virtual projector to project the pattern into every image. Unlike the original training dataset, these scenes are static and the camera undergoes complex ego-motion with large rotations and translations. The evaluation results, shown in Tab. II and Fig. 5, illustrate that our method still produces robust results, while the others suffer considerably from the domain shift.
**Efficiency comparison.** We also compare the computing time of disparity estimation, shown in Tab. III. The 3D cost volume in ActiveStereoNet [16] and the recurrent iteration module in RAFT [18] require longer evaluation times. In our method, we spread the iterative refinement over time, which boosts efficiency. All results are evaluated on a GTX 1070 card.
### _Ablation Study_
To quantify the contribution of the temporal module, we perform an ablation study by removing the pattern flow propagation and the hidden-state passing, using the Real-World Rigid Motion data for comparison. We first remove the pattern flow warping module (-PF), so the previous disparity and hidden features are passed directly to the next frame; the accuracy decreases. We then further remove the hidden state (-\(h\)), so the network only takes the previous disparity as its initial estimate. As listed in Tab. IV, accuracy decreases as the temporal propagation and pattern flow warping are removed.
We also show several estimated disparity maps from the same sequence of "Synthetic Indoor Scenes" to illustrate how performance improves over time. As shown in Fig. 6, our method achieves increasingly accurate results frame by frame within the first 30-50 frames. Such accurate results
Fig. 6: Our experiment result on the synthetic sequence. After processing several frames (about 30-50), our method achieves an accurate result.
|
2310.17812
|
Prospects for thermalization of microwave-shielded ultracold molecules
|
We study anisotropic thermalization in dilute gases of microwave shielded
polar molecular fermions. For collision energies above the threshold regime, we
find that thermalization is suppressed due to a strong preference for forward
scattering and a reduction in total cross section with energy, significantly
reducing the efficiency of evaporative cooling. We perform close-coupling
calculations on the effective potential energy surface derived by Deng et al.
[Phys. Rev. Lett. 130, 183001 (2023)], to obtain accurate 2-body elastic
differential cross sections across a range of collision energies. We use
Gaussian process regression to obtain a global representation of the
differential cross section, over a wide range of collision angles and energies.
The route to equilibrium is then analyzed with cross-dimensional
rethermalization experiments, quantified by a measure of collisional efficiency
toward achieving thermalization.
|
Reuben R. W. Wang, John L. Bohn
|
2023-10-26T23:06:47Z
|
http://arxiv.org/abs/2310.17812v2
|
# Prospects for thermalization of microwave-shielded ultracold molecules
###### Abstract
We study anisotropic thermalization in dilute gases of microwave shielded polar molecular fermions. For collision energies above the threshold regime, we find that thermalization is suppressed due to a strong preference for forward scattering and a reduction in total cross section with energy, significantly reducing the efficiency of evaporative cooling. We perform close-coupling calculations on the effective potential energy surface derived by Deng et al. [Phys. Rev. Lett. 130, 183001 (2023)], to obtain accurate 2-body elastic differential cross sections across a range of collision energies. We use Gaussian process regression to obtain a global representation of the differential cross section, over a wide range of collision angles and energies. The route to equilibrium is then analyzed with cross-dimensional rethermalization experiments, quantified by a measure of collisional efficiency toward achieving thermalization.
The ever growing interest in quantum control of polar molecules motivates the cooling of molecular gases to unprecedented cold temperatures [1; 2; 3; 4; 5]. In bulk gases, reaching such temperatures can be accomplished through evaporative cooling [6], a process which throws away energetic molecules and leverages collisions to rethermalize the remaining, less energetic, distribution. Understanding and controlling 2-body scattering for thermalization is, therefore, of great importance for ultracold experiments. To this end, the exciting advent of collisional shielding with external fields has permitted a large suppression of 2-body losses between molecules [7; 8; 9; 10; 11]. Thermalization relies instead on the elastic cross section, which is generally dependent on the field-induced dipole-dipole interaction and their energy of approach. Of particular interest to this Letter is collisional shielding with microwave fields [12; 13; 14; 15], recently achieved at several labs around the world [16; 17; 18; 19].
In analogous gases of magnetic atoms with comparatively small dipole moments, dipolar scattering remains close-to-threshold [20] at the ultracold but nondegenerate temperatures of \(T\sim 100\) nK [21; 22; 23; 24]. For dipoles, threshold scattering occurs when the collision energy is much lower than the dipole energy \(E_{\rm dd}\), in which case the scattering cross section becomes energy independent [25] with a universal analytic form [26]. Numerical studies of thermalization are made much simpler at universality, since collisions can be sampled regardless of collision energy [27; 28]. However, this convenience is lost with the polar molecular gases of interest here. Take for instance a gas of fermionic \({}^{23}\)Na\({}^{40}\)K, which we will concern ourselves with in this study. This species has a large intrinsic dipole moment of \(d=2.72\) D, so that even at ultracold temperatures the majority of collisions occur away from threshold, with an energy-dependent cross section.
In this Letter, we find that non-threshold collisions can dramatically reduce thermalization and thus, the efficiency of the cooling process. Ignoring all 1 and 2-body losses for a focused study on elastic collisions, the decrease in gas total energy \(E=3Nk_{B}T\) along with the number of molecules \(N\), approximately follows the coupled rate equations [29; 18]
\[\frac{dN}{dt} =-\nu(\kappa)\gamma_{\rm th}N, \tag{1a}\] \[\frac{dE}{dt} =-\frac{1}{3}\lambda(\kappa)\gamma_{\rm th}E, \tag{1b}\]
where \(\nu(\kappa)=(2+2\kappa+\kappa^{2})/(2e^{\kappa})\) and \(\lambda(\kappa)=(6+6\kappa+3\kappa^{2}+\kappa^{3})/(2e^{\kappa})\) are functions of the energetic truncation parameter \(\kappa=U/(k_{B}T)\)[30].
By continuously lowering the energetic depth of the confining potential \(U(t)=U_{0}\exp(-t/\tau)\) over a time interval \(\tau\), highly energetic molecules are forced to evaporate away, lowering the number of molecules along with the gas temperature as shown in Fig. 1. For the plot, Eq. (1) is solved by taking evaporation to occur with an
Figure 1: A log-log plot of \(T\) vs \(N\) during a forced evaporation protocol. The plot compares the evaporation trajectory for microwave shielded \({}^{23}\)Na\({}^{40}\)K when scattering is realistic and non-threshold (solid black curve), to the artificial case of threshold scattering (dashed red curve). Both 1 and 2-body losses are assumed negligible and ignored here.
initial trap depth \(U_{0}/k_{B}=4\:\mu\)K over \(\tau=0.5\) s, in a harmonic trap with mean frequency \(\omega=2\pi\times 100\) Hz, starting at temperature \(T_{0}=400\) nK and molecule number \(N_{0}=20,000\). The evaporation efficiency, defined as the slope of \(T\) vs \(N\) on a log-log scale, is governed by the thermalization rate \(\gamma_{\rm th}\). The figure shows efficient cooling for the low-energy threshold cross sections (dashed red curve), and significantly less efficient cooling for the realistic cross sections (solid black curve). The remainder of this Letter provides the microscopic mechanisms that lead to this dramatic difference, and efficient theoretical tools we employ to obtain these conclusions.
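For reference, Eq. (1) can be integrated numerically as sketched below. The thermalization rate `gamma_th` is an illustrative placeholder (in the Letter it is built from the collision rate and the temperature-interpolated collisional efficiencies discussed later), and the integration duration is an illustrative choice; only the trap and gas parameters follow the values quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

kB = 1.380649e-23           # J/K

# Parameters quoted in the text.
U0 = 4e-6 * kB              # initial trap depth (J)
tau = 0.5                   # trap-lowering time constant (s)
T0, N0 = 400e-9, 2.0e4      # initial temperature (K) and molecule number
E0 = 3 * N0 * kB * T0       # initial total energy, E = 3 N kB T

def gamma_th(T, N):
    """Placeholder thermalization rate (1/s); illustrative scaling only
    (fixed cross section in a harmonic trap gives roughly N/T scaling)."""
    return 10.0 * (N / N0) * (T0 / T)

def rhs(t, y):
    N, E = y
    T = E / (3 * N * kB)
    kappa = U0 * np.exp(-t / tau) / (kB * T)
    nu = (2 + 2 * kappa + kappa**2) / (2 * np.exp(kappa))
    lam = (6 + 6 * kappa + 3 * kappa**2 + kappa**3) / (2 * np.exp(kappa))
    g = gamma_th(T, N)
    return [-nu * g * N, -lam * g * E / 3.0]           # Eq. (1a) and Eq. (1b)

sol = solve_ivp(rhs, (0.0, 3 * tau), [N0, E0], rtol=1e-8,
                atol=[1e-6 * N0, 1e-6 * E0])
N_f, E_f = sol.y[:, -1]
print(N_f, E_f / (3 * N_f * kB) * 1e9)                 # remaining molecules, final T in nK
```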
_Shielded collisions_--Central to this study, are collisions that occur between molecules shielded by circularly polarized microwaves [15]. The resulting potential energy surface between two such molecules is conveniently described by a single effective potential [31]:
\[V_{\rm eff}(\mathbf{r})=\frac{C_{6}}{r^{6}}\big{[}1-(\hat{\mathbf{r}}\cdot\hat{\mathbf{ \mathcal{E}}})^{4}\big{]}+\frac{\overline{d}^{2}}{4\pi\epsilon_{0}}\frac{3( \hat{\mathbf{r}}\cdot\hat{\mathbf{\mathcal{E}}})^{2}-1}{r^{3}}, \tag{2}\]
where \(\mathbf{r}=(r,\theta,\phi)\) is the relative position between the two colliding molecules, \(\hat{\mathbf{\mathcal{E}}}\) is the axis along which the dipoles are effectively aligned, \(\overline{d}=d_{0}/\sqrt{12(1+(\Delta/\Omega)^{2})}\) is the effective molecular dipole moment and \(C_{6}=d_{0}^{4}(1+(\Delta/\Omega)^{2})^{-3/2}/(128\pi^{2}\epsilon_{0}^{2}\hbar\Omega)\). Here \(\Delta\) and \(\Omega\) are the detuning and Rabi frequency respectively, of the microwaves. A \(y=0\) slice of the effective microwave shielding interaction potential is plotted in the inset of Fig. 2. Notably, the long-range \(1/r^{3}\) tail of \(V_{\rm eff}(\mathbf{r})\) is almost identical to that of point dipole particles, modified only by an overall minus sign. As a result, the close-to-threshold elastic cross sections for microwave shielded molecules are identical to those for point dipoles.
It is natural to introduce units based on the reduced mass \(\mu\), dipole length and dipole energy:
\[a_{d}=\frac{\mu\overline{d}^{2}}{4\pi\epsilon_{0}\hbar^{2}}\quad\text{and} \quad E_{\rm dd}=\frac{\hbar^{2}}{\mu a_{d}^{2}}, \tag{3}\]
respectively. Threshold scattering is then expected to occur for collision energies \(E\ll E_{\rm dd}\). With the microwave parameters \(\Omega=2\pi\times 15\) MHz and \(\Delta=2\pi\times 9.5\) MHz, which will be assumed in what follows, the molecules see a dipole length of \(a_{d}\approx 3900a_{0}\), corresponding to a dipole energy of \(E_{\rm dd}/k_{B}\approx 360\) nK. Therefore, temperatures comparable to \(E_{\rm dd}/k_{B}\) are insufficient to keep molecular scattering in the threshold regime [25]. Moreover, since the dipole energy scales as \(E_{\rm dd}\sim d^{-4}\), larger dipoles require much lower temperatures to achieve universal dipolar threshold scattering as alluded to earlier. Away from threshold, the integral cross section \(\overline{\sigma}\) in the presence of microwave shielding (dashed black curve), develops a nontrivial energy dependence that clearly differs from that of plain point dipoles (dotted blue curve) as illustrated in Fig. 2. The plotted cross sections were obtained from close-coupling calculations logarithmically spaced in energy, with a universal loss short-range boundary condition [32] (see Supplementary Material for further details).
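As a cross-check of these scales, the dipole length and dipole energy follow directly from Eq. (3) and the definition of \(\overline{d}\); the minimal sketch below (SI constants, approximate \({}^{23}\)Na\({}^{40}\)K mass) reproduces \(a_{d}\approx 3900a_{0}\) and \(E_{\rm dd}/k_{B}\approx 360\) nK for \(\Omega=2\pi\times 15\) MHz and \(\Delta=2\pi\times 9.5\) MHz.

```python
import numpy as np

# Physical constants (SI).
hbar = 1.054571817e-34
eps0 = 8.8541878128e-12
kB = 1.380649e-23
amu = 1.66053906660e-27
a0 = 5.29177210903e-11
debye = 3.33564e-30

# 23Na40K parameters (molecular mass is approximate).
m = 63.0 * amu                   # mass of one molecule
mu = m / 2.0                     # reduced mass of the colliding pair
d0 = 2.72 * debye                # intrinsic dipole moment
Omega, Delta = 15.0, 9.5         # Rabi frequency and detuning; only their ratio enters here

d_eff = d0 / np.sqrt(12.0 * (1.0 + (Delta / Omega) ** 2))   # effective dipole moment
a_d = mu * d_eff**2 / (4.0 * np.pi * eps0 * hbar**2)         # dipole length, Eq. (3)
E_dd = hbar**2 / (mu * a_d**2)                               # dipole energy, Eq. (3)

print(a_d / a0)          # ~3.9e3 Bohr radii
print(E_dd / kB * 1e9)   # ~360 nK
```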
Away from threshold at \(E\approx E_{\rm dd}\), the microwave shielded integral cross section does not deviate much from its value at threshold (solid red line in Fig. 2). But the differential cross section could still have its anisotropy changed substantially, which is what ultimately affects thermalization [26]. For a study of both non-threshold differential scattering and its implications for thermalization in nondegenerate Fermi gases, we take the nonequilibrium evolution of the gas as governed by the Boltzmann transport equation [33]. Formulated in this way, numerical solutions treat the molecular positions and momenta as classical variables, while collisions can be efficiently computed by means of Monte Carlo sampling [27; 34]. But on-the-fly close-coupling calculations would be too expensive for such sampling over a broad range of collision energies and angles. Instead, we propose the following.
_Gaussian process fitting_--At a given collision energy, the elastic differential cross section \(\mathcal{D}_{\rm el}\), is a function of the dipole alignment axis \(\hat{\mathbf{\mathcal{E}}}\), and the relative ingoing and outgoing momentum vectors \(\hbar\mathbf{k}\) and \(\hbar\mathbf{k}^{\prime}\), respectively. Collectively, we refer to this set of parameters as \(\mathbf{\beta}\). By first performing close-coupling calculations at several well chosen collision energies \(E=\hbar^{2}k^{2}/(2\mu)\)[35], we can use the resultant scattering data to infer an \(M\)-dimensional continuous hypersurface that approximates \(\mathcal{D}_{\rm el}\), with a Gaussian process (GP) model [36; 37; 38].
GP regression is a machine learning technique used to
Figure 2: Energy dependence of the angular averaged total cross section \(\overline{\sigma}\) between microwave shielded \({}^{23}\)Na\({}^{40}\)K (black dashed line). The energy dependence clearly differs from the total cross section between fermionic point dipoles (dotted blue curve). For comparison, we plot the low energy Born and high energy Eikonal approximations with solid red lines. The inset shows a \(y=0\) slice of the effective microwave shielding interaction potential, with Rabi frequency \(\Omega=2\pi\times 15\) MHz and microwave detuning \(\Delta=2\pi\times 9.5\) MHz. The shielding core is depicted as a white patch surrounding the coordinate origin which saturates the colorbar at \(V_{\rm eff}>200E_{\rm dd}\). Coordinate axes are plotted in units of \(10^{3}\) Bohr radii \(a_{0}\).
interpolate discrete data points, stitching them together to form a continuous global surface. To do so, a GP assumes that the values of \(\mathcal{D}_{\text{el}}(\mathbf{\beta})\) evaluated at any 2 nearby points in its coordinate space, \(\mathbf{\beta}_{i}\) and \(\mathbf{\beta}_{j}\), are Gaussian distributed with a covariance given by a function \(K(\mathbf{\beta}_{i},\mathbf{\beta}_{j})\), called the kernel. A parameterized functional form for the kernel is chosen prior to the surface fitting process, reducing the task of combing through an infinite space of possible functions that best match the data to a minimization over the kernel parameters. This minimization step is referred to as _training_ the GP model.
Several symmetries in the differential cross section help to reduce the computational load of training slightly. Rotated into the frame where \(\mathbf{\hat{\mathcal{E}}}\) points along the \(z\) axis, which we refer to as the dipole-frame, the unique hypersurface regions effectively live in an \(M=4\) dimensional space, with coordinates \(\mathbf{\beta}=(E,\eta,\theta_{s},\phi_{s})\). As defined, \(\eta=\cos^{-1}(\hat{\mathbf{k}}\cdot\hat{\mathbf{\mathcal{E}}})\) is the angle between the dipole and incident relative momentum directions, where it is convenient to select \(\hat{\mathbf{k}}\) to lie in the \(x,z\) plane of this frame. The angles \(\theta_{s}\) and \(\phi_{s}\) denote the inclination and azimuthal scattering angles, respectively, in this frame. With these conventions, the differential cross section possesses the symmetry
\[\mathcal{D}_{\text{el}}(E,\eta,\theta_{s},\phi_{s})=\mathcal{D}_{\text{el}}(E,\eta,\theta_{s},-\phi_{s}). \tag{4}\]
Consequently, we only need to specify the differential cross section for angles within the domain \(\eta,\theta_{s},\phi_{s}\in[0,\pi]\), to fully describe its global structure. More details of the appropriate frame transformations are provided in Supplementary Material.
To perform the interpolation with GP regression, we utilize the Matern-\(\frac{5}{2}\) kernel [39], which is better able to capture the sharp jumps in a non-smooth function, over higher-order differentiable kernels such as the radial basis function. This kernel contains a parameter \(w\) that sets a length scale over which features of the data vary in coordinate space, that is optimized during the model training process. This kernel is typically not ideal for periodic input data, so we make the periodicity of the angles \((\eta,\theta_{s},\phi_{s})\) explicitly known to the GP model by training it with the cosine of these angles, instead of the angles themselves. Furthermore, \(\log_{10}(E/E_{\text{dd}})\) is fed into the GP model in place of \(E\), to reduce the disparity in fitting domains between each coordinate of \(\mathbf{\beta}\). The GP model is trained over the range \(\log_{10}(E/E_{\text{dd}})=-6\) to 2, corresponding to collision energies of \(E/k_{B}\approx 0.36\) pK to 36 \(\mu\)K. After training on \(\sim 10,000\) samples of \(\mathcal{D}_{\text{el}}(E,\eta,\theta_{s},\phi_{s})\), the resulting GP fit obtains a mean-squared error of \(\approx 0.5\%\) against the close-coupling calculations [40], which we take as an accurate representation of the actual cross section.
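A minimal sketch of this fitting step with scikit-learn is shown below; the feature map (log\({}_{10}\) energy and cosines of the three angles) follows the text, while the training data here are random placeholders standing in for the close-coupling results.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def features(E_over_Edd, eta, theta_s, phi_s):
    """Map (E, eta, theta_s, phi_s) to the GP input coordinates described in the text."""
    return np.column_stack([np.log10(E_over_Edd),
                            np.cos(eta), np.cos(theta_s), np.cos(phi_s)])

# Placeholder training set: in practice (X, y) come from close-coupling calculations
# sampled over log10(E/E_dd) in [-6, 2] and angles in [0, pi].
rng = np.random.default_rng(0)
n = 500
E = 10.0 ** rng.uniform(-6, 2, size=n)
eta, th, ph = (rng.uniform(0, np.pi, size=n) for _ in range(3))
X = features(E, eta, th, ph)
y = np.log10(1.0 + E) * np.sin(th)       # stand-in for the differential cross section data

kernel = 1.0 * Matern(length_scale=np.ones(4), nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, alpha=1e-6)
gp.fit(X, y)

# Query the fitted surface at an arbitrary (E, eta, theta_s, phi_s).
query = features(np.array([0.2]), np.array([np.pi / 4]),
                 np.array([np.pi / 3]), np.array([0.0]))
print(gp.predict(query))
```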
In Fig. 3, we plot the total cross section \(\sigma(E,\eta)=\int\mathcal{D}_{\text{el}}(E,\eta,\Omega_{s})d\Omega_{s}\), at various collision energies. There is a marked variation in the \(\eta\) dependence, indicating a higher tendency for side-to-side collisions (\(\eta=90^{\circ}\)) over head-to-tail ones (\(\eta=0^{\circ}\)) at higher energies. To highlight the dominant anisotropic scattering process, Fig. 3 also provides plots of the differential cross section at \(\eta=45^{\circ}\), the approximate angle at which \(\sigma\) is maximal. As energy increases from subplots (a) to (d), the scattered angle dependence of \(\mathcal{D}_{\text{el}}\) becomes biased toward
Figure 3: The central plot shows the total cross section as a function of the incident collision angle, obtained from (a) the Born approximation (red dashed curve), and from GP interpolation (solid curves) for 3 different collision energies: (b) \(E=0.2E_{\text{dd}}\) (black), (c) \(E=2E_{\text{dd}}\) (gray) and (d) \(E=20E_{\text{dd}}\) (light gray). In alphabetical correspondence, are angular plots of the differential cross section (in units of \(a_{d}^{2}\)) in subplots with the respective collision energies, assuming dipoles pointing along \(\mathbf{\hat{\mathcal{E}}}=\hat{\mathbf{z}}\) and incident collision angle \(\eta=45^{\circ}\) lying in the \(x,z\)-plane. Subplot (d) uses a smaller domain for clarity of presentation.
forward scattering, reducing the effectiveness of collisions for thermalization as discussed below. Alphabetic labels in Fig. 3 consistently correspond to the collision energies: (b) \(E=0.2E_{\rm dd}\), (c) \(E=2E_{\rm dd}\) and (d) \(E=20E_{\rm dd}\). The Born approximated cross sections at threshold [26] are labeled with (a).
_Collisional thermalization_--Fast and easy access to the accurate differential cross section via its GP model now permits accurate theoretical investigations of nondegenerate gas dynamics. More specifically, we are concerned here with a gas' route to thermal equilibrium. A common experiment for such analysis is cross-dimensional rethermalization [41], in which a harmonically trapped gas is excited along one axis, then left alone to re-equilibrate from collisions.
We present results in terms of the temperatures along each axis \(i\), defined in the presence of a harmonic trap as \(k_{B}\mathcal{T}_{i}=(\langle p_{i}^{2}\rangle/m+m\omega_{i}^{2}\langle q_{i}^{2}\rangle)/2\), where \(\langle\dots\rangle=\int d^{3}\mathbf{q}d^{3}\mathbf{p}f(\mathbf{q},\mathbf{p})(\dots)\) denotes a phase space average over the phase space distribution \(f\) in molecular positions \(\mathbf{q}\) and momenta \(\mathbf{p}\), while \(\omega_{i}\) are the harmonic trapping frequencies. As is usual in cross-dimensional thermalization, we consider an excitation of axis \(i\) and then measure the thermalization rate along axis \(j\). This is modeled by taking axis \(i\) to have an initial out-of-equilibrium temperature \(\mathcal{T}_{i}=T_{0}+\delta_{i}/k_{B}\), with a perturbation in energy \(\delta_{i}\), while the other 2 axes are simply at the initial temperature \(T_{0}\).
In the case of a dilute gas, the relaxation of \(\mathcal{T}_{j}\) follows an exponential decay in time, whose rate \(\gamma_{ij}\) is related to the standard collision rate \(\gamma_{\rm coll}\), by a proportionality factor \(\varepsilon_{ij}=\gamma_{ij}/\gamma_{\rm coll}\). As defined, the quantity \(\varepsilon_{ij}\) is the inverse of the so-called number of collisions per rethermalization [41; 42], a measure of thermalization common to the literature [17; 10; 18]. We opt to utilize its inverse instead as it is the more natural definition to discuss efficiency of evaporative cooling. Usually defined as \(\gamma_{\rm coll}=\langle n\rangle\langle\overline{\sigma}v_{r}\rangle\) with phase space averaged number density \(\langle n\rangle\) and 2-body elastic rate \(\langle\overline{\sigma}v_{r}\rangle\), \(\varepsilon_{ij}\) represents the efficiency of each non-threshold collision toward thermalization of the gas. This collisional efficiency is formally seen in terms of the integral
\[\varepsilon_{ij}\approx\alpha_{ij}\frac{\pi^{2}}{64}\int\frac{d^{3}\mathbf{ \kappa}}{(2\pi)^{3}}\frac{e^{-\kappa^{2}/4}}{\sqrt{\pi}}\int d^{2}\Omega^{ \prime}\frac{\mathcal{D}^{\prime}_{\rm el}\kappa}{\langle\sigma\kappa\rangle }\Delta\kappa_{i}^{2}\Delta\kappa_{j}^{2}, \tag{5}\]
where \(\Delta\kappa_{i}^{2}=\kappa_{i}^{\prime 2}-\kappa_{i}^{2}\) is the collisional change in adimensional relative momenta \(\mathbf{\kappa}=\mathbf{p}_{r}(mk_{B}T_{0})^{-1/2}\), \(\alpha_{ij}=3/2\) if \(i=j\), and \(\alpha_{ij}=-3\) otherwise (see Supplementary Materials). The integral above has been evaluated analytically in the threshold scattering regime [28], both for identical dipolar fermions and bosons.
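To make the structure of Eq. (5) explicit, the following is a minimal Monte Carlo sketch: relative momenta are importance-sampled from the Gaussian weight, outgoing directions are drawn uniformly on the sphere, and the differential cross section and the thermal average \(\langle\sigma\kappa\rangle\) are supplied by the caller. Here an isotropic, energy-independent placeholder replaces the GP model, and the normalization assumed for \(\langle\sigma\kappa\rangle\) is our own convention, so the printed number illustrates the structure only and is not a result of the Letter.

```python
import numpy as np

def collisional_efficiency(i, j, dcs, sigma_kappa_avg, n_samples=200_000, seed=0):
    """Monte Carlo estimate of Eq. (5) for excitation axis i and measured axis j.

    dcs(kappa_in, kappa_out): differential cross section for batches of in/out
        adimensional momenta (arrays of shape (n, 3)); supplied by the caller.
    sigma_kappa_avg: thermal average <sigma kappa> in matching units (assumed convention).
    """
    rng = np.random.default_rng(seed)
    alpha = 1.5 if i == j else -3.0
    # Importance-sample kappa from the Gaussian weight exp(-kappa^2/4).
    kappa = rng.normal(scale=np.sqrt(2.0), size=(n_samples, 3))
    k = np.linalg.norm(kappa, axis=1)
    # Outgoing directions uniform on the sphere; elastic, so |kappa'| = |kappa|.
    v = rng.normal(size=(n_samples, 3))
    kappa_out = k[:, None] * v / np.linalg.norm(v, axis=1, keepdims=True)
    d_ki2 = kappa_out[:, i] ** 2 - kappa[:, i] ** 2
    d_kj2 = kappa_out[:, j] ** 2 - kappa[:, j] ** 2
    w = dcs(kappa, kappa_out) * k / sigma_kappa_avg
    # All prefactors of Eq. (5) and of the sampling densities collapse to alpha*pi/16.
    return alpha * (np.pi / 16.0) * np.mean(w * d_ki2 * d_kj2)

# Isotropic, energy-independent placeholder cross section (structure only).
sigma0 = 1.0
iso_dcs = lambda kin, kout: np.full(len(kin), sigma0 / (4.0 * np.pi))
# For this toy model, <sigma kappa> is sigma0 times the mean |kappa| under the same weight.
k_draw = np.random.default_rng(1).normal(scale=np.sqrt(2.0), size=(200_000, 3))
sigma_kappa = sigma0 * np.linalg.norm(k_draw, axis=1).mean()
print(collisional_efficiency(0, 2, iso_dcs, sigma_kappa))   # toy epsilon_xz
```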
Evidently from Eq. (5), \(\varepsilon_{ij}\) is symmetric in its indices, which leaves only 6 unique configurations of \(i\) and \(j\).
Figure 4: Plot of \(\varepsilon_{ij}\) as a function of the dipole tilt angle \(\Theta\), for all 6 unique configurations (subplots a to f) of the excitation axis \(i\), and measured thermalization axis \(j\). The solid red curves are the analytic \(\varepsilon_{ij}\) results derived with the Born approximated cross section at threshold, whereas the dashed-dotted curves are those from Monte Carlo integration using the GP interpolated cross sections, at temperatures \(T=10\) nK (black), \(T=100\) nK (dark gray), \(T=400\) nK (gray), and \(T=1\)\(\mu\)K (light gray). The dashed blue lines are the efficiency for purely \(p\)-wave collisions, \(\varepsilon_{p}=1/4.1\).
Taking the dipoles to lie in the \(x,z\)-plane, tilted by the angle \(\Theta=\cos^{-1}(\hat{\mathbf{\mathcal{E}}}\cdot\hat{\mathbf{z}})\), we compute Eq. (5) with Monte Carlo integration [43] and plot the results in Fig. 4. Each subplot (a to f) shows a different \((i,j)\) configuration, within which \(\varepsilon_{ij}\) is plotted against the dipole tilt angle \(\Theta\) as dashed curves, for the temperatures \(T=10\) nK (black), \(T=100\) nK (dark gray), \(T=400\) nK (gray) and \(T=1\)\(\mu\)K (light gray). Interestingly, the \(\varepsilon_{ij}\) terms involving excitation or rethermalization along \(y\) essentially lose their dependence on \(\Theta\) around \(400\) nK, beyond which collisions are less efficient than even non-dipolar \(p\)-wave scattering (dashed blue line in Fig. 4) [44] for all \(\Theta\). This decrease can be intuited by looking at the differential cross section around \(\eta=45^{\circ}\), around which the total cross section is maximal. As evidenced by the subplots of \(\mathcal{D}_{\text{el}}\) in Fig. 3, forward scattering is favored at higher collision energies, limiting momentum transfer between axes and therefore also the efficiency of collisions toward rethermalization. Preferential forward scattering is what ultimately leads to the reduction in evaporation efficiency described earlier and seen in Fig. 1. There, the rate of thermalization was approximated by the average \(\gamma_{\text{th}}=\gamma_{\text{coll}}\sum_{i,j}\varepsilon_{ij}/9\), as is expected for evaporation along all 3 dimensions. The dipoles were assumed aligned along \(\Theta=90^{\circ}\), and \(\gamma_{\text{th}}\) was interpolated over several temperatures to solve Eq. (1).
Realistically, forced evaporation by trap-depth lowering tends to occur primarily along one direction, reducing the evaporation efficiency in the presence of molecular losses [45]. The resulting out-of-equilibrium momentum distribution from single-axis evaporation is much like that in cross-dimensional rethermalization experiments, where the anisotropy of the collisional efficiency can now be used to one's advantage. For instance, near-unity collisional efficiency is achieved in the threshold regime with \(\varepsilon_{xz}\) specifically at \(\Theta=45^{\circ}\). Optimal evaporation protocols could thus be engineered by varying the molecular dipole orientation relative to the axis of evaporation. We leave such investigations to a future work.
_Outlook and conclusions_--By constructing a GP model of the elastic differential cross section between microwave shielded polar molecular fermions, we have found that non-threshold collisions can greatly diminish the efficacy of collisions toward thermalization of a nondegenerate gas. It is thus prudent to perform evaporation in the threshold regime, with the caveat that Pauli blocking in fermions would also lower the collisional efficiency below the Fermi temperature [21]. If deployed in direct simulation Monte Carlo solvers [27; 28; 34], this GP model could also permit accurate dynamical studies in the Fermi degenerate or hydrodynamic regimes. The latter is motivated by restrictions of \(\varepsilon_{ij}\), only being able to describe thermalization in dilute samples. With larger molecular dipoles at densities required to achieve quantum degeneracy, the collision rate is far exceeded by the mean trapping frequency, demanding equilibration of trapped dipolar gases be treated within a hydrodynamic framework [46; 47; 48; 49]. The method of GP interpolation proposed here could similarly be applied to DC field shielded molecules [50] and bosonic species.
_Acknowledgments_--The authors are grateful to Luo Xin-Yu for motivating discussions and insights on evaporation in molecular Fermi gases. This work is supported by the National Science Foundation under Grant Number PHY2110327.
|
2301.07398
|
Will ALMA Reveal the True Core Mass Function of Protoclusters?
|
Characterizing prestellar cores in star-forming regions is an important step
towards the validation of theoretical models of star formation. Thanks to their
sub-arcsecond resolution, ALMA observations can potentially provide samples of
prestellar cores up to distances of a few kpc, where regions of massive star
formation can be targeted. However, the extraction of real cores from
dust-continuum observations of turbulent star-forming clouds is affected by
complex projection effects. In this work, we study the problem of core
extraction both in the idealized case of column-density maps and in the more
realistic case of synthetic 1.3\,mm ALMA observations. The analysis is carried
out on 12 regions of high column density from our 250 pc simulation. We find
that derived core masses are highly unreliable, with only {\em a weak
correlation between the masses of cores selected in the synthetic ALMA maps and
those of the corresponding three-dimensional cores}. The fraction of real
three-dimensional cores detected in the synthetic maps increases monotonically
with mass and remains always below 50\%. Above $\sim 1\,M_{\odot}$, the core
mass function derived from the column-density maps is steeper than that of the
three-dimensional cores, while the core mass function from the synthetic ALMA
maps has a slope closer to that of the real three-dimensional cores. Because of
the mass uncertainties, proper guidance from realistic simulations is essential
if ALMA observations of protoclusters at kpc distances are to be used to test
star-formation models.
|
Paolo Padoan, Veli-Matti Pelkonen, Mika Juvela, Troels Haugbølle, Åke Nordlund
|
2023-01-18T09:54:41Z
|
http://arxiv.org/abs/2301.07398v2
|
# Will ALMA Reveal the True Core Mass Function of Protoclusters?
###### Abstract
Characterizing prestellar cores in star-forming regions is an important step towards the validation of theoretical models of star formation. Thanks to their sub-arcsecond resolution, ALMA observations can potentially provide samples of prestellar cores up to distances of a few kpc, where regions of massive star formation can be targeted. However, the extraction of real cores from dust-continuum observations of turbulent star-forming clouds is affected by complex projection effects. In this work, we study the problem of core extraction both in the idealized case of column-density maps and in the more realistic case of synthetic 1.3 mm ALMA observations. The analysis is carried out on 12 regions of high column density from our 250 pc simulation. We find that derived core masses are highly unreliable, with only _a weak correlation between the masses of cores selected in the synthetic ALMA maps and those of the corresponding three-dimensional cores_. The fraction of real three-dimensional cores detected in the synthetic maps increases monotonically with mass and remains always below 50%. Above \(\sim 1\,M_{\odot}\), the core mass function derived from the column-density maps is very steep, while the core mass function from the synthetic ALMA maps has a shallower slope closer to that of the real three-dimensional cores. Because of the very large mass uncertainties, proper guidance from realistic simulations is essential if ALMA observations of protoclusters at kpc distances are to be used to test star-formation models.
keywords: stars: formation - MHD - stars: luminosity function, mass function
## 1 Introduction
The relation between the mass of a prestellar core and that of the star it forms is an important prediction of theoretical models of star formation. On the one hand, in the core-collapse model (McKee and Tan, 2002, 2003) and in some models of the initial mass function (IMF) of stars (e.g. Hennebelle and Chabrier, 2008; Hopkins, 2012), it is assumed that the stellar mass reservoir is fully contained in a bound prestellar core. On the other hand, in the competitive accretion model (Zinnecker, 1982; Bonnell et al., 2001, 2001), most of the stellar mass is accreted over time from a larger gas reservoir shared with other stars, with an accretion rate that depends on the mass of the star. Numerical simulations of star formation under turbulent conditions consistent with Larson's velocity-size relation (e.g. Larson, 1981; Heyer and Brunt, 2004) and with virial parameter of order unity seem to rule out both models (Padoan et al., 2014; Pelkonen et al., 2021) and to suggest an alternative scenario, the _inertial-inflow model_, where the prestellar core contains only a fraction of the final stellar mass, while the remaining mass is brought to the star from larger scale by preexisting converging flows (Padoan et al., 2020; Pelkonen et al., 2021). The IMF model of Padoan and Nordlund (2002) is consistent with this alternative scenario, as the final stellar mass is determined by preexisting converging flows, and can be much larger than the critical mass, particularly towards larger stellar masses (Padoan and Nordlund, 2011).
Observations do not directly constrain the relation between the final mass of a star and the mass of its prestellar core; they only offer us a single time snapshot with an ensemble of cores and young stars. The masses of cores and stars can only be related statistically, by comparing the core mass function (CMF) with the stellar IMF (Salpeter, 1955; Kroupa, 2001; Chabrier, 2005). Observations of nearby star-forming regions seem to indicate that the CMF has a similar shape as the stellar IMF, which is often interpreted as evidence in favor of the core-collapse model (e.g. Motte et al., 1998; Alves et al., 2007; Enoch et al., 2007; Nutter and Ward-Thompson, 2007; Konyves et al., 2010, 2015; Marsh et al., 2016; Sokol et al., 2019; Konyves et al., 2020; Ladjelate et al., 2020; Takemura et al., 2021, 2022). However, in more distant regions of high-mass star formation, _Herschel's_ observations (Tige et al., 2017) and, more conclusively, interferometric surveys (e.g. Sanhueza et al., 2017, 2019; Li et al., 2019; Pillai et al., 2019; Kong, 2019; Servajean et al., 2019) have revealed a scarcity of massive prestellar cores, implying that the mass reservoir to form high-mass stars is spread over larger scales, in contrast with the core-collapse model. To further complicate the picture, in some of the most extreme regions of high-mass star formation, the CMF at intermediate and large masses is
sometimes found to be a shallower power law than the stellar IMF (Motte et al., 2018; Pouteau et al., 2022), which would imply either core fragmentation or a shallower IMF in such regions. The observational constraints on theoretical models rely on the statistical significance of the estimated values of the turnover mass and slope of the CMF. These values can be quite uncertain, as the turnover mass is often close to the completeness limit of the surveys and both turnover and slope can depend on the core-selection algorithm and may be affected by projection artifacts. _The goal of this work is to assess the reliability of CMFs derived from ALMA dust continuum observations of Galactic protoclusters_. As a test bed, we use our 250 pc star-formation simulation driven self-consistently by supernova (SN) explosions (Padoan et al., 2016; Pan et al., 2016; Padoan et al., 2016, 2017). The maximum resolution of the simulation is 0.0076 pc, adequate to address ALMA observations of Galactic protoclusters at a resolution of order 0.01 pc, such as those of the ALMA-IMF Large Program (Motte et al., 2022; Ginsburg et al., 2022; Pouteau et al., 2022). Although the turnover mass of the CMF is not fully resolved at this resolution (e.g. Pelkonen et al., 2021), the simulation is realistic enough to allow us to test the relation between cores selected from two-dimensional (2D) projections, as in the observations, and real cores selected from the corresponding three-dimensional (3D) volumes, irrespective of the actual value of the turnover mass.
We first analyze column-density maps obtained from the simulation, and then synthetic 1.3 mm dust-continuum maps computed with the radiative transfer code SOC (Juvela, 2019). We aim at reproducing a synthetic ALMA dataset and an analysis pipeline as close as possible to that of the ALMA-IMF Large Program (Motte et al., 2022; Ginsburg et al., 2022; Pouteau et al., 2022). As in Pouteau et al. (2022), the cores are extracted with the _getsf_ code (Men'shchikov, 2021). Cores are also extracted from the corresponding 3D volumes with a dendrogram analysis (Padoan et al., 2007; Rosolowsky et al., 2008) and compared with the sample of 2D cores from _getsf_. The comparison shows that the 2D cores present a highly incomplete view of the sample of real 3D cores, are sometimes observational artifacts from projection effects, and have masses poorly correlated with those of the corresponding 3D cores.
The paper is organized as follows. In the next section we briefly summarize the simulation and in § 3 we describe the selection and the general properties of the column density maps. The computation of the synthetic observations is presented in § 4 and the extraction of cores in both 2D and 3D is described in § 5. The results from the analysis of both column-density maps and synthetic observations are presented in § 6. A discussion of our results, and their significance with respect to recent findings from the ALMA-IMF Large Program, is given in § 7, and the main conclusions are summarized in § 8.
## 2 Simulation
The test bed for this work is a sample of protocluster regions found in our 250 pc star-formation simulation driven self-consistently by SNe. The simulation has been continuously run, during the past three years, under a multi-year PRACE project, until it reached approximately 45 Myr of evolution under self-gravity. It describes an ISM region of size \(L_{\rm box}=250\) pc and total mass \(M_{\rm box}=1.9\times 10^{6}\) M\({}_{\odot}\), where the turbulence is driven by SNe alone. Given the large time and spatial scales, the simulation develops many regions where high-mass stars are formed, including several stellar clusters.
The simulation has been shown to generate star-forming clouds with realistic observational properties, including kinematics, lifetimes, and star-formation rates (Padoan et al., 2016, 2016, 2020). It has also been used to study the statistical properties of SN-driven turbulence (Padoan et al., 2016; Pan et al., 2016), to propose the new _Inertial-Inflow_ scenario for the formation of massive stars (Padoan et al., 2017), and to assess the real nature of massive clumps from the _Hi-GAL_ Survey (Lu et al., 2022). The reader is referred to Padoan et al. (2016, 2016) for details about the numerical setup. The main features of the simulation relevant to this work are briefly summarized in the following.
The 3D MHD equations are solved with the adaptive-mesh-refinement (AMR) code RAMSES (Teyssier, 2002; Fromang et al., 2006; Teyssier, 2007), using periodic boundary conditions. The energy equation includes the pressure-volume work, the thermal energy introduced to model SN explosions, a uniform photoelectric heating as in Wolfire et al. (1995), with efficiency \(\epsilon=0.05\) and the FUV radiation field of Habing (1968) with coefficient \(G_{0}=0.6\) (the UV shielding in MCs is approximated by tapering off the photoelectric heating exponentially above a number density of 200 cm\({}^{-3}\)), and a tabulated optically thin cooling function constructed from the compilation by Gnedin & Hollon (2012) that includes all relevant atomic transitions. Molecular cooling is not included, due to the computational cost of solving the radiative transfer. The thermal balance between molecular cooling and cosmic-ray heating in dense gas is emulated by setting a limit of 10 K as the lowest temperature of dense gas. However, to generate synthetic observations of the dust emission, the radiative transfer is computed by postprocessing individual snapshots, including all stars with mass \(>2\) M\({}_{\odot}\) as point sources (see § 4).
The simulation is initialized with zero velocity, uniform density, \(n_{\rm H,0}=5\) cm\({}^{-3}\), uniform temperature, \(T_{0}=10^{4}\) K, and uniform magnetic field, \(B_{0}=4.6\)\(\mu\)G. During the first 45 Myr, self-gravity was not included and SN explosions were randomly distributed in space and time, at a rate of 6.25 SNe Myr\({}^{-1}\). The resolution was \(dx=0.24\) pc, achieved with a 128\({}^{3}\) root grid and three AMR levels. The minimum cell size was then decreased to \(dx=0.03\) pc, using a root-grid of 512\({}^{3}\) cells and four AMR levels, for an additional period of 10.5 Myr, still without self-gravity. At \(t=55.5\) Myr, gravity is introduced and the simulation is continued until \(t\approx 100\) Myr with a minimum cell size further reduced to \(dx=0.0076\) pc, by adding two more AMR levels. At this resolution, we can follow the formation of individual massive stars, so the time and location of the SNe are computed self-consistently from the evolution of those stars.
Individual stars are modeled with accreting sink particles, created when the gas density is larger than \(10^{6}\) cm\({}^{-3}\) and other conditions are satisfied (see Haugbølle et al., 2018, for details of the sink particle model). A SN is created when a sink particle of mass larger than 7.5 M\({}_{\odot}\) has an age equal to the corresponding stellar lifetime for that mass (Schaller et al., 1992). The sink particle is removed and the stellar mass, momentum, and \(10^{51}\) erg of thermal energy are added to the grid with a Gaussian profile (see Padoan et al., 2016, for further details). By the latest simulation snapshot used in this work, corresponding to a time of 34.7 Myr from the inclusion of self-gravity and star formation, 3,942 stars with mass \(>2\)\(M_{\odot}\) have been generated, of which 389 have already exploded as SNe. The stellar mass distribution is consistent with Salpeter's IMF (Salpeter, 1955) above \(\sim 8\) M\({}_{\odot}\), but is incomplete at lower masses (it starts to flatten at a few solar masses instead of at a fraction of a solar mass), as expected for the spatial resolution of the simulation.
## 3 Column density maps
The goal of this work is to study simulated star-forming regions that are comparable to observed regions of high-mass star formation. For that purpose, we select the highest column density regions from four different snapshots of the simulation, spanning a range of times between 14.2 and 34.7 Myr from the beginning of self-gravity. We generate maps of 2 pc \(\times\) 2 pc, as that is a characteristic size of the 15 regions mapped by the ALMA-IMF Large Program. For each of the four snapshots, we find the position of maximum column density measured at a resolution of 512\({}^{2}\) cells, or 0.49 pc (a characteristic size of massive clumps in Csengeri et al., 2017) in the three orthogonal directions of the 250 pc computational volume. That results in 12 positions of maximum column density, which are taken as the centers of 12 maps of 2 pc \(\times\) 2 pc. The column density maps are then recomputed at the maximum resolution of the simulation, 0.0076 pc, so each map is composed of \(263\times 263\) pixels.
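A minimal sketch of this selection step is given below: the native-resolution column-density map is block-averaged to the coarse (512\({}^{2}\)) grid to locate the peak, and a 2 pc \(\times\) 2 pc cutout is extracted at native resolution around it. The input map is a random placeholder and border handling is omitted for brevity.

```python
import numpy as np

def peak_centered_cutout(coldens, box_pc=250.0, cutout_pc=2.0, coarse_n=512):
    """Cut a (cutout_pc x cutout_pc) submap around the coarse-resolution peak.

    coldens: square 2D column-density map of the full (box_pc x box_pc) projection
    at native resolution.
    """
    n = coldens.shape[0]
    block = n // coarse_n                       # native pixels per coarse cell
    trimmed = coldens[:block * coarse_n, :block * coarse_n]
    coarse = trimmed.reshape(coarse_n, block, coarse_n, block).mean(axis=(1, 3))
    iy, ix = np.unravel_index(np.argmax(coarse), coarse.shape)
    cy, cx = (iy + 0.5) * block, (ix + 0.5) * block          # peak center, native pixels
    half = int(round(0.5 * cutout_pc / box_pc * n))          # half-width in native pixels
    y0, x0 = int(cy) - half, int(cx) - half
    return coldens[max(y0, 0):y0 + 2 * half, max(x0, 0):x0 + 2 * half]

# Random placeholder map, used only to exercise the function (the real maps are
# 250 pc wide at 0.0076 pc resolution).
demo = np.random.default_rng(0).lognormal(size=(2048, 2048))
print(peak_centered_cutout(demo).shape)
```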
We stress that these maps result from the projection of 3D columns with a very large aspect ratio of 125:1 (250 pc \(\times\) 2 pc \(\times\) 2 pc); while they are meant to represent relatively small star-forming regions, as those modelled with smaller simulations, they are embedded in a realistic 250 pc volume, and they contain the full projection effects to be expected at that scale. As extensively discussed in Lu et al. (2022), this projection depth is representative of the thickness and total column density of a characteristic spiral arm. Observations of similar regions in the Galactic plane and towards the Inner Galaxy may suffer much larger projection effects, as they may sample several dense regions along a distance of several kpc, rather than structures within a single spiral arm. This should be kept in mind when comparing our simulated regions with observations of the highest column density regions in the Galaxy, as discussed in the following.
The 12 maps from the simulation are truly independent of each other, even if they are selected from only four snapshots. We have verified that only a small fraction of the 3D columns in each snapshot intersect each other. Of these, only one intersection, between directions \(y\) and \(z\) of snapshot 3, corresponds to a 3D volume whose mass gives a significant contribution to the column density in the maps. However, while the map of direction \(z\) contains most of the mass of the map in direction \(y\), the map of direction \(y\) contains less than half of the mass of the map in direction \(z\). Thus, even these two maps cannot be considered two different views of the same 3D region. Figure 1 shows the 12 column density maps. The intersecting ones are shown by panels \(3,y\) and \(3,z\). The map in panel \(3,y\) shows a dominant dense filament along the \(z\) direction that seems to continue beyond the bottom of the map. The map in panel \(3,z\) shows the projection of that filament in the \(z\) direction, and so it also includes the part of the filament that lies beyond the bottom of the map in panel \(3,y\).

Figure 1: Column-density maps of the 12 regions analyzed in this work. Each map covers a 2 pc \(\times\) 2 pc region centered around the highest column density of the corresponding projection of the whole 250 pc computational volume (see text for details). The four columns of panels correspond to the four different snapshots of the simulation taken at time intervals of nearly 7 Myr; the three rows of panels correspond to the three orthogonal directions. The circles show the areas containing 12.5%, 25% and 50% of the total mass of each map. The grey scale is proportional to the square root of the column density, with minimum and maximum values set to 0.05 g cm\({}^{-2}\) (white colour) and 1.0 g cm\({}^{-2}\) (black colour) respectively.
### Simulated Maps as Protocluster Regions
All the maps in Figure 1 appear like clusters of filaments, with the highest column densities corresponding to regions where many filaments intersect. This morphology is very similar to that of the so-called filament hubs that are ubiquitous in regions of high-mass star formation (e.g. Myers, 2009; Schneider et al., 2012; Peretto et al., 2014; Kumar et al., 2020, 2022; Zhou et al., 2022). Besides the morphological similarity with the astrophysical birthplaces of massive stars, these regions are indeed forming massive stars in the simulation.
To further characterize them in relation to observed regions of high-mass star formation, we measure their mass-size relation within circular regions centered on the center of the maps and containing 12.5%, 25% and 50% of their total mass. The three circles are shown in all the maps of Figure 1. The mass-size relation is plotted in Figure 2 (square symbols). These circular portions of the maps contain on average nearly 1,000 \(M_{\odot}\) at a scale of \(\sim\)0.5 pc, or a mean column density of \(\sim\)0.1 g/cm\({}^{2}\). The column density tends to decrease slightly with increasing size, following approximately the slope of the empirical mass-size relation limit for high-mass star formation of Kauffmann & Pillai (2010), as revised by Dunham et al. (2011). The masses are on average a factor of three above the empirical limit, showing that the regions selected from the simulations have mean column densities consistent with those of observed regions forming high-mass stars.
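The radii containing fixed fractions of a map's mass can be measured as sketched below. This is a minimal illustration with our own function name and a synthetic, centrally concentrated toy map; it is not the published analysis pipeline, but it shows the calculation behind the three circles of Figure 1 and the square symbols of Figure 2.

```python
import numpy as np

def enclosed_mass_radii(coldens, dx_pc, fractions=(0.125, 0.25, 0.5)):
    """Radii (pc) of circles around the map center containing given fractions of the total mass.

    coldens : 2D array of column densities (g cm^-2); dx_pc : pixel size in pc.
    """
    ny, nx = coldens.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - (ny - 1) / 2.0, x - (nx - 1) / 2.0) * dx_pc   # radius of each pixel [pc]
    order = np.argsort(r.ravel())
    pix_mass = coldens.ravel()[order]       # pixel masses up to the constant factor dx^2
    cum = np.cumsum(pix_mass) / pix_mass.sum()   # enclosed mass fraction as a function of radius
    return [float(np.interp(f, cum, r.ravel()[order])) for f in fractions]

# Toy example: a 263x263 map with 0.0076 pc pixels and a Gaussian central concentration.
ny = nx = 263
y, x = np.indices((ny, nx))
coldens = 0.05 + 1.0 * np.exp(-((y - 131) ** 2 + (x - 131) ** 2) / (2 * 40.0 ** 2))
print(enclosed_mass_radii(coldens, dx_pc=0.0076))
```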
Figure 2 also shows the mass-size relation of the brightest ATLASGAL regions (small, black circles) from Csengeri et al. (2017), which are supposedly some of the most extreme high-mass star-forming regions in the Galaxy, and that of the 15 regions chosen for the ALMA-IMF Large Program (blue circles) of Motte et al. (2022), which are a subsample of some of the brightest regions in Csengeri et al. (2017). The median value of the column density of the ATLASGAL clumps of Csengeri et al. (2017) is a factor of 4.0 larger than that of our simulated regions, while the median column density of the even more extreme sample of the ALMA-IMF Large Program is 6.7 times larger. Nonetheless, there is partial overlap between these Galactic protocluster regions and our simulated ones, as the lowest mean column densities from both samples are well within the largest mean values from our maps (including the case of W43-MM3, shown by the lower of the two blue filled circles in Figure 2). The high mean _column_ densities of the observed protocluster clumps may suggest very large mean _volume_ densities, larger than in our simulation. However, as mentioned above and extensively discussed in Lu et al. (2022), dust emission maps of regions in the Galactic plane and towards the Inner Galaxy may suffer important projection effects with contributions from dense structures at different distances. In general, the lower depth and column density of our simulation, compared with the most extreme regions in the Inner Galactic plane, should result in fewer projection artifacts and less spatial confusion than in those regions. Thus, our modeling approach is conservative, in the sense that the issues raised in this work may be exacerbated in real Galactic protoclusters.
## 4 Synthetic Dust Continuum Maps
### Radiative Transfer
To generate the synthetic surface-brightness maps corresponding to the column-density maps of Figure 1, we first carried out radiative transfer modeling of the whole 250 pc volume of each of the four snapshots with the continuum radiative transfer program SOC (Juvela, 2019). Details of the radiative transfer method can be found in Lu et al. (2022), and the main points are summarized below.
The radiative transfer calculations need the density field, the dust properties, and a description of the radiation field as inputs. The density field is taken from the datacubes of the AMR simulation, and SOC uses directly the same hierarchical spatial discretization as the AMR simulation. The dust properties correspond to the Weingartner & Draine (2001) \(R_{\rm V}=5.5\) dust model (Case B), where the size distribution of the carbonaceous grains extends up to 10 \(\mu\)m. The high \(R_{\rm V}\) is consistent with observations of very dense clouds, and the 1.3 mm dust opacity of the model, \(\kappa_{\rm 1.3\,mm}=0.0037\) cm\({}^{2}\) g\({}^{-1}\), is more than twice the value of normal Milky Way dust. However, the opacity is still lower than for the Ossenkopf & Henning (1994) dust model that is discussed for comparison in § 7.4, where \(\kappa_{\rm 1.3\,mm}=0.0083\) cm\({}^{2}\) g\({}^{-1}\). The data for the Weingartner & Draine (2001) dust models are obtained from the DustEM site\({}^{1}\) (Compiègne et al., 2011).
Footnote 1: [https://www.ias.u-psud.fr/DUSTEM/](https://www.ias.u-psud.fr/DUSTEM/)
The radiation field consists of an isotropic component, based on the Mathis et al. (1983) model of the local interstellar radiation field, and of the radiation from all the stars with mass \(>2M_{\odot}\) formed self-consistently in the simulation. The number of these point sources increases with time, as star formation progresses in the simulation, and is 775, 1929, 3093, and 3942 in the four snapshots of this work. The SOC code calculates the equilibrium dust temperature of each model cell and generates the corresponding 1.3 mm surface-brightness maps for the three orthogonal view directions of each snapshot. These \(250\,\mathrm{pc}\times 250\,\mathrm{pc}\) surface-brightness maps have a pixel size equal to \(0.0076\,\mathrm{pc}\), the smallest cell size of the simulation, hence they contain \(32,768\times 32,768\) pixels. Of these maps, only the \(2\,\mathrm{pc}\times 2\,\mathrm{pc}\) regions corresponding to the selected column-density maps of Figure 1 are used for the analysis of this work.

Figure 2: Mass versus size of the 12 regions analyzed in this work (square symbols). The three clusters of symbols show the values of radius and mass in the case where 12.5% (yellow squares), 25% (green squares), and 50% (red squares) of the total mass of each map is considered (see the three circles in the panels of Figure 1). The small, black circles correspond to the most luminous ATLASGAL regions from Csengeri et al. (2017), and the larger blue circles show the masses and sizes of the 15 ALMA-IMF regions in Motte et al. (2022). The two open blue circles with a filled circle inside correspond to W43-MM2 (higher mass) and W43-MM3 (lower mass) studied in Pouteau et al. (2022). The dashed line is the empirical mass-size relation limit for massive star formation of Kauffmann & Pillai (2010), as revised by Dunham et al. (2011), and the dashed-dotted lines correspond to different values of constant column density.
### ALMA Simulations
To generate the synthetic ALMA observations, the surface-brightness maps are processed with the CASA program (v. 6.5.1), assuming a \(2\,\mathrm{kpc}\) source distance and using an ALMA 12-m antenna configuration (_alma.cycle8.1_) that gives a beam size of \(1.66\,\arcsec\) (geometric average for a source at zero declination). At equal linear resolution, this corresponds to a beam size of \(0.60\,\arcsec\) at the distance of \(5.5\,\mathrm{kpc}\) of W43, not far from the beam of \(0.47\,\arcsec\) in the study of W43 by Pouteau et al. (2022). The CASA runs include simulations with the _simobserve_ routine and the cleaning of the dirty interferometer image with the _tclean_ routine, using automatic masking (with the _auto-multithresh_ option), Briggs weighting with the parameter _robust_=0, and multiscale deconvolution with the parameter _scales_=[0, 3, 9, 27]. The above steps are the same as those used in the study of W43 by Pouteau et al. (2022), as described in Ginsburg et al. (2022), except for the masking, which was custom made. We do not need to distinguish between images generated from the full 1.3 mm band and images that exclude the channels associated with lines (referred to as _bsens_ and _cleanest_ images respectively in Pouteau et al. 2022), as our surface-brightness maps do not include any contamination from emission lines. The CASA processing is carried out over an area of \(3\,\mathrm{pc}\times 3\,\mathrm{pc}\) containing the \(2\,\mathrm{pc}\times 2\,\mathrm{pc}\) region of interest, after doubling the number of pixels in the original surface-brightness maps, so the pixel size is approximately \(0.4\,\arcsec\), one fourth of the beam size.
We seek to obtain a sensitivity similar to that of the ALMA-IMF Large Program, which was designed to reach a point-source mass sensitivity of \(0.15\,M_{\odot}\) at \(1.3\,\mathrm{mm}\) (based on \(T_{\mathrm{dust}}=20\,\mathrm{K}\) and \(\kappa_{1.3\,\mathrm{mm}}=0.01\,\mathrm{cm}^{2}\,\mathrm{g}^{-1}\)). In the case of W43-MM2 and W43-MM3, at a distance of \(5.5\,\mathrm{kpc}\), the noise level is approximately \(0.06\,\mathrm{mJy}\,\mathrm{beam}^{-1}\), based on Table 2 in Motte et al. (2022).\({}^{2}\) With a \(6\,\mathrm{hour}\) integration time to obtain the whole \(2\,\mathrm{pc}\times 2\,\mathrm{pc}\) mosaic, the CASA simulations with the parameter \(PWV=0.5\) yield a noise level of \(\sigma\approx 0.021\,\mathrm{mJy}\,\mathrm{beam}^{-1}\) at a distance of \(5.5\,\mathrm{kpc}\), three times lower than in the maps of W43 of the ALMA-IMF Large Program. However, considering that our simulated regions have a mean column density that is on average 6.7 times smaller than that of the observed regions (see § 3.1), we further reduce the noise by averaging 8 different realizations of each map, where the CASA simulations are exactly the same, except for the random thermal noise in the maps. After averaging the 8 maps, the typical noise level is \(\sigma\approx 0.009\,\mathrm{mJy}\,\mathrm{beam}^{-1}\), estimated by measuring the rms value of the map in small regions with negligible column density. Thus, our point-source mass sensitivity is of order \(0.3\,M_{\odot}\), depending on the assumed dust temperature and opacity values.
Footnote 2: The estimated noise level is slightly larger in Table 1 of Pouteau et al. (2022), probably due to the inclusion of regions of non-negligible dust emission.
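For orientation, the interferometric step described above can be sketched with CASA's standard tasks. The calls below are only an outline of the kind of _simobserve_/_tclean_ run involved: the input file name, output names, integration time string, image size and cell size are placeholders chosen for illustration, and the actual ALMA-IMF/W43 imaging pipeline of Ginsburg et al. (2022) contains additional steps not shown here.

```python
# Sketch of the synthetic-observation step; run inside a CASA (v6.x) session,
# or import the tasks with: from casatasks import simobserve, tclean
# "model_1p3mm.fits" is a placeholder for one of the 3 pc x 3 pc surface-brightness models.
simobserve(project="synthetic_region",
           skymodel="model_1p3mm.fits",       # input 1.3 mm surface-brightness model
           antennalist="alma.cycle8.1.cfg",   # 12-m array configuration used in this work
           totaltime="21600s",                # 6 h integration for the full mosaic
           user_pwv=0.5,                      # precipitable water vapour [mm]
           thermalnoise="tsys-atm")           # add atmospheric/receiver thermal noise

# The measurement-set name below follows simobserve's usual naming convention; adjust as needed.
tclean(vis="synthetic_region/synthetic_region.alma.cycle8.1.noisy.ms",
       imagename="synthetic_region_clean",
       specmode="mfs",                        # continuum imaging
       deconvolver="multiscale",
       scales=[0, 3, 9, 27],                  # multiscale components (pixels)
       weighting="briggs", robust=0,
       usemask="auto-multithresh",            # automatic masking
       imsize=[768, 768], cell="0.4arcsec",   # placeholder field covering ~3 pc at 2 kpc, 1/4-beam pixels
       niter=100000, threshold="0.02mJy")
```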
### Noise Reduction
Following the analysis in Pouteau et al. (2022), we further reduce the noise in the maps with the multi-resolution segmentation code _MnGSeg_ (Robitaille et al., 2019). _MnGSeg_ separates the incoherent (Gaussian) cloud structures from the coherent component associated with the filaments and cores (see Robitaille et al., 2019). We performed the separation using a value of 2.0 for the \(q\) parameter and a skewness limit of 0.4. We then removed all the incoherent structures that were larger than the beam, as in Pouteau et al. (2022), while those at scales equal to the beam or smaller were kept. This process reduces the noise and increases the number of detected sources by more than a factor of two. The effect of processing the images with _MnGSeg_ is illustrated in Figure 3, showing the synthetic ALMA images of the map \(3,z\) of Figure 1 before (upper panel) and after (lower panel) the noise reduction.

Figure 3: Example of the effect of noise reduction by the _MnGSeg_ code on the map corresponding to the panel \(3,z\) of Figure 1. The upper panel is the simulated ALMA image before the noise reduction, the lower panel the image processed with _MnGSeg_. The grey scale is proportional to the surface brightness, with minimum and maximum values set to 5.0 MJy sr\({}^{-1}\) (black colour) and 100.0 MJy sr\({}^{-1}\) (white colour) respectively.
## 5 2D and 3D core extraction
### 2D _getsf_ Extraction
The core extraction from both the column-density maps and the synthetic ALMA maps is carried out with the _getsf_ code (Men'shchikov, 2021), one of the two methods used in Pouteau et al. (2022). Although several code parameters can be controlled by the user, all core (and filament) detection parameters have already been optimized and extensively tested (Men'shchikov, 2021), and we adopt the standard extraction parameters as in Pouteau et al. (2022), such as a maximum core ellipticity of 2 and a minimum signal-to-noise ratio of 2.
For each extracted core, _getsf_ determines a FWHM ellipse and computes the total mass, \(M_{\rm c,2D}\), in the case of column-density maps, or the total flux, \(S_{\rm 1.3\,mm}^{\rm int}\), from the surface-brightness maps. In the case of the synthetic ALMA maps, we derive the core masses from the _getsf_ integrated flux assuming the 1.3 mm emission is optically thin and using the same Rayleigh-Jeans approximation as in Pouteau et al. (2022):
\[M_{\rm c,2D}\approx 1\,M_{\odot}\left(\frac{S_{\rm 1.3\,mm}^{\rm int}}{10\,\rm mJy}\right)\left(\frac{T_{\rm dust}}{15\,\rm K}\right)^{-1}\left(\frac{d}{2\,\rm kpc}\right)^{2}\left(\frac{\kappa_{\rm 1.3\,mm}}{0.01\,\rm cm^{2}\,g^{-1}}\right)^{-1} \tag{1}\]
The dust opacity values are \(\kappa_{\rm 1.3\,mm}=0.0037\,\rm cm^{2}\,g^{-1}\) for the dust model of Weingartner & Draine (2001) and \(\kappa_{\rm 1.3\,mm}=0.0083\,\rm cm^{2}\,g^{-1}\) for the dust model of Ossenkopf & Henning (1994). For the temperature, we use a single value for all cores, \(T_{\rm dust}=13.0\,\rm K\) for the dust model of Weingartner & Draine (2001) and \(T_{\rm dust}=7.5\,\rm K\) for the dust model of Ossenkopf & Henning (1994), corresponding to the median dust temperature of the candidate 3D cores, defined in § 6.1.1 as the main overdensity in the line of sight of each _getsf_ core. All the analysis and figures in this work are based on the synthetic ALMA images from the dust model of Weingartner & Draine (2001). A comparison between the two dust models is shown in § 7.4.
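Equation 1 is straightforward to evaluate. The helper below (our own naming) converts an integrated 1.3 mm flux into a core mass for the two dust models and the single temperatures quoted above.

```python
def core_mass_from_flux(flux_mJy, T_dust=13.0, distance_kpc=2.0, kappa=0.0037):
    """Core mass [M_sun] from Equation 1 (optically thin, Rayleigh-Jeans approximation).

    flux_mJy     : integrated 1.3 mm flux [mJy]
    T_dust       : dust temperature [K]
    distance_kpc : source distance [kpc]
    kappa        : 1.3 mm dust opacity [cm^2 g^-1]
    """
    return (1.0 * (flux_mJy / 10.0) * (T_dust / 15.0) ** -1
            * (distance_kpc / 2.0) ** 2 * (kappa / 0.01) ** -1)

# Weingartner & Draine (2001) values used in this work:
print(core_mass_from_flux(10.0, T_dust=13.0, kappa=0.0037))   # ~3.1 M_sun for a 10 mJy core at 2 kpc
# Ossenkopf & Henning (1994) values:
print(core_mass_from_flux(10.0, T_dust=7.5, kappa=0.0083))    # ~2.4 M_sun
```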
We do not use the individual temperatures of the candidate 3D cores because they are not observable quantities. The color temperature can be derived from observations at different wavelengths, but we find that it does not accurately reflect the real dust temperature variations from core to core (see § 7.4), so there is no advantage in using it. Although in Pouteau et al. (2022) the core masses are based on estimates of individual core temperatures, we adopt a single value of \(T_{\rm dust}\) also to estimate the core masses from the observations of the W43 region in § 7.2. By using a single value of \(T_{\rm dust}\), the derived masses are proportional to the integrated fluxes with the same constant of proportionality for all cores (see Equation 1). Thus, we effectively compare the integrated flux distributions rather than the CMFs, avoiding the extra uncertainty arising from the temperature determination.
FWHM ellipses of the cores extracted by _getsf_ from one of our column-density maps and the corresponding synthetic ALMA map are shown in the two panels of Figure 4, which correspond to the panel \(3,z\) of Figure 1. The cores appear to follow the densest filaments and concentrate in the densest areas where multiple filaments intersect, as usually found in real Herschel and ALMA observations. Some of the same cores can be recognized in both images, but there are also several cases of sources extracted from the column-density map that are not seen in the synthetic ALMA map and vice-versa.

Figure 4: Examples of _getsf_ core selection on a column density map (upper panel) and the corresponding synthetic ALMA map after noise reduction (lower panel), corresponding to the panel \(3,z\) of Figure 1. The grey scale is proportional to the square root of the column density (upper panel) or surface brightness (lower panel), with minimum and maximum values set to 0.05 g cm\({}^{-2}\) (black colour) and 1.0 g cm\({}^{-2}\) (white colour) in the upper panel and 5.0 MJy sr\({}^{-1}\) (black colour) and 100.0 MJy sr\({}^{-1}\) (white colour) in the lower panel. The ellipses correspond to the FWHM of the cores extracted by _getsf_.
### 3D _Dendro_ Extraction
To extract the 3D cores from the \(2\,\mathrm{pc}\times 2\,\mathrm{pc}\times 250\,\mathrm{pc}\) columns of each map, we use the clumpfind algorithm introduced in Padoan et al. (2007). To avoid confusion with other clumpfind algorithms in the literature, we refer to this algorithm here, for the first time, as _Dendro_. While the algorithm was designed to select gravitationally unstable cores (see Pelkonen et al., 2021), it can also be used as a straightforward dendrogram (Rosolowsky et al., 2008), without imposing a gravitational instability condition while building the tree. In this case, cores are simply defined as connected overdensities that cannot be split into two or more overdensities of amplitude \(\delta n/n>f\). _Dendro_ scans the density field with discrete density levels, each of amplitude \(f\) relative to the previous one. Only the connected regions above each density level are considered, and the final (unsplit) core of each branch is retained. Each core is assigned only the mass within the density isosurface that defines the core (below that density level the core would be merged with its next neighbor).
The _Dendro_ algorithm depends only on three parameters: (1) the spacing of the discrete density levels, \(f\), (2) the minimum density above which cores are selected, \(n_{\mathrm{min}}\), and (3) the minimum number of cells per core. In principle, there is no need to define a minimum density, but in practice it speeds up the algorithm. In this work we adopt \(n_{\mathrm{min}}=10^{4}\) cm\({}^{-3}\), which is appropriate for prestellar cores. The parameter \(f\) is set by looking for numerical convergence of the mass distribution with decreasing values of \(f\). We find that the mass distribution above approximately \(1\,M_{\odot}\) is stable in the range of values between \(f=32\%\) and \(f=2\%\), while at the lowest masses the number of cores slightly increases with decreasing \(f\), but with a clear tendency to converge at around \(f=2\%\). However, these variations in the core mass functions do not cause significant variations in our comparison between 2D and 3D cores, so the qualitative conclusions of this work do not depend on the precise value of \(f\). The plots in this work correspond to the _Dendro_ runs with \(f=8\%\). The minimum number of cells is set to 27, to ensure that all the detected cores are at least minimally resolved in the simulation and to limit the minimum mass and size of 3D cores to values comparable to those of the 2D cores extracted by _getsf_.
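A minimal sketch of the leaf-finding idea behind _Dendro_ is given below, using discrete density levels and connected-component labelling on a uniform grid. This is our own simplified code for illustration: the published algorithm works directly on the AMR structure and builds a full tree, while this sketch only returns the final unsplit cores and assigns each one the cells above the level at which it would merge with its neighbour.

```python
import numpy as np
from scipy import ndimage

def _leaves(rho, mask, levels, k, min_cells):
    """Return leaf cores (boolean masks) found inside `mask`, scanning levels[k:]."""
    if k == len(levels):
        return [mask] if mask.sum() >= min_cells else []
    lab, nreg = ndimage.label(mask & (rho > levels[k]))
    sub = []
    for r in range(1, nreg + 1):
        sub += _leaves(rho, lab == r, levels, k + 1, min_cells)
    if len(sub) >= 2:
        return sub            # the structure splits downstream: keep the branch leaves
    if mask.sum() >= min_cells:
        return [mask]         # never splits: the mask at its defining isosurface is the core
    return sub

def dendro_cores(rho, n_min=1e4, f=0.08, min_cells=27):
    """Leaf cores: connected overdensities that cannot be split by raising the density threshold."""
    levels = [n_min]
    while levels[-1] < rho.max():
        levels.append(levels[-1] * (1.0 + f))
    lab, nreg = ndimage.label(rho > levels[0])
    cores = []
    for r in range(1, nreg + 1):
        cores += _leaves(rho, lab == r, levels, 1, min_cells)
    return cores

# Toy example: two Gaussian clumps on a 64^3 grid, merged at n_min but separated at higher levels.
z, y, x = np.indices((64, 64, 64))
rho = (2e5 * np.exp(-((x - 20) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) / 50.0)
       + 1e5 * np.exp(-((x - 44) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) / 50.0) + 1e3)
cores = dendro_cores(rho)
print(len(cores), "cores, sizes (cells):", [int(c.sum()) for c in cores])   # expect 2 cores
```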
Because _getsf_ distinguishes between sources and filaments, sources cannot have an aspect ratio larger than two. _Dendro_ does not impose a limit on the aspect ratio of the cores. However, we have verified that core elongation is not a problem in comparing with the _getsf_ 2D cores. For each 3D core, we have measured the half-mass distance of all its cells from its center and the equivalent radius. For elongated cores and filaments, the half-mass distance can be much larger than the equivalent radius. However, we find that an insignificant number of the 3D _Dendro_ cores have a ratio of half-mass distance over equivalent radius larger than two.
## 6 Results
We present separately the results of the source extraction from the column-density maps and from the synthetic ALMA maps. The idealized case of the column-density maps provides a test of projection effects in isolation, without the observational complications, such as radiative transfer effects, interferometric artifacts and instrumental noise. These observational effects are then addressed in the second part of this section where the analysis of the synthetic ALMA maps is presented.
### Cores from Column Density Maps
#### 6.1.1 _getsf_ and Candidate 3D Cores
Using _getsf_ we extract 1,726 compact sources from the 12 column density maps. We want to test if these cores correspond to real 3D cores, and, if so, how their masses compare with the corresponding 3D ones. One could directly search for overlap between 3D cores and 2D ones by simply projecting the positions of the 3D cores on the 2D maps. However, we find it more instructive to go through the extra step of first identifying a candidate 3D core position along the line of sight of each 2D core. This extra step provides a test of the quality of the background subtraction by _getsf_, even when a counterpart real 3D core is not found. The _getsf_ background subtraction turns out to be rather impressive, in the sense that real 3D density structures are discovered by _getsf_ even if the background is often \(\sim 90\%\) of the mass in the line of sight.
To identify a candidate 3D core position, we first search for the density maxima along the line of sight of each _getsf_ core, scanning its full 250 pc length. The density is first averaged within the footprint of the core in the two directions on the plane of the sky, to obtain a 1D array of 32,768 elements, and a physical length of 250 pc. Each element of this array corresponds to the mean density on a slice of thickness equal to 0.0076 pc (the minimum cell size in the simulation) and area equal to the area of the core's footprint. The position of the highest density in this 1D array is chosen as the candidate 3D core position. We also define a candidate 3D core as the spherical region centered around that position in the line of sight, with radius equal to the footprint radius of the corresponding _getsf_ core. We refer to the mass of this 3D candidate core as \(M_{\mathrm{cand,3D}}\). We repeat the process also for the second and third highest density maxima along the line of sight, so these other density peaks can also be considered when searching for an associated 3D core (see § 6.1.2).
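The candidate-core search can be summarized with the following sketch (our own minimal code, for a uniform grid): the density is averaged over the core footprint in the plane of the sky, and the highest peaks of the resulting 1D profile give the candidate positions along the line of sight. The `separation` argument, which keeps the selected peaks apart, is our own simplification and not a parameter quoted in the text.

```python
import numpy as np

def candidate_positions(rho, footprint, axis=0, n_peaks=3, separation=10):
    """Mean-density profile along `axis` within a 2D `footprint` mask, and its highest peaks.

    rho       : 3D density cube
    footprint : 2D boolean mask of the getsf core in the plane of the sky
    Returns the 1D profile and the indices of the `n_peaks` highest maxima,
    requiring the selected peaks to be at least `separation` cells apart.
    """
    profile = np.moveaxis(rho, axis, 0)[:, footprint].mean(axis=1)
    order = np.argsort(profile)[::-1]
    peaks = []
    for i in order:
        if all(abs(i - p) >= separation for p in peaks):
            peaks.append(int(i))
        if len(peaks) == n_peaks:
            break
    return profile, peaks

# Toy example: a cube with two dense slabs along the line of sight.
rho = np.ones((200, 32, 32))
rho[60] = 50.0
rho[140] = 30.0
fp = np.zeros((32, 32), dtype=bool)
fp[12:20, 12:20] = True
profile, peaks = candidate_positions(rho, fp)
print("candidate positions (cells):", peaks)   # the first two entries should be 60 and 140
```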
The comparison of the 2D _getsf_ masses with those of the corresponding candidate 3D cores is shown in the upper panel of Figure 5. The dashed line shows the one-to-one relation, while the thick solid line is a least-squares fit corresponding to the relation \(M_{\mathrm{c,2D}}=0.96\,M_{\mathrm{cand,3D}}^{0.94}\) (where the mass unit is \(1\,M_{\odot}\)). For any given value of \(M_{\mathrm{cand,3D}}\), the error in the estimated mass, \(M_{\mathrm{c,2D}}\), is quite small. The standard deviation of \(M_{\mathrm{c,2D}}\) relative to the best fit in the logarithmic plot corresponds to a mass ratio of less than a factor of 1.3, the correlation is almost exactly linear, and the normalization at \(1\,M_{\odot}\) is almost unity. Considering that the background subtraction performed by _getsf_ is on average quite large (approximately 90% of the mass in the line of sight is interpreted as background in most cases), _getsf_ is very successful at extracting the mass of the dominant 3D dense structure along the line of sight. This also implies that the estimated volume density based on the _getsf_ mass and _getsf_ core size is also quite accurate, despite the complexity of the cloud morphology.
The success of _getsf_ in identifying real high-density structures along the line of sight does not necessarily mean that most _getsf_ cores correspond to actual 3D cores of a similar mass, nor that most of the real 3D cores are captured in 2D by _getsf_. There is no guarantee that the boundary of a candidate 3D core (a spherical volume of size equal to that of the _getsf_ core) corresponds to that of a core selected independently by a different 3D extraction algorithm. For example, a candidate 3D core may be a segment of a filament that happens to intersect another filament in projection (not in 3D), hence appearing as a bona fide 2D core. That filament segment with size equal to that of the _getsf_ core may indeed have a mass similar to that of the _getsf_ core, but it is not a real 3D core. Even when a candidate 3D core overlaps with a real 3D core, its mass (or that of the associated _getsf_ core) may not be a reliable estimate of that of the real 3D core, as the latter could have a significantly different size or center of mass. In addition, a majority of real 3D cores will generally be missed in 2D projections if they frequently overlap along the line of sight. Thus, to truly test the quality of the 2D source extraction relative to real 3D cores, we need to also independently extract 3D cores from the 3D datacubes that were used to generate the column-density maps. For that purpose, we use the _Dendro_ algorithm as described in § 5.2.
#### 6.1.2 Dendro Runs and Real 3D Cores
_Dendro_ returns a total of 7,298 3D cores (for \(f=8\%\)) from the 12 maps, a much larger number than found in 2D, as to be expected. The ranges of sizes and masses of the 3D cores are quite similar to those of the _getsf_ cores. The median mass is 0.25 \(M_{\odot}\), versus 0.27 \(M_{\odot}\) for the _getsf_ cores. We search for spatial overlap between the _getsf_ cores and the _Dendro_ cores by using the coordinates of the candidate 3D cores, as those cores are interpreted as the best 3D counterpart of the _getsf_ cores. If the center of a _Dendro_ core is found inside the volume of a candidate 3D core, then that _Dendro_ core is taken as the 3D counterpart to the corresponding _getsf_ core. If that condition is not satisfied, the test is repeated using the candidate cores associated with the second highest density peak in the line of sight, and, failing that, the third highest density peak is tested (see § 6.1.1). We do not consider density peaks below the third highest one, because then the association with the _Dendro_ core is deemed to be insignificant (and indeed the correlation between the 2D and 3D core masses would become negligible). Based on this criterion, we find a total of 1,066 overlapping cores, that is 62% of the _getsf_ cores and 15% of the _Dendro_ cores.
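The overlap criterion can be written compactly: a _Dendro_ core is matched to a _getsf_ core if its centre falls inside the sphere of one of the candidate 3D cores, tried in order of decreasing density peak. The sketch below uses our own naming and data structures; it is only an illustration of the matching rule, not the actual code used.

```python
import numpy as np

def match_core(dendro_centers, candidate_centers, candidate_radius):
    """Index of the first Dendro core whose center lies inside a candidate sphere, or None.

    dendro_centers    : (N, 3) array of Dendro core centers
    candidate_centers : list of up to three (3,) arrays (highest, second, third density peak)
    candidate_radius  : radius of the candidate 3D core (footprint radius of the getsf core)
    """
    for center in candidate_centers:
        d = np.linalg.norm(dendro_centers - center, axis=1)
        inside = np.flatnonzero(d < candidate_radius)
        if inside.size:
            return int(inside[np.argmin(d[inside])])   # closest Dendro core inside the sphere
    return None

# Toy example with three Dendro cores and two candidate positions.
dendro = np.array([[10.0, 0.0, 0.0], [0.2, 0.1, 0.0], [5.0, 5.0, 5.0]])
candidates = [np.array([4.9, 5.1, 5.0]), np.array([0.0, 0.0, 0.0])]
print(match_core(dendro, candidates, candidate_radius=0.5))   # matches core index 2 via the first peak
```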
The comparison of the masses of this subset of cores is shown in the lower panel of Figure 5. The least-squares fitting gives the relation \(M_{\rm c,2D}=0.41\,M_{\rm c,3D}^{0.45}\) (with a mass unit of \(1\,M_{\odot}\)). The vertical scatter in this plot is much larger than that in the upper panel, with a standard deviation relative to the best fit corresponding to a factor of four in mass. The fit is now significantly shallower than linear, and Pearson's correlation coefficient has decreased from 0.85 in the upper panel to 0.61 in the lower one. For each value of the 3D core mass, \(M_{\rm c,3D}\), the corresponding mass of the overlapping _getsf_ core, \(M_{\rm c,2D}\), spans a total range of nearly two orders of magnitude, not much smaller than the full range of _getsf_ core masses. Thus, the _getsf_ masses are rather unreliable estimates of the masses of the corresponding _Dendro_ cores, even if there is still a significant statistical correlation between the two masses.
#### 6.1.3 Core Mass Functions
Given that the number of _Dendro_ cores is so much larger than that of the _getsf_ cores, the great majority of the 3D cores (85%) are not found in 2D, so the _getsf_ sample is highly incomplete. Figure 6 shows the CMFs of the 3D _Dendro_ cores (thick red line) and of the 2D _getsf_ cores (thick blue line). The number of 2D cores is always much smaller than the number of 3D cores, at any mass. The shapes of the two CMFs are also somewhat different. In the range between approximately 1 and 10 \(M_{\odot}\), the 2D CMF, with a slope of \(-2.0\pm 0.1\), is significantly steeper than the 3D CMF, which has a slope of \(-1.2\pm 0.1\). The peak of the 3D CMF would certainly shift to lower masses if the numerical resolution of the simulation were increased (e.g. Pelkonen et al., 2021), as more 3D cores of low mass would be found. However, assuming a fixed telescope beam size, the peak of the 2D CMF may not shift much towards lower masses even if the numerical resolution were increased, because the assumed telescope beam size in the _getsf_ search is already twice as large as the current resolution of the simulation.
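For reference, a CMF and its power-law slope can be measured as in the sketch below, which bins the masses logarithmically and fits \(dN/d\log M\) over a chosen interval; the binning and fit range shown here are illustrative choices, not those of the published analysis.

```python
import numpy as np

def cmf_slope(masses, m_lo=1.0, m_hi=10.0, bins_per_dex=5):
    """Build dN/dlogM in logarithmic bins and fit a power-law slope between m_lo and m_hi."""
    masses = np.asarray(masses)
    edges = 10 ** np.arange(np.log10(masses.min()), np.log10(masses.max()) + 0.2, 1.0 / bins_per_dex)
    counts, edges = np.histogram(masses, bins=edges)
    centers = np.sqrt(edges[:-1] * edges[1:])
    dndlogm = counts / np.diff(np.log10(edges))
    sel = (centers >= m_lo) & (centers <= m_hi) & (counts > 0)
    slope, _ = np.polyfit(np.log10(centers[sel]), np.log10(dndlogm[sel]), 1)
    return slope

# Toy example: masses drawn from dN/dlogM ~ M^-1.35 (Salpeter-like) between 0.1 and 100 M_sun.
rng = np.random.default_rng(2)
gamma = -1.35
u = rng.random(5000)
masses = (0.1 ** gamma + u * (100.0 ** gamma - 0.1 ** gamma)) ** (1.0 / gamma)
print("fitted slope:", round(cmf_slope(masses), 2))   # close to -1.35
```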
The CMFs of the 1,066 2D and 3D cores that are found to overlap with each other are shown by the shaded histograms in Figure 6. The 3D CMF (red histogram) is clearly shifted to larger masses than the 2D CMF (blue histogram). The ratio of the 3D CMF of overlapping cores to the 3D CMF of all cores is shown by the red star symbols in Figure 7. The plot shows that below approximately 0.5 \(M_{\odot}\) less than 20% of the 3D cores are recovered by _getsf_ in 2D, as clearly illustrated by the comparison of the shaded and unshaded red histograms in Figure 6. The ratio increases monotonically with core mass, up to nearly 0.6 at the largest masses.
Figure 5: Comparison of the mass of cores selected by _getsf_ from the column-density maps, \(M_{\rm c,2D}\), and the mass of the corresponding candidate 3D cores along the line of sight, \(M_{\rm cand,3D}\) (upper panel), or the mass of the overlapping _Dendro_ cores extracted independently in the 3D data cube, \(M_{\rm c,3D}\) (lower panel). The dashed lines correspond to equal masses, the thick solid lines to the least-squares fits. In the upper panel, all 1,726 _getsf_ cores are shown, while the lower panel is limited to the 1,066 overlapping _getsf_ and _Dendro_ cores.
Finally, the ratio of overlapping 2D cores and all 2D cores, shown by the blue squares in Figure 7, gives the fraction of 2D cores that can be considered as real, rather than projection effects. This ratio is close to 1 above approximately 2 \(M_{\odot}\), but decreases monotonically with decreasing core mass. At 0.1 \(M_{\odot}\), approximately half of the cores are projection artifacts.
### Cores from Synthetic 1.3 mm ALMA Maps
#### 6.2.1 ALMA _getsf_ Runs and Candidate 3D Cores
We now consider the _getsf_ core extraction from the synthetic 1.3 mm ALMA maps corresponding to the column-density maps studied above. We refer to these 2D cores also as synthetic ALMA cores. The _getsf_ search yields a total of 791 synthetic ALMA cores, an average of 66 cores per map. The maps with the smallest and largest numbers of cores have 40 and 80 cores respectively. The reduced number of synthetic ALMA cores with respect to the number of cores from the column-density maps is to be expected, because the observational noise sets a finite mass sensitivity.
Using the positions and sizes of the synthetic ALMA cores, we repeat the same study of density fluctuations in the line of sight of the _getsf_ cores as in the case of the column-density maps (see § 6.1.1). The upper panel of Figure 8 compares the 2D _getsf_ masses with those of the corresponding candidate 3D cores, where the dashed line shows the one-to-one relation and the thick solid line is a least-squares fit. Although the vertical scatter is increased with respect to Figure 5, there is still a significant correlation between the two masses, with a Pearson's correlation coefficient of 0.62. However, the least-squares fit gives the relation \(M_{\rm c,2D}=1.86\,M_{\rm cand,3D}^{0.66}\) (where the mass unit is \(1\,M_{\odot}\)), much shallower than linear, and the _getsf_ masses are on average 2.3 times larger than those of the candidate 3D cores (based on the ratio of the median values), suggesting a smaller background subtraction than in the case of the column-density maps, possibly caused by interferometric artifacts, as discussed in § 7.1.
#### 6.2.2 ALMA Dendro Runs and Real 3D Cores
Following the same procedure as in § 6.1.2 for the case of the column-density maps, we identify the 3D _Dendro_ cores that are associated with the synthetic ALMA cores extracted with _getsf_. As in § 6.1.2, the criterion is that the center of the _Dendro_ core must be inside the spherical volume of the candidate 3D core associated with the _getsf_ core. This search yields 557 overlapping cores, corresponding to 70% of the synthetic ALMA cores, or 8% of the 7,298 _Dendro_ cores. The lower panel of Figure 8 shows the comparison of the masses of the synthetic ALMA cores, \(M_{\rm c,2D}\), with those of the corresponding 3D cores, \(M_{\rm c,3D}\). The scatter in this plot is very large, and the two masses are barely correlated, with a Pearson's coefficient of 0.32, and an almost flat slope of the least-squares fit, \(M_{\rm c,2D}=1.50\,M_{\rm c,3D}^{0.17}\) (where the mass unit is \(1\,M_{\odot}\)). Despite the fact that 70% of the synthetic ALMA cores have a real 3D counterpart (and approximately 90% above \(2\,M_{\odot}\)), their masses do not really reflect those of the corresponding 3D cores.
We have verified that the correlation of the two masses remains rather low (Pearson's coefficient of 0.48) even when the _getsf_ masses are derived from the 1.3 mm fluxes using the actual dust temperature of the candidate 3D cores (an ideal knowledge that could not be achieved from the observational data). Thus, the temperature uncertainty is only a partial contribution to the scatter, which is dominated by projection effects, by the differences between the 2D and 3D core definitions (as in the lower panel of Figure 5), and by interferometric artifacts (see § 7.1). However, temperature variations along the line of sight certainly make projection effects and background subtraction more complex than in the case of the column density maps.
#### 6.2.3 ALMA Core Mass Functions
The mass distribution of the synthetic ALMA cores is shown by the unshaded blue histogram in Figure 9. The CMF of the 3D _Dendro_ cores is shown by the red unshaded histogram, while the shaded histograms show the CMFs of the subsamples of the 557 ALMA cores (blue) and 3D cores (red) that overlap with each other. In the range of masses where the CMF of the synthetic ALMA cores is close to a power law, approximately between 1.5 and 10 \(M_{\odot}\), its slope is \(-1.4\pm 0.1\), significantly shallower than that of the _getsf_ cores from the column-density maps and only slightly steeper than that of the 3D _Dendro_ cores. Above approximately 1 \(M_{\odot}\), the number of synthetic ALMA cores is significantly larger than that of the 2D cores from the column-density maps (see Figure 6), and less than a factor of three lower than the number of 3D cores of the same mass. The reason for this excess of massive ALMA cores relative to those from the column-density maps may be related to non-trivial effects of interferometric artifacts, as discussed below in § 7.1.

Figure 6: Mass distributions of the 3D _Dendro_ cores (red histograms), and of the 2D _getsf_ cores selected from the column-density maps (blue histograms). The unshaded histograms are for the full core samples, while the shaded ones are for the sub-samples of _Dendro_ cores (red histogram) and _getsf_ cores (blue histogram) that overlap with each other. Power-law fits in the approximate mass interval 1-10 \(M_{\odot}\) are shown for the CMFs of the full samples.

Figure 7: Fraction of cores with overlap between the _Dendro_ selection and the _getsf_ selection in the column-density maps, as a function of core mass. The blue squares are the fraction with respect to the total number of _getsf_ cores in each mass interval, \(N_{\rm 2D,over}/N_{\rm 2D,tot}\), and the red stars the fraction with respect to the total number of _Dendro_ cores, \(N_{\rm 3D,over}/N_{\rm 3D,tot}\). The error bars are the propagation of the Poisson errors of the individual quantities.
The plot with blue-square symbols in Figure 10 shows the ratio of the number of synthetic ALMA cores with a real 3D counterpart to the total number of synthetic ALMA cores, within each mass interval. Overall, 70% of the synthetic ALMA cores have a 3D counterpart. Above 2 \(M_{\odot}\), 88% have a 3D counterpart or, conversely, only 12% may be interpreted as projection artifacts. Towards lower masses, the fraction decreases monotonically (except for the smallest mass bin, which is not statistically significant), down to approximately 35% at 0.1 \(M_{\odot}\).
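The overlap fractions in Figures 7 and 10 and their error bars follow from simple Poisson propagation. The sketch below (our own binning and toy data) computes the fraction per mass bin and its error as stated in the figure captions, namely by propagating the Poisson errors of the numerator and denominator as if they were independent.

```python
import numpy as np

def overlap_fraction(masses_all, masses_over, edges):
    """Fraction of cores with an overlap, per mass bin, with propagated Poisson errors.

    For f = N_over / N_tot, the quoted error is f * sqrt(1/N_over + 1/N_tot),
    i.e. the propagation of the Poisson errors of the two counts.
    """
    n_tot, _ = np.histogram(masses_all, bins=edges)
    n_over, _ = np.histogram(masses_over, bins=edges)
    with np.errstate(divide="ignore", invalid="ignore"):
        f = n_over / n_tot
        err = f * np.sqrt(1.0 / np.maximum(n_over, 1) + 1.0 / np.maximum(n_tot, 1))
    return f, err

# Toy example: 500 cores, 200 of which have a 3D counterpart.
rng = np.random.default_rng(3)
m_all = 10 ** rng.uniform(-1, 1.5, 500)
m_over = rng.choice(m_all, size=200, replace=False)
edges = 10 ** np.linspace(-1, 1.5, 6)
f, err = overlap_fraction(m_all, m_over, edges)
print(np.round(f, 2), np.round(err, 2))
```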
The ratio between the 3D CMF of overlapping cores and the 3D CMF of all cores (shaded and unshaded red histograms in Figure 9) is shown by the red-star symbols in Figure 10. This ratio corresponds to the completeness fraction of the observational CMF. The fraction decreases monotonically with decreasing mass, as in Figure 7, from 40% at the largest masses, to 15% at 1 \(M_{\odot}\), to 5% or less below approximately 0.1 \(M_{\odot}\), near the empirical sensitivity limit of the synthetic ALMA maps. Evidently, the synthetic ALMA observations yield a highly incomplete sample of the real 3D cores. As intuitively expected, the completeness fraction decreases with decreasing mass at all masses. However, this does not result in a 2D CMF that is shallower than the 3D CMF at large masses, as one might naively expect, because of the lack of correlation between 2D and 3D masses (Figure 8). In other words, when comparing the blue and red unshaded histograms in Figure 9, the _getsf_ cores corresponding to the 3D _Dendro_ cores are not in the same mass bins in the two CMFs. This also means that the similarity of the slopes of the two CMFs above approximately \(1.5\,M_{\odot}\) is essentially a coincidence: projection effects, 2D versus 3D core definitions, interferometric artifacts and radiative transfer effects (see § 7) transform the steep CMF of the 2D cores of the column density maps into a much shallower CMF of synthetic ALMA cores. Nevertheless, if the similarity in the slopes were confirmed as a general result, in practice one could use the observed slope as a rough estimate of the true slope of the CMF of 3D cores.

Figure 8: Comparison of the mass of cores selected by _getsf_ from the synthetic ALMA maps, \(M_{\rm c,2D}\), and the mass of the corresponding candidate 3D cores along the line of sight, \(M_{\rm cand,3D}\) (upper panel), or the mass of the overlapping cores selected independently in the 3D data cube, \(M_{\rm c,3D}\) (lower panel). The dashed lines correspond to equal masses, the thick solid lines to the least-squares fits. In the upper panel, all 791 _getsf_ cores are shown, while the lower panel is limited to the 557 _getsf_ cores that overlap with a corresponding 3D _Dendro_ core.

Figure 9: Mass distributions of the 3D-selected cores (red histogram), and of the _getsf_ cores selected from the synthetic ALMA maps (blue histogram). The unshaded histograms are for the full core samples, while the shaded ones are for the subsamples of 557 3D cores (red) and _getsf_ ALMA cores (blue) that overlap with each other. Power-law fits in the approximate mass interval 1.5-10 \(M_{\odot}\) are shown for the CMFs of the full samples.

Figure 10: Fraction of cores with overlap between the 3D selection and the _getsf_ selection in the synthetic ALMA maps, as a function of core mass. The blue squares are the fraction with respect to the total number of _getsf_ cores in each mass interval, \(N_{\rm 2D,over}/N_{\rm 2D,tot}\), and the red stars the fraction with respect to the total number of 3D cores, \(N_{\rm 3D,over}/N_{\rm 3D,tot}\). The error bars are the propagation of the Poisson errors of the individual quantities.
The continuous variation with mass of the completeness fraction of our synthetic core sample runs contrary to estimates for observational surveys of compact sources, where completeness fractions are usually of order unity above a minimum mass (e.g. Andre et al., 2010; Motte et al., 2018; Pouteau et al., 2022), while it is consistent with what is found in the case of more extended cores (e.g. Cao et al., 2021). But we stress again that a simple correction of the CMF slope based on the mass-dependence of the completeness fraction is inadequate due to the lack of statistical correlation between 2D and 3D masses.
## 7 Discussion
### Interferometric Artifacts
We have shown in § 6.2.3 that the CMF of the synthetic ALMA cores has an excess of high-mass cores and a shallower power-law tail relative to the CMF extracted from the column-density maps (blue unshaded histogram in Figure 9 versus blue unshaded histogram in Figure 6). The shift in the turnover mass depends on the noise level, which sets the mass sensitivity: the higher the noise, the higher the turnover mass. But the high-mass tail of the CMF is also shifted to the right. Two straightforward explanations for this excess of massive cores are possible: i) the observational noise causes an artificial merging of nearby cores extracted by _getsf_, ii) the shift is due to an incorrect choice of the temperature (assumed to be too low). By producing and analysing ALMA simulated images of the same maps without thermal noise, and single-dish simulated images, also without noise, we can show that neither of these simple explanations is correct.

Figure 11: CMFs of _getsf_ cores selected from the synthetic single-dish maps without noise (black shaded histogram), the synthetic ALMA maps without noise (red histogram) and the synthetic ALMA maps including thermal noise (blue histogram), all based on the same dust model of Weingartner & Draine (2001) with \(\kappa_{\rm 1.3\,mm}=0.0037\,\rm cm^{2}\,g^{-1}\) and the same single temperature, \(T_{\rm dust}=13.0\,\rm K\). Each CMF is based on the full set of 12 synthetic maps, all of them with the same beam size of \(0.60\,\arcsec\) at \(5.5\,\rm kpc\).

Figure 12: Examples of _getsf_ core selection on a single-dish image without noise (upper panel) and the corresponding synthetic ALMA map with the same beam size and also without noise (lower panel), corresponding to the panel \(3,z\) of Figure 1. The grey scale is proportional to the square root of the surface brightness in both panels, with minimum and maximum values set to \(0.05\,\rm\,g\,cm^{-2}\) (black colour) and \(1.0\,\rm\,g\,cm^{-2}\) (white colour) in the upper panel and \(5.0\,\rm\,MJy\,sr^{-1}\) (black colour) and \(100.0\,\rm\,MJy\,sr^{-1}\) (white colour) in the lower panel. The ellipses correspond to the FWHM of the cores extracted by _getsf_.
The red histogram in Figure 11 shows the CMF of _getsf_ cores extracted from synthetic ALMA maps without thermal noise, while the blue histogram is the same synthetic ALMA CMF as in Figure 9. The comparison of the two CMFs shows that the effect of thermal noise is the expected one: the sensitivity of the images is reduced and so the CMF turnover shifts to larger masses, meaning that lower-mass cores are lost from the sample. But that loss does not cause an artificial merging into larger-mass cores, as the high-mass tails of the two CMFs are indistinguishable from each other. Essentially the same high-mass cores are selected (or at least the same CMF of high-mass cores), so thermal noise is not the main cause of the shift to larger masses of the high-mass tail of the CMF of synthetic ALMA cores, relative to that of the cores extracted from the column-density maps.
We also generate a set of single-dish synthetic images at 1.3 mm with the same dust model (Weingartner and Draine, 2001) and the same beam size (0.60 '' at 5.5 kpc) as the ALMA synthetic observations. No thermal noise is added to these single-dish images, so they differ from the ALMA images without noise only in the absence of interferometric artifacts. The CMF extracted by _getsf_ from these single-dish images is shown by the shaded black histogram in Figure 11. This CMF has a turnover comparable to that of the CMF from the column-density maps and its high-mass tail is also shifted to lower masses relative to the ALMA CMFs. Because the mass of the _getsf_ cores from the single-dish images is derived from Equation 1 with the same \(\kappa_{1.3\,\rm{mm}}\) and \(T_{\rm dust}\) values as for the synthetic ALMA cores, the shift relative to the ALMA CMFs cannot be due to the chosen temperature value. Thus, we can conclude that the high-mass tail of the synthetic ALMA CMF has an excess of massive cores relative to the CMF from the column-density maps primarily due to interferometric artifacts (e.g. an increase of the luminosity contrast between the cores and their background). However, not even the high-mass tail of the CMF from the single-dish images exhibits the steep power-law shape of the CMF from the column-density maps, so radiative transfer effects (e.g. spatial variations of the dust temperature) must also play a role in shaping the synthetic ALMA CMF.
Figure 12 shows a comparison of _getsf_ core selections on a single-dish image without noise (upper panel) and the corresponding synthetic ALMA map, also without thermal noise, with the same beam size (lower panel). This is one of the 12 pairs of maps that are used to generate the CMFs shown by the black and red histograms in Figure 11. Many cores in common between the two maps can be recognized, usually with comparable sizes. Yet we know from the black and red histograms in Figure 11 that the cores in the lower panel are on average approximately a factor of two more massive than the cores in the upper panel. Although a detailed explanation requires a careful examination of the various extraction steps in _getsf_, it is clear that the background subtraction is affected by the interferometric artifacts creating some artificial intensity contrasts in the ALMA maps, resulting in a slightly less aggressive subtraction. Because the background subtraction is often of order 90% of the column density, a small difference in background subtraction from 90% to 80%, for example, would cause an increase in the extracted core mass by a factor of two. This effect should be kept in mind, and possibly quantified, when assessing mass uncertainties of cores selected in interferometric surveys.
### The CMF in the W43 Region
Pouteau et al. (2022) find that the high-mass tail of the CMF in the W43-MM2 and MM3 regions has a shallower slope than Salpeter's (as in the case of W43-MM1, previously studied by Motte et al., 2018). This conclusion is based on plots of the cumulative CMF, which has the advantage of being independent of binning, but may also mask important features of the actual CMF. In the following comparison, we refer to the CMF, rather than its cumulative version. The comparison is independent of the uncertainty of the estimated core temperature, because all the core masses are derived from Equation 1 using the same dust temperature value of \(T_{\rm dust}=13.0\) K. Thus, we effectively compare the distributions of integrated 1.3 mm fluxes, rather than masses, even if the fluxes are expressed as masses based on the Rayleigh-Jeans approximation of Equation 1. The mass conversion assumes \(\kappa_{1.3\,\rm{mm}}=0.0037\,\rm{cm}^{2}\,g^{-1}\) for our synthetic cores (consistent with our dust model and previous plots), and \(\kappa_{1.3\,\rm{mm}}=0.01\,\rm{cm}^{2}\,g^{-1}\) for the W43 cores, as assumed in Pouteau et al. (2022). This choice of \(T_{\rm dust}\) and \(\kappa_{1.3\,\rm{mm}}\) results in a turnover mass for the CMF of W43 similar to that of our synthetic cores. Using the median temperature estimated by Pouteau et al. (2022), \(T_{\rm dust}=23\) K, the CMF of W43 would shift to lower masses by a factor of 0.6, but both mass scalings probably underestimate the true masses of the W43 cores by a factor of 2 to 4, depending on the dust models (see § 7.4).
The CMFs of W43 (green histogram) and of our synthetic ALMA cores (blue histogram), computed with a single temperature as mentioned above, are shown in Figure 13. In the limited range of masses between approximately 1.5 and 9 \(M_{\odot}\), where the CMF of W43 may be approximated by a single power law, its slope is \(-1.4\pm 0.3\), consistent with that of our simulated regions. Both CMFs have a turnover at around 1.5 \(M_{\odot}\), and a similar drop towards lower masses, but the value of the turnover for the CMF of W43 is entirely dependent on the choice of \(\kappa_{1.3\,\rm{mm}}\) and \(T_{\rm dust}\), and would be shifted to larger masses if a combination of values consistent with either of our dust models were used (see § 7.4). The CMF of W43 shows a clear excess above approximately 9 \(M_{\odot}\), where 25 cores are found while only approximately 3 would be expected if the slope of \(-1.4\) were to continue above 9 \(M_{\odot}\).
Figure 13: CMFs of _getsf_ cores selected from the synthetic ALMA maps (blue histogram) and of the W43-MM2 and MM3 cores from Pouteau et al. (2022) (green histogram), derived from the cores' integrated fluxes and assuming a constant dust temperature (see text for details). Power-law fits in the approximate mass interval 2-10 \(M_{\odot}\) are shown for both CMFs.
### Hypercritical Prestellar Cores, Massive Protostellar Cores, or Projection Artifacts?
We can only speculate about the origin of the excess of cores above 9 \(M_{\odot}\) in the CMF of W43. To illustrate the special nature of these relatively massive cores, Figure 14 shows the mass-size relation of our synthetic ALMA cores with blue empty circles, and of the cores in W43 with red filled circles. The W43 cores above 9 \(M_{\odot}\) are marked by larger red circles. The size, \(D_{\rm c,2D}\), is the beam-deconvolved FWHM, and the mass, \(M_{\rm c,2D}\), is derived in the same way as for the CMFs of Figure 13. There is a significant overlap in the mass-size relation between our synthetic cores and those in W43, although the sizes of the synthetic cores are on average a factor of two larger. This is partly due to the fact that our beam size is slightly larger, 0.60 '' compared to 0.47 '' in W43 (see the dashed and dotted vertical lines in Figure 14), partly a consequence of the limited resolution of the simulation (a bit too close to the beam size). In addition, a significant fraction of the W43 cores have FWHM size close to the beam size, so many of them are further reduced in size after beam deconvolution. As a result, 28% of the W43 cores have deconvolved sizes smaller than the beam size, while only one of the synthetic cores has a deconvolved size smaller than the beam size.
The diagonal dashed-dotted lines represent constant values of the ratio \(M_{\rm c,2D}/M_{\rm BE,c}\), where \(M_{\rm BE,c}\) is the critical Bonnor-Ebert mass (Bonnor, 1956; Ebert, 1957) for a temperature of 13.0 K. Considering only the cores with masses below 9 \(M_{\odot}\), the median value of \(M_{\rm c,2D}/M_{\rm BE,c}\) is 2.1 for the synthetic cores, and 5.1 for the W43 cores, due to the smaller sizes of the latter. The only four synthetic cores above 9 \(M_{\odot}\) have a median \(M_{\rm c,2D}/M_{\rm BE,c}\) ratio of 20.5 (and a maximum value of 47.0), while the 25 cores in W43 have a median value of 118.6, with a maximum value of 1,747.\({}^{3}\) Thus, the W43 cores responsible for the CMF excess at high masses are all _hypercritical_, as their \(M_{\rm c,2D}/M_{\rm BE,c}\) ratios are on average 50 times larger than those of the prestellar cores found in the simulation.
Footnote 3: If pairs of values of \(\kappa_{\rm 1.3\,mm}\) and \(T_{\rm dust}\) consistent with our dust models were used (see § 7.4), the \(M_{\rm c,2D}/M_{\rm BE,c}\) ratios of the W43 cores would be even larger, by approximately a factor of three.
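The \(M_{\rm c,2D}/M_{\rm BE,c}\) ratios can be estimated from the standard expression for the critical Bonnor-Ebert mass of an isothermal sphere confined within radius \(R\), \(M_{\rm BE,c}\approx 2.4\,R\,c_{s}^{2}/G\), with \(c_{s}\) the isothermal sound speed. The sketch below uses this form; the mean molecular weight of 2.33 and the example radius are our own assumptions, chosen only to illustrate the order of magnitude of the ratios quoted above.

```python
G = 6.674e-8        # cm^3 g^-1 s^-2
k_B = 1.381e-16     # erg K^-1
m_H = 1.673e-24     # g
M_sun = 1.989e33    # g
pc = 3.086e18       # cm

def bonnor_ebert_critical_mass(radius_pc, T=13.0, mu=2.33):
    """Critical Bonnor-Ebert mass [M_sun] for an isothermal sphere of radius R at temperature T.

    Uses M_BE,c ~ 2.4 R c_s^2 / G, with c_s the isothermal sound speed.
    """
    c_s2 = k_B * T / (mu * m_H)               # isothermal sound speed squared [cm^2 s^-2]
    return 2.4 * (radius_pc * pc) * c_s2 / G / M_sun

# Example: a core of radius 0.02 pc at 13 K.
m_be = bonnor_ebert_critical_mass(0.02)
print(f"M_BE,c(0.02 pc, 13 K) = {m_be:.2f} M_sun")        # ~0.5 M_sun
for m_core in (2.0, 10.0):
    print(f"M_c/M_BE,c for a {m_core} M_sun core at that radius:", round(m_core / m_be, 1))
```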
It seems unlikely that such hypercritical cores are prestellar in nature, as in MHD simulations with realistic magnetic-field strength gravitational collapse initiates as soon as cores grow to be a few times their critical Bonnor-Ebert mass (e.g. Figure 11 in Padoan et al. 2020). Their observed overabundance makes these cores even more unlikely to be prestellar, as the median density of the cores above \(9\,M_{\odot}\) in W43 is 43.0 times larger than that of the cores below \(9\,M_{\odot}\), so their free-fall time is 6.6 times shorter, making them very short lived.\({}^{4}\) Finally, if the core temperatures estimated in Pouteau et al. (2022) were correct for at least the most massive cores, they would imply a protostellar nature, as dust temperatures in excess of 15 K are not expected at such high densities without internal heating sources with reasonable dust models (see § 7.4 and references therein).

Figure 14: Mass-size relation for the synthetic ALMA cores (blue empty circles) from this work, and for the cores in the W43-MM2,MM3 region (red filled circles) from Pouteau et al. (2022). The size, \(D_{\rm c,2D}\), is the beam-deconvolved FWHM, and the mass, \(M_{\rm c,2D}\), is derived from the 1.3 mm flux assuming \(T_{\rm dust}=13.0\) K (see the main text for details). Larger red circles mark the W43 cores more massive than 9 \(M_{\odot}\). The diagonal dashed-dotted lines correspond to constant values of \(M_{\rm c,2D}/M_{\rm BE,c}\), where \(M_{\rm BE,c}\) is the critical Bonnor-Ebert mass for a temperature of 13.0 K. The vertical lines mark the beam sizes for the W43 observations (dashed line) and for the synthetic observations (dotted line).

Figure 15: Comparison of 2D _getsf_ synthetic ALMA core positions (upper panel) with 3D _Dendro_ core positions (lower panel) plotted over the same column-density image. The _getsf_ cores in the upper panel are the same as those in the lower panel of Figure 4. The lower panel shows the position and the size of the _Dendro_ cores with masses \(>0.5\,M_{\odot}\). Lower-mass _Dendro_ cores are omitted for clarity.
Footnote 4: The median value of \(n_{\rm H_{2}}\) of the W43 cores above \(9\,M_{\odot}\) is approximately \(8\times 10^{7}\,{\rm cm^{-3}}\) (for the core mass contained within the FWHM size), which gives a free-fall time of only 3.4 kyr.
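The free-fall time quoted in the footnote follows from \(t_{\rm ff}=\sqrt{3\pi/(32\,G\rho)}\). A quick check, assuming a mean molecular mass per H\({}_{2}\) molecule of \(2.8\,m_{\rm H}\) for the conversion from \(n_{\rm H_{2}}\) to mass density (our assumption):

```python
import numpy as np

G = 6.674e-8      # cm^3 g^-1 s^-2
m_H = 1.673e-24   # g
yr = 3.156e7      # s

def free_fall_time_kyr(n_H2, mu_H2=2.8):
    """Free-fall time [kyr] for a number density n_H2 [cm^-3], with rho = mu_H2 * m_H * n_H2."""
    rho = mu_H2 * m_H * n_H2
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / yr / 1e3

print(round(free_fall_time_kyr(8e7), 1), "kyr")   # ~3.4 kyr, consistent with the footnote
```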
Thus, we speculate that, rather than being hypercritical prestellar cores, these objects are more likely to be protostellar cores, if not observational artifacts. In that case, and given their small size, they may have already acquired a significant amount of rotation around the central star (depending on the magnetic-field strength and configuration), so they may be better described as pseudodisks. Even if this interpretation were correct, such compact protostellar cores could not be present in our simulation, because the accretion radius of the sink particles is four times the minimum cell size, or 0.03 pc, so we cannot capture protostellar cores below a size of approximately 0.06 pc (this also explains the low temperature of our cores, as they are essentially all prestellar). Future observations of emission lines at comparable angular resolution may confirm the protostellar nature of the massive cores in W43 through the presence of outflows, if not by direct evidence of infall or of supersonic rotation of the cores. Simulations at higher resolution will also be needed.
It is also possible that a majority of the massive cores in W43 are observational artifacts due to projection effects and limited resolution and sensitivity. Projection effects must be even stronger in W43 than in our simulation, due to the higher maximum column density and the greater observational depth (several kpc) through the Galactic plane in the inner Galaxy. Figure 15 illustrates the difficulty of a 2D core extraction (upper panel) in a field with a large number of 3D cores (lower panel). The actual number of _Dendro_ cores is more than double what is shown in the lower panel, as we only show the _Dendro_ cores with masses \(>0.5\,M_{\odot}\) for clarity. One can see that, even if our simulated region does not reach the column density where the most massive W43 cores are found, there is a significant chance of overlap of 3D cores in the densest regions, so a single _getsf_ core may have contributions from several _Dendro_ cores.
Spatial resolution is also an important factor for compact-source extractions. When prestellar cores are selected as compact sources, they are not spatially resolved by definition. The largest ones, typically containing two to four beam sizes, have a measurable size, but their internal structure (e.g. the possibility of internal fragmentation) is unknown. Thus, cores extracted as compact sources should not be interpreted as individual objects in the absence of a strong physical justification for that assumption. The fact that cores extracted as compact sources are unresolved, in the sense that their sizes span a very limited range of values around the observational beam size, is well documented in the observational and numerical literature. Compact sources in the Hi-GAL Survey of the Galactic plane (Elia et al. 2021), extracted with the CuTEx code (Molinari et al. 2011), have sizes mostly below 2.5 times the (Herschel 250 \(\mu\)m) beam size (e.g. Molinari et al. 2016). In the W43 regions MM2 and MM3, Pouteau et al. (2022) find only five _getsf_ sources with sizes larger than four times the beam size, which they exclude from their analysis. Almost 90% of their sources have a size below 3.0 times the beam size, similarly to the Hi-GAL sources. Based on synthetic Herschel observations and the same data-analysis pipeline and source-extraction code as in the Hi-GAL survey, Lu et al. (2022) have shown that the Hi-GAL clumps are unresolved and their mass may not reflect at all that of the underlying cores (see their Section 7.3 and Figure 15). Using both Herschel and ALMA synthetic observations, Padoan et al. (2020) have shown that core masses may be highly overestimated, for example by a factor of 10 in the case of Herschel for regions at 1 kpc distance (see their section 9.2 and Figures therein). They have also found that, even when the typical core masses are not overestimated on average, at the highest ALMA resolution, the estimated masses of individual cores show no correlation with the true core masses (see their Figure 27), in agreement with the results of this work (considering that interferometric artifacts were not included in Padoan et al. 2020). A more straightforward study of _getsf_ core extractions in column-density maps from simulations (with no radiative transfer or interferometric effects, analogous to § 6.1.1 of this work) has demonstrated that the sizes of compact sources depend on the beam size, hence the sources are unresolved, should not be interpreted as single objects, and beam deconvolution to extrapolate their size is not justified (Louvet et al. 2021).
### Dust Models and Core Temperature
All previous plots in this work are based on the Weingartner and Draine (2001)\(R_{\rm V}=5.5\) dust model (Case B), with \(\kappa_{\rm 1.3\,mm}=0.0037\) cm\({}^{2}\) g\({}^{-1}\). However, we have carried out the full analysis also with the dust model of Ossenkopf and Henning (1994), corresponding to dust that has evolved for \(10^{5}\) years at a density of \(10^{6}\) cm\({}^{-3}\) and has acquired thin ice mantles. This model has a significantly larger dust opacity, \(\kappa_{\rm 1.3\,mm}=0.0083\) cm\({}^{2}\) g\({}^{-1}\). Although the conclusions
Figure 16: Distributions of the mass-weighted mean dust temperature, \(T_{\rm dust}\), of the candidate 3D cores (defined in § 6.1.1) for the WD dust model (blue unshaded histogram) and the OH dust model (red unshaded histogram). The black solid and dashed-dotted lines are the least-square fits reported in the legend. The shaded histograms are the distributions of the color temperature, \(T_{\rm c}\), of the corresponding _getsf_ cores (see details in the main text) for the WD model (blue histogram) and the OH model (red histogram).
of this work are not affected by the choice of the dust model, here we provide a brief comparison between the two cases, as well as a discussion of core temperatures. In the following, we refer to the two dust models as the WD model and the OH model respectively.
Figure 16 shows the distribution of the dust temperature of the candidate 3D cores (see § 6.1.1), computed as the mass-weighted mean temperature of the dust inside the spherical volume of each core. The unshaded blue histogram is for cores extracted from the synthetic ALMA maps based on the WD model, while the unshaded red histogram is for cores from the maps computed with the OH model. As expected from the larger dust opacity, the OH model results in lower dust temperatures at the high densities found in the cores. The median dust-temperature values, used to derive the core masses (see § 5.1), are 13.0 K for the WD model and 7.5 K for the OH model. Both temperature distributions are very well approximated by exponential functions for dust temperatures above the peak (corresponding roughly to the median value), except for a few high-temperature outliers.
We also compute the color temperature, \(T_{\rm c}\), of all the synthetic ALMA cores, using surface brightness maps at 100, 160, 250, 350, 500, 850, and 1300 \(\mu\)m. The wavelengths are similar to those used in Pouteau et al. (2022) (apart from the exclusion of the 70 \(\mu\)m band and the 870 \(\mu\)m wavelength being replaced with 850 \(\mu\)m). In the modified blackbody fits, we assume a constant dust opacity spectral index of \(\beta\)=1.8 and the same relative uncertainty at all frequencies. The final color temperature estimates are averages over the 2D footprint of each core. This method is less sophisticated than that used in Pouteau et al. (2022), which is based on the PPMAP method of Marsh et al. (2015), with some other corrections for the most massive cores and for cores with evidence of outflows. The PPMAP method is expected to result in a more accurate estimate for the mass-averaged dust temperatures than a straightforward colour temperature. However, our color temperature is calculated on the precise footprint of each core, with all the bands computed at the full 0.60 '' resolution of our synthetic ALMA maps, while the PPMAP method yields a rather smooth temperature map at a nominal resolution of 2.5 ''. The distributions of \(T_{\rm c}\) for all the synthetic ALMA cores are plotted in Figure 16, where the shaded blue histogram is for the WD model, and the shaded red histogram for the OH model. The color temperatures are clearly shifted to larger values than the dust temperatures of the candidate 3D cores, with median values of 20.4 K and 16.0 K for the WD and OH models respectively.
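For concreteness, the following is a minimal sketch of such a single-temperature modified-blackbody fit with fixed \(\beta=1.8\) and the same relative uncertainty in all bands. It is our own illustration, not the ALMA-IMF or PPMAP pipeline: the fluxes are fabricated from a 15 K greybody and the normalization parameter `logA` is an arbitrary placeholder.

```python
# Sketch of a color-temperature estimate for one core: a modified blackbody with
# fixed beta = 1.8 fitted to band-integrated fluxes (illustrative values only).
import numpy as np
from scipy.optimize import curve_fit

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10            # cgs units

def planck_nu(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def modified_bb(nu, logA, T, beta=1.8):
    # Optically thin greybody; A absorbs the opacity normalization, mass and distance
    return 10.0**logA * (nu / 1.0e12)**beta * planck_nu(nu, T)

wavelengths_um = np.array([100., 160., 250., 350., 500., 850., 1300.])
nu = c / (wavelengths_um * 1.0e-4)                     # Hz

S = modified_bb(nu, -3.0, 15.0)                        # fake fluxes from a 15 K greybody

popt, _ = curve_fit(modified_bb, nu, S, p0=[-2.0, 20.0],
                    sigma=0.1 * S, absolute_sigma=False)
print("fitted color temperature: %.1f K" % popt[1])    # recovers ~15 K
```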
Because the color temperature can be estimated observationally, it is useful to derive an empirical relation between \(T_{\rm dust}\) and \(T_{\rm c}\). Figure 17 shows a scatter plot of \(T_{\rm dust}\) versus \(T_{\rm c}\) for our simulated cores (candidate 3D cores and corresponding _getsf_ cores, respectively), based on the WD dust model (empty blue circles) and on the OH model (filled red circles). The vertical scatter is quite large, particularly in the case of the OH model, but it is possible to fit linear relations in the log-log space, shown by the solid and dash-dotted lines in Figure 17. For the WD model the relation is almost exactly linear, with the dust temperature typically 70% of the color temperature, \(T_{\rm dust,WD}=7.2\,{\rm K}\times(T_{\rm c,WD}/10\,{\rm K})^{0.91}\). For the OH model the fit is significantly shallower than linear, with the dust temperature typically 50% of the color temperature, \(T_{\rm dust,OH}=5.6\,{\rm K}\times(T_{\rm c,OH}/10\,{\rm K})^{0.72}\). These empirical relations hold for color temperature maps at an angular resolution comparable to that of the ALMA maps. This is not the case when combining for example the ALMA maps with Herschel images. In that case, since the color temperature at larger scales has a greater contribution from the warmer background, the typical ratio between \(T_{\rm c}\) and \(T_{\rm dust}\) will be larger than reported here. Figure 18 shows the case where \(T_{\rm c}\) is computed from images at an angular resolution of 18 '', corresponding to the 250 \(\mu\)m band of Herschel. As expected, the total range of \(T_{\rm c}\) is decreased compared to the higher-resolution case in Figure 17. However, the mean relation between \(T_{\rm c}\) and \(T_{\rm dust}\) expressed by the linear fit barely changes in the case of the WD dust model,
\[T_{\rm dust,WD}=6.8\,{\rm K}\times(T_{\rm c,WD}/10\,{\rm K})^{0.91}. \tag{2}\]
The change is instead significant for the OH model,
\[T_{\rm dust,OH}=5.5\,{\rm K}\times(T_{\rm c,OH}/10\,{\rm K})^{0.65}, \tag{3}\]
due to the larger temperature variations as a function of density caused by the higher opacity relative to the WD model, which is further discussed below. Equations 2 and 3 may be used to convert estimated color temperature maps from Herschel observations into dust temperatures of cores extracted from ALMA observations, though the conversion should be derived again for different ALMA angular resolution or different distance of the observed region.
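A small convenience wrapper of ours around Equations 2 and 3 (valid for color temperatures estimated at Herschel-like resolution, as stated above) might look as follows.

```python
# Convert a color temperature (Herschel-like resolution) into the dust temperature
# of an ALMA-extracted core, using Equations 2 (WD model) and 3 (OH model).
def t_dust_from_t_color(T_c, model="WD"):
    if model == "WD":
        return 6.8 * (T_c / 10.0) ** 0.91   # Equation 2
    if model == "OH":
        return 5.5 * (T_c / 10.0) ** 0.65   # Equation 3
    raise ValueError("model must be 'WD' or 'OH'")

for T_c in (15.0, 20.0, 25.0):
    print(T_c, round(t_dust_from_t_color(T_c, "WD"), 1),
          round(t_dust_from_t_color(T_c, "OH"), 1))
```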
Figure 17: Dust temperature (of the candidate 3D cores) versus color temperature (of the corresponding _getsf_ cores) from the WD dust model (blue empty circles) and the OH dust model (filled red circles). The black solid and dashed-dotted lines are the linear fits reported in the legend, and the thin dashed line is the one-to-one relation.
Figure 18: The same as Figure 17, but with \(T_{\rm c}\) computed at an angular resolution of 18′′.
Figure 19 shows the CMFs of the synthetic ALMA cores extracted from the maps computed with the WD model (blue unshaded histogram) and with the OH model (red unshaded histogram). The core masses are derived from the integrated _getsf_ 1.3 mm fluxes, using Equation 1, the \(\kappa_{\rm 1.3\,mm}\) value of each model, and the median dust temperature of the candidate 3D cores as explained in § 5.1, \(T_{\rm dust}=13.0\) K and 7.5 K for the WD and OH models respectively. The CMF based on the OH model is clearly shifted towards lower masses, by nearly a factor of two, relative to the CMF from the WD model. This shift can be understood as the effect of the higher opacity of the OH model, resulting in higher temperatures in the diffuse medium and lower in dense cores in comparison to the WD model. The stronger dependence of the dust temperature on density in the OH model causes a stronger background subtraction, hence a lower core mass estimate, than in the WD model. This is related to the rim-brightening effects discussed in Men'shchikov (2016, see Section 5.2 and Appendix B). Other sources of uncertainties in the conversion of integrated fluxes to masses, due to spatial variations in temperature and dust opacity, have been discussed in the literature (e.g. Malinen et al., 2011; Roy et al., 2014; Pagani et al., 2015; Men'shchikov, 2016).
The radiative transfer calculations result in self-consistent dust temperature and opacity values, depending on the adopted dust model. Given that both are quite difficult to establish observationally, one may interpret the observations based on the temperature and opacity values from the simulation. This is shown in Figure 19, where the shaded histograms are the CMFs of the MM2 and MM3 regions of W43 derived from the integrated 1.3 mm fluxes in Pouteau et al. (2022), and the core masses are computed with the same pairs of values of \(\kappa_{\rm 1.3\,mm}\) and \(T_{\rm dust}\) of the two dust models as in the case of the synthetic CMFs. The blue shaded histogram is based on the WD model, and the red shaded histogram on the OH model. The OH model causes a similar shift to lower masses relative to the WD model as in the case of the synthetic CMFs, but in this case the shift has a trivial origin, because the two CMFs are based on the same core extraction (the same integrated 1.3 mm fluxes): the ratio of the two opacities is larger than the ratio of the two median temperatures, so the larger-opacity OH model results in lower masses.
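To make the trivial part of this dependence explicit, the sketch below converts a single integrated 1.3 mm flux into a mass with the two \((\kappa_{\rm 1.3\,mm},T_{\rm dust})\) pairs quoted for the W43 histograms in Figure 19. It assumes Equation 1 takes the standard optically-thin form \(M=S_{\nu}d^{2}/(\kappa_{\nu}B_{\nu}(T_{\rm dust}))\); the 10 mJy flux and the 5.5 kpc distance are illustrative values of ours, not numbers from the text.

```python
# Mass from an integrated 1.3 mm flux under the optically-thin assumption,
# with the two (kappa, T_dust) pairs quoted for the W43 CMFs in Figure 19.
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10            # cgs units
pc, M_sun, Jy = 3.086e18, 1.989e33, 1.0e-23

def planck_nu(nu, T):
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def core_mass(S_Jy, d_pc, kappa, T_dust, lam_cm=0.13):
    nu = c / lam_cm
    return S_Jy * Jy * (d_pc * pc)**2 / (kappa * planck_nu(nu, T_dust)) / M_sun

S_Jy, d_pc = 0.010, 5500.0                             # illustrative flux and distance
m_WD = core_mass(S_Jy, d_pc, kappa=0.0037, T_dust=14.4)
m_OH = core_mass(S_Jy, d_pc, kappa=0.0083, T_dust=9.8)
print(m_WD, m_OH, m_OH / m_WD)                         # same flux: the OH pair gives lower masses
```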
Besides the trivial dependence of the W43 CMF on the dust models, the more important result is that the W43 CMF is significantly shifted to larger masses relative to both the synthetic CMF and the CMF in Pouteau et al. (2022), where typical dust temperatures in excess of 20 K are derived, while assuming a high dust opacity, \(\kappa_{\rm 1.3\,mm}=0.01\,{\rm cm}^{2}\,{\rm g}^{-1}\). Based on our simulation and radiative-transfer calculations, and assuming that the cores are mostly prestellar as in our case, hence externally heated, such an opacity value should yield rather low temperatures and hence larger masses than in Pouteau et al. (2022). Temperatures below 10 K, and even down to 6 K, have been observed in low-mass prestellar cores, based on spectral line observations of regions where the gas and dust temperatures should be closely coupled (Crapsi et al., 2007; Harju et al., 2008). Similarly low temperatures have been found in radiative transfer models, where (depending on the dust properties) the dust temperatures fall below 10 K already in regions that are shielded by less than \(A_{\rm V}=10\) mag of extinction (Zucconi et al., 2001; Stamatellos et al., 2007; Chacon-Tanarro et al., 2019). Alternatively, if most of the W43 cores were protostellar, internal heating from the protostars may cause higher dust temperatures closer to those estimated in Pouteau et al. (2022), but then the implications of the derived CMF for the origin of the stellar IMF would no longer be relevant.
## 8 Conclusions
We have identified and studied 12 regions forming high-mass stars in our 250 pc star-formation simulation, driven self-consistently by SNe for over 40 Myr. We have generated synthetic ALMA 1.3 mm maps of these regions, by solving the radiative transfer with thousands of point sources, including all stars with mass \(>2\,M_{\odot}\) formed self-consistently in the simulation. The synthetic ALMA images have been processed by following the same analysis pipeline and source-extraction code as in the ALMA-IMF Large Program. The cores extracted from the synthetic images with the _getsf_ code have been compared with a sample of cores extracted independently with the _Dendro_ code from the \(2\,{\rm pc}\times 2\,{\rm pc}\times 250\,{\rm pc}\) volumes used to generate the maps. The comparison shows that the CMF from the synthetic maps is incomplete at all masses relative to the CMF from the corresponding 3D volumes, and the masses of the cores from the synthetic maps are only very weakly correlated to those of the corresponding 3D cores.
By first analyzing column-density maps, we have studied the role of projection effects while excluding other effects from radiative transfer, interferometer artifacts and thermal noise. The analysis shows that a significant fraction of the 2D _getsf_ cores are projection artifacts, an even larger fraction of 3D _Dendro_ cores are not retrieved in 2D, and the ones that are retrieved in 2D have estimated masses with large uncertainties. The following is a list of our main conclusions with respect to the core detection from column-density maps at a resolution of \(\sim 0.01\) pc, comparable to that of the ALMA-IMF project:
1. The mass of the 2D _getsf_ cores is strongly correlated with the mass of the main density structure in their line of sight, showing that the background subtraction of _getsf_ works well in column-density maps despite the complex density field, although the extracted cores may not be real cores in 3D.
Figure 19: CMFs of the synthetic ALMA cores (unshaded histograms) from radiative transfer calculations with the WD dust model (blue histogram) and the OH dust model (red histogram). The shaded histograms are the CMFs of the W43 cores with masses derived from the integrated 1.3 mm fluxes in Pouteau et al. (2022) using the optically-thin approximation of Equation 1 with \(T_{\rm dust}=14.4\) K and \(\kappa_{\rm 1.3\,mm}=0.0037\) cm\({}^{2}\) g\({}^{-1}\), as in the case of the WD dust model (blue histogram), and with \(T_{\rm dust}=9.8\) K and \(\kappa_{\rm 1.3\,mm}=0.0083\) cm\({}^{2}\) g\({}^{-1}\), as in the case of the OH dust model (red histogram). The histograms of the W43 cores have been shifted to the right by a factor of 1.1 for clarity.
2. The fraction of projection artifacts, 2D _getsf_ cores without a 3D _Dendro_ counterpart, is a nearly monotonic function of mass, growing with decreasing mass from approximately 10% above \(2\,M_{\odot}\), to over 50% below \(0.1\,M_{\odot}\).
3. The completeness fraction, 3D _Dendro_ cores with a 2D _getsf_ counterpart, increases monotonically with increasing mass without reaching unity. Above \(1\,M_{\odot}\), approximately 40% of the 3D-selected cores are recovered in 2D.
4. The mass of the 2D _getsf_ cores is only a rough estimate of that of the associated 3D cores, with an uncertainty of approximately a factor of four, and a tendency to increasingly underestimate the 3D core mass towards increasing masses.
5. The slope of the 2D CMF above \(1\,M_{\odot}\) is \(-2.0\pm 0.1\), significantly steeper than that of the 3D CMF, \(-1.2\pm 0.1\).
When including effects from radiative transfer, interferometric artifacts, and thermal noise by using the synthetic \(1.3\,\mathrm{mm}\) ALMA maps, the ability to retrieve the correct core masses worsens significantly compared to the idealized case of the column-density maps. Our main conclusions with respect to the selection of cores in the synthetic ALMA maps, with comparable resolution and the same image processing and analysis as in the ALMA-IMF Large Program, are listed in the following:
1. Overall, 70% of the synthetic ALMA cores have a 3D counterpart. Above \(3\,M_{\odot}\) only 10% of them can be considered projection artifacts.
2. A rather small fraction of the 3D _Dendro_ cores are detected as _getsf_ cores in the synthetic ALMA maps. The completeness fraction decreases monotonically with decreasing mass at all masses, from 40% at \(10\,M_{\odot}\) to 15% at \(1\,M_{\odot}\).
3. When a 3D counterpart is found, _there is only a weak correlation between the masses of the synthetic ALMA cores and those of the corresponding 3D cores_. Interferometric artifacts, radiative transfer effects, and thermal noise cause a significant increase in the random error of the estimated core masses, relative to the idealized case of the column-density maps.
4. The slope of the CMF of the synthetic ALMA cores above \(1\,M_{\odot}\), \(-1.4\pm 0.1\), is shallower than that of the cores selected from the column-density maps at the same spatial resolution, \(-2.0\pm 0.1\). This is primarily the consequence of radiative transfer effects (spatial variations of the dust temperature). The slope is a bit steeper than that of the 3D CMF, \(-1.2\pm 0.1\).
5. Interferometric artifacts are found to cause a systematic shift in the derived masses of the synthetic ALMA cores, relative to those of the 2D _getsf_ cores from the column-density maps. A systematic shift of a similar magnitude is also caused by the choice of different dust models.
6. The color temperature overestimates the real dust temperature in the cores. Empirical relations are derived to convert color temperatures from Herschel observations into dust temperatures of cores extracted from ALMA observations.
Guided by the results of this analysis, we conclude that core masses and CMFs from current ALMA observations of Galactic protoclusters at kpc distances should not be taken at face value. With the combined use of observations and realistic simulations, it should eventually be possible to set strong constraints on the CMF, but it seems premature at this stage to draw conclusions on the origin of the stellar IMF directly from the observational results.
## Acknowledgements
Use of _getsf_, developed by Alexander Men'shchikov at the DAP, IRFU, CEA Saclay, France, is hereby acknowledged. We also thank Alexander Men'shchikov for his guidance in the use of _getsf_. We are grateful to Timea Csengeri for providing an updated version of Table 4 in Csengeri et al. (2017) used to produce our Figure 2, and to Yohan Pouteau and Frederique Motte for clarifications of some aspects of the ALMA-IMF Large Program data-analysis pipeline. PP and VMP acknowledge support by the Spanish MINECO under project PID2020-115892GB-100, and financial support from the State Agency for Research of the Spanish Ministry of Science and Innovation through the "Unit of Excellence Maria de Maeztu 2020-2023" award to the Institute of Cosmos Sciences (CEX2019-000918-M). MJ acknowledges support from the Academy of Finland grant No. 348342. The research leading to these results has received funding from the Independent Research Fund Denmark through grant No. DFF 8021-00350B (TH). We acknowledge PRACE for awarding us access to Joliot-Curie at GENCI@CEA, France. The astrophysics HPC facility at the University of Copenhagen, supported by research grants from the Carlsberg, Novo, and Villum foundations, was used for carrying out the postprocessing, analysis, and long-term storage of the results.
## Data Availability
Supplemental material can be obtained from a dedicated public URL ([http://www.erda.dk/vgrid/alma-imf](http://www.erda.dk/vgrid/alma-imf)).
|
2304.14248
|
On Manifold Learning in Plato's Cave: Remarks on Manifold Learning and
Physical Phenomena
|
Many techniques in machine learning attempt explicitly or implicitly to infer
a low-dimensional manifold structure of an underlying physical phenomenon from
measurements without an explicit model of the phenomenon or the measurement
apparatus. This paper presents a cautionary tale regarding the discrepancy
between the geometry of measurements and the geometry of the underlying
phenomenon in a benign setting. The deformation in the metric illustrated in
this paper is mathematically straightforward and unavoidable in the general
case, and it is only one of several similar effects. While this is not always
problematic, we provide an example of an arguably standard and harmless data
processing procedure where this effect leads to an incorrect answer to a
seemingly simple question. Although we focus on manifold learning, these issues
apply broadly to dimensionality reduction and unsupervised learning.
|
Roy R. Lederman, Bogdan Toader
|
2023-04-27T15:09:15Z
|
http://arxiv.org/abs/2304.14248v2
|
# On Manifold Learning in Plato's Cave: Remarks on Manifold Learning and Physical Phenomena
###### Abstract
Many techniques in machine learning attempt explicitly or implicitly to infer a low-dimensional manifold structure of an underlying physical phenomenon from measurements without an explicit model of the phenomenon or the measurement apparatus. This paper presents a cautionary tale regarding the discrepancy between the geometry of measurements and the geometry of the underlying phenomenon in a benign setting. The deformation in the metric illustrated in this paper is mathematically straightforward and unavoidable in the general case, and it is only one of several similar effects. While this is not always problematic, we provide an example of an arguably standard and harmless data processing procedure where this effect leads to an incorrect answer to a seemingly simple question. Although we focus on manifold learning, these issues apply broadly to dimensionality reduction and unsupervised learning.
## I Background
The abundance of data in many applications in recent years allows scientists to sidestep the need for parametric models and discover the structure of underlying phenomena directly from some form of intrinsic geometry in the measurements. Such concepts frequently appear in unsupervised learning, manifold learning, non-parametric statistics and, more broadly, machine learning. Often, a scientist may have in mind a concept of the "natural" geometry or parametrization of the phenomenon; in other cases, they may implicitly assume that only one such objective geometry exists even if they do not know what it is. This paper aims to illustrate the difference between the structure of _observed_ data and some notion of _natural_ or _unique objective_ structure. To this end, we offer a concrete example with an obvious underlying natural geometry (up to symmetries) and demonstrate the existence of discrepancies between the data and the natural variables, even in this benign setting.
In our example, described more formally below, a simplified instance of a physical phenomenon is represented by a rigid 3D model of a horse on a spinning table. The measurement device is a fixed camera that takes images of the object. The orientation angles of the horse are distributed uniformly. Here, a natural variable is the angle at which the figure is oriented at the time of the measurement. A simple example of a scientific question is to find the mode of the distribution, which is intuitively the most prevalent orientation angle (we know that the correct answer is that the distribution is uniform and, therefore, we do not expect to find a clear mode). Since this is meant to be a simplified, intuitive version of a generic problem with no obvious underlying model, we consider generic algorithms and forgo in advance image analysis and computer vision methods that make use of the special properties of images and the specific rotating motion of the object.
This benign task yields results that we find surprising yet predictable. The naive analysis discovers clear modes of the distribution, which are inconsistent with the true uniform distribution. In Appendix A3, we demonstrate that these modes are not invariant to the measurement modality.
In our discussion, we explain the reasons for the experiment's results and refer the reader to existing work on special cases where the problem can be corrected. However, there is no method for correcting the problem in the general case. We conclude by pointing to where care should be taken in defining the problem and using the output in downstream tasks.
We emphasize that this paper aims to highlight an omission that we observe in the practical use of manifold-related machine learning algorithms in applications. The purpose of this paper is not to advocate against these methods but rather to suggest that care should be taken in stating and interpreting their output.
## II The problem
The mathematical setting of the experiment is simple: let \(\mathcal{X}\subset\mathbb{R}^{d}\) and \(\mathcal{Y}\subset\mathbb{R}^{D}\) be two manifolds with \(d\ll D\) and \(f:\mathcal{X}\rightarrow\mathcal{Y}\) be a diffeomorphism. We refer to \(\mathcal{X}\) and \(\mathcal{Y}\) as the _phenomenon manifold_ and the _measurement manifold_ respectively and to \(f\) as the _measurement function_. In our simple experiment, the phenomenon manifold \(\mathcal{X}\) is the one-dimensional torus representing the orientation of the horse with respect to a fixed frame of reference (independent of the camera), the measurement function \(f\) outputs an image of the horse as captured by the camera, and the measurement manifold \(\mathcal{Y}\) is the manifold of images obtained by the camera. In particular, a sample \(x\in\mathcal{X}\) is the angle of the horse at a specific point in time, and the corresponding measurement \(f(x)\in\mathcal{Y}\) is the image of the horse at the same point in time. In a typical setting, we are given a large set of measurements \(\{y_{i}\}_{i=1}^{n}\subset\mathcal{Y}\) of a set of samples \(\{x_{i}\}_{i=1}^{n}\) drawn from a distribution \(\mathcal{D}\) on \(\mathcal{X}\). Here, we take the distribution \(\mathcal{D}\) of the orientation angles of the horse to be uniform, which would be unknown in an actual experiment. We only have access to the measurements \(\{y_{i}\}_{i=1}^{n}\) (the images of the horse), which we assume to be noise-free for simplicity, and we are interested in uncovering the low-dimensional organization of the samples of angles \(\{x_{i}\}_{i=1}^{n}\), for example, their empirical distribution on \(\mathcal{X}\). The setting of this numerical experiment is illustrated in Figure 1. For simplicity and concreteness,
we apply common techniques to answer a simple question: what is the most dominant physical state? We know that the ground truth answer is, in this case, that there is no dominant state; the data are generated with uniform distribution over the orientation angles of the horse.
We follow a common practice of assuming a low-dimensional structure and apply a manifold learning algorithm. This produces the map \(\rho:\mathcal{Y}\rightarrow\mathcal{Z}\), which yields a low-dimensional embedding of the measurements \(\rho(y_{i})\in\mathcal{Z}\) for \(i=1,\ldots,n\) and \(\mathcal{Z}\subset\mathbb{R}^{s}\) with \(d\leq s\ll D\). In our experimental setting, the low-dimensional assumption is clearly true: the orientation angles of the horse lie on the one-dimensional torus manifold, while the measurements are clearly high-dimensional (the number of pixels of each image). For simplicity, in our numerical experiment, we use the diffusion maps algorithm, whose theoretical properties are well understood [1, 2], and we retain only the first two diffusion coordinates, a standard practice in this simple case. The output we expect to see is an embedding of the one-dimensional torus in \(\mathbb{R}^{2}\): a circle.
It is common in applications to apply a machine learning or manifold learning algorithm to the measurements \(\{y_{i}\}_{i=1}^{n}\), and consider the low-dimensional embeddings \(\{\rho(y_{i})\}_{i=1}^{n}\) to be a proxy for the geometry of the actual samples \(\{x_{i}\}_{i=1}^{n}\); the potential effects of the measurement function \(f\) are omitted. The aim of this manuscript is to demonstrate that even in the most benign setting, the measurements distort the physical problem in a way that can impact a seemingly straightforward analysis.
Many algorithms for manifold learning and visualization have been developed over the years and have been found useful in applications. Often, these algorithms start with the pair-wise distances \(\|y_{i}-y_{j}\|\) (in some norm), for \(i,j=1,\ldots,n\), as a measure of (inverse) similarity, but diverge in their precise formulation of the problem. One of the notable departures from this approach is the use of the latent space estimated in the training of deep neural networks as the manifold embedding, with the variational autoencoder (VAE) [3] being one of a number of popular approaches.
The diffusion maps algorithm produces coordinates that are related to the geometry of the data through a diffusion operator on the data manifold. While there are technical nuances in the metric defined by diffusion maps (and other algorithms) and in retaining only two dimensions, this example is particularly benign, symmetric, and without boundary effects. Therefore, one expects the leading eigenvectors of the discretized diffusion operator to preserve the local geometry of the data (up to scaling). For a formal description of the diffusion maps algorithm and its properties, see [1, 2]. One of the appealing properties of the diffusion maps algorithm is that it is (asymptotically) invariant to the local density of the data and captures only its local geometry. This property and the algorithm's explicit relationship to the geometry of the data made it a good choice for our experiments.
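For reference, a minimal sketch of a standard diffusion-maps construction (with the density-removing normalization, \(\alpha=1\)) is given below; it is not the implementation of Appendix B, and the bandwidth `eps` is an assumed tuning parameter. The sanity check on a noiseless circle simply confirms that the first two non-trivial coordinates recover a circular embedding.

```python
# Minimal diffusion maps: Gaussian kernel, alpha = 1 density normalization,
# spectral decomposition of the resulting Markov operator.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import eigh

def diffusion_maps(Y, eps, n_coords=2):
    d2 = squareform(pdist(Y, metric="sqeuclidean"))
    K = np.exp(-d2 / eps)                       # affinity matrix
    q = K.sum(axis=1)
    K1 = K / np.outer(q, q)                     # alpha = 1: remove density effects
    d = K1.sum(axis=1)
    A = K1 / np.sqrt(np.outer(d, d))            # symmetric conjugate of P = D^-1 K1
    vals, vecs = eigh(A)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(d)[:, None]            # right eigenvectors of P
    return psi[:, 1:1 + n_coords] * vals[1:1 + n_coords]

# Sanity check on a noiseless circle: the embedding is again (a scaled) circle
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
Y = np.c_[np.cos(theta), np.sin(theta)]
emb = diffusion_maps(Y, eps=0.05)
print(emb.shape)
```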
Indeed, a diffusion map of the points on \(\mathcal{X}\) preserves the geometry and the uniform distribution (shown in Appendix A1). However, our measurement function is not necessarily an isometry (even up to scaling), and therefore, it distorts the geometry and the local pair-wise distances.
The low-dimensional embedding obtained by applying the diffusion maps algorithm to a dataset \(\{y_{i}\}_{i=1}^{n}\) of size \(n=1000\) and ambient dimension \(D=108000\) (\(180\times 200\) images with \(3\) color channels) in our experimental setting1 is shown in Figure 2. Both panels show a scatter plot using the first two embedding coordinates given by the diffusion maps algorithm. The points in panel (a) are colored according to the true angle \(x_{i}\) for \(i=1,\ldots,n\). Visually, it appears that the algorithm reveals the correct topology and it organizes the images correctly by their angle. It is compelling to say that the embedding is a good approximation of the angles (up to shift). However, taking a closer look at the distribution of the points in panel (b), we see that their _local density_2 has not been preserved on the embedding manifold \(\mathcal{Z}\): while the distribution \(\mathcal{D}\) of the points \(\{x_{i}\}_{i=1}^{n}\) on \(\mathcal{X}\) is uniform (by construction!), the distribution of the embedded points \(\{\rho(y_{i})\}_{i=1}^{n}\) on \(\mathcal{Z}\) is not uniform. Moreover, the distribution of the embedded points has
Fig. 1: The phenomenon manifold \(\mathcal{X}\) is a one-dimensional torus corresponding to the in-plane orientation angle of a rigid object rotating around the \(z\)-axis, and the measurement manifold \(\mathcal{Y}\) is the manifold of images of the object as captured by a camera at a fixed location.
two clear modes, with no indication that they are an artifact of the analysis.
An additional experiment showing how the distribution of the embedded points varies when the viewing angle is changed is described in Appendix A3, and the specific implementation of the diffusion maps algorithm that we used in our experiments is presented in Appendix B.
## III Discussion
In the previous section, we empirically showed how the distribution of the points on the embedding manifold \(\mathcal{Z}\) does not reflect the true distribution of the points on the phenomenon manifold \(\mathcal{X}\): the distribution on \(\mathcal{Z}\) has two distinct modes, while the distribution on \(\mathcal{X}\) is uniform. To see that this is a metric-related issue, it is worth examining the modes of the distribution on \(\mathcal{Z}\).
In Figure 3, we show measurements at a high and a low-density point on \(\mathcal{Z}\). It is revealed that the high-density regions correspond to images where the three-dimensional object is perpendicular (or nearly perpendicular) to the viewing direction of the camera, while the low-density regions correspond to the object facing toward or away from the camera. This is because, according to our chosen metric on \(\mathcal{Y}\) (i.e., the Euclidean norm on the space of vectorized images), a small difference \(\Delta x\) between two angles in \(\mathcal{X}\) is not transformed to the same distance in different regions of \(\mathcal{X}\): two images of the object facing the camera that differ by \(\Delta x\) have a larger Euclidean distance than two images of the object facing sideways that are separated by the same angle. The metric based on the measurements alone does not account for the distortion introduced by the measurement function \(f\) on the true metric on \(\mathcal{X}\), namely the wrap-around distance on \([0,2\pi)\).
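The effect can be reproduced with a toy measurement function in place of the camera. In the sketch below (entirely our own construction, with made-up parameters `a`, `b` and radius), uniform angles are mapped through a non-isometric curve, and a fixed-radius neighbour count in measurement space plays the role of the local density estimate: two spurious modes appear where \(|df/dx|\) is smallest, analogous to the two modes seen in panel (b) of Figure 2.

```python
# Toy version of the distortion: uniform angles, non-isometric measurement map,
# local density estimated from Euclidean distances between measurements.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(0.0, 2.0 * np.pi, n)            # uniform angles on the torus

a, b = 1.0, 0.3                                  # "aspect ratio" seen by the camera
Y = np.c_[a * np.cos(x), b * np.sin(x)]          # non-isometric measurement of x

d = squareform(pdist(Y))
density = (d < 0.05).sum(axis=1)                 # neighbours within a fixed radius

# The angles are uniform, yet the measured density peaks near x = 0 and x = pi,
# where |dY/dx| is smallest.
for lo in np.arange(0.0, 2.0 * np.pi, np.pi / 4):
    sel = (x >= lo) & (x < lo + np.pi / 4)
    print(f"x in [{lo:4.2f}, {lo + np.pi/4:4.2f}): mean neighbour count = {density[sel].mean():6.1f}")
```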
The discrepancy between the metric on the phenomenon manifold, which is the metric we want to recover, and the arguably arbitrary metric produced by the measurement modality can be corrected in some special cases. For example, when bursts of measurements around each point on \(\mathcal{X}\) are available, one can use the Jacobian of the measurement function to define metrics that are invariant to the measurement modality (see, for example, [4, 5, 6, 7, 8]). Such metrics might still not be the "desired" metrics we want to conceptualize, but they are "Platonic" in the sense that they are defined on the phenomenon manifold \(\mathcal{X}\) and are invariant to the arbitrary measurement function. Other works such as [9, 10, 11, 12] correct the metric distortion introduced by the embedding \(\rho\) from the measurement manifold \(\mathcal{Y}\) to the embedded manifold \(\mathcal{Z}\); these works do not correct the discrepancy between the measurements and the metric on the phenomenon manifold.
We emphasize that the problem illustrated here is not due to a failure of the diffusion maps or other algorithms; the algorithm performs as expected and characterizes the _measurement manifold_ very well. However, the metric of this measured manifold is incompatible with the natural metric of in-plane rotation angles. As a result, we identify modes of the distribution in the measurement space, but these do not correspond to modes of the underlying distribution of angles.
We note that the problem discussed here is not unique to the diffusion maps algorithm or the setup we chose; in fact, other algorithms are not as well-understood as diffusion maps, and applications are rarely as simple as our illustrative example. Many modern algorithms add layers of complexity to the problem. For instance, deep learning approaches that generate latent variables, such as VAEs, are often combined with more standard manifold learning algorithms to obtain low-dimensional data representations. In [13, 14], the distortions introduced by popular algorithms like t-SNE and UMAP are analyzed in the context of single-cell genomics, although the focus is on the discrepancy between the high dimension of the measurement space and the very low dimension (2 or 3) of the embedding space, rather than on the choice of metric. While such algorithms provide valuable new insights into datasets, practitioners should be aware that the results they generate, even when they perform as intended, may have a subtle relation to the "Platonic" physical reality. These outputs should
Fig. 3: The low-dimensional embedding with example images corresponding to samples from the estimated distribution. The image to the left of the embedding plot is chosen to be at a low local density in the embedding, and the image to the right is chosen to be at the maximum density point.
Fig. 2: Low-dimensional embedding of the images of the spinning horse. The coloring is given by the true orientation angle of the horse in panel (a) and the local density of points (\(r=0.05\)) in panel (b).
arguably mainly be used for visualization and confirmed by other means. Indeed some of the original work on popular non-linear dimensionality reduction algorithms defines them as tools for visualization [15, 16].
## IV Conclusions
This paper illustrates one of the discrepancies between the measured manifold and a perceived natural parametrization of the underlying phenomenon. In addition, Appendix A3 demonstrates how this discrepancy depends on the measurement modality and how the measured manifold is not invariant to measurements. The discrepancy presented here is by no means the only type of discrepancy; we defer the discussion of additional effects to future work. While the existence of this discrepancy is a natural consequence of various mathematical formulations of manifold learning problems (with the exception of special cases where the metric can be corrected), it is occasionally omitted, which may lead to incorrect and inconsistent answers to seemingly simple scientific questions. In the absence of a general solution to the problem, we suggest the following points to consider when using these methods.
* A good rule of thumb is that manifold learning and dimensionality reduction can provide (when they "work") _an_ embedding, but they may not provide _the_ embedding (that we might have in mind). In fact, without a good definition of the desired embedding, the embedding is not unique.
* Sometimes, the effects can be controlled if there is knowledge of the structure of the measurement function (e.g., Lipschitz constant). However, nuances in definitions of the output of algorithms, the increased complexity of algorithms, and the practice of layering algorithms on top of each other may make it much more difficult to control such effects. In some special cases, additional measurements may allow one to reverse the effect [4, 5, 6, 7, 8].
* In many (but not all) applications, the inferred manifold may reveal enough about the _topology_ of the problem, or the distortion in the metric might be small enough for the embedding to be a sufficiently good proxy for the geometry. What is "sufficiently good" may depend on the downstream task. For example, the low-dimensional manifold _may_ be a starting point for an analysis by an expert, regression or careful clustering, detection of suspected outliers, and even for identification of clear modes. It _may_ not be as helpful for aligning data collected using different modalities (or even different algorithms applied to the same data) with different distortions (see Appendix A3), or for certain analyses of free energy associated with the distribution.
## V Acknowledgments
This work was supported by the grants NIH/R01GM136780 and AFOSR/FA9550-21-1-0317, and by the Simons Foundation.
|
2307.15406
|
Stochastic automatic differentiation for Monte Carlo processes
|
Monte Carlo methods represent a cornerstone of computer science. They allow
to sample high dimensional distribution functions in an efficient way. In this
paper we consider the extension of Automatic Differentiation (AD) techniques to
Monte Carlo process, addressing the problem of obtaining derivatives (and in
general, the Taylor series) of expectation values. Borrowing ideas from the
lattice field theory community, we examine two approaches. One is based on
reweighting while the other represents an extension of the Hamiltonian approach
typically used by the Hybrid Monte Carlo (HMC) and similar algorithms. We show
that the Hamiltonian approach can be understood as a change of variables of the
reweighting approach, resulting in much reduced variances of the coefficients
of the Taylor series. This work opens the door to find other variance reduction
techniques for derivatives of expectation values.
|
Guilherme Catumba, Alberto Ramos, Bryan Zaldivar
|
2023-07-28T08:59:01Z
|
http://arxiv.org/abs/2307.15406v1
|
# Stochastic automatic differentiation for Monte Carlo processes
###### Abstract
Monte Carlo methods represent a cornerstone of computer science. They allow to sample high dimensional distribution functions in an efficient way. In this paper we consider the extension of Automatic Differentiation (AD) techniques to Monte Carlo process, addressing the problem of obtaining derivatives (and in general, the Taylor series) of expectation values. Borrowing ideas from the lattice field theory community, we examine two approaches. One is based on reweighting while the other represents an extension of the Hamiltonian approach typically used by the Hybrid Monte Carlo (HMC) and similar algorithms. We show that the Hamiltonian approach can be understood as a change of variables of the reweighting approach, resulting in much reduced variances of the coefficients of the Taylor series. This work opens the door to find other variance reduction techniques for derivatives of expectation values.
###### Contents
* 1 Introduction
* 1.1 Existing methods
* 2 Automatic differentiation
* 3 AD for Monte Carlo process
* 3.1 Reweighting and Automatic Differentiation
* 3.2 Hamiltonian perturbative expansion
* 3.2.1 Implementation of AD and convergence
* 4 A comparison between approaches
* 5 Some general applications
* 6 Applications in lattice field theory
* 7 Conclusions
* 8 Acknowledgments
* 9 References
## 1 Introduction
Monte Carlo (MC) techniques are ubiquitous in science, and particularly in physics. From particle physics to cosmology, in order to extract information about the quantities (parameters, quantum fields, etc) of our physics models we mostly rely on MC, since it provides an unbiased estimator of the underlying probability distributions of such quantities, and consequently, of expectation values of our observables:
\[\mathbb{E}_{p_{\theta}}[f(x;\theta)]\,. \tag{1.1}\]
Here \(p_{\theta}(x)\) is the distribution of the quantities of interest \(x\), and we will consider the case where it also depends on the parameters \(\theta\) (both \(x\) and \(\theta\) could be multivariate). Two prominent examples arise in physics: _i)_ in the context of a quantum field theory, \(x\) represent the quantum fields, while \(\theta\) could represent for example the mass of the fields, as well as their couplings; _ii)_ in the context of fitting physics models to some observed data, \(x\) represent the model parameters (e.g. in a cosmological model, the matter energy density, the Hubble parameter, etc), while \(\theta\) are the so-called _hyper-parameters1_ that fix either their prior distribution, or the likelihood of the data, or both, for example in a Bayesian inference framework.
Footnote 1: These are parameters which the analysis considers as deterministic, so they do not follow a distribution themselves, but do determine the distribution of the “standard” parameters.
In many cases we are interested in optimizing such expectations w.r.t. \(\theta\). The state-of-the-art in optimization is represented by Stochastic Gradient Descent (SGD) methods, especially popular in the statistics and machine learning communities. For the cases at hand, the application of any variant of the SGD algorithm requires to determine the gradient of the expectation cost Eq. (1.1) w.r.t \(\theta\).
In the case of Bayesian inference, one of the interesting questions arises when we are concerned about the sensitivity of the Bayesian predictions on the hyper-parameters. Formally these predictions2 are determined as expected values of the form given in Eq. (1.1), and we are interested in the dependence w.r.t. \(\theta\). It is important to note at this point that nowadays, in the Bayesian inference community, the above question is -to our knowledge- not addressed. Indeed, the Bayesian predictions are commonly calculated at optimal values \(\boldsymbol{\theta}_{\text{opt}}\) of the hyper-parameters, the latter being obtained from a point-wise Maximum Likelihood optimization of an approximation of the Bayesian evidence, or with "Bayesian optimization" methods. Typically no further analysis is performed quantifying the impact on the predictions when the hyper-parameters values deviate from \(\theta_{\text{opt}}\).
Footnote 2: More concretely, the distribution of such predictions, known as the _predictive distribution_ in the Bayesian jargon.
Another situation in Bayesian inference when the above problem appears is actually very common, specifically in the context of approximate inference, where the aim is to approximate the true posterior \(p_{\theta}(x)\) by a distribution \(q_{\phi}(x)\). Popular implementations minimize either the forward Kullback-Leibler divergence, \(\text{KL}[p_{\theta}||q_{\phi}]\), or its reverse: \(\text{KL}[q_{\phi}||p_{\theta}]\). Since the posterior is unknown (it is what such methods try to approximate in the first place), the forward KL can be estimated via reweighting: FKL \(=(1/Z_{w})E_{q_{\phi}}[w\log(\hat{p}_{\theta}/q_{\phi})]\) (see e.g. [1]), where \(w=\hat{p}_{\theta}/q_{\phi}\), and \(\hat{p}_{\theta}\) is the unnormalized posterior (assumed to be tractable), and \(Z_{w}=E_{q_{\phi}}[w]\). In this case one is interested in minimizing the FKL w.r.t. the parameters \(\phi\). In cases where the objective function is the reverse KL (typical case in Variational Inference) the expectation is directly with respect to \(q_{\phi}\). While for simple choices of the latter the procedure is well defined (see below Sect. 1.1), in general it is a complex task when we want to minimize w.r.t. parameters which are implicit in the samples.
In all these cases the key question is the determination of the gradient and (possibly) higher derivatives
\[\frac{\partial}{\partial\theta_{j}}\mathbb{E}_{p_{\theta}}[f(x;\theta)]\,, \quad\frac{\partial^{2}}{\partial\theta_{j}\partial\theta_{k}}\mathbb{E}_{p_{ \theta}}[f(x;\theta)]\,,\ldots \tag{1.2}\]
of expected values, where \(\theta_{j}\) are the components of \(\theta\).
In this work we focus on the typical case when the relevant expectations values Eq. (1.1) are determined using some Monte Carlo (MC) method:
\[\mathbb{E}_{p_{\theta}}[f(x;\theta)]\approx\frac{1}{N_{s}}\sum_{s=1}^{N_{s}}f(x_ {s};\theta)\, \tag{1.3}\]
where \(x_{s}\) are samples from the unnormalized \(\hat{p}_{\theta}\). _Our aim is to develop a formalism to compute the gradients of expr.(1.3) with respect to \(\theta\), using automatic differentiation techniques._
### 1.1 Existing methods
There are solutions in the literature for this problem: for some simple distributions the reparametrization trick can be used, which is nothing but the ability to express a sample \(x\) as a deterministic function \(g(\theta,\eta)\), of the parameters \(\theta\) and a random variable \(\eta\)_that does not depend on \(\theta\)_. A typical example is when \(p_{\theta}\) is a multivariate Gaussian with mean \(\mu(\theta)\) and covariance matrix \(\mathbf{C}\), in which case a sample \(x_{s}\) can be expressed as \(x_{s}=\mu(\theta)+\mathbf{L}\cdot\eta_{s}\), where \(\mathbf{L}\) is the Cholesky decomposition of \(\mathbf{C}\), and the sampled random variable \(\eta_{s}\sim\mathcal{N}(\mathbf{0},\mathbb{I})\). Clearly, since \(x_{s}\) is an explicit function of \(\theta\), so will be \(f(x_{s},\theta)\) in Eq. (1.3), making it possible to use the usual techniques of Automatic Differentiation to obtain the gradients w.r.t. \(\theta\) exactly. For other popular distributions such as Gamma, Beta or Dirichlet, among others, this simple reparametrization is not possible, and generalizations of the above trick have been developed (e.g. in [2], using implicit differentiation). Nonetheless, the considered distributions should still be somehow reparametrizable.
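As a concrete toy illustration of the reparametrization trick, consider \(p_{\theta}=\mathcal{N}(\theta,\sigma^{2})\) and \(f(x)=x^{2}\), for which the exact gradient is \(2\theta\). The sketch below is our own, with arbitrarily chosen parameter values.

```python
# Reparametrization (pathwise) gradient for p_theta = N(theta, sigma^2), f(x) = x^2.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, N = 1.3, 0.7, 100_000

eta = rng.standard_normal(N)            # noise independent of theta
x = theta + sigma * eta                 # reparametrized samples x = g(theta, eta)

# d f(g(theta, eta)) / d theta = f'(x) * dg/dtheta = 2 * x
grad_est = np.mean(2.0 * x)
print(grad_est, 2.0 * theta)            # estimate vs exact gradient 2*theta
```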
Another existing alternative is to use the _score function estimator_, allowing us to obtain the gradient for a more general case. This method just uses the trivial relation
\[\nabla_{\theta}\log p_{\theta}(x)=\frac{1}{p_{\theta}(x)}\nabla_{\theta}p_{ \theta}(x)\,, \tag{1.4}\]
such that the gradient in expr.(1.2) could be approximated with MC as:
\[\nabla_{\theta}\mathbb{E}_{p_{\theta}}[f(x;\theta)]\approx\frac{1}{N_{s}}\sum _{s=1}^{N_{s}}\left[(\nabla_{\theta}\log p_{\theta}(x_{s}))f(x_{s};\theta)+ \nabla_{\theta}f(x_{s};\theta)\right] \tag{1.5}\]
While certainly being a more flexible method, this estimator is known to suffer in practice from large sample-to-sample variance (although see [3] for variance-control methods in this context). Lastly, other treatments have been proposed which attempt to extract the best of both methods above, i.e. to be applicable to distributions beyond those typically reparametrizable, while keeping a low variance ([4]).
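For comparison, the same toy gradient can be estimated with the score-function estimator of Eq. (1.5). The sketch below (again with our own toy parameters, and a normalized Gaussian so the score is available in closed form) is unbiased but shows a visibly larger statistical error than the pathwise version.

```python
# Score-function estimator of Eq. (1.5) for p_theta = N(theta, sigma^2), f(x) = x^2.
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, N = 1.3, 0.7, 100_000

x = rng.normal(theta, sigma, N)
score = (x - theta) / sigma**2          # d/dtheta log p_theta(x) for a normalized Gaussian
grad_est = np.mean(score * x**2)        # f carries no explicit theta dependence here
print(grad_est, 2.0 * theta)

# Naive standard errors of the score-function and pathwise estimators:
print(np.std(score * x**2) / np.sqrt(N),
      np.std(2.0 * (theta + sigma * rng.standard_normal(N))) / np.sqrt(N))
```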
To the best of our knowledge, all the existing efforts applying the solutions mentioned above require knowledge of the normalization of the distribution \(p_{\theta}(x)\), which prevents their use in conjunction with Monte Carlo methods, where samples of a distribution are often obtained in the case where the corresponding normalization is unknown.
The question on how to determine derivatives of expected values taken over complicated distributions \(p_{\theta}(x)\), especially in the case that one relies on Monte Carlo methods to draw samples from such a distribution is still open. Ideally one would like an "automatic" procedure, i.e. extending the benefits of automatic differentiation to Monte Carlo processes. In this work we will explore two such approaches. First we use the idea of reweighting, where the expectation values, eq. (1.3), are modified by a weighting factor that takes into account the dependence on the parameters, but that utilizes unmodified samples for the average. The second method is a modification of Hamiltonian sampling algorithms, that includes the generation of samples that carry themselves the information about the parameters. Both methods can be used for unnormalized probability distributions and allow the computation of derivatives of arbitrary orders.
## 2 Automatic differentiation
Our methods to compute derivatives of expectation values are based on the techniques of automatic differentiation (AD). By AD we understand a set of techniques to determine the derivative of a deterministic function specified by a computer program. There are various flavors of AD on the market, and our algorithms are quite agnostic about the particular implementation that is used, but in order to make the proposal and notation more concrete we are going to choose a particular method, based on operations of polynomials truncated at some order. The generalization of our techniques to other flavors of AD is straightforward.
In what follows it is useful to use multi-index notation. The \(d\)-dimensional multi-index is defined by
\[n=(n_{1},\ldots,n_{d})\,. \tag{2.1}\]
We can define a partial order for multi-indices by the condition
\[n\leq m\Longleftrightarrow n_{i}\leq m_{i}\quad\forall i=1,\ldots,d\,. \tag{2.2}\]
We also define the absolute value, factorial and power by the relations
\[|n|=\sum_{i=1}^{d}n_{i}\,,\quad n!=\prod_{i=1}^{d}n_{i}!\,,\quad\epsilon^{n}=\prod_{i=1}^{d}\epsilon_{i}^{n_{i}}\,. \tag{2.3}\]
Finally the higher-order partial derivative is defined by
\[\frac{\partial^{n}}{\partial x^{n}}=\frac{\partial^{|n|}}{\partial x_{1}^{n_ {1}}\cdots\partial x_{d}^{n_{d}}}\,. \tag{2.4}\]
With this notation, polynomials of degree \(p=(p_{1},\ldots,p_{d})\) in several variables \(\{\epsilon_{i}\}_{i=1}^{d}\) are represented in the compact form
\[\tilde{a}(\epsilon)=\sum_{n\leq p}c_{n}\epsilon^{n}. \tag{2.5}\]
Note that each variable \(\epsilon_{i}\) is raised at most to the power \(p_{i}\) and that the index of the coefficient \(c_{n}\) is itself a multi-index (i.e. \(c_{n}=(c_{n1},c_{n2},\ldots,c_{nd})\)). If the coefficients \(c_{ni}\) are elements of a field (i.e. real numbers), the addition/multiplication of these polynomials where terms \({\cal O}(\epsilon_{j}^{p_{j}+1})\) are neglected form an algebra over the very same field.
As an example calculation in this algebra, consider the following two polynomials in two variables and with degrees \(p=(2,3)\)
\[\tilde{a}(\epsilon) = 2+\epsilon_{1}+\epsilon_{2}^{3}\,, \tag{2.6}\] \[\tilde{b}(\epsilon) = 1+\epsilon_{1}+2\epsilon_{1}^{2}+3\epsilon_{2}^{2}\,, \tag{2.7}\]
we have
\[(\tilde{a}+\tilde{b})(\epsilon) = 3+2\epsilon_{1}+2\epsilon_{1}^{2}+3\epsilon_{2}^{2}+\epsilon_{2}^{3}\,, \tag{2.8}\] \[(\tilde{a}\cdot\tilde{b})(\epsilon) = 2+3\epsilon_{1}+5\epsilon_{1}^{2}+6\epsilon_{2}^{2}+3\epsilon_{1}\epsilon_{2}^{2}+\epsilon_{2}^{3}+\epsilon_{1}\epsilon_{2}^{3}+2\epsilon_{1}^{2}\epsilon_{2}^{3}\,. \tag{2.9}\]
In particular it is important to note that we have dropped terms \(\propto\epsilon_{1}^{3},\epsilon_{2}^{5}\) in the product, since \(3>p_{1}=2\) and \(5>p_{2}=3\). Therefore the "\(=\)" sign in the above equations has to be understood as "_up to higher
_order corrections_". Elementary operations and functions acting on these polynomials can be defined in a straightforward way (see [5]).
The connection of the algebra of truncated polynomials with AD is a consequence of Taylor theorem. Let \(f(x)\) be a deterministic function in the variables \(x_{1},\ldots,x_{d}\). If each variable is promoted to a truncated polynomial
\[x_{i}\longrightarrow\tilde{x}_{i}(\epsilon)=x_{i}+\epsilon_{i}\,, \tag{2.10}\]
and we evaluate the function \(f\) with the truncated polynomials as input
\[\tilde{f}(\epsilon)=f(\tilde{x}(\epsilon))=\sum_{n\leq p}f_{n}\epsilon^{n}\,, \tag{2.11}\]
it is easy to see that the result is a polynomial that is equal to the Taylor series of \(f\) at \(x\). In particular partial derivatives of the function are obtained by the relation
\[f_{n}=\frac{1}{n!}\frac{\partial^{n}f}{\partial x^{n}}. \tag{2.12}\]
Note that the analogy with the Taylor expansion is just at the level of the coefficients \(f_{n}\), which are obtained automatically when writing functions of truncated polynomials \(\tilde{x}_{i}(\epsilon)\), while \(\epsilon\) is exclusively a symbolic quantity.
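In one variable the mechanism can be demonstrated in a few lines (a sketch of ours, using plain coefficient arrays): promoting \(x\) to \(x+\epsilon\) and evaluating \(f\) with the truncated product returns exactly the Taylor coefficients of Eq. (2.12).

```python
# Univariate truncated series: coefficients stored low-to-high, product truncated
# at ORDER, so evaluating f(x + eps) yields [f, f', f''/2!, f'''/3!] at x.
import numpy as np

ORDER = 3

def tmul(a, b):
    return np.convolve(a, b)[:ORDER + 1]

def promote(x):
    return np.array([x, 1.0, 0.0, 0.0])               # the truncated polynomial x + eps

def f(u):
    return tmul(tmul(u, u), u) + 2.0 * tmul(u, u)      # f(x) = x^3 + 2 x^2

x = 1.5
print(f(promote(x)))
print([x**3 + 2*x**2, 3*x**2 + 4*x, (6*x + 4)/2.0, 6/6.0])   # analytic Taylor coefficients
```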
## 3 AD for Monte Carlo process
### Reweighting and Automatic Differentiation
Samples \(\{x^{\alpha}\}_{\alpha=1}^{N}\) of some distribution \(p_{\theta}(x)\) allow us to estimate expectation values
\[\mathbb{E}_{p_{\theta}}[f(x)]=\frac{1}{N}\sum_{\alpha=1}^{N}f(x^{\alpha};\theta)+\mathcal{O}\left(\frac{1}{\sqrt{N}}\right)\,. \tag{3.1}\]
In this expression, \(\theta\) are some parameters that the distribution function (and possibly the function \(f\)) depends on. We are interested in obtaining the gradient of expectation values with respect to the parameters \(\theta\)
\[O_{n}=\frac{\partial^{n}}{\partial\theta^{n}}\mathbb{E}_{p_{\theta}}[f(x; \theta)]\,. \tag{3.2}\]
Our proposal to determine these derivatives is based on _reweighting_ (a.k.a. importance sampling). If we have \(N\) samples \(\{x^{\alpha}\}_{\alpha=1}^{N}\) of the distribution \(p_{\theta}\) they can be used to determine expectation values of a different distribution \(p^{\prime}\) thanks to the identity
\[\mathbb{E}_{p^{\prime}}[f(x;\theta)]=\frac{\mathbb{E}_{p_{\theta}}\left[ \frac{p^{\prime}}{p_{\theta}}f(x;\theta)\right]}{\mathbb{E}_{p_{\theta}}\left[ \frac{p^{\prime}}{p_{\theta}}\right]}=\frac{\sum_{\alpha=1}^{N}w^{\alpha}f(x^ {\alpha};\theta)}{\sum_{\alpha=1}^{N}w^{\alpha}}+\mathcal{O}\left(\frac{1}{ \sqrt{N}}\right)\,, \tag{3.3}\]
where \(w^{\alpha}=p^{\prime}(x^{\alpha})/p_{\theta}(x^{\alpha})\) are usually called _reweighting factors_. Our approach consists in using for the target distribution
\[p^{\prime}(x)=p_{\tilde{\theta}(\epsilon)}(x)\,,\qquad(\tilde{\theta}_{i}( \epsilon)=\theta_{i}+\epsilon_{i})\,. \tag{3.4}\]
With this substitution, the reweighting factors will become truncated polynomials
\[\tilde{w}^{\alpha}(\epsilon)=\frac{p_{\tilde{\theta}}(x^{\alpha})}{p_{\theta }(x^{\alpha})}=\sum_{n\leq p}w_{n}^{\alpha}\epsilon^{n}, \tag{3.5}\]
with leading coefficients \(w_{0}^{\alpha}=1\). The basic Eq. (3.3) now leads to estimates for the expectation values in the form of truncated polynomials
\[\frac{\sum_{\alpha=1}^{N}\tilde{w}^{\alpha}f(x^{\alpha};\tilde{\theta})}{\sum_{ \alpha=1}^{N}\tilde{w}^{\alpha}}=\sum_{n\leq p}O_{n}\epsilon^{n}\,, \tag{3.6}\]
that will give stochastic estimates of the Taylor series coefficients of the expectation values, i.e.
\[O_{n}=\frac{1}{n!}\frac{\partial^{n}}{\partial\theta^{n}}E_{p_{\theta}}[f(x; \theta)]+\mathcal{O}\left(\frac{1}{\sqrt{N}}\right)\,. \tag{3.7}\]
Borrowing the terminology of the lattice field theory community, we distinguish two types of contributions to the derivatives in Eq. (3.6):
**Connected contributions:**: They come from the explicit dependence of the observable \(f(x;\tilde{\theta})\) on the parameters \(\theta\).
**Disconnected contributions:**: They come from the reweighing factors \(\tilde{w}^{\alpha}\), and account for the implicit dependence that the samples \(x^{\alpha}\) have on the parameters \(\theta\).
It is important to note that the denominator in Eq. (3.3) accounts for the possibility that the distributions are unnormalized (i.e this expression is valid in the context of Monte Carlo sampling, where samples are obtained without knowledge of the normalization of \(p_{\theta}\)).
This reweighting approach can be thought of as a generalization of the score function estimator described in Sec. 1.1: indeed, in the case of normalized distributions, and if one considers only the first derivatives, the reweighting factors become \(\tilde{w}^{\alpha}=1+(\nabla_{\theta}\log p_{\theta})\epsilon\), with the first-order coefficient being precisely the term appearing in expr.(1.5).
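A first-order version of the estimator, for the same Gaussian toy model used in Sec. 1.1 (our own illustrative choice, with the density treated as unnormalized), can be written as follows; the first-order coefficient of the ratio in Eq. (3.6) reproduces the exact gradient \(2\theta\).

```python
# Reweighting estimator of Eqs. (3.3)-(3.7) to first order in eps, for an
# unnormalized Gaussian p_theta(x) ∝ exp(-(x - theta)^2 / (2 sigma^2)), f(x) = x^2.
import numpy as np

rng = np.random.default_rng(2)
theta, sigma, N = 1.3, 0.7, 200_000

x = rng.normal(theta, sigma, N)                 # samples of p_theta (normalization never used)

# w~ = p_hat_{theta+eps}(x) / p_hat_theta(x) = 1 + eps * d/dtheta log p_hat + O(eps^2)
w1 = (x - theta) / sigma**2
f = x**2

num = np.array([np.sum(f), np.sum(w1 * f)])     # sum_a w~^a f(x^a), stored as [c0, c1]
den = np.array([float(N), np.sum(w1)])          # sum_a w~^a

# Truncated division: (a0 + a1 eps)/(b0 + b1 eps) = a0/b0 + (a1 b0 - a0 b1)/b0^2 * eps
O = np.array([num[0] / den[0], (num[1] * den[0] - num[0] * den[1]) / den[0]**2])
print(O)                                        # ~ [theta^2 + sigma^2, 2 * theta]
print(theta**2 + sigma**2, 2.0 * theta)
```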
### Hamiltonian perturbative expansion
Sampling algorithms based on Hamiltonian dynamics are nowadays a central tool in many different areas. Probably the best known example is the Hybrid Monte Carlo (HMC) algorithm. Originally developed in the context of Lattice QCD [6], today it is also a cornerstone in Bayesian inference.
The HMC algorithm belongs to the class of Metropolis-Hastings algorithms, which allow one to obtain samples of arbitrarily complex distribution functions with a high acceptance rate. In order to sample the distribution function
\[p_{\theta}(x)=\frac{1}{\mathcal{Z}}\exp\left\{-S(x;\theta)\right\}\,,\qquad \left(\mathcal{Z}=\int\mathrm{d}x\,e^{-S(x:\theta)}\right)\,, \tag{3.8}\]
where \(x^{\alpha}\in\mathbb{R}^{d}\), we introduce some momentum variables \(\pi^{\alpha}\), conjugate to \(x^{\alpha}\), and consider the sampling of the modified distribution function
\[q_{\theta}(\pi,x)=\frac{1}{\mathcal{Z}^{\prime}}\exp\left\{-H(\pi,x;\theta) \right\}\,,\qquad\qquad\qquad\left(\mathcal{Z}^{\prime}=\int\mathrm{d}x \mathrm{d}\pi\,e^{-H(\pi,x:\theta)}\right)\,. \tag{3.9}\]
Assuming that the momenta are distributed as a standard Gaussian, the Hamiltonian is defined by
\[H(\pi,x;\theta)=\frac{1}{2}\sum_{\alpha=1}^{d}\pi^{\alpha}\pi^{\alpha}+S(x; \theta)\,. \tag{3.10}\]
It is clear that expectation values of quantities that depend only on the variables \(x\) are the same if they are computed using \(p_{\theta}(x)\) or \(q_{\theta}(\pi,x)\) (i.e. \(\mathbb{E}_{p_{\theta}}[f(x)]=\mathbb{E}_{q_{\theta}}[f(x)]\)). On the other hand the distribution \(q_{\theta}(\pi,x)\) can be sampled with a high acceptance rate just by 1) throwing randomly distributed Gaussian momenta \(\pi(0)\sim e^{-\pi^{2}/2}\), 2) solving the Hamilton equations of motion (eom)
\[\dot{x}^{\alpha} = \pi^{\alpha}\,, \tag{3.11}\] \[\dot{\pi}^{\alpha} = -\frac{\partial H}{\partial x^{\alpha}}=-\frac{\partial S}{ \partial x^{\alpha}}\,, \tag{3.12}\]
for a time interval from \((\pi(0),x(0))\) to \((\pi(\tau),x(\tau))\), and 3) performing a Metropolis-Hastings accept/reject step with probability \(e^{-\Delta H}\) (see Figure 1). The values of \(x(\tau)\) are distributed according to the probability density \(p_{\theta}(x)\) (for a proof see the original reference [6]). The trajectory length \(\tau\) can be chosen arbitrarily, although in order to guarantee the ergodicity of the algorithm it is required that this length be chosen randomly from some distribution (exponential or uniform are the most common choices). This point is usually not relevant, but for the case of "free" theories (i.e. Gaussian distributions \(p_{\theta}\)), it is well known that a constant trajectory length can lead to wrong results (see [7]).
In the last step \(\Delta H=H(\pi(\tau),x(\tau);\theta)-H(\pi(0),x(0);\theta)\) is just the energy violation. Since energy is conserved in Hamiltonian systems, this violation of energy conservation is entirely due to the fact that the eom in eq. (3.11) are solved numerically, and not exactly. Nevertheless, this integration of the eom can be made very precise with a modest computational effort, allowing one to reach arbitrarily high acceptance rates.
The Stochastic Molecular Dynamics (SMD) algorithm is closely related to the HMC, and also based on a Hamiltonian approach. In this case, after each integration step (from \(t\) to \(t+\delta t\)) of the equations of motion, the momenta are partially refreshed according to the equations
\[\pi\to c_{1}\pi+\sqrt{1-c_{1}^{2}}\,\eta\,, \tag{3.13}\]
where \(\eta\) is a new random momentum with Gaussian distribution, and \(c_{1}=e^{-\gamma\delta t}\) is a parameter that can be chosen arbitrarily via the value of \(\gamma\). The SMD algorithm has some advantages from the theoretical point of view, especially in the context of simulating field theories on the lattice [8].
#### 3.2.1 Implementation of AD and convergence
The typical implementation of either the HMC or the SMD algorithm involves the numerical integration of the eqs. (3.11). This is performed by using a sequence of steps
\[\mathcal{I}_{\pi,h}:\quad\pi\to\pi-h\frac{\partial S}{\partial x} \tag{3.14}\] \[\mathcal{I}_{x,h}:\quad x\to x+h\pi\,.\]
An example is the well-known _leapfrog_ integrator, which is obtained by applying the series of steps \(\mathcal{I}_{\pi,\delta t/2}\mathcal{I}_{x,\delta t}\mathcal{I}_{\pi,\delta t/2}\). Better precision can be obtained by using higher order schemes (see [9]).
Figure 1: The HMC algorithm makes a new sample proposal \(x(\tau)\) from a previous sample \(x(0)\) of the target distribution \(p_{\theta}(x)\) by numerically integrating the equations of motion of a fictitious Hamiltonian. AD techniques can be applied to determine the dependence of the trajectory on model parameters \(\theta\).
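A generic HMC update consistent with the three steps above might be coded as follows; this is a schematic NumPy sketch of our own (function name, arguments and the simple return convention are ours), not the implementation used for the results in this paper.

```python
import numpy as np

def hmc_step(x, S, grad_S, tau, dt, rng):
    """One HMC update for p(x) ~ exp(-S(x)): leapfrog trajectory of length tau
    with step size dt, followed by an accept/reject step on the energy violation."""
    pi = rng.normal(size=x.shape)              # 1) refresh Gaussian momenta
    H_old = 0.5 * np.dot(pi, pi) + S(x)
    xn, pn = x.copy(), pi.copy()
    n_steps = max(1, int(round(tau / dt)))
    pn -= 0.5 * dt * grad_S(xn)                # 2) leapfrog: half kick ...
    for _ in range(n_steps - 1):
        xn += dt * pn                          # ... drift ...
        pn -= dt * grad_S(xn)                  # ... full kick ...
    xn += dt * pn
    pn -= 0.5 * dt * grad_S(xn)                # ... final half kick
    dH = 0.5 * np.dot(pn, pn) + S(xn) - H_old  # 3) accept with probability min(1, e^{-dH})
    accept = (dH <= 0.0) or (rng.random() < np.exp(-dH))
    return xn if accept else x
```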
The application of AD to solve the eom eq. (3.11) follows basically the same procedure, with the difference that
1. Both coordinates and conjugate momenta variables are promoted to truncated polynomials \[x^{\alpha} \longrightarrow \tilde{x}^{\alpha}=\sum_{i}x_{i}^{\alpha}\epsilon^{i}\,,\] (3.15) \[\pi^{\alpha} \longrightarrow \tilde{\pi}^{\alpha}=\sum_{i}\pi_{i}^{\alpha}\epsilon^{i}\,.\] (3.16) At the same time the model parameters are also promoted using \(\tilde{\theta}_{i}=\theta_{i}+\epsilon_{i}\). Only the lowest order \(\pi_{0}^{\alpha}\) is initially set with Gaussian momenta \[\tilde{\pi}_{0}^{\alpha}(0)\sim e^{-\pi_{0}^{\alpha}\pi_{0}^{\alpha}/2}\,,\] (3.17) while higher orders are initialized to zero.
2. The eom eq. (3.11) are solved consistently (i.e. order by order in \(\epsilon\)). Note that the non trivial eom can be written at each order as \[\dot{\pi}_{n}^{\alpha}=-\frac{\partial^{2}S}{\partial x^{\alpha}\partial x^{ \beta}}x_{n}^{\beta}+\text{lower order terms}\,.\] (3.18) and therefore the eom eq. (3.11) can be solved numerically using the same basic building blocks defined by eq. (3.14).
3. Since now the energy violation \(\Delta H\) is a truncated polynomial, the usual accept/reject step cannot be carried out. This means that one has to extrapolate the HMC results to zero step size \(\delta t\to 0\). In practice it is enough to work at sufficiently small step size such that any possible bias is well below our statistical uncertainties [10].
The generalization of the SMD algorithm follows the same basic rules.
Note that the HMC algorithm, despite being in the class of Metropolis-Hastings algorithms, is a completely deterministic algorithm in the limit that the eom are integrated exactly. This explains the strategy: we are using the tools of AD to determine the Taylor expansion of \(x(\tau)\) with respect to the model parameters \(\theta\).
This approach to perturbative sampling is intimately related to the techniques of Numerical Stochastic Perturbation Theory (NSPT) [11], especially the Hamiltonian versions described in [12, 13]. The crucial difference is that NSPT is usually applied to determine the deviations with respect to the "free" (i.e. Gaussian) approximation, implementing a numerical approach to lattice perturbation theory, whereas in our case we determine the dependence with respect to arbitrary parameters \(\theta\) (possibly more than one!) present in the distribution function \(p_{\theta}(x)\).
In practice the numerical implementation of this procedure is straightforward, and just amounts to numerically solving the eom using the algebraic rules of truncated polynomials (see section 2).
These steps will result in a series of samples \(\{\tilde{x}_{A}\}_{A=1}^{N}\) (which are truncated polynomials, cf. expr.(2.5)). The usual MC evaluation of expectation values
\[\frac{1}{N}\sum_{A}O(\tilde{x}_{A})=\sum_{i}\bar{O}_{i}\epsilon^{i}\,, \tag{3.19}\]
will give a truncated polynomial that contains the dependence of expectation values with respect to the model parameters \(\theta\).
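To make the procedure concrete, here is a self-contained sketch (our own illustration, with all parameter values chosen by us) of the order-\(\epsilon\) version of this algorithm for the Gaussian toy model \(S(x;\sigma)=x^{2}/(2\sigma^{2})\) of Eq. (4.1), discussed in the next section: the chain carries the pair \((x_{0},x_{1})\) across trajectories, only the lowest-order momentum is refreshed, the trajectory length is randomized, and the accept/reject step is skipped as described above.

```python
import numpy as np

# Sketch (not the authors' code): AD-HMC at first order in eps for
# p_sigma(x) ~ exp(-x^2/(2 sigma^2)), expanded around sigma = 1.
rng = np.random.default_rng(1)
sigma, dt, tau_max = 1.0, 0.02, 2.0 * np.pi
x0, x1 = 0.1, 0.0                 # truncated-polynomial field x~ = x0 + x1*eps

def force(x0, x1):
    # -dS/dx expanded order by order in eps (the driven-oscillator eom of section 4)
    return -x0 / sigma**2, -x1 / sigma**2 + 2.0 * x0 / sigma**3

O0 = O1 = 0.0
n_meas = 0
for traj in range(10_000):
    p0, p1 = rng.normal(), 0.0                       # only order 0 momenta are random
    n = max(1, int(rng.uniform(0.0, tau_max) / dt))  # randomized trajectory length
    f0, f1 = force(x0, x1)
    p0 += 0.5 * dt * f0; p1 += 0.5 * dt * f1
    for _ in range(n):                               # leapfrog, order by order
        x0 += dt * p0; x1 += dt * p1
        f0, f1 = force(x0, x1)
        p0 += dt * f0; p1 += dt * f1
    p0 -= 0.5 * dt * f0; p1 -= 0.5 * dt * f1
    if traj > 500:                                   # discard thermalisation
        O0 += x0 * x0; O1 += 2.0 * x0 * x1; n_meas += 1

# Taylor coefficients of E[x^2]; expected ~ sigma^2 and ~ 2*sigma, up to the O(dt^2)
# bias from the missing accept/reject step and the statistical noise.
print(O0 / n_meas, O1 / n_meas)
```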
The convergence of expectation values is guaranteed if the Hessian
\[H^{\alpha\beta}=\frac{\partial^{2}S}{\partial x^{\alpha}\partial x^{\beta}} \tag{3.20}\]
is positive definite (see [10]). This condition is always true in the context of perturbative applications in lattice field theory, since in this case it is equivalent to having a stable vacuum. However, in models defined with compact variables and for the case of expansions around arbitrary backgrounds, the convergence of the process is not always guaranteed. An important example of this case is the simulation of Yang-Mills theories on the lattice, where one would in general expect this process not to converge. On the other hand, note that in applications of Bayesian inference, the convergence condition on the Hessian is guaranteed for unimodal posteriors.
## 4 A comparison between approaches
We have introduced two methods to determine Taylor series (and consequently, derivatives) of expectation values. In the reweighting based method (section 3.1) the samples obtained by the Monte Carlo method are corrected by the reweighting factors, eq. (3.6), to take into account the dependence on the parameters \(\tilde{\theta}\). On the other hand, the Hamiltonian approach (section 3.2) produces samples that automatically carry the dependence on the parameters \(\tilde{\theta}\). It is instructive to see the relation of each method to the reparametrization trick. In order to get some intuition, we can examine a very simple toy model. Imagine that we are interested in the distribution function
\[p_{\sigma}(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{x^{2}}{2\sigma^{2}}} \tag{4.1}\]
and in how expectation values depend on \(\sigma\) around \(\sigma=1\). Of course, this can be trivially solved using a change of variables. If \(\{x_{A}\}_{A=1}^{N}\) are samples for \(\sigma=1\) then
\[y_{A}=\sigma x_{A}\,, \tag{4.2}\]
are samples for any other value of \(\sigma\). In particular one can write the truncated polynomial
\[\tilde{x}_{A}=x_{A}+x_{A}\epsilon\,,\quad(\epsilon=\sigma-1)\,. \tag{4.3}\]
and now evaluating any expectation value using \(\{\tilde{x}_{A}\}\) as samples will produce its corresponding Taylor series. In this sense the change of variables can be seen as a particular transformation of the samples (from \(x_{A}\) to \(\tilde{x}_{A}\)) such that expectation values evaluated with these samples \(\tilde{x}_{A}\) automatically give Taylor series of observables.
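In code this observation is a one-liner; the snippet below (ours, purely illustrative) stores each sample as the pair of coefficients of the truncated polynomial \(\tilde{x}_{A}\) and reads off the Taylor coefficients of \(\mathbb{E}[x^{2}]\).

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)        # samples of p_{sigma = 1}

# x~_A = x_A + x_A * eps, stored as the pair of coefficients (order 0, order 1)
x0, x1 = x, x

# Taylor coefficients of E[x~^2] = E[x^2] * (1 + 2*eps + eps^2)
O0 = np.mean(x0 * x0)               # ~ 1  (= sigma^2 at sigma = 1)
O1 = np.mean(2.0 * x0 * x1)         # ~ 2  (= d/dsigma E[x^2] at sigma = 1)
print(O0, O1)
```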
Now consider the reweighting approach to this simple problem. One would define \(\tilde{\sigma}=1+\epsilon\), and determine the reweighting factors Eq. (3.6). They read
\[\tilde{w}^{\alpha}(\epsilon)=e^{-\frac{(x^{\alpha})^{2}}{2}\left[\frac{1}{(1+ \epsilon)^{2}}-1\right]} \tag{4.4}\]
On the other hand, if one performs the change of variables Eq. (4.3) _before_ applying the reweighting formula, the reweighting factors are given by
\[\tilde{w}^{\alpha}(\epsilon)=e^{-\frac{(x^{\alpha})^{2}}{2}\left[\frac{1}{(1 +\epsilon)^{2}}-\frac{1}{(1+\epsilon)^{2}}\right]-\log(1+\epsilon)}=\frac{1}{ 1+\epsilon}\,. \tag{4.5}\]
Note that these reweighting factors are constant (i.e. independent of \(x\)). They cancel from the computation of any expectation value:
\[\langle O(x)\rangle\approx\frac{\sum_{\alpha}\tilde{w}^{\alpha}O_{\alpha}}{\sum_{ \alpha}\tilde{w}^{\alpha}}\ =\frac{1}{N}\sum_{\alpha}O_{\alpha}\,,\qquad(O_{\alpha}=O(\tilde{x}_{\alpha})). \tag{4.6}\]
One can therefore see the reparametrization trick as a particular application of the general reweighting formula, where the change of variables leads to constant reweighting factors.
We claim that the Hamiltonian approach is just a method to find this change of variables for complicated distributions and to any order. In order to see how this happens, we need to work out the solution of the equations of motion for our toy model Eq. (4.1). They read
\[\ddot{x}_{0} = -\frac{x_{0}}{\sigma^{2}}\,, \tag{4.7}\] \[\ddot{x}_{1} = -\frac{x_{1}}{\sigma^{2}}+2\frac{x_{0}}{\sigma^{3}}\,. \tag{4.8}\]
It is clear that the equation for \(x_{0}(t)\) is just the usual harmonic oscillator, with solution
\[x_{0}(t)=x_{0}(0)\cos\left(\frac{t}{\sigma}\right)+\sigma\pi_{0}(0)\sin\left( \frac{t}{\sigma}\right)\,. \tag{4.9}\]
For the next order we have a driven harmonic oscillator (without damping term). Note, however, that since the frequency of the driving force is the same as the natural frequency of the oscillator (\(\omega=1/\sigma\)), we have a resonance phenomenon: the amplitude of the oscillations at first order will increase with the trajectory length. It is clear that since the HMC algorithm only integrates the eom up to a finite time \(\tau\), and since we take the average over the samples, this phenomenon does not represent any issue for the convergence: any trajectory length will produce correct results in accordance with our expectations. On the other hand, it is also clear that the variance of the observables computed with these samples will depend significantly on the trajectory length: of the many solutions found by the Hamiltonian approach (corresponding to different trajectory lengths), some will produce results with smaller variances.
Figure 2 shows that for a certain trajectory length, the Hamiltonian approach described here just "finds" the transformation given by Eq. (4.3): the zeroth and first order are very similar. Any other trajectory length will still produce the correct expectation values, but with a significantly larger variance. In the case of the SMD algorithm we observe a similar phenomenon, but in this case it is the parameter \(\gamma\) that has to be tuned.
This little example teaches two important lessons: 1) the Hamiltonian approach can be considered just a change of variables from the original samples \(x^{\alpha}\to\tilde{y}^{\alpha}(x^{\alpha})\) such that, once applied to the reweighting formula eq. (3.6), it gives _constant_ reweighting factors, and 2) the variance obtained for the derivatives depends on the particular change of variables. In section 6 we will comment on the differences in variance between the two methods in detail.
## 5 Some general applications
In this section we explore a few applications of the techniques introduced in section 3. First we consider the application of the reweighting technique to an optimization problem. Second, we consider the application in Bayesian inference to obtain the dependence of predictions on the parameters that characterize the prior distribution.
### Applications in optimization
As an example of an optimization problem we will consider the probability density function
\[p_{\theta}(x)=\frac{1}{\mathcal{Z}}\exp\left\{-S(x;\theta)\right\}\,,\qquad\left( \mathcal{Z}=\int\mathrm{d}x\,e^{-S(x:\theta)}\right)\,. \tag{5.1}\]
with
\[S(x;\theta)=\frac{1}{\theta_{1}^{2}+1}\left(x_{1}^{2}+x_{1}^{4}\right)+\frac{1} {2}x_{2}^{2}+\theta_{2}x_{1}x_{2}\,. \tag{5.2}\]
The shape of \(S(x;\theta)\) is inspired by the action of a quantum field theory in zero dimensions, where \(x_{1}\) and \(x_{2}\) are two fields with coupling \(\theta_{2}\), while \(\theta_{1}\) is related to the mass of the field \(x_{1}\). Expectation values with respect to \(p_{\theta}(x)\) are functions of the parameters \(\theta\).
As an example we consider the problem of minimizing \(\mathbb{E}_{\theta}[x_{1}^{2}+x_{2}^{2}]\) (i.e. finding the values for \(\theta\) that make \(\mathbb{E}_{\theta}[x_{1}^{2}+x_{2}^{2}]\) minimal). We have implemented two flavours of Stochastic Gradient Descent (SGD): the first (basic) one with a constant learning rate, and the second one being the well-known ADAM algorithm [14]. It is worth noting at this point that, as a general concept, SGD implies a stochastic (but unbiased) evaluation of the gradients of the objective function at every iteration. While in typical applications in the ML community, where the task is to fit some dataset, this is done by evaluating the gradients at different random batches of the data, the present example is different in that no data is involved. In this case, every iteration of the SGD evaluates the gradients on the different Monte Carlo samples used to approximate the objective function \(\mathbb{E}_{\theta}[x_{1}^{2}+x_{2}^{2}]\).
Here we consider a simple implementation of the Metropolis Hastings algorithm in order to first produce the samples \(\{x^{\alpha}\}_{\alpha=1}^{N}\sim p_{\theta}(x)\). Second, we determine the reweighted expectation value truncated
Figure 3: Estimates of the objective function \(\mathbb{E}_{\theta}[x_{1}^{2}+x_{2}^{2}]\) as a function of the iteration count both for the SGD and ADAM algorithms. Both parameters \(\theta_{1},\theta_{2}\) tend to their optimal value (zero).
at first order
\[\frac{\sum w(x^{\alpha};\tilde{\theta})\left[(x_{1}^{\alpha})^{2}+(x_{2}^{\alpha} )^{2}\right]}{\sum w(x^{\alpha};\tilde{\theta})}\approx\bar{O}+\bar{O}_{i} \epsilon_{i}\,,\qquad\left(w(x^{\alpha};\theta)=e^{S(x^{\alpha};\theta)-S(x^{ \alpha};\tilde{\theta})}\right)\,, \tag{5.3}\]
where \(\tilde{\theta}_{i}=\theta_{i}+\epsilon_{i}\). This quantity gives a stochastic estimate of the function value
\[\bar{O}=\frac{1}{N}\sum_{\alpha=1}^{N}\left([x_{1}^{\alpha}]^{2}+[x_{2}^{\alpha}]^{2}\right)\,, \tag{5.4}\]
and its derivatives
\[\bar{O}_{i}\approx\frac{\partial\mathbb{E}_{\theta}[x_{1}^{2}+x_{2}^{2}]}{ \partial\theta_{i}}\,. \tag{5.5}\]
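The following condensed sketch (our own reimplementation, using the equivalent first-order covariance form of the reweighted estimate rather than a full truncated-polynomial library, and with sampler settings chosen arbitrarily) shows the structure of one such optimization loop.

```python
import numpy as np

rng = np.random.default_rng(3)

def S(x, th):                                 # action of Eq. (5.2)
    return (x[0]**2 + x[0]**4) / (th[0]**2 + 1.0) + 0.5 * x[1]**2 + th[1] * x[0] * x[1]

def dS_dtheta(x, th):                         # explicit theta-derivatives of the action
    return np.array([-2.0 * th[0] * (x[0]**2 + x[0]**4) / (th[0]**2 + 1.0)**2,
                     x[0] * x[1]])

def sample(th, n, step=0.5, thin=5):          # plain Metropolis random walk
    x, out = np.zeros(2), []
    for i in range(thin * n):
        prop = x + step * rng.normal(size=2)
        if np.log(rng.random()) < S(x, th) - S(prop, th):
            x = prop
        if i % thin == 0:
            out.append(x.copy())
    return np.array(out)

theta, lr = np.array([1.0, 0.8]), 0.05
for it in range(200):
    xs = sample(theta, 1000)
    f = xs[:, 0]**2 + xs[:, 1]**2             # objective E_theta[x1^2 + x2^2]
    g = np.array([dS_dtheta(x, theta) for x in xs])
    # first-order reweighting: dE[f]/dtheta_i = -Cov(f, dS/dtheta_i)
    grad = -(np.mean(f[:, None] * g, axis=0) - np.mean(f) * np.mean(g, axis=0))
    theta -= lr * grad                        # plain (constant learning rate) SGD step
print(theta)                                  # should drift towards the optimum (0, 0)
```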
Figure 3 shows the result of the optimization process. As the iteration count increases the function is driven to its minima, while the values of the parameters approach the optimal values \(\theta_{1}^{\rm opt}=\theta_{2}^{\rm opt}=0\).
It is worth mentioning that in this particular example only 1000 samples were used at each step to estimate the loss function and its derivatives. If one decides to use a larger number of samples (say \(10^{5}\)), the value of the parameter \(\theta_{2}\) is determined with much better precision. Note that the direction associated with \(\theta_{2}\) is much flatter, and therefore its value affects the value of the loss function much less.
### An application in Bayesian inference
The purpose of statistical inference is to determine properties of the underlying statistical distribution of a dataset \(D=\{x_{i},y_{i}\}_{i=1}^{N}\). In many cases, the independent variables \(x_{i}\) are fixed, and all the stochasticity is captured by the dependent variables \(y_{i}\). As such, the data is assumed to be sampled from a certain model, specified by the _likelihood_, \(p(y|x,\phi)\), which depends on a set of parameters \(\phi\). The Bayesian paradigm attributes a level of confidence to the model by introducing the _prior_\(p_{\theta}(\phi)\), _i.e._ an a priori distribution of the model's parameters, where in this context \(\theta\) plays the role of the hyper-parameters specifying the prior. Following Bayes' rule, the _posterior_ distribution \(p_{\theta}(\phi|D)\) is computed as3:
Footnote 3: The normalization factor, \(p_{\theta}(D)\), called the evidence, or marginal likelihood, is \(\phi\)-independent and represents the probability distribution of the observed data, given the model.
\[p_{\theta}(\phi|D)\propto p(D|\phi)p_{\theta}(\phi). \tag{5.6}\]
The likelihood of the whole dataset, \(p(D|\phi)\), is computed assuming independent data points following a Gaussian distribution:
\[p(D|\phi)=\prod_{i=1}^{N}\mathcal{N}(y_{i}|f(x_{i};\phi),\sigma_{i})\,, \tag{5.7}\]
where \(\sigma_{i}\) are the uncertainties of the corresponding observations \(y_{i}\) (and assumed here to be given), while the mean of the Gaussian is given by \(f(x_{i};\phi)\). From a practical standpoint, in addition to the normalization being, in general, unknown, the usual complexity of the posterior distribution makes this possibly highly dimensional integral difficult to compute. The use of Monte Carlo techniques, in particular of the HMC, is typical in this context. We focus below on two types of predictions: 1) The variance of the model parameters \(\delta\phi_{j}^{2}=\mathbb{E}_{p_{\theta}}[\phi_{j}^{2}]-(\mathbb{E}_{p_{ \theta}}[\phi_{j}])^{2}\), where \(j=1,...,d\), being \(d\) the dimension of \(\phi\), and 2) the variance of the output mean \(\delta f_{t}^{2}=\mathbb{E}_{p_{\theta}}[f_{t}^{2}]-(\mathbb{E}_{p_{\theta}}[ f_{t}])^{2}\), where \(f_{t}\) is a shorthand notation for the output mean \(f(x_{t};\phi)\), evaluated at a new "test" datapoint \(x_{t}\)4.
We are interested in studying the dependence of these quantities on the choice of hyperparameters \(\theta\) that characterize the prior distributions. In particular we will consider the case of Gaussian priors, and determine the dependence of our predictions with the width of this Gaussian.
#### 5.2.1 Model and data set
We generate a synthetic dataset (cf. Figure 4) by defining the points on an irregular grid in the range \(x_{i}\in[-1.0;1.0]\), such that
\[y_{i}=f(x_{i};\phi_{\rm true})+\sigma_{i}\epsilon\;, \tag{5.8}\]
where the mean is a 3rd degree polynomial, \(f(x;\phi)=\phi_{0}+\phi_{1}x+\phi_{2}x^{2}+\phi_{3}x^{3}\), with \(\phi_{\rm true}=(1,1,1,1)\); \(\epsilon\sim\mathcal{N}(0,1)\) is sampled from a standard Gaussian, and we consider a heteroscedastic dataset by defining a noise \(\sigma_{i}\) dependent on \(x_{i}\). We adopt the same model in order to make inference on the parameters \(\phi\).
The prior distribution is also chosen as a Gaussian, \(\phi\sim\mathcal{N}(\mu_{p},\sigma_{p})\). For simplicity we choose the priors centered on the "correct" values of the model (i.e. \(\mu_{p}=\phi_{\rm true}\)), while we keep the width \(\sigma_{p}\) as a hyperparameter and study the dependence on it5.
Footnote 5: This is a simplified setup for the sake of illustration, given the methodological scope of this work. Nonetheless, it is straightforward to apply the method to the situation where we are interested in studying the dependence on both parameters \(\mu_{p}\) and \(\sigma_{p}\) simultaneously, or in general on the joint set of hyperparameters of the model.
For any choice of the prior width \(\sigma_{p}\) we can obtain a prediction by generating \(N\) samples \(\{\phi^{(\alpha)}\}_{\alpha=1}^{N}\) according to the distribution \(p_{\theta}(\phi|D)\) computed from eq. (5.6).
#### 5.2.2 Reweighting approach
The reweighting method takes \(N\) samples \(\{\phi_{i}^{(\alpha)}\}_{\alpha=1}^{N}\) obtained at \(\sigma_{p}=\sigma_{p}^{*}\) and computes the reweighted average using \(\tilde{\sigma}_{p}=\sigma_{p}^{*}+\epsilon\) in eq. (3.3).
For each sample \(\phi^{(\alpha)}\), the reweighting factor becomes a polynomial expansion in \((\sigma-\sigma_{p}^{*})\)
\[\tilde{w}_{\alpha}(\epsilon)=\frac{p_{\mu,\sigma_{p}^{*}+\epsilon}(\phi_{ \alpha}|D)}{p_{\mu,\sigma_{p}^{*}}(\phi_{\alpha}|D)}. \tag{5.9}\]
Notice that the zeroth order of eq. (5.9) is one, such that the zeroth order result corresponds to the usual Monte Carlo point estimate for \(\delta\phi_{0}(\sigma_{p}^{*})\).
In order to generate these samples, we used the standard HMC algorithm. The Hamiltonian and the corresponding equations of motion are
\[H_{\theta}(\phi,\pi)=\frac{\pi^{2}}{2}-\log(p_{\theta}(\phi|D)), \tag{5.10}\] \[\dot{\phi}_{j}=\pi_{j},\] (5.11) \[\dot{\pi}_{j}=-\frac{1}{\sigma_{p}^{2}}(\phi_{j}-(\mu_{p})_{j})+ \sum_{i=0}^{N}\frac{1}{\sigma_{i}^{2}}\left(y_{i}-f(x_{i},\phi)\right)(x_{i}) ^{j}, \tag{5.12}\]
where \(\pi=\{\pi_{0},\pi_{1},\pi_{2},\pi_{3}\}\) are the momenta conjugate to \(\phi\). Note that all \(\phi\)-independent terms can be dropped from the equations of motion; in particular the normalization of \(p_{\theta}(\phi|D)\) is not needed. The eom were solved numerically using a fourth-order symplectic integrator [9], providing a high acceptance rate in the Metropolis-Hastings step even with a coarse integration.
The chosen integration step-size was \(\varepsilon=0.001\), while the trajectory length was uniformly sampled in the interval \([0,100]\times\varepsilon\)6.
Footnote 6: Due to the quadratic form of the Hamiltonian, the phase space of this system is cyclic. The algorithm is ergodic only if the trajectory length is randomized [7].
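As an illustration of how the drift term of Eq. (5.12) can be coded for the cubic model, a possible vectorized version is sketched below; the function name and the use of a Vandermonde matrix are our choices, not taken from the paper.

```python
import numpy as np

def pi_dot(phi, x, y, sig, mu_p, sigma_p):
    """Right-hand side of Eq. (5.12): the force driving the momenta for the
    cubic-polynomial posterior. phi: (4,) coefficients; x, y, sig: data arrays;
    mu_p, sigma_p: mean and width of the Gaussian prior."""
    V = np.vander(x, 4, increasing=True)        # V[i, j] = x_i**j, j = 0..3
    resid = (y - V @ phi) / sig**2              # (y_i - f(x_i; phi)) / sigma_i^2
    return -(phi - mu_p) / sigma_p**2 + V.T @ resid
```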
#### 5.2.3 Hamiltonian perturbative expansion
Following the procedure in section 3.2, the Monte Carlo samples \(\{(\tilde{\phi}_{j})^{\alpha}\}_{\alpha=1}^{N},\ j=0,1,2,3\) were obtained with the modified HMC algorithm for some values of \(\sigma_{p}^{*}\). We used the same parameters for the HMC as described in the previous section. In particular our acceptances were so close to \(100\%\) that any bias due to the missing accept/reject step is negligible. We checked this hypothesis by further performing another simulation with a coarser value of the integration step and finding completely compatible results.
#### 5.2.4 Results
Here we compare the predictions for the average model parameters \(\phi\) and their dependence on the prior width \(\sigma_{p}\). In particular we focus on the variance of the model parameters \(\delta\phi_{j}^{2}\), since these are the quantities most sensitive to the prior width (i.e. very thin priors result in a small variance for the model parameters). We have fixed \(\sigma_{p}^{*}=0.3\), but similar conclusions are obtained for other values.
The results of the Monte Carlo average for \(\delta\tilde{\phi}_{i}^{2}\) and its derivatives with respect to \(\sigma\) are shown in table 1. Results labeled "RW" use the reweighting method, while results labeled "HAD" use the Hamiltonian approach.
It is obvious that results using the Hamiltonian approach are more precise: the uncertainties in the derivatives, \(\delta\phi_{i,n}^{2},n\neq 0\), are smaller for the Hamiltonian approach, despite the statistics being the same.
The difference is larger for higher order derivatives: the approach based on reweighting struggles to get a signal for the fourth and fifth derivatives, while the Hamiltonian approach is able to obtain even the fifth derivative with a few percent precision. This fits our expectations (see section 4).
On the other hand, for our second quantity of analysis \(\delta f_{t}^{2}\) (i.e. the variance of the prediction mean), Figure 5 shows the results of the dependence on \(\sigma_{p}\), where we have fixed \(x_{t}=0.5\).
The Hamiltonian approach visibly gives results with a reduced variance, in line with the results presented in table 1.
\begin{table}
\begin{tabular}{c c c c c c c c} & \multicolumn{6}{c}{\(n\)} \\ \cline{2-9} & & \(0\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) \\ \hline \multirow{2}{*}{\(\delta\phi_{0,n}^{2}\)} & RW & 0.00014705(86) & 0.0001384(63) & -0.000248(29) & 0.000367(62) & -0.00071(51) & -0.0003(12) \\ & HAD & 0.00014705(86) & 0.0001365(34) & -0.0002850(60) & 0.000311(20) & 0.000178(77) & -0.00115(26) \\ \hline \multirow{2}{*}{\(\delta\phi_{1,n}^{2}\)} & RW & 0.01099(15) & 0.0285(12) & -0.0450(58) & 0.032(13) & 0.04(10) & -0.61(25) \\ & HAD & 0.01099(15) & 0.02787(69) & -0.0518(11) & 0.0248(38) & 0.189(16) & -0.700(46) \\ \hline \multirow{2}{*}{\(\delta\phi_{2,n}^{2}\)} & RW & 0.008938(74) & 0.00830(28) & -0.0283(10) & 0.0850(39) & -0.234(18) & 0.603(78) \\ & HAD & 0.008938(74) & 0.00817(15) & -0.02789(42) & 0.0849(13) & -0.2505(44) & 0.726(15) \\ \hline \multirow{2}{*}{\(\delta\phi_{3,n}^{2}\)} & RW & 0.03617(59) & 0.1205(51) & -0.182(24) & 0.050(61) & 0.63(42) & -4.0(12) \\ & HAD & 0.03617(59) & 0.1177(30) & -0.2052(42) & 0.020(16) & 1.132(66) & -4.02(19) \\ \hline \end{tabular}
\end{table}
Table 1: Results for the expansion coefficients of the variance, \(\delta\phi_{j,n}^{2}\) for \(\sigma_{p}^{*}=0.3\) from the reweighting and hamiltonian expansion.
## 6 A case study in lattice field theory
In this section we explore in detail some applications in lattice field theory. We will use as a model the \(\lambda\)-\(\phi^{4}\) theory in 4 space-time dimensions. In the continuum the Euclidean action of this theory is given by
\[S(\phi;m,\lambda)=\int\mathrm{d}x^{4}\left\{\frac{1}{2}(\partial_{\mu}\phi)^{2}+ \frac{m^{2}}{2}\phi^{2}+\lambda\phi^{4}\right\}\,. \tag{6.1}\]
The discretized version of this action is
\[S_{\mathrm{latt}}(\hat{\phi};\hat{m},\lambda)=\sum_{x}\left\{\frac{1}{2}\sum_{ \mu}[\hat{\phi}(x+\hat{\mu})-\hat{\phi}(x)]^{2}+\frac{\hat{m}^{2}}{2}\hat{\phi }^{2}(x)+\lambda\hat{\phi}^{4}(x)\right\} \tag{6.2}\]
where dimensionful quantities have been scaled with appropriate powers of the lattice spacing \(a\) in order to render all quantities dimensionless
\[\hat{\phi} = a\phi\,, \tag{6.3}\] \[\hat{m} = am\,. \tag{6.4}\]
In field theory one is interested in correlation functions, given as expectation values over the Euclidean partition function
\[\mathcal{Z}=\int\prod_{x}\mathrm{d}\hat{\phi}(x)e^{-S_{\mathrm{latt}}(\hat{ \phi};\hat{m},\lambda)}\,. \tag{6.5}\]
These expectation values depend on the parameters \(\hat{m},\lambda\). The methods described in sections 3.1 and 3.2 can be applied to determine the dependence of correlation functions on these parameters. In particular we note that for non-negative \(\hat{m}^{2}\) the potential given by the action of eq. (6.2) is convex, guaranteeing the convergence of the Hamiltonian perturbative expansion (see section 3.2.1).
We have performed simulations on a \(L^{4}\) lattice with \(L/a=32,48\) for several values of the parameter \(\lambda\) and \(\hat{m}^{2}=0.05\). As example observables, we will consider some simple local quantities: \(\langle\hat{\phi}^{2}(x)\rangle,\langle\hat{\phi}^{4}(x)\rangle\), as well as the action density
\[\langle\hat{s}(x)\rangle=\frac{1}{2}\langle[\hat{\phi}(x+\hat{\mu})-\hat{\phi }(x)]^{2}\rangle+\frac{\hat{m}^{2}}{2}\langle\hat{\phi}^{2}(x)\rangle+\lambda \langle\hat{\phi}^{4}(x)\rangle\,. \tag{6.6}\]
Since we perform our simulations with periodic boundary conditions, invariance under translations ensures that these local expectation values are independent of the point \(x\) at which they are measured. In order to get better precision we perform volume averages for the estimations (_e.g._\(\langle\hat{s}\rangle=\left(\frac{a}{L}\right)^{4}\sum_{x}\langle\hat{s}(x)\rangle\)).
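For reference, the volume-averaged observables of Eqs. (6.2) and (6.6) can be evaluated for a given field configuration with a few lines of NumPy; the sketch below is ours (not the production code used for the simulations) and assumes a field array of shape \((L,L,L,L)\) with periodic boundary conditions.

```python
import numpy as np

def local_observables(phi, m2, lam):
    """Volume averages of phi^2, phi^4 and the action density s(x) of Eq. (6.6)
    for a scalar field phi on a periodic L^4 lattice (lattice units, a = 1)."""
    kin = 0.0
    for mu in range(4):
        dphi = np.roll(phi, -1, axis=mu) - phi      # phi(x + mu) - phi(x)
        kin += 0.5 * np.mean(dphi**2)
    phi2 = np.mean(phi**2)
    phi4 = np.mean(phi**4)
    s = kin + 0.5 * m2 * phi2 + lam * phi4
    return phi2, phi4, s
```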
Table 2 shows the results of both the Hamiltonian and the reweighting approach applied to the determination of the derivatives \(\partial/\partial m^{2}\), \(\partial/\partial\lambda\), \(\partial^{2}/\partial m^{2}\partial\lambda\) of the observables \(\langle\phi^{2}\rangle,\langle\phi^{4}\rangle,\langle\hat{s}\rangle\) (see eq. (6.6)).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & & & & & \(\lambda\) & & & \\ \cline{3-8} & & 0.0 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & \\ \hline \multirow{4}{*}{\(\langle\phi^{2}\rangle\)} & \multirow{2}{*}{\(\partial_{\dot{m}^{2}}\)} & RW & -0.0428(20) & -0.0328(14) & -0.0270(13) & -0.0241(12) & -0.0220(11) & -0.01974(91) \\ & & HAD & -0.042526(41) & -0.030880(14) & -0.026273(10) & -0.0233672(82) & -0.0212721(72) & -0.0196387(60) \\ \cline{2-8} & \multirow{2}{*}{\(\partial_{\lambda}\)} & RW & -0.0779(22) & -0.05227(94) & -0.04370(89) & -0.03534(61) & -0.03169(50) & -0.02754(49) \\ & & HAD & -0.077816(79) & -0.052499(24) & -0.042218(19) & -0.035830(14) & -0.031323(11) & -0.0278909(93) \\ \cline{2-8} & \multirow{2}{*}{\(\partial_{\dot{m}^{2},\lambda}^{2}\)} & RW & 0.434(3) & 0.03 (16) & 0.16(14) & -0.10(11) & 0.116(77) & -0.024(69) \\ & & HAD & 0.27332(20) & 0.061593(99) & 0.035082(69) & 0.024240(42) & 0.018263(30) & 0.014553(31) \\ \hline \multirow{4}{*}{\(\langle\phi^{4}\rangle\)} & \multirow{2}{*}{\(\partial_{\dot{m}^{2}}\)} & RW & -0.0391(20) & -0.0272(12) & -0.0223(11) & -0.01809(90) & -0.01645(86) & -0.01398(70) \\ & & HAD & -0.038919(39) & -0.026247(13) & -0.0211084(97) & -0.0179118(73) & -0.0156615(62) & -0.0139464(46) \\ \cline{2-8} & \multirow{2}{*}{\(\partial_{\lambda}\)} & RW & -0.0844(24) & -0.0539(11) & -0.04281(92) & -0.03340(54) & -0.02850(43) & -0.02428(41) \\ & & HAD & -0.084330(78) & -0.054229(26) & -0.041514(19) & -0.033715(14) & -0.028357(11) & -0.0243679(73) \\ \cline{2-8} & \multirow{2}{*}{\(\partial_{\dot{m}^{2},\lambda}^{2}\)} & RW & 0.41(44) & 0.01(16) & 0.11(14) & -0.083(90) & 0.089(61) & -0.000(56) \\ & & HAD & 0.2848(21) & 0.068858(94) & 0.038917(70) & 0.026391(38) & 0.019393(29) & 0.015080(25) \\ \hline \multirow{4}{*}{\(\langle s\rangle\)} & \multirow{2}{*}{\(\partial_{\dot{m}^{2}}\)} & RW & -0.0025(42) & -0.0006(34) & 0.0027(35) & 0.0028(36) & 0.0057(32) & 0.0063(30) \\ & & HAD & -0.000003(22) & 0.002623(16) & 0.004218(20) & 0.005397(14) & 0.006265(16) & 0.006989(15) \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{\(\partial_{\dot{m}^{2},\lambda}\)} & RW & -0.0686(48) & -0.0567(26) & -0.0538(25) & -0.0447(23) & -0.0400(17) & -0.0343(17) \\ & & HAD & -0.069774(49) & -0.057738(34) & -0.050128(40) & -0.044530(27) & -0.040250(27) & -0.036721(26) \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{\(\partial_{\dot{m}^{2},\lambda}^{2}\)} & RW & 1.1(1.0) & -0.16(39) & 0.36(43) & -0.43(32) & 0.16(26) & -0.15(21) \\ \cline{1-1} & & HAD & 0.038864(96) & 0.019197(66) & 0.013405(69) & 0.010126(50) & 0.007860(47) & 0.006407(48) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Derivatives with respect to \(\hat{m}^{2}\) (\(\partial_{\hat{m}^{2}}\)), \(\lambda\) (\(\partial_{\lambda}\)) and the cross derivative (\(\partial_{\hat{m}^{2},\lambda}^{2}\)) of different observables. Note that the action density \(s\) has an explicit dependence on \(\hat{m}^{2},\lambda\) (see Eq. (6.6)). All simulations are performed on a \(L^{4}\) lattice with \(L/a=32\) and \(\hat{m}^{2}=0.05\).
It is apparent that results obtained with the Hamiltonian approach are much more precise (both results use exactly the same statistics), usually about 100 times more precise for the first derivatives. This difference in precision is even more clear for the higher orders: the signal for the cross derivative \(\partial_{\hat{m}^{2},\lambda}^{2}\) is completely lost in the reweighting approach, whereas the Hamiltonian approach determines its value with a precision better than 1%. This can be understood from a general point of view by noting that the reweighting approach requires evaluating the so-called _disconnected_ contributions. For example, the leading order derivative \(\partial_{\hat{m}^{2}}\langle\phi^{2}\rangle\) as determined by the reweighting approach is given by
\[\partial_{\hat{m}^{2}}\langle\hat{\phi}^{2}\rangle=\langle\hat{\phi}^{2}( \partial_{\hat{m}^{2}}S_{\text{latt}})\rangle-\langle\hat{\phi}^{2}\rangle \langle\partial_{\hat{m}^{2}}S_{\text{latt}}\rangle \tag{6.7}\]
These disconnected contributions are known to suffer from a large variance. As explained in section 4 the Hamiltonian approach completely avoids estimating such disconnected contributions. The Hamiltonian approach implements an exact version of the reparametrization trick, where the field variables \(\tilde{\phi}(x)\) (and their dependence on the relevant parameters) are determined to any order, so that all terms in the Taylor series are determined _as connected contributions_.
We conclude that it is the absence of disconnected terms in the Hamiltonian approach that lies at the heart of the difference in variance between the two approaches.
## 7 Conclusions
The tools of automatic differentiation represent a cornerstone in modern optimization algorithms and many machine learning applications. The extension of these techniques to functions evaluated using Monte Carlo processes is non-trivial. In these cases the underlying probability distribution depends on some parameters, and Monte Carlo techniques are used to draw samples for specific values of the parameters. The dependence of the samples on the parameter values is difficult to determine. Nonetheless, there are many applications for these techniques. In this work we have considered several of them, from the optimization of expectation values to the study of the dependence of Bayesian predictions on prior parameters and applications in lattice field theory.
We have presented two different approaches to the determination of Taylor series of quantities estimated via Monte Carlo sampling. The first approach is based on reweighting and can be considered a generalization of the score function estimator, valid for derivatives of arbitrary order and for unnormalized probability distribution functions. The second approach is based on Hamiltonian sampling methods (HMC being the most popular option), and produces samples that carry the information of the dependence on the action parameters. The convergence of the stochastic process in this last approach is not always guaranteed, but we have provided sufficient conditions for the convergence.
We have shown some applications of these methods. First, in the context of optimization, we have applied stochastic gradient descent to find the optimal parameters of some expectation value (see section 5.1). Second, in Bayesian inference we have shown how these methods can be used to estimate the dependence of Bayesian predictions on the "hyperparameters" that describe the prior distribution (cf. section 5.2).
Finally, in the context of Lattice Field Theory, we have studied in detail the case of \(\lambda-\phi^{4}\) in four space-time dimensions. The dependence of observables with respect to the parameters of the action (the bare mass in lattice units \(\hat{m}\) and the bare coupling \(\lambda\)) can be accurately determined using these techniques.
A detailed comparison of both methods shows that results obtained with the Hamiltonian approach are much more precise. We have argued that the Hamiltonian approach can be seen as a change of variables in the reweighting formula for which the reweighting factors are constant: in the Hamiltonian approach all dependence with respect to the parameters is present in the samples. The absence of disconnected
contributions (in the lattice jargon) makes Taylor series computed with the Hamiltonian approach much more precise. For example, our study of \(\lambda\)-\(\phi^{4}\) shows that for the same statistics one gets results 100 times more precise.
The Hamiltonian approach has its own drawbacks. On one hand the convergence of the stochastic process is not guaranteed. In particular for the case of Lattice QCD, which is formulated in terms of compact variables, the convergence cannot be guaranteed. On the other hand, the method cannot be applied to samples that have already been generated, unlike the reweighting method.
Nevertheless the investigations of this work open the door to an interesting possibility: that one can find changes of variables that eliminate (or significantly reduce) the disconnected contributions of the reweighting approach. Machine Learning techniques, in particular the tools related to normalizing flows, can potentially provide a significant gain in the computation of derivatives of expectation values.
## Acknowledgments
The authors are grateful to A. Patella for the many discussions on the early stages of the work presented here, as well as A. Dimitriou, D. Hernandez-Lobato and S. Rodriguez-Santana. AR and GT acknowledge financial support from the Generalitat Valenciana (CIDEGENT/2019/040). Similarly, BZ acknowledges the support from CIDEGENT/2020/055. The authors gratefully acknowledge as well the support from the Ministerio de Ciencia e Innovacion (PID2020-113644GB-I00) and computer resources at Artemisa, funded by the European Union ERDF and Comunitat Valenciana as well as the technical support provided by the Instituto de Fisica Corpuscular, IFIC (CSIC-UV). The authors acknowledge the financial support from the MCIN with funding from the European Union NextGenerationEU (PRTR-C17.I01) and Generalitat Valenciana. Project "ARTEMISA", ref. ASFAE/2022/024.
|
2304.03734
|
Mirzakhani's frequencies of simple closed geodesics on hyperbolic
surfaces in large genus and with many cusps
|
We present a proof of a conjecture proposed by V. Delecroix, E. Goujard, P.
Zograf, and A. Zorich, which describes the large genus asymptotic behaviours of
the ratio of frequencies of separating over nonseparating simple closed
geodesics on a closed hyperbolic surface of genus $g$ with $n$ cusps. We
explicitly give the function $f(\frac{n}{g})$ in the conjecture. The moderate
behaviour of the frequencies with respect to the growth rate of the number of
cusps compared to that of the genus drastically contrasts with the behaviour of
other geometric quantities and exhibits the topological nature of the
frequencies.
|
Irene Ren
|
2023-04-07T17:03:23Z
|
http://arxiv.org/abs/2304.03734v1
|
Mirzakhani's frequencies of simple closed geodesics on hyperbolic surfaces in large genus and with many cusps
###### Abstract.
We present a proof of a conjecture proposed by V. Delecroix, E. Goujard, P. Zograf, and A. Zorich, which describes the large genus asymptotic behaviours of the ratio of frequencies of separating over nonseparating simple closed geodesics on a closed hyperbolic surface of genus \(g\) with \(n\) cusps. We explicitly give the function \(f(\frac{n}{g})\) in the conjecture. The moderate behaviour of the frequencies with respect to the growth rate of the number of cusps compared to that of the genus drastically contrasts with the behaviour of other geometric quantities and exhibits the topological nature of the frequencies.
###### Contents
* 1 Introduction
* 2 Statement of the Conjecture
* 2.1 The conjecture
* 2.2 Asymptotic form
* 3 Main Theorem and Its Proof
* 3.1 The main theorem
* 3.2 Large \(g,n\) estimate of the ratio
## 1. Introduction
Let \(X\) be a hyperbolic surface of genus \(g\) with \(n\) cusps. A closed geodesic is called simple if it does not have self-intersections. A simple closed geodesic is called separating if it splits the hyperbolic surface \(X\) into two parts, and is called nonseparating if it does not. We denote the frequencies of separating and nonseparating geodesics \(c_{g,n,sep}\) and \(c_{g,n,nonsep}\) respectively. In 2008, in her paper [14], M. Mirzakhani explored the frequencies of different types of simple closed geodesics on hyperbolic surfaces of genus \(g\) with \(n\) cusps, and their relationship with the Weil-Petersson volume of moduli spaces of corresponding bordered Riemann surfaces. She proved that the ratio of frequencies, \(\frac{c_{g,n,sep}}{c_{g,n,nonsep}}\), is a topological quantity, which is one and the same for all hyperbolic surfaces of genus \(g\) with \(n\) cusps, no matter the geometric structure of the surface.
In [10], V. Delecroix, E. Goujard, P. Zograf and A. Zorich related the count of asymptotic frequencies of simple closed geodesics on a hyperbolic surface with that
of square-tiled surfaces through Witten-Kontsevich correlators. In a more recent paper [1], the same authors introduced a correspondence between square-tiled surfaces and meanders. In the same paper they discussed the frequencies of separating versus nonseparating simple closed geodesics on hyperbolic surfaces of a large genus \(g\) with \(n\) cusps.
In this paper we are dealing with a conjecture from [1], which describes the large \(g\) asymptotic behaviour of the ratio of \(c_{g,n,sep}\) and \(c_{g,n,nonsep}\) on hyperbolic surfaces. This conjecture indicates the topological nature of the asymptotic frequencies of simple closed geodesics on hyperbolic surfaces of a large genus \(g\) with \(n\) cusps.
### Acknowledgement
I thank Anton Zorich for introducing this problem to me as well as reviewing the scripts. I thank Simon Barazer and Anton Zorich for useful discussions.
## 2. Statement of the Conjecture
### The conjecture
In [1], the authors proposed
**Conjecture 2.1**.: _(Conjecture 2.16 in [1]) The ratio of frequencies of separating over nonseparating simple closed geodesics on a closed hyperbolic surface of genus g with n cusps admits the following uniform asymptotics:_
\[\frac{c_{g,n,sep}}{c_{g,n,nonsep}}=\sqrt{\frac{2}{3\pi g}}\cdot\frac{1}{4^{g} }\cdot f\left(\frac{n}{g}\right)\cdot\left(1+\varepsilon(g,n)\right), \tag{1}\]
_where the function \(f:[0,\infty)\mapsto\mathbb{R}\) is continuous and increases monotonously from \(f(0)=1\) to \(f(\infty)=\sqrt{2}\) and the error term \(\varepsilon(g,n)\) tends to \(0\) as \(g\to\infty\) uniformly in \(n\)._
M. Mirzakhani has proved in [14] that the ratio of frequencies of a given hyperbolic surface depends only on the topological properties, i.e. the genus \(g\) and the number of cusps \(n\). An explicit example is given by Bers' hairy torus, which is a torus with \(n^{2}\) cusps. We can take a very symmetric hairy torus such as in [11] section 5.3, or take a hairy torus of random shape with the same number of cusps, but the ratio \(\frac{c_{g,n,sep}}{c_{g,n,nonsep}}\) would always be \(\frac{1}{6}\).
The large \(g\) asymptotic of \(\frac{c_{g,n,sep}}{c_{g,n,nonsep}}\) when \(1\ll g\ll n\) (Remark 2.14 in [1]) and \(1\ll n\ll g\) (Theorem 2.15 in [1]) are already proved in [1], which reveals that the function \(f\) has \(f(0)=1\) and \(f(\infty)=\sqrt{2}\).
The above conjecture clarifies that the ratio has a very controlled behaviour under the variation of \(\frac{n}{g}\) when \(g\) is large, in contrast to geometric quantities such as Witten-Kontsevich correlators, spectral gap of Laplacian operators and Cheeger constant. The discussion of Witten-Kontsevich correlators includes works of A. Aggarwal, in [1], where it was proved that Witten-Kontsevich correlators, after normalization, are uniformly close to \(1\) in the regime \(n^{2}\ll g\) and might explode exponentially otherwise.
M. Mirzakhani investigated the geometric properties of random hyperbolic surfaces of large genus without cusps in [14], and proved that there exists a spectral gap. This pioneering work of Mirzakhani was extended by N. Anantharaman and
L. Monk to give a more accurate estimate for this spectral gap in [1]. Mirzakhani's work was also extended to the case of random hyperbolic surfaces of large genus with cusps. W. Hide in [14] studied the spectrum of Laplace operator on hyperbolic surfaces when \(n^{2}\ll g\), and proved the existence of the spectral gap; while Y. Shen and Y. Wu studied in [15] the asymptotic behavior of the Cheeger constants and spectral gaps of random hyperbolic surfaces when \(n^{2}\gg g\), and by their results, the spectral gap vanishes for Weil-Petersson random hyperbolic surfaces in that regime.
The contrasting behaviour of the frequencies when \(\frac{n}{g}\) changes, in comparison to geometric quantities like Witten-Kontsevich correlators, spectral gap and Cheeger constant, emphasizes the topological nature of the frequencies.
### Asymptotic form
In [11], Proposition 6.4 gives explicitly the contribution to the Masur-Veech volume of the principal stratum \(\mathcal{Q}_{g,n}\) of meromorphic quadratic differentials coming from single-band square-tiled surfaces corresponding to all stable graphs, while Theorem 1.22 in [11] gives a relation between Mirzakhani's asymptotic frequency of closed geodesic multicurves of a certain topological type and the volume contribution of the corresponding stable graph to the Masur-Veech volume. Accordingly one obtains explicitly the ratio of frequencies of separating and nonseparating geodesics.
**Theorem 2.2**.: _(Theorem 1.22 in [11]) Let \((g,n)\) be a pair of nonnegative integers satisfying \(2g+n>3\) and different from \((2,0)\). Let \(\gamma\in\mathcal{ML}_{g,n}(\mathbb{Z})\) be a multicurve, and let \((\Gamma,\mathbf{H})\) be the associated stable graph and weights. Then the volume contribution \(\operatorname{Vol}(\Gamma,\mathbf{H})\) to the Masur-Veech volume \(\operatorname{Vol}\mathcal{Q}_{g,n}\) coincides with Mirzakhani's asymptotic frequency \(c(\gamma)\) of closed geodesic multicurves of topological type \(\gamma\) up to an explicit factor depending only on \(g\) and \(n\):_
\[\operatorname{Vol}(\Gamma,\mathbf{H})=2(6g-6+2n)\cdot(4g-4+n)!\cdot 2^{4g-3+n} \cdot c(\gamma). \tag{2}\]
The above theorem allows us to write \(\frac{c_{g,n,sep}}{c_{g,n,nonsep}}\) as a ratio of volume contribution of stable graphs.
The following formula is true for any stable graph \(\Gamma\) with a single edge, and for any \(g\) and \(n\), as follows from a generalization of Theorem 3.1 in [11]:
\[\operatorname{Vol}(\Gamma)=cyl_{1}\Gamma\cdot\zeta(6g-6+2n).\]
In our case when considering large genus asymptotics, one always has \(\operatorname{Vol}(\Gamma)\sim cyl_{1}\Gamma\), since \(\zeta(6g-6+2n)\) tends to \(1\) exponentially fast as \(g\to\infty\).
Furthermore, \(cyl_{1}\Gamma\) has an explicit expression as a sum of products of binomial coefficients and Witten-Kontsevich correlators when \(g\geq 2\), as given in the following proposition.
**Proposition 2.3**.: _(Proposition 6.4 in [11]) Assume \(g\geq 2\). The contribution to the Masur-Veech volume of the principal stratum \(\mathcal{Q}_{g,n}\) of meromorphic quadratic differentials coming from single-band square-tiled surfaces corresponding to the stable graph \(\Gamma_{1}(g,n)\) has the following form:_
\[cyl_{1}(\Gamma_{1}(g,n))=2^{g+1}\binom{4g-4+n}{g}\cdot g!\sum_{k=0}^{3g-4}\binom{3g-4+2n}{n+k}\langle\tau_{k}\tau_{3g-4-k}\rangle_{g-1}. \tag{3}\]
_The total contribution to the Masur-Veech volume of the principal stratum \(\mathcal{Q}_{g,n}\) of meromorphic quadratic differentials coming from single-band square-tiled surfaces corresponding to all stable graphs \(\Gamma_{g_{1},n_{1}}^{g_{2},n_{2}}\) has the following form:_
\[\frac{1}{2}\sum_{n_{1}=0}^{n}\binom{n}{n_{1}}\sum_{g_{1}=0}^{g}\frac{1}{|\mathrm{Aut}\,\Gamma_{g_{1},n_{1}}^{g_{2},n_{2}}|}\mathrm{Vol}\left(\Gamma_{g_{1},n_{1}}^{g_{2},n_{2}}\right)=\frac{2^{g+1}}{24^{g}}\cdot\binom{4g-4+n}{g}\sum_{g_{1}=0}^{g}\binom{g}{g_{1}}\binom{3g-4+2n}{3g_{1}-2+n} \tag{4}\]
Therefore the numerator in \(\frac{c_{g,n,sep}}{c_{g,n,nonsep}}\) is given directly by (4).
Meanwhile, when \(g\) is large, the 2-correlators have the following asymptotics:
**Lemma 2.4**.: _The large genus asymptotics for 2-correlators is_
\[\langle\tau_{k}\tau_{3g-1-k}\rangle_{g}=\frac{1}{24^{g}\cdot g!}\cdot\frac{(6g -1)!!}{(2k+1)!!(6g-1-2k)!!}\big{(}1+O(\frac{1}{g})\big{)}, \tag{5}\]
_and the error term \(O(\frac{1}{g})\) is uniform when \(0\leq k\leq 3g-1\)._
The proof of this lemma can be found in [11] Proposition 4.1 and Formula 4.2.
Hence the denominator of \(\frac{c_{g,n,sep}}{c_{g,n,nonsep}}\) has the following asymptotic expression in terms of binomial coefficients:
\[\mathrm{Vol}(\Gamma_{1}(g,n))=\frac{2^{g+3}g}{24^{g}(g-1)}\cdot\binom{4g-4+n}{g}\cdot\sum_{k=0}^{3g-4}\frac{\binom{3g-4+2n}{n+k}\binom{6g-6}{2k+1}}{\binom{3g-4}{k}}\cdot(1+\mathcal{O}(\frac{1}{g}))\,. \tag{6}\]
Note that the error term in (6) does not depend on \(n\).
Consequently, the ratio of frequencies of separating and nonseparating curves has the following asymptotics
\[\frac{c_{g,n,sep}}{c_{g,n,nonsep}}=\frac{1}{4}\,\,\frac{\sum_{g_{1}=0}^{g}\binom{g}{g_{1}}\binom{3g-4+2n}{3g_{1}-2+n}}{\sum_{k=0}^{3g-4}\frac{\binom{3g-4+2n}{n+k}\binom{6g-6}{2k+1}}{\binom{3g-4}{k}}}\cdot\Big{(}1+\mathcal{O}(\frac{1}{g})\Big{)}\,, \tag{7}\]
where the error term comes from the asymptotics of the 2-correlators.
## 3. Main Theorem and Its Proof
### The main theorem
To prove conjecture 2.1, we specify the conjecture by giving explicitly the expression of \(f(\frac{n}{g})\), and we restate the conjecture as the theorem below.
**Theorem 3.1**.: _Conjecture 2.1 holds, and the unknown function \(f\) is given by_
\[f(\lambda)=\sqrt{\frac{6+2\lambda}{6+\lambda}}\,, \tag{8}\]
_where we define \(n=\lambda g\), \(\lambda\in\mathbb{R}_{+}\)._
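As a quick consistency check (a remark we add here; it is not part of the original statement), the endpoint values and monotonicity claimed in Conjecture 2.1 can be read off directly from (8): writing
\[f(\lambda)^{2}=\frac{6+2\lambda}{6+\lambda}=2-\frac{6}{6+\lambda}\,,\]
one sees that \(f^{2}\) increases monotonically from \(1\) at \(\lambda=0\) to \(2\) as \(\lambda\to\infty\), so \(f\) indeed increases from \(f(0)=1\) to \(f(\infty)=\sqrt{2}\).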
### Large \(g,n\) estimate of the ratio
To prove theorem 3.1, we will use the following lemmas.
**Lemma 3.2**.: _(Lemma B.6 in [10]):_ \[\binom{y}{py}=e^{yH(p)}\frac{1}{\sqrt{2\pi y\,p(1-p)}}\Big{(}1+O(\frac{1}{y})\Big{)}\,,\quad\text{where}\quad H(p)=-p\log p-(1-p)\log(1-p)\,, \tag{9}\]
uniformly in \(p\) restricted to compact subsets of \((0,1)\). Here, the binomial coefficient with real parameters is interpreted in terms of \(\Gamma\)-functions,
\[\binom{a}{b}=\frac{\Gamma(a+1)}{\Gamma(b+1)\Gamma(a-b+1)}\,. \tag{10}\]
Proof.: The proof follows by applying Stirling's formula directly.
**Remark 3.3**.: _The function \(H(p)\) has the following properties,_
* \(0\leq H(p)\leq\log 2\)_,_ \(\forall p\in(0,1)\)_._
* \(H(p)\) _obtains its maximal value_ \(\log 2\) _at_ \(p=1/2\)_._
* \(H(p)\) _satisfies the inequality_ \[H(\frac{1}{2}+\frac{x}{2})\leq\log 2-\frac{x^{2}}{2}\,,\qquad\forall x\in(-1,1)\,. \tag{11}\]
Proof.: \(H^{\prime}(p)=\log(1-p)-\log p\) is positive when \(0<p<1/2\), negative when \(1/2<p<1\), and zero when \(p=1/2\). Therefore \(H(p)\) attains its maximum at \(p=1/2\). The proof of the inequality is similar: one can verify from the first derivative that \(H(\frac{1}{2}+\frac{x}{2})-\log 2+\frac{x^{2}}{2}\) attains its maximum, equal to zero, at \(x=0\).
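For completeness, the computation behind this last claim (spelled out here for the reader's convenience) is as follows: setting \(\psi(x)=H(\frac{1}{2}+\frac{x}{2})-\log 2+\frac{x^{2}}{2}\), one finds
\[\psi(0)=0\,,\qquad\psi^{\prime}(x)=\frac{1}{2}\log\frac{1-x}{1+x}+x\,,\qquad\psi^{\prime\prime}(x)=1-\frac{1}{1-x^{2}}\leq 0\quad\text{on }(-1,1)\,,\]
so \(\psi\) is concave with \(\psi(0)=\psi^{\prime}(0)=0\), hence \(\psi(x)\leq 0\), which is exactly (11).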
**Lemma 3.4**.: _(Problem 9.42 in [1]) The contribution of tails (neighbourhood of the endpoints) to the total sum of binomial coefficients admits the following bound:_
\[\sum_{k=0}^{sy}\binom{y}{k}<\frac{1-s}{1-2s}\frac{1}{\sqrt{2\pi ys(1-s)}}e^{yH (s)}\Big{(}1+O(\frac{1}{y})\Big{)}\,, \tag{12}\]
_when \(s\in(0,\frac{1}{2})\). The contribution of the region \(k\in[(1-s)y,y]\) is bounded in the same way._
Proof.: Using the standard identities of \(\Gamma\)-function we notice that
\[\frac{\binom{y}{k-1}}{\binom{y}{k}}=\frac{k}{y-k+1}\leq\frac{sy}{y-sy+1}< \frac{s}{1-s}<1\,. \tag{13}\]
Therefore, the sum is bounded by a geometric sum
\[\sum_{k=0}^{sy}\binom{y}{k}<\binom{y}{sy}\Big{(}1+\frac{s}{1-s}+(\frac{s}{1-s })^{2}+\cdots\Big{)}=\binom{y}{sy}\frac{1-s}{1-2s}\,. \tag{14}\]
When \(y\to\infty\), we use lemma 3.2 to expand the binomial coefficient, which completes the proof of the lemma.
The region \(k\in[(1-s)y,y]\) can be obtained by a change of variable \(k\to y-k\) so the bound also holds.
**Remark 3.5**.: _The method of proving lemma 3.4 can be generalized to products and/or ratios of binomial coefficients. It follows from lemma 3.2 that the dominant contributions of such products and/or ratios are of order \(2^{y}/\sqrt{y}\). The tail contributions, on the contrary, are of order \((\exp H(s))^{y}/\sqrt{y}\). For \(0<s<1\) the latter is exponentially smaller than the former and can be dropped from the summation._
Now we start to estimate the asymptotics of the numerators and denominators in equation (7). The results are presented in the following two propositions.
**Proposition 3.6**.: _Let \(\lambda=\frac{n}{g}\). The asymptotics of the numerator of (7) is given by_
\[\sum_{g_{1}=0}^{g}\binom{g}{g_{1}}\binom{3g-4+2n}{3g_{1}-2+n}=\frac{1}{\sqrt{ \pi g(\lambda+6)}}2^{(2\lambda+4)g-4}\big{(}1+o(1)\big{)}\,, \tag{15}\]
_with the error term uniformly small in \(n\) as \(g\to\infty\)._
Proof.: For any \(n\) both binomial coefficients obtain their maximal value at \(g_{1}=g/2\). In the large \(g\) limit, the dominant contribution to the sum comes from the region where \(g_{1}\) gets close to \(g/2\). Make a change of variable
\[\frac{g_{1}}{g}=\frac{1}{2}\Big{(}1+\frac{x}{\sqrt{g}}\Big{)}\,.\]
As \(g_{1}\in[0,g]\), \(x\in[-\sqrt{g},\sqrt{g}]\).
To simplify the expression, we first factor out some finite factors of the second binomial coefficient in (15) as
\[\binom{3g-4+2n}{3g_{1}-2+n}=\binom{(2\lambda+3)g-4}{\lambda g+3g_{1}-2}=\binom {(2\lambda+3)g}{\lambda g+3g_{1}}\cdot r_{0}(g_{1};g,\lambda)\,, \tag{16}\]
where
\[r_{0}(g_{1};g,\lambda)=\frac{\lambda g+3g_{1}}{(2\lambda+3)g}\cdot\frac{ \lambda g+3g_{1}-1}{(2\lambda+3)g-1}\cdot\frac{\lambda g+3(g-g_{1})-1}{(2 \lambda+3)g-2}\cdot\frac{\lambda g+3(g-g_{1})}{(2\lambda+3)g-3}. \tag{17}\]
The numerator of \(r_{0}\) obtains its maximum at \(g_{1}=g/2\), because it is made of two quadratic polynomials
\[\Big{(}\lambda g+3g_{1}\Big{)}\Big{(}\lambda g+3(g-g_{1})\Big{)}\times\Big{(} \lambda g+3g_{1}-1\Big{)}\Big{(}\lambda g+3(g-g_{1})-1\Big{)}\]
that both obtain the maximal value at that point. Therefore, we conclude that \(r_{0}\) is bounded by the value at \(g_{1}=g/2\):
\[r_{0}\leq r_{0}(\frac{g}{2};g,\lambda)=\frac{1}{32}\left(\frac{3}{g(2\lambda+ 3)-3}+\frac{1}{2g\lambda+3g-1}+2\right)\,. \tag{18}\]
Clearly the first two terms vanish uniformly in the large \(g\) limit, so we conclude
\[r_{0}\leq\frac{1}{16}\Big{(}1+o(1)\Big{)}\,.\]
For the binomial coefficient product, using Lemma 3.2, we find
\[\binom{g}{g_{1}}\binom{3g+2n}{3g_{1}+n}\sim\frac{1}{2\pi}R(g_{1};g,n)\exp\left( gH(\frac{g_{1}}{g})+(3g+2n)H(\frac{3g_{1}+n}{3g+2n})\right)\,, \tag{19}\]
where
\[R(g_{1};g,n)=\frac{1}{\sqrt{g(3g+2n)}}\frac{1}{\sqrt{\Big{(}\frac{g_{1}}{g}\Big{)} \Big{(}1-\frac{g_{1}}{g}\Big{)}}}\frac{1}{\sqrt{\Big{(}\frac{3g_{1}+n}{3g+2n} \Big{)}\Big{(}1-\frac{3g_{1}+n}{3g+2n}\Big{)}}}\,. \tag{20}\]
Our strategy is to divide the whole domain into the _tail_ region, where \(x\in[-\sqrt{g},-(1-\varepsilon)\sqrt{g}]\cup[(1-\varepsilon)\sqrt{g},\sqrt{g}]\), and the rest, where \(x\in[-(1-\varepsilon)\sqrt{g},(1-\varepsilon)\sqrt{g}]\). The constant \(\varepsilon\) satisfies \(0<\varepsilon<1\). Lemma 3.4 and Remark 3.5 imply that the contribution of the tail region is negligible.
We now focus on the region where \(x\in[-(1-\varepsilon)\sqrt{g},(1-\varepsilon)\sqrt{g}]\), for any \(0<\varepsilon<1\). Note that
\[\frac{3g_{1}+n}{3g+2n}=\frac{\lambda g+3g_{1}}{(2\lambda+3)g}=\frac{1}{2} \Big{(}1+\frac{3x}{(3+2\lambda)\sqrt{g}}\Big{)}.\]
Thus the remainder term \(R(g_{1};g,n)\) reads
\[R(g_{1};g,n)=\frac{4}{g\sqrt{2\lambda+3}}\frac{1}{\sqrt{1-\frac{x^{2}}{g}}} \frac{1}{\sqrt{1-\frac{9x^{2}}{(3+2\lambda)^{2}g}}} \tag{21}\]
Together with the estimate of the product term \(r_{0}\) in (16), we see that the original product of binomials is bounded by
\[\begin{split}\binom{g}{g_{1}}\binom{3g-4+2n}{3g_{1}-2+n}\leq \frac{1}{4g\sqrt{2\lambda+3}}\frac{1}{\sqrt{1-\frac{x^{2}}{g}}}\frac{1}{\sqrt{ 1-\frac{9x^{2}}{(3+2\lambda)^{2}g}}}\\ \qquad\times\exp\left[gH\Big{(}\frac{1}{2}+\frac{x}{2\sqrt{g}} \Big{)}+(3g+2n)H\Big{(}\frac{1}{2}+\frac{1}{2}\frac{3x}{(3+2\lambda)\sqrt{g}} \Big{)}\right]\Big{(}1+o(1)\Big{)}\,.\end{split} \tag{22}\]
Now we estimate the exponential term. Using the inequality (11), we find
\[\begin{split}& gH\Big{(}\frac{1}{2}+\frac{x}{2\sqrt{g}}\Big{)}+(3g+2n)H\Big{(}\frac{1}{2}+\frac{1}{2}\frac{3x}{(3+2\lambda)\sqrt{g}}\Big{)}\\ \leq& g\Big{(}\log 2-\frac{x^{2}}{2g}\Big{)}+(3g+2n)\Big{(}\log 2-\frac{1}{2}\big{(}\frac{3x}{(3+2\lambda)\sqrt{g}}\big{)}^{2}\Big{)}\\ =& g(2+\lambda)\log 4-\frac{(6+\lambda)x^{2}}{3+2\lambda}\,.\end{split} \tag{23}\]
Therefore, we have proven
\[\binom{g}{g_{1}}\binom{3g-4+2n}{3g_{1}-2+n}\leq\frac{1}{\sqrt{1-\frac{x^{2}}{g }}}\frac{1}{\sqrt{1-\frac{9x^{2}}{(3+2\lambda)^{2}g}}}\frac{2^{(2\lambda+4)g-4 }}{\sqrt{(2\lambda+3)\frac{\pi g}{2}}}e^{-(1+\frac{9}{2\lambda+3})\frac{x^{2}} {2}}\,, \tag{24}\]
when \(x\in[-(1-\varepsilon)\sqrt{g},(1-\varepsilon)\sqrt{g}]\). In that regime the first two \(x\)-dependent prefactors increase with \(|x|\), and are thus bounded by their value at \(x^{2}=(1-\varepsilon)^{2}g\). Therefore,
\[\begin{split}&\binom{g}{g_{1}}\binom{3g-4+2n}{3g_{1}-2+n}\leq\frac{1}{\sqrt{1-\frac{x^{2}}{g}}}\frac{1}{\sqrt{1-\frac{9x^{2}}{(3+2\lambda)^{2}g}}}\frac{2^{(2\lambda+4)g-4}}{\sqrt{(2\lambda+3)\frac{\pi g}{2}}}e^{-(1+\frac{9}{2\lambda+3})\frac{x^{2}}{2}}\\ &\leq\frac{1}{\sqrt{1-(1-\varepsilon)^{2}}}\frac{1}{\sqrt{1-(1-\varepsilon)^{2}\frac{9}{(3+2\lambda)^{2}}}}\frac{2^{(2\lambda+4)g-4}}{\sqrt{(2\lambda+3)\frac{\pi g}{2}}}e^{-(1+\frac{9}{2\lambda+3})\frac{x^{2}}{2}}\\ &\leq\frac{1}{1-(1-\varepsilon)^{2}}\frac{2^{(2\lambda+4)g-4}}{\sqrt{3\frac{\pi g}{2}}}e^{-\frac{x^{2}}{2}}\,,\end{split} \tag{25}\]
where in the last step we substitute the upper bound for each \(\lambda\)-dependent term to obtain a uniform upper bound.
On the other hand, when \(x\in[-g^{\frac{1}{4}-\delta},\ g^{\frac{1}{4}-\delta}]\), \((0<\delta<1/4)\), we can find a finer estimate. Recall that the \(r_{0}\) term (16) is bounded by \(\frac{1}{16}\big{(}1+o(1)\big{)}\); in this regime, we can show that it saturates this upper bound. We rewrite \(r_{0}\) as
\[r_{0}=\frac{1}{16}\frac{[(2\lambda+3)g]^{4}}{[(2\lambda+3)g][(2 \lambda+3)g-1][(2\lambda+3)g-2][(2\lambda+3)g-3]}\\ \times\Big{(}1-\frac{9x^{2}}{g(2\lambda+3)^{2}}\Big{)}\Big{(} \frac{4-9gx^{2}}{g^{2}(2\lambda+3)^{2}}-\frac{4}{2g\lambda+3g}+1\Big{)}\,. \tag{26}\]
In the large \(g\) limit the expression above is of order \(1/16+o(1)\) uniformly in \(n\), because the \(x\)-dependent terms appear in the combination \(x^{2}/g\), which is at most of order \(O(g^{-\frac{1}{2}-2\delta})\).
For the remaining terms of the binomial product, starting from (19) again in this region, we find the prefactor \(R\) of (19) uniformly asymptotes to \(\frac{4}{g\sqrt{2\lambda+3}}\):
\[\begin{split}\frac{4}{g\sqrt{2\lambda+3}}\frac{1}{\sqrt{1-\frac {x^{2}}{g}}}\frac{1}{\sqrt{1-\frac{9x^{2}}{(3+2\lambda)^{2}g}}}& =\frac{4}{g\sqrt{2\lambda+3}}\Big{(}1+O(\frac{x^{2}}{g})\Big{)} \Big{(}1+o(1)\Big{)}\\ &=\frac{4}{g\sqrt{2\lambda+3}}\Big{(}1+o(1)\Big{)}\,,\end{split} \tag{27}\]
and the exponential also asymptotes to a quadratic function in \(x\),
\[\begin{split}& gH(\frac{1}{2}+\frac{x}{2\sqrt{g}})+(3g+2n)H(\frac{1}{2}+\frac{1}{2}\frac{3x}{(3+2\lambda)\sqrt{g}})\\ &=g(\log 2-\frac{x^{2}}{2g})+O(\frac{x^{4}}{g})+(3g+2n)\Big{(}\log 2-\frac{1}{2}\big{(}\frac{3x}{(3+2\lambda)\sqrt{g}}\big{)}^{2}\Big{)}+O(\frac{x^{4}}{(3+2\lambda)^{2}g})\\ &=g(2+\lambda)\log 4-\frac{(6+\lambda)x^{2}}{3+2\lambda}+O(g^{-4\delta})+O(\frac{g^{-4\delta}}{(3+2\lambda)^{2}})\,.\end{split} \tag{28}\]
The Taylor expansions above are legitimate because, compared to \(\frac{1}{2}\), both \(\frac{x}{2\sqrt{g}}=O(g^{-\frac{1}{4}-\delta})\) and \(\frac{3x}{(3+2\lambda)\sqrt{g}}=\frac{1}{3+2\lambda}O(g^{-\frac{1}{4}-\delta})\) are small, regardless of the value of \(\lambda\).
Since the result of the quadratic expansion equals the uniform upper bound in (23), we conclude that the difference between them is negligible.
Therefore, we have proven, for \(x\in[-g^{\frac{1}{4}-\delta},\ g^{\frac{1}{4}-\delta}]\),
\[\binom{g}{g_{1}}\binom{3g-4+2n}{3g_{1}-2+n}\sim 2^{(2\lambda+4)g-4}\frac{1}{\sqrt{(2\lambda+3)\frac{\pi g}{2}}}e^{-(1+\frac{9}{2\lambda+3})\frac{x^{2}}{2}}\,. \tag{29}\]
To summarize, the dominant contribution to the sum comes from the region \(x\in[-g^{\frac{1}{4}-\delta},\ g^{\frac{1}{4}-\delta}]\), with contribution given in (29). The reason is that when \(x\) is in the complement \([-(1-\varepsilon)\sqrt{g},(1-\varepsilon)\sqrt{g}]\backslash[-g^{\frac{1}{4}-\delta},\ g^{\frac{1}{4}-\delta}]\), the upper bound (25) implies that the contribution to the integration is at most of order \(\exp(-g^{\frac{1}{2}-2\delta})(\sqrt{g})^{2}\), where one factor of \(\sqrt{g}\) is the length of the interval and the other factor of \(\sqrt{g}\) is the Jacobian \(dg_{1}/dx\). Compared with the dominant contribution (29), we see that the summation in the complement region is exponentially suppressed.
This dominant contribution to the sum (15) in the region \(x\in[-g^{\frac{1}{4}-\delta},\ g^{\frac{1}{4}-\delta}]\) can now be approximated by a Gaussian integral over \(x\), with a Jacobian factor of \(\frac{\sqrt{g}}{2}\) from converting the sum over \(g_{1}\) into an integral over \(dx\),
\[\sum_{g_{1}=0}^{g}\binom{g}{g_{1}}\binom{3g-4+2n}{3g_{1}-2+n} \sim 2^{(2\lambda+4)g-4}\frac{1}{\sqrt{(2\lambda+3)\frac{\pi g}{2}}}\frac{\sqrt{g}}{2}\int_{-\infty}^{+\infty}e^{-(1+\frac{9}{2\lambda+3})\frac{x^{2}}{2}}\,dx\] \[=\frac{1}{\sqrt{\pi g(\lambda+6)}}2^{(2\lambda+4)g-4}\,. \tag{30}\]
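Although the proof is complete, the statement (15) is also straightforward to probe numerically. The sketch below is purely illustrative (the parameter values are arbitrary, and the \(o(1)\) term decays slowly, so the ratio only drifts toward 1 as \(g\) grows):

```python
from math import comb, sqrt, pi
from fractions import Fraction

def lhs(g, n):
    """Exact left-hand side of (15)."""
    N = 3 * g - 4 + 2 * n
    total = 0
    for g1 in range(g + 1):
        k = 3 * g1 - 2 + n
        if 0 <= k <= N:                       # the binomial coefficient vanishes otherwise
            total += comb(g, g1) * comb(N, k)
    return total

def ratio_to_asymptotic(g, n):
    """lhs / rhs of (15); should tend to 1 as g grows."""
    lam = n / g
    power = 2 * n + 4 * g - 4                 # (2*lam + 4)*g - 4, an integer
    return float(Fraction(lhs(g, n), 2 ** power)) * sqrt(pi * g * (lam + 6))

for g in (20, 60, 180):
    for n in (0, g, 3 * g):                   # lambda = 0, 1, 3
        print(f"g={g:4d} n={n:4d}  lhs/rhs = {ratio_to_asymptotic(g, n):.4f}")
```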
**Proposition 3.7**.: _The asymptotics of the denominator of (7) is given by_
\[\sum_{k=0}^{3g-4}\frac{\binom{3g-4+2n}{n+k}\binom{6g-6}{2k+1}}{\binom{3g-4}{k} }=\sqrt{\frac{3}{3+\lambda}}2^{(2\lambda+6)g-7}\big{(}1+o(1)\big{)}\,, \tag{31}\]
_with the error term uniformly small in \(n\) as \(g\to\infty\)._
Proof.: Notice that all three binomial coefficients obtain their maximum at \(k=\frac{3g-4}{2}\). We could naturally define
\[\frac{k}{3g-4}=\frac{1}{2}\Big{(}1+\frac{x}{\sqrt{3g-4}}\Big{)}\,,\]
and repeat the steps that led to the previous proposition.
As for the previous case, we divide the domain into the _tail_ region, where \(x\in[-\sqrt{3g-4},-(1-\varepsilon)\sqrt{3g-4}]\cup[(1-\varepsilon)\sqrt{3g-4},\sqrt{3g-4}]\) and the rest, where \(x\in[-(1-\varepsilon)\sqrt{3g-4},(1-\varepsilon)\sqrt{3g-4}]\). The constant \(\varepsilon\) satisfies \(0<\varepsilon<1\). Lemma 3.4 and Remark 3.5 imply that the contribution of the tail region is negligible.
To simplify the computation, we first express the second term in the numerator of (31) as
\[\binom{6g-6}{2k+1}=\binom{6g-8}{2k}r_{1}(g_{1};g) \tag{32}\]
with
\[r_{1}(k;g)=\frac{6(g-1)(6g-7)}{(2k+1)(6g-2k-7)}=\frac{6(g-1)(6g-7)}{9(g-1)^{2}-(3g -4)x^{2}}\,. \tag{33}\]
\(r_{1}\) obtains its maximum value at the boundary, \(x^{2}=(1-\varepsilon)^{2}(3g-4)\):
\[r_{1}(k;g)\leq\frac{4}{\varepsilon(2-\varepsilon)}\Big{(}1+O(\frac{1}{g}) \Big{)}\,. \tag{34}\]
Using Lemma 3.2, we find
\[\frac{\binom{3g-4+2n}{n+k}\binom{6g-8}{2k}}{\binom{3g-4}{k}}\sim R_ {2}(k;g,\lambda)\] \[\times\exp\Big{[}(3g-4+2n)H\big{(}\frac{1}{2}+\frac{\sqrt{3g-4}x} {2g(2\lambda+3)-4}\big{)}+(3g-4)H(\frac{1}{2}+\frac{x}{2\sqrt{3g-4}})\Big{]}\,, \tag{35}\]
where
\[R_{2}(k;g,\lambda)=\frac{1}{\sqrt{\pi}}\frac{1}{\sqrt{(3g-4)\left(1-\frac{x^{ 2}}{2g\lambda+(3g-4)}\right)+2g\lambda}}\,. \tag{36}\]
We only have two \(H\) functions in the exponential instead of three, because the \(\binom{6g-8}{2k}\) term in the numerator and the \(\binom{3g-4}{k}\) term in the denominator can be expressed through \(H\) with the same argument.
We find, for \(x\in[-(1-\varepsilon)\sqrt{3g-4},(1-\varepsilon)\sqrt{3g-4}]\), the remainder term \(R_{2}(k;g,\lambda)\) is bounded by the value at \(x^{2}=(3g-4)(1-\varepsilon)^{2}\),
\[R_{2}(k;g,\lambda)\leq\frac{1}{\sqrt{\pi}}\frac{1}{\sqrt{(3g-4)\left(1-\frac{( 3g-4)(1-\varepsilon)^{2}}{2g\lambda+(3g-4)}\right)+2g\lambda}}\leq\frac{1}{ \sqrt{\pi(3g-4)}}\frac{1}{\sqrt{1-(1-\varepsilon)^{2}}}\,, \tag{37}\]
where in the last step we substitute the maximal value at \(\lambda=0\) to obtain a uniform upper bound.
The exponential terms are bounded using (11),
\[\exp\Big{[}(3g-4+2n)H\big{(}\frac{1}{2}+\frac{\sqrt{3g-4}x}{2g(2 \lambda+3)-4}\big{)}+(3g-4)H(\frac{1}{2}+\frac{x}{2\sqrt{3g-4}})\Big{]}\] \[\leq 4^{(3+\lambda)g-4}\exp\Big{(}-\frac{x^{2}(g(\lambda+3)-4)}{(2 \lambda+3)g-4}\Big{)}\leq 4^{(3+\lambda)g-4}\exp\Big{(}-\frac{x^{2}}{2}\Big{)}\,. \tag{38}\]
where in the last step we bound the coefficient in the exponential from below by \(1/2\); one easily checks that this holds whenever \(3g\geq 4\).
The results above show that the summand is uniformly bounded by
\[\frac{\binom{3g-4+2n}{n+k}\binom{6g-6}{2k+1}}{\binom{3g-4}{k}}\leq\frac{\text {const}}{\sqrt{3g-4}}\cdot 4^{(3+\lambda)g}\exp\Big{(}-\frac{x^{2}}{2}\Big{)}\,. \tag{39}\]
where the constant only depends on \(\varepsilon\). The precise value of the constant can be found using (34), (37) and (38), but it is not needed for our purposes.
Now we estimate the contributions to the sum in the region \(x\in[-(3g-4)^{\frac{1}{4}-\delta},\ (3g-4)^{\frac{1}{4}-\delta}]\), with \(0<\delta<1/4\). In this region, a finer estimate can be made, since
\(x^{2}=O(g^{\frac{1}{2}-2\delta})\). In this regime the term in the denominator of (31) can be approximated as
\[\binom{3g-4}{k}=2^{3g-4}\cdot e^{-\frac{x^{2}}{2}}\cdot\sqrt{\frac{2}{\pi(3g-4)} }\cdot\Big{(}1+\mathcal{O}(g^{-4\delta})\Big{)}\sim 2^{3g-4}\cdot e^{-\frac{x^{2}}{2}} \cdot\sqrt{\frac{2}{3\pi g}}\,. \tag{40}\]
The first term in the numerator of (31) reads
\[\binom{3g-4+2n}{k+n}\sim 2^{(3+2\lambda)g-\frac{7}{2}}\sqrt{\frac{1}{\pi g(3+2\lambda)}}e^{-\frac{3}{3+2\lambda}\frac{x^{2}}{2}}\,, \tag{41}\]
while for the second term \(\binom{6g-6}{2k+1}=r_{1}\binom{6g-8}{2k}\) in the numerator of (31), we find
\[\binom{6g-8}{2k}\sim 2^{6g-8}\frac{1}{\sqrt{3\pi g}}e^{-x^{2}}\,,\quad r_{1 }=4\Big{(}1+O(\frac{x^{2}}{g})\Big{)}=4\Big{(}1+O(g^{-\frac{1}{2}-2\delta}) \Big{)}\,. \tag{42}\]
Therefore, using (40), (41) and (42), we conclude that when \(x\in[-(3g-4)^{\frac{1}{4}-\delta},\ (3g-4)^{\frac{1}{4}-\delta}]\),
\[\frac{\binom{3g-4+2n}{n+k}\binom{6g-6}{2k+1}}{\binom{3g-4}{k}}\sim 2^{(2\lambda+ 6)g-6}\frac{1}{\sqrt{\pi(3+2\lambda)g}}e^{-\frac{3+\lambda}{3+2\lambda}x^{2}}\,. \tag{43}\]
This estimate, combined with the uniform upper bound given in equation (39), suggests that the dominant contribution to the sum comes from the central region \(x\in[-(3g-4)^{\frac{1}{4}-\delta},\ (3g-4)^{\frac{1}{4}-\delta}]\), as similarly demonstrated in the proof of Proposition 3.6. Contributions outside the central region are exponentially suppressed compared to those within the region.
We can now approximate the summation in the region \(x\in[-(3g-4)^{\frac{1}{4}-\delta},\ (3g-4)^{\frac{1}{4}-\delta}]\) as a Gaussian integral over \(x\), with Jacobian \(\frac{\sqrt{3g-4}}{2}\).
\[\begin{split}\sum_{k=0}^{3g-4}\frac{\binom{3g-4+2n}{n+k}\binom{6 g-6}{2k+1}}{\binom{3g-4}{k}}&\sim 2^{(2\lambda+6)g-6}\frac{1}{ \sqrt{\pi(3+2\lambda)g}}\frac{\sqrt{3g-4}}{2}\int_{-\infty}^{+\infty}e^{- \frac{3+\lambda}{3+2\lambda}x^{2}}dx\\ &=\sqrt{\frac{3g-4}{(3+\lambda)g}}2^{(2\lambda+6)g-7}\sim\sqrt{ \frac{3}{3+\lambda}}2^{(2\lambda+6)g-7}\,.\end{split} \tag{44}\]
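As with Proposition 3.6, the statement (31) can be checked numerically. The following sketch (again purely illustrative, with arbitrary parameter values) evaluates the exact sum with rational arithmetic and compares it with the claimed asymptotic:

```python
from math import comb, sqrt
from fractions import Fraction

def lhs(g, n):
    """Exact left-hand side of (31), computed with exact rational arithmetic."""
    total = Fraction(0)
    for k in range(3 * g - 4 + 1):
        total += Fraction(comb(3 * g - 4 + 2 * n, n + k) * comb(6 * g - 6, 2 * k + 1),
                          comb(3 * g - 4, k))
    return total

def ratio_to_asymptotic(g, n):
    """lhs / rhs of (31); should tend to 1 as g grows."""
    lam = n / g
    power = 2 * n + 6 * g - 7                 # (2*lam + 6)*g - 7, an integer
    return float(lhs(g, n) / 2 ** power) / sqrt(3 / (3 + lam))

for g in (20, 60, 180):
    for n in (0, g, 3 * g):                   # lambda = 0, 1, 3
        print(f"g={g:4d} n={n:4d}  lhs/rhs = {ratio_to_asymptotic(g, n):.4f}")
```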
Finally, the main theorem follows immediately:
Proof of Theorem 2.1.: Direct calculation using Propositions 3.6 and 3.7.
|
2301.06092
|
The Voronoi Region of the Barnes-Wall Lattice $\Lambda_{16}$
|
We give a detailed description of the Voronoi region of the Barnes-Wall
lattice $\Lambda_{16}$, including its vertices, relevant vectors, and symmetry
group. The exact value of its quantizer constant is calculated, which was
previously only known approximately. To verify the result, we estimate the same
constant numerically and propose a new very simple method to quantify the
variance of such estimates, which is far more accurate than the commonly used
jackknife estimator.
|
Daniel Pook-Kolb, Erik Agrell, Bruce Allen
|
2023-01-15T13:21:10Z
|
http://arxiv.org/abs/2301.06092v1
|
# The Voronoi Region of the Barnes-Wall Lattice \(\Lambda_{16}\)
###### Abstract
We give a detailed description of the Voronoi region of the Barnes-Wall lattice \(\Lambda_{16}\), including its vertices, relevant vectors, and symmetry group. The exact value of its quantizer constant is calculated, which was previously only known approximately. To verify the result, we estimate the same constant numerically and propose a new very simple method to quantify the variance of such estimates, which is far more accurate than the commonly used jackknife estimator.
**Barnes-Wall lattice, lattice quantizer, normalized second moment, quantizer constant, Voronoi region**
## I Introduction
In 1959, E. S. Barnes and G. E. Wall introduced a family of lattices in dimensions \(4,8,16,\ldots\) based on Abelian groups [1]. In dimensions \(4\) and \(8\), the proposed construction reproduced known lattices, which are nowadays denoted as \(D_{4}\) and \(E_{8}\), respectively, whereas previously unknown lattices were obtained in dimensions \(16\) and up. Alternative constructions and further properties of the _Barnes-Wall (BW) lattices_ were investigated in [2, 3, 4].
The BW lattices are remarkably good in three of the standard figures of merit for lattices: _packing, kissing, and quantization_. In fact, they are known or conjectured to be optimal in all three figures of merit in dimensions \(n=4\), \(8\), and \(16\)[5, Ch. 1]. For this reason, they have been applied in a number of applications, including digital communications [2], data compression [6], cryptography [7], quantum computing [8], and algebraic geometry [9].
The Voronoi regions of \(D_{4}\) and \(E_{8}\) have been fully determined. Hence their _packing densities, kissing numbers, and quantizer constants_ are known exactly [10, 5, Ch. 4], and we will not discuss these lattices further. In this paper, we determine the Voronoi region of the \(16\)-dimensional BW lattice \(\Lambda_{16}\). Its relevant vectors, vertices and quantizer constant are reported exactly for the first time. We furthermore characterize its full symmetry group, which is known to be of order \(89\,181\,388\,800\)[5, Section 4.10], using two transformation matrices.
## II The face hierarchy
In this section, we describe the Voronoi region of \(\Lambda_{16}\) in a bottom-up manner, beginning from the \(0\)-faces (vertices) and making our way upwards in the hierarchy of dimensions to the single \(16\)-face, which is the Voronoi region itself. We describe the faces in the coordinate system defined by the lower block triangular generator matrix
\[\begin{bmatrix}2&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0\\ 1&1&0&0&1&1&0&0&0&0&0&0&0&0&0&0\\ 1&0&1&0&1&0&1&0&0&0&0&0&0&0&0&0\\ \frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{ 1}{2}&\frac{1}{2}&\frac{1}{2}&\vdots\\ \frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{ 1}{2}&\vdots\\ \frac{1}{2}&0&\frac{1}{2}&0&\frac{1}{2}&0&\frac{1}{2}&0&\frac{1}{2}&0&\frac{1}{2 }&0&\frac{1}{2}&0\\ \frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{ 1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&\frac{1}{2} \\ \end{bmatrix}. \tag{1}\]
This generator matrix is scaled down by a linear factor of \(\surd 2\) (or, equivalently, a volume factor of 256) compared with the generator matrix for the same lattice in [5, Fig. 4.10]. Some lattice parameters depend on the scaling of the lattice.
### _0-faces_
The Voronoi region has \(201\,343\,200\) vertices, which belong to six equivalence classes listed as \(\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{6}\) in Tab. I. Equivalence is defined by the rotations \(\operatorname{Aut}(\Lambda_{16})\) that take \(\Lambda_{16}\) into \(\Lambda_{16}\). If translation by a lattice vector is considered as another equivalence operation, \(\mathbf{v}_{2}\) becomes equivalent to \(\mathbf{v}_{4}\) and \(\mathbf{v}_{3}\) to \(\mathbf{v}_{5}\), reducing the six equivalence classes to only four. The vertices are located at a squared distance from the origin of \(3/2\), \(10/9\), or \(1\). Hence, the covering radius is \(\sqrt{3/2}\), as already known [3, 5, Section 4.10].
### _1-faces_
The vertices are connected by a total of about \(3\cdot 10^{10}\) edges, which belong to \(23\) equivalence classes.1 Their lengths are \(\sqrt{3/2}\), \(1\), \(\sqrt{17/18}\), \(\sqrt{7}/3\), \(\sqrt{11/18}\), \(2/3\), \(\sqrt{5/18}\), and \(1/3\). At each vertex equivalent to \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\), \(\mathbf{v}_{3}\), \(\mathbf{v}_{4}\), \(\mathbf{v}_{5}\), or \(\mathbf{v}_{6}\), respectively, \(32\,768\), \(144\), \(403\), \(179\), \(220\), or \(398\,824\) edges meet.
### _2-faces_
There are about \(5\cdot 10^{11}\)\(2\)-faces in \(58\) equivalence classes. These consist of \(3\) classes of about \(4\cdot 10^{9}\) squares with an area of \(1/9\) and about \(5\cdot 10^{11}\) triangles in \(55\) classes, \(21\) of which are geometrically distinct, with areas between \(\sqrt{15}/72\) and \(\sqrt{5}/4\). The \(22\) geometrically distinct \(2\)-faces are shown in Fig. 1.
### _3- to 14-faces_
In dimensions \(3\) to \(14\), there are \(6\,052\) classes of faces, which we will not describe in detail here. Some of their properties are summarized in Tab. II, where we show the number of face classes under \(\operatorname{Aut}(\Lambda_{16})\), numbers of child faces (i.e., subfaces of dimension \(d-1\)) and vertices for the faces in all dimensions \(d=0,1,\ldots,16\). Further information is available as supplementary material [12].
### _15-faces_
The \(15\)-faces, or _facets,_ all lie halfway between the origin and another lattice vector, orthogonal to the line between them. There are in total \(65\,760\) such facet-defining nonzero vectors, or _relevant vectors_. They belong to two equivalence classes at different distances from the origin (see Tab. I). The ones closest to the origin are the _minimal vectors_ at a squared distance of \(2\), which were found already in [1]. The packing radius is half of their length, i.e., \(\sqrt{2}/2\). There are \(4\,320\) such vectors, which is the kissing number of the lattice. There are also \(61\,440\) other relevant vectors, which have a squared length of \(3\).
The facets belonging to the \(4\,320\) minimal vectors each have \(7\,704\) child faces and \(1\,046\,430\) vertices of all six classes, while the remaining \(61\,440\) facets have \(828\) child faces and \(26\,160\) vertices equivalent to either \(\mathbf{v_{2}}\), \(\mathbf{v_{4}}\), \(\mathbf{v_{5}}\), or \(\mathbf{v_{6}}\).
### _16-face_
Having enumerated all inequivalent \(d\)-faces for \(d=0,1,\ldots,15\) and computed their volumes and second moments using the recursion relations in [13, Sec. 3], a complete characterization of the \(16\)-face is obtained. Using [11], we estimate that the Voronoi region has between \(1\cdot 10^{14}\) and \(3\cdot 10^{14}\) faces across all dimensions.
Next, the _covariance matrix_ or _second moment tensor_ is computed as
\[\mathbf{U}=\frac{U}{16}\mathbf{I}_{16}\, \tag{2}\]
where the (unnormalized) _second moment_
\[U=\operatorname{tr}\mathbf{U}=\frac{207\,049\,815\,983}{4\,287\,303\,820\,800} \tag{3}\]
and \(\mathbf{I}_{16}\) the \(16\times 16\) identity matrix. After proper normalization, the quantizer constant is obtained as
\[G=\frac{1}{n}\frac{U}{V^{1+2/n}}\, \tag{4}\]
where \(n=16\) is the lattice's dimension and \(V=1/16\) is the volume of its Voronoi region, which yields
\[G=U\sqrt{2}\approx 0.068\,297\,622\,489\,318\,7. \tag{5}\]
To verify our enumeration of face classes, we use the recursion relations in [13, Sec. 3] to calculate the volume of the Voronoi region, which agrees with the expected value of \(1/16\). We also verify the result (5) numerically in Sec. IV.
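For convenience, the normalization (4)–(5) can be reproduced from the exact second moment (3) with a few lines of Python; this is a minimal sketch using only the constants quoted above, not code from this work:

```python
from fractions import Fraction

U = Fraction(207_049_815_983, 4_287_303_820_800)   # exact second moment, Eq. (3)
n = 16                                              # dimension of the lattice
V = Fraction(1, 16)                                 # volume of the Voronoi region

# G = U / (n * V^(1 + 2/n)); with n = 16 and V = 1/16 this reduces to U * sqrt(2).
G = float(U) / (n * float(V) ** (1 + 2 / n))
print(G)                        # approx. 0.0682976224893187, matching Eq. (5)
print(float(U) * 2 ** 0.5)      # same value via the simplified form U * sqrt(2)
```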
## III The symmetry group of \(\Lambda_{16}\)
The symmetries of \(\Lambda_{16}\) are generated by products of sign changes, permutations and the matrix
\[\mathbf{H}=\begin{bmatrix}\mathbf{H}_{4}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{H}_{4}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{H}_{4}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{H}_{4}\end{bmatrix}\, \tag{6}\]
where
\[\mathbf{H}_{4}=\frac{1}{2}\begin{bmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\end{bmatrix} \tag{7}\]
is a Hadamard matrix.
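As a quick illustration (our own sketch, not code from this work), \(\mathbf{H}\) can be assembled from the blocks in (7) and checked to be orthogonal, a necessary property of any lattice symmetry, and in fact an involution, since \(\mathbf{H}_{4}\) is symmetric:

```python
import numpy as np

H4 = 0.5 * np.array([[1,  1,  1,  1],
                     [1, -1,  1, -1],
                     [1,  1, -1, -1],
                     [1, -1, -1,  1]])          # Hadamard block, Eq. (7)

H = np.kron(np.eye(4), H4)                      # block-diagonal matrix of Eq. (6)

print(np.allclose(H @ H.T, np.eye(16)))         # True: H is orthogonal
print(np.allclose(H @ H, np.eye(16)))           # True: H is an involution
```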
There are \(2\,048\) sign changes, which can be described as a product of three subgroups \(\mathcal{S}_{1}\), \(\mathcal{S}_{2}\) and \(\mathcal{S}_{3}\). The first subgroup \(\mathcal{S}_{1}\) contains all even numbers of sign changes of component pairs \((\mathbf{x}_{i},\mathbf{x}_{i+1})\) for \(i=1,3,\ldots 15\), and has order \(128\). \(\mathcal{S}_{2}\) changes the signs of an even number of the first and last \(4\) odd components \((\mathbf{x}_{i},\mathbf{x}_{16-i})\), \(i=1,3,5,7\). This subgroup has order \(8\). Finally, \(\mathcal{S}_{3}\) is of order \(2\) and changes the signs of the components \((\mathbf{x}_{1},\mathbf{x}_{3},\mathbf{x}_{5},\mathbf{x}_{7})\).
The permutations \(\mathcal{P}\subset\mathrm{Aut}(\Lambda_{16})\) of vector components that keep \(\Lambda_{16}\) invariant are described in [1, Lemma 3.2]. The Lemma makes use of a \(4\)-dimensional vector space over the Galois field \(\mathrm{GF}(2)\) to represent indices of components of the lattice vectors. The reader is referred to [1] for a detailed description of this construction. Using [14, Eq. (19) of Ch. 13] the order of \(\mathcal{P}\) is
\[|\mathcal{P}|=16\prod_{l=0}^{3}(16-2^{l})=322\,560\, \tag{8}\]
These are precisely the permutations that keep the first-order binary Reed-Muller codes of length \(2^{4}\) invariant [14, Theorem 24 of Ch. 13].
Examples of permutations in \(\mathcal{P}\) are
\[p_{1} =(1\ 2\ 3\ 4)(5\ 6\ 7\ 8)(9\ 10\ 11\ 12)(13\ 14\ 15\ 16)\,\] \[p_{2} =(1\ 2)(5\ 6)(9\ 10)(13\ 14)\,\] \[p_{3} =(1\ 6\ 13)(2\ 8)(3\ 9\ 12\ 5\ 15\ 14)(4\ 11\ 7)\,\] \[p_{4} =(1\ 9\ 16\ 15\ 5\ 7\ 4\ 8\ 10\ 6\ 13\ 2\ 3\ 14\ 12)\, \tag{9}\]
here given in cycle notation for compactness. The complete subgroup \(\mathcal{P}\) can be generated using various subsets of these permutations, for example \(\{p_{1},p_{2},p_{3}\}\), \(\{p_{1},p_{4}\}\), or \(\{p_{3},p_{4}\}\).
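A hedged computational cross-check of (8) and (9): after converting the 1-based cycles to 0-based indices, SymPy can compute the order of the group generated by each of the listed sets. Whether these particular cycles generate all of \(\mathcal{P}\) is taken from the text; if the generators are transcribed correctly, each run should report \(322\,560\):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# 0-based versions of the cycles in (9)
p1 = Permutation([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]], size=16)
p2 = Permutation([[0, 1], [4, 5], [8, 9], [12, 13]], size=16)
p3 = Permutation([[0, 5, 12], [1, 7], [2, 8, 11, 4, 14, 13], [3, 10, 6]], size=16)
p4 = Permutation([[0, 8, 15, 14, 4, 6, 3, 7, 9, 5, 12, 1, 2, 13, 11]], size=16)

for gens in ([p1, p2, p3], [p1, p4], [p3, p4]):
    # the text states each generating set yields a group of order 322 560
    print(PermutationGroup(gens).order())
```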
The full automorphism group \(\mathrm{Aut}(\Lambda_{16})\) can be generated by combining \(\mathbf{H}\) with the generators of \(\mathcal{S}_{1}\), \(\mathcal{S}_{2}\), and \(\mathcal{S}_{3}\) and one of the sets of generators of \(\mathcal{P}\). Remarkably, it can also be generated by just two matrices, \(\mathbf{M}_{1}\) and \(\mathbf{M}_{2}\). The first, \(\mathbf{M}_{1}\), is a \(16\times 16\) permutation matrix; the second is
\[\mathbf{M}_{2}=\begin{bmatrix}\mathbf{H}_{4}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{\bar{H}}_{4}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{\bar{H}}_{4}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{\bar{H}}_{4}\end{bmatrix}\, \tag{10}\]
which is built using (7) with a sign change of the last row, i.e., with the Hadamard matrix
\[\mathbf{\bar{H}}_{4}=\frac{1}{2}\begin{bmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ -1&1&1&-1\end{bmatrix}. \tag{11}\]
## IV Numerical verification and error estimates
To validate (3), we estimate \(U\) by Monte-Carlo integration over the Voronoi region. We also estimate the variance of the estimate of \(U\), for which we use a different method than the "jackknife estimator" in [15]. In this section, we first describe our estimate of \(U\) and the variance thereof, then motivate why we prefer our variance estimator over the jackknife, and finally compare our numerical estimate of \(G\) for \(\Lambda_{16}\) with the true value in (5).
The Monte-Carlo estimate of \(U\) is
\[\hat{U}=\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{x}_{i}\|^{2}\, \tag{12}\]
where \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\) are \(N\) independent random vectors uniformly distributed in the Voronoi region of \(\Lambda\).
To estimate \(\operatorname{var}\hat{U}\), we first note that since the vectors \(\mathbf{x}_{i}\) are independent and identically distributed, \(\operatorname{var}\hat{U}=\)
\((1/N)\operatorname{var}\lVert\mathbf{x}\rVert^{2}\), where \(\mathbf{x}\) is a single random vector with the same distribution as \(\mathbf{x}_{i}\). Therefore, our estimate of \(\operatorname{var}\hat{U}\), denoted by \(\widehat{\operatorname{var}}\hat{U}\), is defined by
\[\widehat{\operatorname{var}}\hat{U}=(1/N)\,\widehat{\operatorname{var}} \lVert\mathbf{x}\rVert^{2}\;. \tag{13}\]
Applying the standard unbiased variance estimator of \(\operatorname{var}\lVert\mathbf{x}\rVert^{2}\)
\[\widehat{\operatorname{var}}\lVert\mathbf{x}\rVert^{2}=\frac{1}{N-1}\sum_{i=1}^{N }\left(\lVert\mathbf{x}_{i}\rVert^{2}-\hat{U}\right)^{2} \tag{14}\]
in (13) yields
\[\widehat{\operatorname{var}}\hat{U} =\frac{1}{N(N-1)}\sum_{i=1}^{N}\left(\lVert\mathbf{x}_{i}\rVert^{2}- \hat{U}\right)^{2}\] \[=\frac{1}{N-1}\left(\frac{1}{N}\sum_{i=1}^{N}\lVert\mathbf{x}_{i} \rVert^{4}-\hat{U}^{2}\right) \tag{15}\]
or after normalization as in (4)
\[\hat{G} =\frac{\hat{U}}{nV^{1+2/n}}, \tag{16}\] \[\widehat{\operatorname{var}}\hat{G} =\frac{\widehat{\operatorname{var}}\hat{U}}{(nV^{1+2/n})^{2}}\;. \tag{17}\]
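To make (12)–(17) concrete, the following sketch (our own illustration, not the code used in this work) applies the estimators to the cubic lattice \(\mathbb{Z}^{n}\), whose Voronoi region is the unit cube, so uniform samples are trivial to draw; for \(\mathbb{Z}^{n}\) the exact values are \(U=n/12\) and \(G=1/12\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 100_000
V = 1.0                                    # volume of the Voronoi region of Z^n (unit cube)

x = rng.uniform(-0.5, 0.5, size=(N, n))    # uniform samples in the Voronoi region
r2 = np.sum(x ** 2, axis=1)                # squared norms ||x_i||^2

U_hat = r2.mean()                                          # Eq. (12)
var_U_hat = r2.var(ddof=1) / N                             # Eqs. (13)-(15)
G_hat = U_hat / (n * V ** (1 + 2 / n))                     # Eq. (16)
std_G_hat = np.sqrt(var_U_hat) / (n * V ** (1 + 2 / n))    # Eq. (17)

print(f"U_hat = {U_hat:.5f}   (exact U = n/12 = {n / 12:.5f})")
print(f"G_hat = {G_hat:.5f} +/- {std_G_hat:.5f}   (exact G = 1/12 = {1 / 12:.5f})")
```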
The variance estimator (15) follows directly from fundamental laws of probability. What is surprising is that a different estimator has been used, unchallenged, in most, or perhaps all, previous works involving numerical estimates of lattice second moments [15, 16, 17]. To rectify this 39-year-old misconception, we now elaborate on why (15) is more accurate.
The jackknife works by partitioning the independent randomly selected vectors \(\mathbf{x}_{1},\dots,\mathbf{x}_{N}\) into \(g\) groups, computing the average squared length within each group, and finally computing the sample variance of these \(g\) averages [15, Eqs. (3)
\begin{table}
\begin{tabular}{c|c c c} \hline dim & classes & child faces & vertices \\ \hline
0 & 6 & 0 & 1 \\
1 & 23 & 2 & 2 \\
2 & 58 & 3, 4 & 3, 4 \\
3 & 168 & 4–6 & 4–8 \\
4 & 441 & 5–16 & 5–16 \\
5 & 867 & 6–21 & 6–32 \\
6 & 1 257 & 7–30 & 7–64 \\
7 & 1 329 & 8–51 & 8–128 \\
8 & 1 023 & 9–128 & 9–256 \\
9 & 566 & 10–194 & 10–400 \\
10 & 253 & 11–258 & 11–641 \\
11 & 96 & 12–620 & 12–1281 \\
12 & 35 & 16–862 & 24–2945 \\
13 & 12 & 42–1 312 & 64–11 138 \\
14 & 5 & 144–2 763 & 520–59 907 \\
15 & 2 & 828, 7 704 & 26 160, 1 046 430 \\
16 & 1 & 65 760 & 201 343 200 \\ \hline \end{tabular}
\end{table} TABLE II: Summary information about the faces of the Voronoi region of \(\Lambda_{16}\). The first column lists the dimension \(d\) of the faces and the second the number of classes of \(d\)-faces under \(\operatorname{Aut}(\Lambda_{16})\). The third column shows the range of numbers of child faces of each \(d\)-face and the fourth column the range of numbers of vertices of each \(d\)-face.
(4)]. This method brings at least two disadvantages: First, the estimated variance depends on how the list \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\) is ordered; reordering the list would yield a different variance estimate, although the estimated second moment (12) remains the same. And second, the variance of vectors within a group is ignored. The proposed estimator (15) suffers from neither of these disadvantages.
To quantify the accuracy of both variance estimators, we numerically estimate the second moment of the cubic lattice \(\mathbb{Z}^{n}\) for \(n=3\). The second moment of \(\mathbb{Z}^{n}\) is \(U=\mathbb{E}[\|\mathbf{x}\|^{2}]=n/12\), and the variance of \(\hat{U}\) can be calculated exactly as \(\operatorname{var}\hat{U}=(1/N)\operatorname{var}\lVert\mathbf{x}\rVert^{2}=(1/N)(\mathbb{E}[\|\mathbf{x}\|^{4}]-\mathbb{E}[\|\mathbf{x}\|^{2}]^{2})=n/(180N)\). We generated \(N=100\,000\) vectors uniformly in the Voronoi region of \(\mathbb{Z}^{3}\), which is the unit cube, computed \(\hat{U}\) using (12), and estimated the variance of \(\hat{U}\) using the two methods. For the jackknife, we used a group size of \(g=100\) as in [15]. Both estimators were run 10 000 times, each time with \(N\) new random vectors. Fig. 2 shows histograms of the resulting estimates of the standard deviation, together with the exact value. It can be observed that (15) in this example is more than an order of magnitude more accurate than the jackknife with \(g=100\).
The accuracy of the jackknife improves with increasing \(g\), and it is most accurate when each group consists of a single sample, i.e., when \(g=N\). In this extreme case, the jackknife simplifies into (15)--but this is not how the jackknife was applied in previous studies [15, 16, 17].
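A minimal sketch of the comparison just described (our own illustration; the grouped estimator below is one common reading of the jackknife construction in [15] and may differ in its details): both estimators are applied to \(\mathbb{Z}^{3}\), where \(\operatorname{var}\hat{U}=n/(180N)\) is known exactly. A single run gives one draw of each estimate; repeating the run many times, as done above, exposes the much larger spread of the grouped estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, groups = 3, 100_000, 100
exact_std = np.sqrt(n / (180 * N))         # exact standard deviation of U_hat for Z^n

r2 = np.sum(rng.uniform(-0.5, 0.5, size=(N, n)) ** 2, axis=1)

# Proposed estimator, Eq. (15)
std_direct = np.sqrt(r2.var(ddof=1) / N)

# Grouped ("jackknife"-style) estimator: sample variance of the per-group means,
# divided by the number of groups.
group_means = r2.reshape(groups, N // groups).mean(axis=1)
std_grouped = np.sqrt(group_means.var(ddof=1) / groups)

print(f"exact   std of U_hat: {exact_std:.6f}")
print(f"direct  estimate    : {std_direct:.6f}")
print(f"grouped estimate    : {std_grouped:.6f}")
```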
Having established the usefulness of the new variance estimator, we proceed to estimate the quantizer constant \(G\) of \(\Lambda_{16}\) with high accuracy. Numerically evaluating (12) and (16) for the mean and (15) and (17) for the standard deviation, using \(N=4\cdot 10^{12}\) random \(16\)-dimensional vectors, we obtain
\[\hat{G} =0.068297616\, \tag{18}\] \[\sqrt{\operatorname{\widehat{var}}\hat{G}} =0.000000009. \tag{19}\]
The difference between \(\hat{G}\) and the exact \(G\) in (5) is only \(0.7\) standard deviations, which may serve as a numerical verification of the face hierarchy. The results are also in agreement with the previous (less accurate) estimate of the same constant in [15, Eq. (13)].
## V The algorithm
Our algorithm2 is described in detail in [13], which builds on previous methods for finding all relevant vectors [21] and faces [22]. In this section, we briefly summarize the main concept and present minor modifications to the methods of [13].
Footnote 2: The algorithms are implemented in _Python_ and the data types “List” and “Dictionary” we use in the code listings are meant to behave like the respective Python types. Group-theoretic aspects make use of _GAP_[18, 19], which is called from Python using _gappy_[20].
The basic approach remains the same: We first find all relevant vectors, i.e., normals of the facets, and all the vertices of the Voronoi region. The hierarchy of subfaces of the facets is then built by recursively intersecting the sets of vertices of parent faces. The computational cost is kept low by finding the classes of faces equivalent under \(\operatorname{Aut}(\Lambda_{16})\) and then only constructing the child faces of one (arbitrarily chosen) representative face per class. In total, only \(159\,143\) faces are constructed explicitly.
The classification of faces is performed iteratively as described in [13, Section 2.4.4]. In this method, we begin identifying equivalent faces using a proper subgroup \(\mathcal{U}\subset\operatorname{Aut}(\Lambda_{16})\), which creates classes of faces under \(\mathcal{U}\). The set consisting of one (arbitrarily) representative per class is then classified using another subgroup \(\mathcal{U}^{\prime}\). This can be repeated with different subgroups until we finally use the full group \(\operatorname{Aut}(\Lambda_{16})\). For \(\Lambda_{16}\), we found that a good option is to use only a single subgroup \(\mathcal{U}\), chosen as the stabilizer of the relevant vector \(\mathbf{n}_{2}\) with a stabilizer size of \(1\,451\,520\) (see Tab. I).
We made three changes to the method in [13], which affect how the equivalence of two faces is tested and how the orbits and stabilizers of individual vectors are constructed. We now describe these changes in turn, briefly revisiting the respective previous methods followed by our new algorithms.
### _Testing the equivalence of faces_
Our previous method of testing whether a face \(F\) is equivalent to another face \(F^{\prime}\) under a group \(\mathcal{G}\) is based on the following idea.3 For each face, we take a set of vectors that uniquely identifies that face. We use either the set of relevant vectors associated with the facets containing the face (i.e., the "normal vectors" of the face) or alternatively the face's vertices. The choice depends on the number of vectors in either of the two sets and on their classification under \(\mathcal{G}\). Let \(\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\) be the vectors of \(F\) and \(\mathbf{y}_{1},\ldots,\mathbf{y}_{N}\) be those of \(F^{\prime}\). We order these vectors such that \(\mathbf{x}_{i}\) is equivalent to \(\mathbf{y}_{i}\) for all \(i\) (if that is not possible, the faces are inequivalent). We then form the sets of all transformations between pairs \((\mathbf{x}_{i},\mathbf{y}_{i})\) for all \(i\). If the intersection of these sets is non-empty, it consists of transformations taking \(F\) into \(F^{\prime}\). If it is empty, however, we permute one of the sets and try again. The faces
Fig. 2: Histograms of two estimates of the standard deviation of the estimated second moment \(\hat{U}\) of the cubic lattice. The exact standard deviation \((\operatorname{var}\hat{U})^{1/2}\), which can be calculated analytically for the cubic lattice, reveals that the proposed estimator (15) is much more accurate than the jackknife with 100 groups.
are inequivalent if and only if all permutations lead to empty intersections of the sets of transformations.
In principle, the full set of transformations between any two equivalent vectors can easily be constructed as follows. Let \(\mathbf{x}=g_{x}\mathbf{x}^{\text{rep}}\) and \(\mathbf{y}=g_{y}\mathbf{x}^{\text{rep}}\) be two equivalent vectors with \(g_{x},g_{y}\in\mathcal{G}\) and \(\mathbf{x}^{\text{rep}}\) representing their equivalence class. Then, the full set of transformations in \(\mathcal{G}\) taking \(\mathbf{x}\) into \(\mathbf{y}\) is [13]
\[\mathcal{T}_{xy}=g_{y}\operatorname{Stab}_{\mathcal{G}}(\mathbf{x}^{\text{rep}})g _{x}^{-1}\, \tag{20}\]
where \(\operatorname{Stab}_{\mathcal{G}}(\mathbf{x}^{\text{rep}})\) is the stabilizer of \(\mathbf{x}^{\text{rep}}\) in \(\mathcal{G}\).
From Tab. I, we see that for \(\Lambda_{16}\), the sets (20) contain between \(1\,344\) and \(20\,643\,840\) elements. When forming the intersections using _GAP_, these sets are held in memory, which becomes a problem when multiple intersections need to be calculated.
We now describe a memory-efficient alternative, shown in Alg. 1. As in [13], this method is used after ensuring that \(F\) and \(F^{\prime}\) have the same number of vertices and number of normal vectors, and that the respective sets of vectors can be ordered such that \(\mathbf{x}_{i}\sim\mathbf{y}_{i}\) for all \(i\).
The main idea is to fix one vector \(\mathbf{x}\) of \(F\) and then construct all transformations
\[\mathcal{T}_{x}=\bigcup_{\mathbf{y}\in\mathcal{Y}}\mathcal{T}_{xy} \tag{21}\]
taking \(\mathbf{x}\) into any of the vectors \(\mathbf{y}\in\mathcal{Y}\), where \(\mathcal{Y}\) denotes the vectors of \(F^{\prime}\). Clearly, if \(F\) and \(F^{\prime}\) are equivalent, say \(gF=F^{\prime}\) for some \(g\in\mathcal{G}\), then \(g\) takes \(\mathbf{x}\) into one of the vectors \(\mathbf{y}\) of \(F^{\prime}\) and thus \(g\in\mathcal{T}_{x}\). Choosing \(\mathbf{x}\) as the vector with the smallest stabilizer and fewest equivalent vectors of \(F^{\prime}\), \(\mathcal{T}_{x}\) will often be very small and can be checked one by one. However, even if the smallest stabilizer is large, the elements of \(\mathcal{T}_{x}\) can be enumerated without holding the full set in memory.
Alg. 1 performs this test as follows. In lines 6 and 7, \(\mathbf{x}\) is chosen as the vector with the smallest stabilizer and, if there are multiple possibilities, then the one with the smallest number of equivalent vectors of \(F^{\prime}\). In line 10, we store the set of these equivalent vectors as \(\mathcal{Y}_{x}\). Independently from the choice of \(\mathbf{x}\), let \(\mathcal{D}\) be the smaller of the sets of vertices and of normal vectors of \(F\) (lines 12-17). We choose \(\mathcal{D}^{\prime}\) analogously for \(F^{\prime}\). Since the stabilizer is a group, we can use methods in _GAP_ to iterate over all its elements in line 18, while holding only one element in memory at any given time. For each element \(g_{s}\in\operatorname{Stab}_{\mathcal{G}}(\mathbf{x}^{\text{rep}})\) and each \(\mathbf{y}\in\mathcal{Y}_{x}\), we form the transformation (line 21)
\[g=g_{y}g_{s}g_{x}^{-1} \tag{22}\]
and evaluate if the two sets \(g\mathcal{D}\) and \(\mathcal{D}^{\prime}\) are equal. If they are, then \(F\) is equivalent to \(F^{\prime}\) and \(gF=F^{\prime}\). If they are unequal for all \(g_{s}\in\operatorname{Stab}_{\mathcal{G}}(\mathbf{x}^{\text{rep}})\) and all \(\mathbf{y}\in\mathcal{Y}_{x}\), then the two faces are inequivalent under \(\mathcal{G}\).
### _Constructing the orbit of a vector_
We use a variation of the standard orbit enumeration technique as implemented, e.g., in [18]. Alg. 2 constructs the orbit of a vector \(\mathbf{x}\) under a group \(\mathcal{G}\) and stores the group elements taking \(\mathbf{x}\) to the elements in its orbit. These group elements are needed in the procedure TransformOf in Alg. 1. The result is stored as a dictionary, where each key-value pair consists of an element \(\mathbf{y}\) of the orbit as key and one arbitrary transformation matrix taking \(\mathbf{x}\) into \(\mathbf{y}\) as value. We will call such a dictionary an _orbit map_.
For vertices, most of the group elements and, in fact, most of the vertices themselves are not needed in Alg. 1. Since child faces are constructed only for the fixed representative parent faces, only the vertices of the representative facets can appear. Our orbit algorithm therefore selectively stores only some of the group elements, which is decided in Alg. 2 using a _condition_ function. This significantly reduces the memory usage for the large orbits of vertices. For \(\Lambda_{16}\), only the group elements corresponding to \(1\,067\,070\) out of all \(201\,343\,200\)
vertices are needed.4
Footnote 4: Because some vertices appear in both facets, this number will vary depending on which facets are chosen as representatives.
The idea of the standard orbit algorithm is to repeatedly apply the generators of the group to the initial and the newly constructed vectors until no new vector appears. This is used in Alg. 2, where the _pool_ and _new_pool_ variables keep track of which new vectors have appeared in the last iteration. In lines 16-17, we conditionally store the vector and its transformation in _orbit_map_. If all vectors are known, the new pool remains empty and the termination condition of the while-loop is satisfied. When constructing the orbits of vertices, the _condition_ is chosen to evaluate to _true_ only when the vector lies in one of the representative facets. For relevant vectors, _condition_ is set to always evaluate to _true_.
### _Constructing the stabilizer of a vector_
The third change to the method in [13] is an algorithm to construct the stabilizer of a vector under a group \(\mathcal{G}\). Our method is again inspired by a standard orbit-stabilizer algorithm such as the one implemented in [18]. Stabilizers are needed in line 18 of Alg. 1, where we iterate over all elements of the stabilizer of one of the representative vectors. For \(\mathcal{G}=\mathrm{Aut}(\Lambda_{16})\), there are in total \(8\) representative vectors listed in Tab. I. We previously let _GAP_ find the stabilizer of a vector. With the knowledge about each vector's orbit size, however, we can implement a more efficient method.
```
1:procedureOrbit(\(\boldsymbol{x}\), _gens_, _condition_)
2:\(\textit{orbit}\leftarrow\{\boldsymbol{x}\}\)
3:\(\textit{orbit\_map}\leftarrow\) new empty Dictionary
4:\(\textit{orbit\_map}[\boldsymbol{x}]\leftarrow\) identity matrix
5:\(\textit{pool}\leftarrow\) copy of _orbit_map_
6:while\(pool\) is not empty do
7:\(\textit{new\_pool}\leftarrow\) new empty Dictionary
8:for all\(\boldsymbol{y}\in\) keys of pool do
9:\(h\gets\textit{pool}[\boldsymbol{y}]\)
10:for all \(g\in\textit{gens}\) do
11:\(\boldsymbol{y}^{\prime}\gets g\boldsymbol{y}\)
12:if\(\boldsymbol{y}^{\prime}\notin\textit{orbit}\)then
13:\(\textit{orbit}\gets\textit{orbit}\cup\{\boldsymbol{y}^{\prime}\}\)
14:\(h^{\prime}\gets gh\)
15:\(\textit{new\_pool}[\boldsymbol{y}^{\prime}]\gets h^{\prime}\)
16: See the main text for this if-statement:
17:if\(\textit{condition}(\boldsymbol{y}^{\prime})\)then
18:\(\textit{orbit\_map}[\boldsymbol{y}^{\prime}]\gets h^{\prime}\)
19:\(pool\leftarrow\textit{new\_pool}\)
20:return\(\textit{orbit}\), \(\textit{orbit\_map}\)
```
**Algorithm 2** Construct the orbit of a vector \(\boldsymbol{x}\) under a group \(\mathcal{G}\). The group is given as a set _gens_ of transformation matrices generating the full group. For \(\Lambda_{16}\) we use \(\textit{gens}=\{\boldsymbol{M}_{1},\boldsymbol{M}_{2}\}\), where \(\boldsymbol{M}_{1},\boldsymbol{M}_{2}\) are given in Sec. III. The _condition_ is a boolean function of one vector and specifies if the transformation matrix should be stored for the given vector. This procedure returns a set of all vectors in the orbit as well as an orbit map. See the main text for details.
```
1:procedureStabilizer(\(\boldsymbol{x}\), _gens_, _orbit_size_)
2:\(\mathcal{G}\leftarrow\textit{GAP}\) group from _gens_
3:\(\textit{stab\_size}\leftarrow|\mathcal{G}|/\textit{orbit\_size}\)
4:\(\textit{stab\_gens}\leftarrow\) new empty List
5:\(\textit{stab}\leftarrow\) _GAP_ group containing only \(\boldsymbol{I}_{16}\)
6:\(\textit{orbit\_map}\leftarrow\) new empty Dictionary
7:\(\textit{orbit\_map}[\boldsymbol{x}]\leftarrow\) identity matrix
8:for all\(g\in\mathcal{G}\)do
9:\(\boldsymbol{x}^{\prime}\gets g\boldsymbol{x}\)
10:if\(\boldsymbol{x}^{\prime}\in\) keys of _orbit_map_then
11:\(g^{\prime}\leftarrow\textit{orbit\_map}[\boldsymbol{x}^{\prime}]\)
12:\(g_{s}\gets g^{-1}g^{\prime}\)
13:if\(g_{s}\notin\textit{stab}\)then
14: append \(g_{s}\) to _stab_gens_
15:\(\textit{stab}\gets\textit{GAP}\) group from _stab_gens_
16:if\(|\textit{stab}|=\textit{stab\_size}\)then
17:return\(\textit{stab}\)
18:else
19:\(\textit{orbit\_map}[\boldsymbol{x}^{\prime}]\gets g\)
```
**Algorithm 3** Construct the stabilizer of a vector \(\boldsymbol{x}\) in \(\mathcal{G}\) whose orbit size is known. As in Alg. 2, the group \(\mathcal{G}\) is given as a set _gens_ of generator matrices. See the main text for details.
In Alg. 3, we construct elements of the orbit by applying different group elements to the vector \(\boldsymbol{x}\) (line 9). Any vector \(\boldsymbol{x}^{\prime}\) that is visited this way is stored together with the corresponding group element in an orbit map (line 19). Whenever we encounter a vector \(\boldsymbol{x}^{\prime}\) previously found, we retrieve the stored group element \(g^{\prime}\) (line 11). Since \(g\boldsymbol{x}=g^{\prime}\boldsymbol{x}\), we have \(g^{-1}g^{\prime}\boldsymbol{x}=\boldsymbol{x}\) and so \(g_{s}=g^{-1}g^{\prime}\) is an element of the stabilizer of \(\boldsymbol{x}\). If it is not yet an element of the subgroup \(\textit{stab}\subseteq\mathrm{Stab}_{\mathcal{G}}(\boldsymbol{x})\) found thus far, it is added to the list of group generators in line 14. After updating _stab_ in line 15, we check if it is complete by comparing its size against the known stabilizer size.
This is made efficient by two facts. First, due to the "birthday paradox" [23, Section 3], the first coincidence in line 10 occurs on average after \(1+\sum_{n=1}^{N}\prod_{i=1}^{n-1}(1-i/N)\) group elements (see the second unnumbered equation below [23, Eq. (12)]), where \(N\) is the size of the orbit of \(\boldsymbol{x}\) under \(\mathcal{G}\). For \(\mathrm{Aut}(\Lambda_{16})\), this means that the first element of the stabilizers of the vectors in Tab. I is found after about \(83\) (for \(\boldsymbol{n}_{1}\) and \(\boldsymbol{v}_{1}\)) to \(10\,210\) (for \(\boldsymbol{v}_{2},\boldsymbol{v}_{4},\boldsymbol{v}_{5}\)) iterations. Second, the stabilizers are often generated by very few group elements. In the case of \(\Lambda_{16}\), the set of all \(8\) stabilizers is found within minutes on a single core, since each stabilizer can be generated by only two generators.
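For illustration, the expected-first-coincidence formula quoted above is easy to evaluate; a short sketch for the orbit of the minimal vectors (size \(4\,320\)) reproduces the figure of about \(83\) iterations and compares it with the classical birthday-problem approximation \(\sqrt{\pi N/2}+2/3\):

```python
from math import sqrt, pi

def expected_first_coincidence(N):
    """1 + sum_{n=1}^{N} prod_{i=1}^{n-1} (1 - i/N), evaluated incrementally."""
    total, prod = 1.0, 1.0
    for n in range(1, N + 1):
        total += prod
        prod *= 1.0 - n / N
        if prod == 0.0:
            break
    return total

N = 4320                                   # orbit size of the minimal vectors n_1
print(expected_first_coincidence(N))       # ~83, as quoted in the text
print(sqrt(pi * N / 2) + 2 / 3)            # birthday-problem approximation, also ~83
```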
## VI Conclusions
In this work, we provide a complete account of the relevant vectors, vertices, and face classes of the Voronoi region of the Barnes-Wall lattice \(\Lambda_{16}\). This is used to calculate the exact second moment of \(\Lambda_{16}\). In order to obtain these results, we improve our algorithm [13], allowing it to be used with larger symmetry groups than previously possible. We believe that our algorithm can be used to analyse the Voronoi regions of
many lattices with known symmetry group, potentially even in dimensions higher than 16.
Using Monte-Carlo integration, the exact value of the second moment is numerically verified. Furthermore, it is shown that the variance of the numerical result can be approximated with much higher accuracy than conventionally obtained with the jackknife estimator. This may provide significant improvements in numerical second moment estimates in the future.
|
2303.06833
|
Transformer-based Planning for Symbolic Regression
|
Symbolic regression (SR) is a challenging task in machine learning that
involves finding a mathematical expression for a function based on its values.
Recent advancements in SR have demonstrated the effectiveness of pre-trained
transformer-based models in generating equations as sequences, leveraging
large-scale pre-training on synthetic datasets and offering notable advantages
in terms of inference time over classical Genetic Programming (GP) methods.
However, these models primarily rely on supervised pre-training goals borrowed
from text generation and overlook equation discovery objectives like accuracy
and complexity. To address this, we propose TPSR, a Transformer-based Planning
strategy for Symbolic Regression that incorporates Monte Carlo Tree Search into
the transformer decoding process. Unlike conventional decoding strategies, TPSR
enables the integration of non-differentiable feedback, such as fitting
accuracy and complexity, as external sources of knowledge into the
transformer-based equation generation process. Extensive experiments on various
datasets show that our approach outperforms state-of-the-art methods, enhancing
the model's fitting-complexity trade-off, extrapolation abilities, and
robustness to noise.
|
Parshin Shojaee, Kazem Meidani, Amir Barati Farimani, Chandan K. Reddy
|
2023-03-13T03:29:58Z
|
http://arxiv.org/abs/2303.06833v5
|
# Transformer-based Planning for Symbolic Regression
###### Abstract
Symbolic regression (SR) is a challenging task in machine learning that involves finding a mathematical expression for a function based on its values. Recent advancements in SR have demonstrated the effectiveness of pretrained transformer-based models in generating equations as sequences, leveraging large-scale pretraining on synthetic datasets and offering notable advantages in terms of inference time over GP-based methods. However, these models primarily rely on supervised pretraining goals borrowed from text generation and overlook equation-specific objectives like accuracy and complexity. To address this, we propose TPSR, a Transformer-based Planning strategy for **S**ymbolic **R**egression that incorporates Monte Carlo Tree Search into the transformer decoding process. Unlike conventional decoding strategies, TPSR enables the integration of non-differentiable feedback, such as fitting accuracy and complexity, as external sources of knowledge into the transformer-based equation generation process. Extensive experiments on various datasets show that our approach outperforms state-of-the-art methods, enhancing the model's fitting-complexity trade-off, extrapolation abilities, and robustness to noise.
## 1 Introduction
Symbolic regression (SR) is a powerful method to discover mathematical expressions for governing equations of complex systems and to describe data patterns in an interpretable symbolic form. It finds extensive applications in science and engineering, enabling the modeling of physical phenomena in various domains such as molecular dynamics, fluid dynamics, and cosmology [1; 2; 3; 4; 5; 6]. Symbolic representations provide valuable insights into complex systems, facilitating a better understanding, prediction, and control of these systems through the design of accurate, generalizable, and efficient models [7; 8; 9]. SR models establish the functional relationship between independent and target variables by mapping them to mathematical equations. The input data can be obtained from simulations, experimental measurements, or real-world observations. Symbolic regression, however, poses several challenges, including the combinatorial nature of the optimization search space, vulnerability to the quality of input data, and the difficulty of striking a balance between model fitting, complexity, and generalization performance [10; 11].
Symbolic regression encompasses a wide range of methods, spanning different categories. Traditional approaches, such as Genetic Programming (GP), use a heuristic population-based search strategy where each individual represents a potential solution to the problem [12; 13]. Though GP algorithms are capable of finding solutions for nonlinear and complex problems, they are typically slow to converge due to the vast functional search space. Also, as they start the search from scratch for each given equation, they tend to be computationally expensive, prone to overfitting, and sensitive to the choice of parameters [14]. Recent works in SR have shown promising results by using pretrained
transformers [15] for generating equations as sequences of tokens. These models leverage the large-scale pretraining and can generate equations with a single forward pass, leading to faster inference times compared to GP-based methods [16; 17; 18; 19]. However, one of the limitations of these models is that they focus on the supervised pretraining goals borrowed from text generation, i.e., they are trained solely with the token-level cross-entropy (CE) loss, which can result in equations that may exhibit high token-level similarities but are suboptimal with respect to equation-specific objectives such as fitting accuracy and complexity. To mitigate this issue, beam search [20; 21] or sampling [22] approaches are employed as decoding strategies to propose multiple candidate equations for a given dataset, and then select the optimal candidate equation based on the fitting accuracy after optimizing for constants. Nonetheless, both beam search and sampling decoding strategies primarily rely on the pretrained transformer's logits and next token probability distributions, and therefore do not receive any performance feedback during the generation of equation candidates. To consider the equation-specific objectives in the transformer generation process and still benefit from the pretrained model logits, we propose TPSR, a Transformer-based **P**lanning strategy for **S**ymbolic **R**egression. TPSR leverages a lookahead planning algorithm, using Monte Carlo Tree Search (MCTS) as a decoding strategy on top of pretrained transformer-based SR models to guide equation sequence generation. TPSR significantly improves the performance of generated equations by considering feedback during the generation process and still remains faster than GP-based models which do not leverage the pretraining priors and learn each expression from scratch. Notably, our approach is model-agnostic and can be applied to any pretrained SR model, enabling optimization of generated equation sequences for non-differentiable objectives that may encompass combinations of fitting accuracy, complexity, and equation forms. Additionally, we incorporate different caching mechanisms to reduce the overall inference time. Our experimental results demonstrate that applying TPSR on top of the pretrained E2E SR model [18] significantly enhances its performance across various benchmark datasets. As depicted in Fig. 1, TPSR achieves a strong balance between fitting accuracy and model complexity compared to other leading baselines. It also effectively drives the E2E model towards the optimal trade-off, represented by the first Pareto front. The major contributions of this work are summarized below:
* Proposing TPSR, a new method that combines pretrained transformer SR models with Monte Carlo Tree Search (MCTS) lookahead planning to optimize the generation of equation sequences while considering non-differentiable performance feedback.
* Developing a new reward function that balances equation fitting accuracy and complexity to optimize the generated equations for an effective trade-off.
* Demonstrating that TPSR consistently outperforms state-of-the-art baselines across various SR benchmark datasets, generating equations with higher fitting accuracy while maintaining lower complexity to avoid non-parsimonious solutions.
* Showcasing the extrapolation and noise robustness of TPSR compared to the baseline and conducting an ablation study to investigate the impact of various model components.

Figure 1: Pareto plot comparing the rankings of all methods in terms of the \(R^{2}\) performance and identified equation complexity for **(a)** the SRBench _Black-box_ dataset and **(b)** the _Feynman_ dataset. Applying Transformer-based Planning (TPSR) on top of the E2E transformer SR model improves its average accuracy on both datasets while maintaining a similar range of equation complexity. _TPSR can successfully reach the first Pareto front, which is better than the E2E baseline, on both datasets_. Connecting lines and colors denote Pareto dominance rankings and "\(*\)" indicates SR methods in _Black-box_ datasets.
## 2 Related Work
**Single-Instance Symbolic Regression.** Genetic Programming (GP) algorithms are typically employed for single-instance SR, aiming to find the best-fit equation for function observations [12]. Recently, alternative neural network-based search algorithms have been explored, including deep reinforcement learning (RL) [23; 14; 24], combinations of GP and RL [25], and Monte Carlo Tree Search (MCTS) as a standalone framework [26]. Despite their successes, all these methods lack the benefits of semantic knowledge learned from large-scale pretraining. Consequently, they are slow during inference as they need to restart the search from scratch for new equations.
**Pretrained Transformers for Symbolic Regression.** In recent years, pretrained transformers have shown remarkable performance in natural language and programming language tasks [27; 28; 29]. This success has inspired researchers to develop pretrained transformer models for SR [16; 17; 18; 19; 30]. For example, Biggio _et al._[16] introduced a Neural Symbolic Regression (NSR) model that scales with the amount of synthetic training data and generates equation skeletons where all the numerical constants are represented by a single token "\(C\)". Kamienny _et al._[18] proposed an end-to-end framework that predicts the complete equation form along with its constants. More recent works [30; 31] introduced unified frameworks that include a transformer-based pretraining stage as the prior for subsequent RL or GP optimization steps. While GP and RL methods have to start anew for each problem, the purely transformer-based approaches rely on synthetic data and the power of large-scale pretrained priors to generate equations in a single forward pass. However, these models are mostly pretrained on token-level sequence generation losses, and thus can perform suboptimally for other equation-specific objectives such as fitting accuracy and complexity. Our model, TPSR, utilizes MCTS lookahead planning to guide the generation of equations towards better performance by employing fitting and complexity feedback during the transformer generation process.
**Planning in Sequence Generation.** Recently, planning algorithms such as Monte Carlo Tree Search (MCTS) have been utilized in NLP tasks to optimize text output for specific objectives, such as controlling generated text to meet certain constraints like non-toxicity or conveying certain emotions [32; 33; 34]. Recent advances in programming language models developed in code generation have also yielded promising techniques that could be adapted for SR, as they share several vital similarities with each other. Both involve generating sequences of symbols for a given input and typically require optimizing the generated sequences for specific criteria. For code generation, this may involve optimizing objectives like code compilability, readability, or passing test cases [35; 36; 37]. Similarly, in SR, the focus may be on equation-specific sequence-level objectives such as fitting accuracy or minimizing complexity. Motivated by these successes, we develop an approach that combines MCTS with pretrained transformer SR models for improved equation generation.
## 3 Methodology
### Preliminaries
In SR, the main goal is to find a symbolic expression for the unknown function \(f(\cdot)\) mapping the \(d\)-dimensional input \(\mathbf{x}\in\mathbb{R}^{d}\) to the target variable \(y=f(\mathbf{x})\in\mathbb{R}\). Given a dataset of \(n\) observations \(\mathcal{D}=(\mathbf{x}_{i},y_{i})_{i=1}^{n}\), SR methods try to generate an equation \(\tilde{f}(\cdot)\) such that \(y_{i}\approx\tilde{f}(\mathbf{x}_{i})\) for all \(i\in\mathbb{N}_{n}\). Also, the proposed equation is desired to generalize well and to effectively balance the fitting accuracy and complexity. The transformer-based SR models are trained on a large-scale dataset of equation instances \(\{(\mathcal{D}_{1},f_{1}(\cdot))\ \ldots\ (\mathcal{D}_{M},f_{M}(\cdot))\}\), where \(M\) is the dataset size. During inference, the trained model directly generates the equation \(\tilde{f}(\cdot)\) as a sequence of tokens in an autoregressive manner. An effective way to represent the expression tree of equations in a sequence is to use prefix notation as in [38]. Transformer-based SR models first embed and encode the input observations, and then pass the encoded representation along with the masked tokens to decode the equation sequence. To train the model, token-level cross-entropy loss is employed to learn the distribution of next token prediction conditioned on the encoded dataset and the current state of sequence (Fig. 2(a)).
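To make the prefix-notation encoding concrete, here is a minimal sketch (ours, not the pretrained model's code): a toy equation is serialized as a prefix token sequence and evaluated recursively on a batch of inputs; the small operator vocabulary is illustrative.

```python
import numpy as np

# Toy prefix serialization of f(x) = sin(x1) + 0.5 * x2 (illustrative vocabulary).
PREFIX = ["add", "sin", "x1", "mul", "0.5", "x2"]

BINARY = {"add": np.add, "sub": np.subtract, "mul": np.multiply, "div": np.divide}
UNARY = {"sin": np.sin, "cos": np.cos, "exp": np.exp, "log": np.log}

def eval_prefix(tokens, X):
    """Recursively evaluate a prefix token sequence on inputs X of shape (n, d)."""
    def helper(pos):
        tok = tokens[pos]
        if tok in BINARY:
            left, pos = helper(pos + 1)
            right, pos = helper(pos)
            return BINARY[tok](left, right), pos
        if tok in UNARY:
            arg, pos = helper(pos + 1)
            return UNARY[tok](arg), pos
        if tok.startswith("x"):                       # variable token, e.g. "x1"
            return X[:, int(tok[1:]) - 1], pos + 1
        return float(tok) * np.ones(len(X)), pos + 1  # constant token
    value, _ = helper(0)
    return value

X = np.random.randn(5, 2)
print(np.allclose(eval_prefix(PREFIX, X), np.sin(X[:, 0]) + 0.5 * X[:, 1]))  # True
```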
Achieving a good fitting performance from the model's predicted sequence demands generating accurate constants in the equation. To address this, the generated skeleton or equation can undergo a round of optimization to estimate their constants using nonlinear methods, such as Broyden-Fletcher-Goldfarb-Shanno algorithm (BFGS) [39]. Previous works [18; 16] employ beam search and sampling strategies for transformer decoding in combination with constant optimization to
propose several candidate equations. Subsequently, they use fitting metrics such as \(R^{2}\) to order these candidates and output the final equation with the best performance (Fig. 2(b)). Transformer models utilizing beam search or sampling decoding strategies can generate multiple high-likelihood equation sequences, but they rely on logits obtained from model parameters pretrained with token-matching loss relative to the reference equation. As a result, such models lack the capability to receive feedback and optimize generation for equation-specific objectives such as fitting or complexity of equations.
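As a hedged illustration of this refine-and-rank step, the sketch below fits the constants of a single hypothetical candidate skeleton with BFGS and scores it with \(R^{2}\); the skeleton, data, and function names are made up for the example and are not part of the released models.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical candidate skeleton with two free constants c[0], c[1]; in the real
# pipeline such skeletons are decoded by the pretrained transformer.
def skeleton(c, X):
    return c[0] * np.sin(X[:, 0]) + c[1] * X[:, 1]

def fit_constants(candidate, n_consts, X, y):
    """Refine a candidate's constants by minimizing the MSE with BFGS."""
    mse = lambda c: np.mean((y - candidate(c, X)) ** 2)
    res = minimize(mse, x0=np.ones(n_consts), method="BFGS")
    return res.x, res.fun

def r2_score(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2.0 * np.sin(X[:, 0]) - 0.7 * X[:, 1]            # toy ground truth

consts, _ = fit_constants(skeleton, 2, X, y)
print("fitted constants:", np.round(consts, 3))       # ~ [ 2.0, -0.7]
print("R^2:", round(r2_score(y, skeleton(consts, X)), 6))
```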
### MCTS-Guided Equation Generation
To generate equations that are both better-fitting and less-complex, it is crucial to incorporate feedback into the equation generation process. To achieve this, we utilize Monte Carlo Tree Search (MCTS) during inference, guiding the decoder towards optimal solutions for fitting and complexity objectives (as shown in Fig. 2(c)). The MCTS-guided transformer decoding explores different possibilities, identifying the most promising paths based on the objectives.
We frame the SR equation generation task as a Markov Decision Process (MDP) where state \(s\) represents the current sequence at generation iteration \(t\). If \(s\) has not reached the terminal state (i.e., the <EOS> token), we select the next token from the vocabulary as action \(a\), updating state \(s^{\prime}\) by concatenating \(s\) and \(a\). Upon reaching the terminal state, the reward \(r\) is computed and used to update the decoding model. MCTS represents states as nodes and actions as edges within a tree structure, navigating state-space from the root node (i.e., initial state) to reach terminal states with maximum rewards. MCTS balances exploration and exploitation, considering nodes with higher quality equations (i.e., higher Q-values) and under-explored nodes (i.e., those with fewer visits). During the generation process of the transformer, we utilize the MCTS algorithm iteratively to conduct lookahead planning and determine the next token. However, the large search-space requires more than the sole application of MCTS to generate high-quality equations. We need to effectively share information between the pretrained transformer model and MCTS for better generation. To achieve this, we incorporate the next-token probabilities that are acquired from the pretrained transformer SR models into the MCTS planning process. This incorporation helps to enhance the search process, leading to more efficient and effective results. The key steps of MCTS for transformer decoding in SR models, as depicted in Fig. 3, are as follows:
**Selection.** The Upper Confidence Bound for Trees (UCT) [40] criterion is employed to select actions (i.e., next tokens) for fully extended nodes in the search tree, balancing exploration and exploitation. We use the P-UCB heuristic in [41] as
\[\mathrm{UCT}(s,a)=Q(s,a)+\beta(s)\cdot P_{\theta}(a|s)\cdot\sqrt{\frac{\ln{(N( s))}}{1+N(s^{\prime})}}, \tag{1}\]
where \(Q(s,a)\) is the maximum return for action \(a\) in state \(s\) across all simulations, promoting the exploitation of the optimal child node. The second term encourages exploration of less-visited children, with \(N(s)\) as state \(s\)'s visit count and \(s^{\prime}\) as the subsequent state. \(P_{\theta}(a|s)\) is the probability of the next token \(a\) given the partial sequence state \(s\) from pretrained transformer model parameterized by \(\theta\). The exploration-exploitation trade-off is adjusted by \(\beta(s)\), which depends on state \(s\)'s visit count. Lastly, the next token action maximizes the UCT: \(\mathrm{Select}(s)=\arg\max_{a}\mathrm{UCT}(s,a)\).
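A small sketch of this selection rule, with toy numbers standing in for the transformer priors \(P_{\theta}(a|s)\) and the visit statistics (nothing here is taken from the actual model):

```python
import numpy as np

def p_ucb_select(Q, N_s, N_child, priors, beta):
    """Pick the action maximizing the P-UCB score of Eq. (1).

    Q[a]       -- best return observed for action a from the current state
    N_s        -- visit count of the current state
    N_child[a] -- visit count of the child reached by action a
    priors[a]  -- next-token probability from the pretrained transformer
    beta       -- exploration weight (in the text it depends on N_s)
    """
    score = lambda a: Q[a] + beta * priors[a] * np.sqrt(np.log(N_s) / (1 + N_child[a]))
    return max(Q, key=score)

Q = {"sin": 0.8, "x1": 0.6, "add": 0.4}               # toy values
priors = {"sin": 0.5, "x1": 0.3, "add": 0.2}
N_child = {"sin": 10, "x1": 1, "add": 0}
print(p_ucb_select(Q, N_s=12, N_child=N_child, priors=priors, beta=1.0))
```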
Figure 2: An overview of our proposed method with MCTS-guided decoding at inference compared to the concurrent works with beam search/sampling decoding strategy.
**Expansion.** In the expansion stage, after selecting a node that is not fully expanded, a new child (next token) for the current state is explored. Random expansion of the node from the vocabulary, however, might result in an invalid equation (that does not comply with the prefix notation) and makes the process very time-consuming. Therefore, given partial equations, only \(top\)-\(k\) most likely choices of the next token are considered as the possible children of the node for expansion. In other words, we are restricting the actions to be only from the \(top\)-\(k\) high-likelihood options which are retrieved from the pretrained transformer SR model's logits. These options are then ordered to determine the sequence in which the children will be expanded.
**Evaluation.** To evaluate the newly expanded nodes, we perform simulations to complete the equation sequence. This is necessary because the new state may still be a partial equation and performance feedback can only be obtained at the end of the sequence when the equation generation is completed. In MCTS, it is common to employ random actions during the simulation stage. Nevertheless, random action selection for equation generation, much like during expansion, suffers from certain drawbacks in terms of time and the possibility of generating invalid equations. Consequently, the pretrained transformer SR model is invoked again, this time utilizing beam search with a beam size of \(b\), to generate complete equation candidates based on the current state. The beam size \(b\) determines the number of complete equations to be generated from the current partial equation. Following the simulations, the highest reward among all the candidates is assigned to the new node value.
**Backpropagation.** After generating a complete equation \(\tilde{f}(\cdot)\), the corresponding reward \(r(\tilde{f}(\cdot))\) can be computed. The highest reward among all simulations is then assigned to the new node, which recursively backpropagates its estimated value to its parents until it reaches the root of the tree. This update process involves updating the \(Q\) values of all state-action pairs, denoted as \(s^{\prime}\) and \(a^{\prime}\), along the trajectory in the tree to reach the root. Specifically, for each state-action pair, the \(Q\) value is updated by taking the maximum of the current \(Q\) value and the new value \(r\): \(Q(s^{\prime},a^{\prime})\leftarrow\max{(Q(s^{\prime},a^{\prime}),r)}\). More details on TPSR, including its steps and implementation can be found in Appendix C.
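The value update itself is simple; below is a minimal sketch of the backpropagation step with the \(Q\leftarrow\max(Q,r)\) rule (state and action encodings are illustrative strings):

```python
def backpropagate(path, reward, Q, visits):
    """Propagate a terminal reward along the visited (state, action) pairs.

    Each Q value keeps the maximum return seen so far: Q <- max(Q, r).
    `path` lists (state, action) pairs from the root down to the expanded node.
    """
    for state, action in path:
        visits[state] = visits.get(state, 0) + 1
        key = (state, action)
        Q[key] = max(Q.get(key, float("-inf")), reward)

Q, visits = {}, {}
backpropagate([("<s>", "add"), ("<s> add", "sin")], reward=0.9, Q=Q, visits=visits)
print(Q, visits)
```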
### Reward Definition
We define a numerical reward \(r\in\mathbb{R}\) to evaluate complete equation candidate \(\tilde{f}(\cdot)\), promoting fitting accuracy and regulating complexity. After optimizing constants in the complete sequence, we compute the reward. We first calculate the normalized mean squared error (NMSE) between ground-truth target variable \(y\) and predicted target variable \(\tilde{y}=\tilde{f}(\mathbf{x})\), and formulate the reward as:
\[r(\tilde{f}(\cdot)|\mathbf{x},y)=\frac{1}{1+\mathrm{NMSE}(y,\tilde{f}(\mathbf{x}))}+ \lambda\exp(-\frac{l(\tilde{f}(\cdot))}{L}), \tag{2}\]
where \(l\) represents equation complexity as the sequence length in prefix notation [18; 42; 16]; \(L\) denotes the model's maximum sequence length; and \(\lambda\) is a hyperparameter balancing fitting and complexity reward. Higher \(\lambda\) values favor less complex equations, encouraging best-fitting and penalizing non-parsimonious solutions. NMSE is calculated as \((\frac{1}{n}\|y-\tilde{f}(\mathbf{x})\|_{2}^{2})/(\frac{1}{n}\|y\|_{2}^{2}+\epsilon)\), where \(\epsilon\) is a small constant to prevent numerical instability.
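A direct transcription of Eq. (2) into code, assuming the complexity \(l\) is simply the prefix-sequence length of the candidate equation:

```python
import numpy as np

def equation_reward(y, y_pred, seq_len, max_len, lam, eps=1e-12):
    """Reward of Eq. (2): a fitting term based on NMSE plus a complexity term."""
    nmse = np.mean((y - y_pred) ** 2) / (np.mean(y ** 2) + eps)
    return 1.0 / (1.0 + nmse) + lam * np.exp(-seq_len / max_len)

y = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
print(round(equation_reward(y, y_pred, seq_len=12, max_len=200, lam=0.1), 4))
```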
Figure 3: Overview of TPSR’s key steps: Selection, Expansion, Evaluation, and Backpropagation. MCTS-guided decoding interacts with the pretrained transformer SR model in the expansion and evaluation steps employing the transformer \(top\)-\(k\) sampling and beam search, respectively. The designed reward is used to guide the backpropagation.
### Efficient Implementation with Caching
During MCTS evaluation, the transformer model generates complete sequences from a given state, constructing implicit tree structures for beam search and computing \(top\)-\(k\) next tokens for visited states. These computations are required in future MCTS iterations, so we employ two caching mechanisms, \(top\)-\(k\) _caching_ and _sequence caching_, to reduce redundancy and improve efficiency. \(Top\)-\(k\) _caching_ stores computed \(top\)-\(k\) values for given states. For example, in Fig. 4, when evaluating state \(s=[+,\sin]\) in MCTS iteration \(t\), \(top\)-\(k\) tokens are computed for \(s\) and subsequent visited states, such as \([+,\sin,x_{2}]\). State-\(top\)-\(k\) value pairs are cached for future use, avoiding redundant token retrieval. _Sequence caching_ caches complete equations generated greedily with a beam size of one. If a state matches a stored equation partially, the cached equation can be used directly in future iterations, bypassing iterative sequence generation. Both caching strategies enhance efficiency without compromising performance. More details are provided in Appendix C.
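A minimal sketch of the two caches, with stand-in callables in place of the pretrained transformer (the real implementation works on model logits; the helper names below are hypothetical):

```python
top_k_cache = {}      # (state, k) -> top-k next-token candidates
sequence_cache = {}   # partial state -> greedily completed sequence (beam size 1)

def cached_top_k(state, next_top_k, k=3):
    """Return the top-k next tokens for a state, computing them only once."""
    if (state, k) not in top_k_cache:
        top_k_cache[(state, k)] = next_top_k(state, k)
    return top_k_cache[(state, k)]

def cached_greedy_completion(state, greedy_complete):
    """Reuse any cached completion whose prefix matches the current state."""
    for completion in sequence_cache.values():
        if completion[: len(state)] == state:
            return completion
    completion = greedy_complete(state)
    sequence_cache[state] = completion
    return completion

# Toy stand-ins so the sketch runs end to end (states are tuples of tokens).
demo_top_k = lambda state, k: ["sin", "x1", "add"][:k]
demo_complete = lambda state: state + ("x1", "<EOS>")
print(cached_top_k(("add",), demo_top_k))
print(cached_greedy_completion(("add", "sin"), demo_complete))
```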
## 4 Experiments
In this section, we present our experimental results that evaluate the effectiveness and efficiency of TPSR. While the proposed decoding strategy is generally model-agnostic, here we showcase the results of using TPSR for the end-to-end (E2E) pretrained SR transformer backbone [18], as E2E is the SOTA open-source pretrained SR model with publicly accessible model weights and logits. We evaluate our framework by answering the following research questions (**RQs**):
* Does TPSR perform better than other decoding strategies (beam search/sampling) and competing baseline methods over standard SR benchmark datasets?
* Does TPSR provide better extrapolation and robustness to noise?
* Are TPSR's caching mechanisms effective in reducing computation time?
* What is the role of individual MCTS components in TPSR's overall performance gain?
### Datasets
We evaluate TPSR and various baseline methods on standard SR benchmark datasets from Penn Machine Learning Benchmark (PMLB) [43] studied in SRBench [42], as well as _In-domain Synthetic Data_ generated based on [18]. The benchmark datasets include 119 equations from the _Feynman Lectures on Physics database_ series2[44], 14 symbolic regression problems from the _ODE-Strogatz database_3[45], and 57 _Black-box_4 regression problems without known underlying equations. We limit the datasets to those with continuous features and input dimension \(d\leq 10\), as the transformer SR model [18] is pretrained with \(d_{max}=10\). The _In-domain Synthetic Data_ consists of 400 validation equations with different levels of difficulty and number of input points. This data is referred to as "in-domain" because the validation data is generated using the same approach as the data on which the backbone transformer model [18] is pretrained. More details on each of these datasets are provided in Appendix A.
Footnote 2: [https://space.mit.edu/home/tegmark/aifeynman.html](https://space.mit.edu/home/tegmark/aifeynman.html)
Footnote 3: [https://github.com/lacava/ode-strogatz](https://github.com/lacava/ode-strogatz)
Footnote 4: [https://github.com/EpistasisLab/pmlb/tree/master/datasets](https://github.com/EpistasisLab/pmlb/tree/master/datasets)
### Evaluation Metrics
We evaluate our model using the following three metrics: \(R^{2}\) score [42], accuracy to tolerance \(\omega\)[16; 46], and equation complexity [18; 42].
\[R^{2}=1-\frac{\sum_{i}^{N_{test}}(y_{i}-\tilde{y}_{i})^{2}}{\sum_{i}^{N_{test }}(y_{i}-\bar{y})^{2}},\quad Acc_{\omega}=\mathbbm{1}(\max_{1\leq i\leq N_{test }}\left|\frac{\tilde{y}_{i}-y_{i}}{y_{i}}\right|\leq\omega),\quad\textit{ Complexity}=\left|\mathcal{T}(\tilde{f}(\cdot))\right|,\]
where \(R^{2}\) measures fitting performance, \(Acc_{\omega}\) evaluates equation precision based on tolerance threshold \(\omega\), and equation complexity is determined by the number of nodes in the expression tree \(\mathcal{T}\) of the generated equation \(\tilde{f}(\cdot)\). Following [18; 42], we set \(R^{2}=0\) for rare pathological examples and discard the worst \(5\%\) predictions for \(Acc_{\omega}\) to reduce outlier sensitivity.

Figure 4: An illustration of caching mechanisms in TPSR.
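As a concrete reference for the metrics above, the following sketch (ours; variable names and the toy data are illustrative) computes \(R^{2}\) and the accuracy-to-tolerance score with the 5% discard convention:

```python
import numpy as np

def r2(y, y_pred):
    return 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - np.mean(y)) ** 2)

def acc_tol(y, y_pred, omega=0.1, discard=0.05):
    """1 if the worst relative error -- after discarding the worst `discard`
    fraction of points -- stays below the tolerance omega, else 0."""
    rel_err = np.sort(np.abs((y_pred - y) / y))
    kept = rel_err[: int(np.ceil((1 - discard) * len(y)))]
    return float(kept.max() <= omega)

y = np.linspace(1.0, 2.0, 100)
y_pred = y * (1 + 0.02 * np.sin(10 * y))      # toy predictions, ~2% relative error
print(round(r2(y, y_pred), 4), acc_tol(y, y_pred, omega=0.1))
```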
### (RQ1) Effectiveness of TPSR
Table 1 presents the performance comparison results of TPSR with the baseline decoding strategies on the SRBench benchmark and the in-domain synthetic dataset. For the E2E baseline, we use the settings reported in [18], including beam/sample size of \(C=10\) candidates, and the refinement of all the candidates \(K=10\). For our model, we use the width of tree search as \(k_{max}=3\), number of rollouts \(r=3\), and simulation beam size \(b=1\) as the default setting. For PMLB datasets that contain more than 200 points, we follow [18] and use \(B\) bags of data, each containing \(N=200\) points, due to the limitation that the baseline method is pretrained with \(N\leq 200\) data points. In the baseline method [18], a total of \(BC\) candidates are generated (\(C\) candidates for \(B\) bags), which are then sorted and refined to generate the best equation. However, for TPSR, since we need to train an MCTS for each bag, we use an iterative decoding approach, starting with the first bag and continuing with subsequent bags until a criterion (\(R^{2}>0.99\)) is met or we use a maximum of \(B=10\) bags. To ensure a fair comparison, we use \(B=10\) for the E2E baseline method as well. In this table, we demonstrate the results of our proposed framework, TPSR, with varying values of the \(\lambda\) parameter that controls the trade-off between fitting performance and complexity in the hybrid reward function defined in Eq. (2).
As shown in Table 1, when \(\lambda=0\), the framework generates complex equations that overoptimize for fitting performance. However, as we increase \(\lambda\), the framework generates less complex equations with a slight reduction in fitting performance. Notably, even for large values of \(\lambda\), such as \(\lambda=1\), the fitting performance of TPSR significantly outperforms that of the baseline methods. These findings demonstrate the superiority of TPSR over the baseline methods in terms of fitting performance across all datasets, while generating equations with comparable or reduced complexity than those generated by the baseline methods. Table 1 shows that TPSR exhibits a more significant gap in fitting performance when compared to E2E on SRBench datasets, while this gap is smaller for In-domain datasets (even performing slightly worse on \(Acc_{\omega}\) for larger \(\lambda=0.5,1\)). This is due to the In-domain dataset being generated using the same approach as the E2E pretraining data, resulting in the E2E model's superior performance on this dataset. Furthermore, qualitative comparisons of TPSR with baseline symbolic and black-box regression models [47] demonstrate the superior performance of TPSR in learning the underlying equation and out-of-domain extrapolation (see Appendix D).
Fig. 5 presents a detailed comparison of our proposed TPSR with the baseline E2E transformer model and all the SRBench baselines on the PMLB _Feynman_ and _Black-box_ datasets. These figures illustrate the relative position of each algorithm with respect to (1) fitting performance, (2) model complexity, and (3) inference time. The results indicate that transformer-based planning in the TPSR significantly enhances the performance of E2E and outperforms even the state-of-the-art GP baselines, achieving the highest fitting performance on the black-box datasets. This is achieved while the complexity of the generated equations in TPSR is not greater than that of E2E, and shows a great fitting-complexity balance compared to other SR algorithms. The Pareto plots provided in Fig. 1 also demonstrate the effectiveness of our TPSR in balancing fitting and complexity compared to all other SRBench baselines.
Table 1: Performance of TPSR compared with beam search and sampling decoding strategies on the SRBench [42] and In-domain Synthetic [18] datasets.

**SRBench datasets**

| Model | Feynman \(\uparrow R^{2}>0.99\) | Feynman \(\downarrow\) Complexity | Strogatz \(\uparrow R^{2}>0.99\) | Strogatz \(\downarrow\) Complexity | Black-box \(\uparrow R^{2}\) | Black-box \(\downarrow\) Complexity |
|---|---|---|---|---|---|---|
| E2E+Beam | 0.815 | 54.19 | 0.357 | 53.21 | 0.847 | 83.61 |
| E2E+Sampling | 0.848 | 50.73 | 0.357 | 50.14 | 0.864 | 82.78 |
| TPSR (\(\lambda\)=0) | **0.952** | 84.42 | **0.928** | 82.78 | 0.938 | 129.85 |
| TPSR (\(\lambda\)=0.1) | 0.949 | 57.22 | 0.785 | 56.14 | **0.945** | 95.71 |
| TPSR (\(\lambda\)=0.5) | 0.924 | 50.01 | 0.714 | 47.02 | 0.931 | 52.58 |
| TPSR (\(\lambda\)=1) | 0.916 | **47.24** | 0.571 | **43.42** | 0.924 | **79.43** |

**In-domain Synthetic dataset**

| Model | \(\uparrow R^{2}>0.99\) | \(\uparrow R^{2}\) | \(\uparrow Acc_{0.1}\) | \(\uparrow Acc_{0.01}\) | \(\uparrow Acc_{0.001}\) | \(\downarrow\) Complexity |
|---|---|---|---|---|---|---|
| E2E+Beam | 0.657 | 0.782 | 0.461 | 0.298 | 0.2 | 38.37 |
| E2E+Sampling | 0.640 | 0.794 | 0.472 | 0.332 | 0.208 | 39.82 |
| TPSR (\(\lambda\)=0) | 0.702 | 0.828 | **0.550** | **0.416** | **0.333** | 67.11 |
| TPSR (\(\lambda\)=0.1) | **0.708** | **0.833** | 0.514 | 0.326 | 0.213 | 40.31 |
| TPSR (\(\lambda\)=0.5) | 0.697 | 0.830 | 0.459 | 0.274 | 0.184 | 36.55 |
| TPSR (\(\lambda\)=1) | 0.691 | 0.827 | 0.439 | 0.271 | 0.176 | **35.67** |
Our TPSR effectively pushes this balanced performance to the first Pareto front for both the _Feynman_ and _Black-box_ datasets. Moreover, it is important to note that, while the inference time of TPSR is longer than that of the baseline E2E transformer model, it still has significantly lower inference time than RL or GP-based SRBench baselines. Further results on the SRBench and In-domain datasets are provided in Appendix D.
### (RQ2) Extrapolation and Robustness
The ability to extrapolate well is inherently linked to the quality of the equation obtained through symbolic regression. To investigate the extrapolation performance of TPSR to out-of-training regions, we normalize the input test data points to different scales (\(\sigma\)) instead of unit variance (used for training points) as per [18]. Fig. 6(a) depicts the average performance of TPSR compared to E2E with sampling decoding on the training data as well as testing data in scales of \(\sigma=\{1,2,4,8,16\}\) for the _In-domain Synthetic_ dataset. Also, we investigate the effect of different complexity controlling levels (\(\lambda=\{0,0.1,0.5,1.0\}\)) on the extrapolation performance. It can be observed that, while \(\lambda=0\) (i.e., no complexity regularization) achieves the best fitting accuracy on the training data, it has a sub-par performance for \(\sigma>8\). This can be due to the overfitting issue when the symbolic model is much more complex than the real complexity of the equation, similar to the common overfitting issue in ML models. The results highlight the importance of controlling complexity in the extrapolation of identified equations. For values of \(\lambda>0\), the overfitting issue is mitigated as the generated equations become less complex. However, very high values of \(\lambda\) (e.g., \(\lambda=1\)) can result in poor fitting performance. The flexibility of TPSR for allowing different values of \(\lambda\) to balance fitting and complexity for a given task is crucial for optimal performance. Fig. 6(b) also presents the robustness of TPSR with different \(\lambda\) levels compared to the E2E transformer baseline on the _Feynman_ dataset. The results indicate that MCTS-guided decoding can offer robust performance with a smaller drop compared to the baseline in the presence of noise.

Figure 5: Performance comparison of TPSR and SRBench algorithms in terms of Accuracy-Complexity-Time on _Feynman_ (top) and _Black-box_ (bottom) datasets. For _Feynman_ dataset, algorithms are sorted based on mean accuracy defined as the ratio of solutions with \(R^{2}>0.99\) on test set under various noise levels, and for _Black-box_ dataset, the algorithms are sorted based on the median \(R^{2}\) score on test set. TPSR demonstrates a strong balance of performance with relatively low model complexity and lower inference time compared to GP-based algorithms. The error bars represent the 95% confidence interval and "\(*\)" refers to SR methods for _Black-box_ dataset.

Figure 6: TPSR with \(\lambda\) range in \(\{0,0.1,0.5,1\}\) compared to E2E using sampling for **(a) Extrapolation performance** where in-domain accuracy is shown for different input variances (\(\sigma\)), and **(b) Robustness to noise**, where mean accuracy (\(R^{2}>0.99\)) is shown for various target noise levels (\(\gamma\)).
### Ablation Study
In this section, we investigate the effect of different MCTS parameters and caching mechanisms on the performance of TPSR by conducting ablative experiments on the _Feynman_ datasets.
**(RQ3) Caching Mechanisms.** In Fig. 7(a), we illustrate the effectiveness of the _sequence_ and \(top\)-\(k\) caching mechanisms in reducing the total inference time of TPSR. Our experiments show that sequence caching has a larger effect on reducing the inference time, as it replaces the time-consuming sequence generation process. Overall, these two mechanisms can reduce the total inference time by around \(28\%\).
**(RQ4) Search Parameters.** Fig. 7(b) shows the fitting performance vs. the number of generated equations throughout the decoding process for both TPSR (\(\lambda=0.1\)) and the baseline E2E with sampling decoding. The results show that under the same number of generated equation candidates, TPSR significantly outperforms the E2E baseline. This is primarily attributed to the fact that the E2E baseline is deprived of any feedback on the fitting performance of the generated equations. We report the results for variants of TPSR with different MCTS parameters. We assess the performance with a varying number of rollouts, \(r=\{1,3,6,9\}\), number of beams in simulations, \(b=\{1,3\}\), and the maximum number of possible expansions at each state, \(k_{max}=\{2,3,4\}\). The default setting of the TPSR parameters is \(b=1\), \(k_{max}=3\), and \(r=3\). The results indicate that increasing \(r\), \(k_{max}\), and \(b\) all contribute to the better performance of TPSR, with the most significant improvement observed when increasing \(r\). This is because more rollouts provide the model with more opportunities to learn from trials and estimate better values.
## 5 Conclusion
In this work, we propose TPSR, a model-agnostic decoding strategy for symbolic regression that leverages the power of pretrained SR transformer models and the MCTS algorithm, and outperforms the existing methods in generating equations with a superior fitting-complexity trade-off. We demonstrate the flexibility of TPSR in controlling equation complexity without finetuning the pretrained model. We hope that this work can inspire further research into the integration of pretrained models with planning or reinforcement learning algorithms. Future research could focus on enhancing the adaptability of feedback-based expression generation mechanisms, potentially by modulating the flexibility of MCTS or SR model weights. Furthermore, employing parallelization and distributed computing could potentially improve MCTS search efficiency.
Figure 7: Ablation study on the modules and parameters of TPSR. **(a) Effect of caching mechanisms:**_Sequence caching_ and _top-\(k\) caching_ improve the inference time of TPSR (\(\lambda=0.1\)). **(b) Efficiency and parameters of TPSR**: Average accuracy of TPSR (varying model parameters), and baseline E2E (varying sampling size) vs. number of generated candidates.
|
2302.09829
|
Spin squeezing in open Heisenberg spin chains
|
Spin squeezing protocols successfully generate entangled many-body quantum
states, the key pillars of the second quantum revolution. In our recent work
[Phys. Rev. Lett. 129, 090403 (2022)] we showed that spin squeezing described
by the one-axis twisting model could be generated in the Heisenberg spin-1/2
chain with periodic boundary conditions when accompanied by a
position-dependent spin-flip coupling induced by a single laser field. This
work shows analytically that the change of boundary conditions from the
periodic to the open ones significantly modifies spin squeezing dynamics. A
broad family of twisting models can be simulated by the system in the weak
coupling regime, including the one- and two-axis twisting under specific
conditions, providing the Heisenberg level of squeezing and acceleration of the
dynamics. Full numerical simulations confirm our analytical findings.
|
Tanausú Hernández Yanes, Giedrius Žlabys, Marcin Płodzień, Domantas Burba, Mažena Mackoit Sinkevičienė, Emilia Witkowska, Gediminas Juzeliūnas
|
2023-02-20T08:32:53Z
|
http://arxiv.org/abs/2302.09829v2
|
# Spin squeezing in open Heisenberg spin chains
###### Abstract
Spin squeezing protocols successfully generate entangled many-body quantum states, the key pillars of the second quantum revolution. In our recent work [Phys. Rev. Lett. 129, 090403 (2022)] we showed that spin squeezing described by the one-axis twisting model could be generated in the Heisenberg spin-1/2 chain with periodic boundary conditions when accompanied by a position-dependent spin-flip coupling induced by a single laser field. This work shows analytically that the change of boundary conditions from the periodic to the open ones significantly modifies spin squeezing dynamics. A broad family of twisting models can be simulated by the system in the weak coupling regime, including the one- and two-axis twisting under specific conditions, providing the Heisenberg level of squeezing and acceleration of the dynamics. Full numerical simulations confirm our analytical findings.
Neutral atom arrays have recently emerged as promising platforms for realizing programmable quantum systems [1; 2; 3]. Based on individually trapped cold atoms in optical lattices [4] and tweezers with strong interactions between Rydberg states [5], atom arrays have been utilized to explore physics involving Hubbard and Heisenberg models [6; 7; 8; 9; 10]. It has been shown that indistinguishable Hubbard bosons serve as a platform for the generation and storage of metrologically useful many-body quantum states [11; 12; 13; 14]. In some regimes of parameters, arrays of ultra-cold atoms simulate chains of distinguishable spins (qubits) which are perfectly suitable for quantum information tasks and the generation of massive non-classical correlations, including Bell correlations and non-locality [15; 16; 17; 18]. These quantum many-body systems are crucial resources for emerging quantum technologies [19; 20].
Systems composed of ultra-cold fermions in optical lattices have also attracted a lot of attention currently in the context of the generation of non-classical states, see e.g. in [21; 22; 23]. In particular, in our recent work [24], we have shown that in a lattice of strongly interacting ultra-cold fermionic atoms involving two internal
Figure 1: Illustration of the Ramsey-type spectroscopy scheme. (a) Preparation of the initial spin coherent state. (b) The excitation of spin waves states (different color lines) by the spin-flip coupling serves as an intermediate state to induce “effective” interaction and establish correlations between elementary spins. (c) Turning off the coupling freezes the dynamics, and the spin-squeezed states are stored in the Mott insulating phase. Panels (b) and (c) illustrate an example of a configuration of spins. Yet, the resulting state during and at the end of evolution is a superposition of various possible configurations including the initial one presented in (a).
states, it is possible to generate non-classical correlations when adding position-dependent atom-light coupling. The Fermi-Hubbard model describing the system under periodic boundary conditions (PBC) can be cast onto an isotropic spin-1/2 Heisenberg chain in a deep Mott regime, while the atom-light coupling can be considered as a position-dependent spin-flipping. To generate spin squeezing the Ramsey-type spectroscopy scheme is considered [24], as illustrated in Fig. 1. As soon as the atoms are put in a coherent superposition of two internal states by an electromagnetic pulse, an additional weak atom-laser coupling is turned on. This coupling activates the general mechanism in PBC case: it induces excitation of a pair of spin waves with opposite quasi-momentum. These spin waves extend over the entire system allowing individual atoms to interact "effectively" and establish non-trivial quantum correlations [25, 21, 22, 26, 24]. When the desired level of spin squeezing is established, the spin-flip coupling is turned off but the quantum correlations survive and are stored deeply in the Mott insulating phase. We showed that the isotropic Heisenberg spin-1/2 chain with the weak position-dependent spin-flip coupling generates spin-squeezing dynamics given by the one-axis twisting (OAT) model. Furthermore, we numerically observed that open boundary conditions (OBC) change the spin squeezing dynamics. Depending on the coupling parameters, an acceleration of squeezing generation was observed with the same or similar level of squeezing [24].
In this paper, we provide a detailed analytical and numerical analysis of the impact of OBC on the spin squeezing dynamics in Heisenberg spin chains. To this end, we develop the spin-wave theory for OBC by modifying the coordinate Bethe ansatz [27]. Next, by using the Schrieffer-Wolff transformation [28, 29, 30, 31, 22] we derive the effective model in terms of collective spin operators to describe the squeezing dynamics generated in the weak coupling regime. For OBC the coupling leads to the excitation of a superposition of spin waves with different energies and amplitudes rather than a pair of spin waves with opposite quasi-momentum, as is the case for PBC. This still allows individual atoms to correlate and generate squeezing. However, the excitation of a superposition of spin waves complicates the form of the effective model. We analyze this unconventional model in detail, identifying the initial conditions and the coupling parameters for spin squeezing generation with the level given by the OAT and two-axis counter twisting (TACT) models [32, 33, 24]. Consequently, we show that it is possible to generate a Heisenberg level of squeezing in spin-1/2 Heisenberg chains under OBC. In addition, we show that the corresponding time scale of the best squeezing is reduced with respect to PBC when keeping the same perturbation level. Our analytical findings were confirmed by full numerical simulations. The results obtained can be used in the current state-of-the-art experiments with ultra-cold atoms in optical lattices [34, 35, 36] and tweezer arrays [37, 38].
## 1 Heisenberg model and spin-waves states for OBC
Let us concentrate on a specific physical system composed of the total even number \(N\) of fermionic ultra-cold atoms loaded into a one-dimensional optical lattice potential of \(N\) sites. Each atom has two internal states \(\ket{\uparrow}\) and \(\ket{\downarrow}\) corresponding to a spin-1/2 degree of freedom. The atoms are assumed to occupy the lowest Bloch band, interact through s-wave collisions, and hence can be described by the Fermi-Hubbard model.
We assume the interaction dominates over the tunnelling and the system is in the Mott insulating phase at half-filling when double occupancy of a single site is energetically unfavourable. The second order processes, obtained by a projection onto the manifold of single occupancy of lattice sites, lead to the nearest-neighbour spin-exchange interactions [24, 22, 28, 29, 30, 31, 22]. The spin dynamics of this system is well captured by the isotropic Heisenberg (spin exchange) model [39, 40]
\[\begin{split}\hat{H}_{\text{SE}}=J_{\text{SE}}\sum_{j=1}^{N-1} \bigg{(}&\hat{S}_{j}^{x}\hat{S}_{j+1}^{x}+\hat{S}_{j}^{y}\hat{S}_ {j+1}^{y}\\ &+\hat{S}_{j}^{z}\hat{S}_{j+1}^{z}-\frac{1}{4}\bigg{)},\end{split} \tag{1}\]
where \(J_{\text{SE}}\) represents the spin-exchange energy, \(\hat{S}_{j}^{+}=\hat{a}_{j,\uparrow}^{\dagger}\hat{a}_{j,\downarrow}\), \(\hat{S}_{j}^{-}=\hat{a}_{j,\downarrow}^{\dagger}\hat{a}_{j,\uparrow}\), \(\hat{S}_{j}^{\pm}=\hat{S}_{j}^{x}\pm i\hat{S}_{j}^{y}\), \(\hat{S}_{j}^{z}=(\hat{n}_{j,\uparrow}-\hat{n}_{j,\downarrow})/2\) are on-site spin operators, and where we take \(\hbar=1\). The fermionic operators \(\hat{a}_{j,s}\) annihilate an atom in the \(j\)th lattice site in the state \(s\in\{\uparrow,\downarrow\}\), and \(\hat{n}_{j,s}=\hat{a}_{j,s}^{\dagger}\hat{a}_{j,s}\) is the
corresponding on-site operator of the number of atoms. We also introduce the collective spin operators \(\hat{S}_{\sigma}=\sum_{j}\hat{S}_{j}^{\sigma}\) with \(\sigma=x,y,z,\pm\). The analytical form of the energy spectrum of the Hamiltonian (1) and the corresponding eigenstates for PBC has been known since 1931 due to the famous work of Bethe [27]. Their counterparts for OBC are less explored, to the best of our knowledge.
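For a small chain the Hamiltonian (1) can be written down explicitly. The sketch below (ours; dense matrices, \(J_{\rm SE}=1\) and \(\hbar=1\)) builds \(\hat{H}_{\rm SE}\) with open boundaries from Kronecker products and confirms that the top of the spectrum, where the Dicke manifold lies, sits at zero energy.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, j, N):
    """Embed a single-site operator on site j (0-based) of an N-site chain."""
    return reduce(np.kron, [op if i == j else np.eye(2) for i in range(N)])

def heisenberg_obc(N, J_SE=1.0):
    """Dense H_SE of Eq. (1): nearest-neighbour exchange, open boundaries."""
    H = np.zeros((2 ** N, 2 ** N), dtype=complex)
    for j in range(N - 1):                      # open chain: no bond between last and first site
        for op in (sx, sy, sz):
            H += J_SE * site_op(op, j, N) @ site_op(op, j + 1, N)
        H -= 0.25 * J_SE * np.eye(2 ** N)
    return H

evals = np.linalg.eigvalsh(heisenberg_obc(N=6))
print(np.round(evals[:3], 6))      # lowest eigenenergies of the open chain
print(np.round(evals[-1], 10))     # zero-energy states (the Dicke manifold)
```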
The Hamiltonian (1) is spherically symmetric with respect to spin rotation. Thus eigenstates of \(\hat{H}_{\rm SE}\) can be taken to be also the eigenstates of the square of the total spin \(\hat{S}^{2}=\hat{S}_{x}^{2}+\hat{S}_{y}^{2}+\hat{S}_{z}^{2}\) and its \(z\) projection \(\hat{S}_{z}\) with the eigenvalues \(S(S+1)\) and \(m\), respectively. To understand the spin squeezing dynamics let us first recall the analytical form of two energy manifolds of \(\hat{H}_{\rm SE}\) characterized by the largest values of the total spin.
The first energy manifold corresponding to the total spin quantum number \(S=N/2\) is spanned by Dicke states \(\left|m\right\rangle\equiv\left|N/2,m\right\rangle\) which are zero energy eigenstates of \(\hat{H}_{SE}\). They can be represented in terms of the all spins up state affected \(N/2-m\) times by the collective spin lowering operator \(\hat{S}_{-}\):
\[\left|m\right\rangle=\sqrt{\frac{(N/2+m)!}{(N/2-m)!(N)!}}\hat{S}_{-}^{N/2-m} \bigotimes_{j=1}^{N}\left|\uparrow\right\rangle_{j}, \tag{2}\]
where the quantization axis is chosen to be along the \(z\) direction: \(\hat{S}_{j}^{z}\left|\uparrow\right\rangle_{j}=1/2\left|\uparrow\right\rangle_{j}\) and \(\hat{S}_{j}^{z}\left|\downarrow\right\rangle_{j}=-1/2\left|\downarrow\right\rangle_{j}\). Alternatively, the Dicke states \(\left|m\right\rangle\) can be defined by using the raising operator \(\hat{S}_{+}\equiv(\hat{S}_{-})^{\dagger}\) in the place of \(\hat{S}_{-}\) when replacing \(m\) and \(\left|\uparrow\right\rangle_{j}\) with \(-m\) and \(\left|\downarrow\right\rangle_{j}\), respectively, on the right-hand side of (2). The Dicke states are eigenstates of \(\hat{H}_{\rm SE}\) with zero eigenenergies for both PBC and OBC. Altogether there are \(N+1\) Dicke states corresponding to different values of \(m\in(-N/2,-N/2+1,\cdots,N/2)\).
The second energy manifold to be considered is spanned by the spin-wave states [41, 42, 24, 22] containing one spin excitation and characterized by the total spin quantum number \(S=N/2-1\). In the case of OBC one can solve analytically the eigenproblem of these states for the Hamiltonian (1) by using the coordinate Bethe ansatz modified appropriately to account for the difference coming from the two boundary points, see Appendix A for derivation. This leads to the following form of the spin-wave states
\[\left|m,q\right\rangle=\pm\sqrt{N}c_{N/2,\pm m}\sum_{j=1}^{N}p_{j}^{(q)}\hat{ S}_{j}^{\pm}|m\mp 1\rangle, \tag{3}\]
where
\[c_{N/2,\pm m}=\sqrt{\frac{N-1}{(N/2\mp m)(N/2\mp m+1)}}. \tag{4}\]
The sign \(\pm\) in Eq. (3) for \(\left|m,q\right\rangle\) corresponds to two equivalent definitions of the spin waves in terms of the on-site spin raising and lowering operators \(\hat{S}_{j}^{\pm}\) acting on the Dicke states. Furthermore, the coefficients featured in Eq. (3) are
\[p_{j}^{(q)}=\sqrt{\frac{2}{N}}\cos\left[\frac{\pi}{N}\left(j-\frac{1}{2} \right)q\right]\,. \tag{5}\]
Altogether there are \((N-1)^{2}\) different spin-wave states corresponding to various combinations of quantum numbers \(m\in(-N/2+1,-N/2+2,\cdots,N/2-1)\) and \(q=1,2,\cdots,N-1\). The corresponding eigenenergies \(E_{q}\) do not depend on the spin projection quantum number \(m\) and read
\[E_{q}=J_{SE}\left[\cos(\frac{\pi}{N}q)-1\right]. \tag{6}\]
Notice that for OBC the amplitudes \(p_{j}^{(q)}\) given by Eq. (5) represent standing waves. They thus differ from the solution for PBC, where the amplitudes \(p_{j}^{(q)}=N^{-1/2}e^{i2\pi qj/N}\) are plane waves [42]. This has substantial consequences for the coupling mechanism and the spin squeezing dynamics analyzed in Sections 3 and 4.
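A small numerical check of these OBC spin-wave ingredients (our sketch): the standing-wave amplitudes of Eq. (5) form an orthonormal set, and Eq. (6) gives the corresponding energies.

```python
import numpy as np

def p_amplitudes(N, q):
    """Standing-wave amplitudes p_j^(q) of Eq. (5), for j = 1, ..., N."""
    j = np.arange(1, N + 1)
    return np.sqrt(2.0 / N) * np.cos(np.pi * q * (j - 0.5) / N)

def magnon_energy(N, q, J_SE=1.0):
    """Spin-wave energies E_q of Eq. (6)."""
    return J_SE * (np.cos(np.pi * q / N) - 1.0)

N = 8
P = np.array([p_amplitudes(N, q) for q in range(1, N)])
print(np.allclose(P @ P.T, np.eye(N - 1)))                       # orthonormality
print(np.round([magnon_energy(N, q) for q in range(1, N)], 4))   # E_1, ..., E_{N-1}
```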
## 2 Protocol for dynamical generation of spin squeezing
In order to generate spin squeezing in this Heisenberg spin-1/2 chain with OBC described by Hamiltonian (1) we add an atom-light coupling which induces position-dependent spin-flipping. The resulting system Hamiltonian \(\hat{H}_{\rm spin}\) reads
\[\hat{H}_{\rm spin} =\hat{H}_{\rm SE}+\hat{H}_{\uparrow\downarrow}, \tag{7}\] \[\hat{H}_{\uparrow\downarrow} =\frac{\Omega}{2}\sum_{j=1}^{N}\left(e^{i(\phi j-\phi_{0})}\hat{S} _{j}^{+}+e^{-i(\phi j-\phi_{0})}\hat{S}_{j}^{-}\right)\,, \tag{8}\]
where the extra term \(\hat{H}_{\uparrow\downarrow}\) represents the sum over the on-site spin-flip coupling with the amplitude \(\Omega\) and position-dependent phase \(\phi j\), where
\(\phi=\pi\cos(\alpha)\lambda_{\rm latt}/\lambda_{L}\) can be tuned by properly choosing an angle \(\alpha\) between laser beams producing the optical lattice and the direction of laser field inducing the coupling. The two beams are characterized by the wave-lengths \(\lambda_{\rm latt}\) and \(\lambda_{L}\), respectively, see e.g. in [24]. Here, \(\phi_{0}\in[0,2\pi)\) is the global off-set phase of the coupling lasers, which can be interpreted as the transformation of \(\hat{H}_{\uparrow\downarrow}\) due to the global spin rotation around the \(z\) axis by the angle \(\phi_{0}\). Equivalently, it can also be interpreted as the spin rotation for the initial state around the same \(z\) axis and by the same angle \(\phi_{0}\), but in the opposite direction.
In the case of PBC, the coupling phase \(\phi\) should be commensurate with \(2\pi/N\), namely \(\phi=2\pi n/N\), where \(n=1,2,\cdots,N-1\), to ensure periodicity of \(\hat{H}_{\uparrow\downarrow}\)[24]. Here, however, we are interested in OBC, and therefore \(\phi\) can take any real values apart from the trivial one \(\phi=0\) or \(\phi=2\pi\) for which \(\hat{H}_{\uparrow\downarrow}\) does not provide coupling between the Dicke and the spin-wave state manifolds needed for the generation of spin squeezing.
The initial state convenient to start the evolution is the spin coherent state
\[|\theta,\varphi\rangle=e^{-i\hat{S}_{z}\varphi}e^{-i\hat{S}_{y}\theta} \bigotimes_{j=1}^{N}|\uparrow\rangle_{j}\,, \tag{9}\]
where all the spins point in the same direction parameterized by the spherical angles \(\theta\) and \(\varphi\). In general, the spin-coherent state (9) belongs to the Dicke manifold of the total spin \(S=N/2\) and hence can be expressed in the basis of the Dicke states (2) as
\[|\theta,\varphi\rangle=\sum_{m=-N/2}^{N/2}a_{m}|m\rangle, \tag{10}\]
where
\[\begin{split} a_{m}=&\sqrt{\left(\frac{N}{\frac{N}{ 2}}+m\right)}\cos^{\frac{N}{2}+m}\left(\frac{\theta}{2}\right)\\ &\times\sin^{\frac{N}{2}-m}\left(\frac{\theta}{2}\right)e^{i( \frac{N}{2}-m)\varphi}\end{split} \tag{11}\]
are coefficients of decomposition.
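As a quick numerical check of this decomposition (our sketch), the coefficients of Eq. (11) are normalized and reproduce the mean spin projection \(\langle\hat{S}_{z}\rangle=(N/2)\cos\theta\) of the coherent state:

```python
import numpy as np
from math import comb

def dicke_coefficients(N, theta, varphi):
    """Coefficients a_m of Eq. (11) expanding |theta, varphi> over Dicke states |m>."""
    ms = np.arange(-N // 2, N // 2 + 1)
    a = np.array([np.sqrt(comb(N, N // 2 + m))
                  * np.cos(theta / 2) ** (N // 2 + m)
                  * np.sin(theta / 2) ** (N // 2 - m)
                  * np.exp(1j * (N // 2 - m) * varphi)
                  for m in ms])
    return ms, a

ms, a = dicke_coefficients(N=8, theta=np.pi / 3, varphi=0.0)
print(round(np.sum(np.abs(a) ** 2), 10))        # normalization: 1
print(round(np.sum(ms * np.abs(a) ** 2), 10))   # <S_z> = (N/2) cos(theta) = 2
```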
The subsequent evolution of the initial state is defined by the unitary operator \(\hat{U}=e^{-it\hat{H}_{\rm spin}}\). To quantify the level of squeezing generated in time we use the spin squeezing parameter
\[\xi^{2}=\frac{N(\Delta\hat{S}_{\perp})_{\rm min}^{2}}{\langle\hat{S}\rangle^{ 2}} \tag{12}\]
where the length of the mean collective spin is \(\langle\hat{S}\rangle\) and the minimal variance of the collective spin orthogonally to its direction is \((\Delta\hat{S}_{\perp})_{\rm min}^{2}\)[43].
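For small chains the parameter (12) can be evaluated directly. The sketch below (ours) builds collective spin operators for a short chain, obtains the minimal transverse variance analytically from the covariance matrix in the plane orthogonal to the mean spin, and checks that a spin coherent state gives \(\xi^{2}=1\).

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def collective(op, N):
    """Collective spin operator: sum of single-site operators over the chain."""
    return sum(reduce(np.kron, [op if i == j else np.eye(2) for i in range(N)])
               for j in range(N))

def squeezing_parameter(psi, N):
    """Spin squeezing parameter of Eq. (12) for a pure state psi."""
    S_ops = [collective(s, N) for s in (sx, sy, sz)]
    expval = lambda A: np.vdot(psi, A @ psi).real
    mean = np.array([expval(S) for S in S_ops])
    n = mean / np.linalg.norm(mean)
    # Two orthonormal directions spanning the plane perpendicular to the mean spin.
    ref = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    e1 = np.cross(n, ref); e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    S1 = sum(c * S for c, S in zip(e1, S_ops))
    S2 = sum(c * S for c, S in zip(e2, S_ops))
    var = lambda A: expval(A @ A) - expval(A) ** 2
    cov = 0.5 * expval(S1 @ S2 + S2 @ S1) - expval(S1) * expval(S2)
    A, B = var(S1), var(S2)
    var_min = 0.5 * (A + B) - 0.5 * np.sqrt((A - B) ** 2 + 4 * cov ** 2)
    return N * var_min / np.dot(mean, mean)

N = 6
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # single spin along x
psi = reduce(np.kron, [plus] * N)                          # coherent state |theta=pi/2, 0>
print(np.round(squeezing_parameter(psi, N), 6))            # 1.0 for a coherent state
```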
Non-trivial quantum correlations are produced in the weak coupling regime, where the characteristic energy of the coupling Hamiltonian \(\hat{H}_{\uparrow\downarrow}\) is smaller than that of the spin-exchange term \(\hat{H}_{\rm SE}\). In the next section, we derive the effective model describing the spin squeezing dynamics in terms of collective spin operators.
## 3 Effective model
When the spin-flip coupling is weak compared to the energy of the spin exchange, the dynamics of the initial spin coherent state \(|\theta,\varphi\rangle\) governed by the spin Hamiltonian \(\hat{H}_{\rm spin}\) within the Dicke manifold can be well approximated using perturbation theory. Therefore, the coupling term \(\hat{H}_{\uparrow\downarrow}\) can be treated as a perturbation. For reasons that will be explained later, let us rephrase this operator in the following way:
\[\hat{H}_{\uparrow\downarrow}=\hat{\bar{H}}_{\uparrow\downarrow}+v_{x}\hat{S}_ {x}+v_{y}\hat{S}_{y}, \tag{13}\]
where
\[\hat{\bar{H}}_{\uparrow\downarrow}=\frac{\Omega}{2}\sum_{j=1}^{N}\left(\alpha _{j}^{+}\hat{S}_{j}^{+}+\alpha_{j}^{-}\hat{S}_{j}^{-}\right)\,. \tag{14}\]
Here, \(\alpha_{j}^{\pm}=e^{\pm i(\phi j-\phi_{0})}-A^{\pm}\) with \(A^{\pm}=\frac{1}{N}\sum_{j}e^{\pm i(\phi j-\phi_{0})}\), as well as \(v_{x}=\Omega{\rm Re}[A^{+}]/2\) and \(v_{y}=-\Omega{\rm Im}[A^{+}]/2\). The separation of the last two terms in (13) is made in such a way that the \(\alpha_{j}^{\pm}\) sum up to zero. Notice that \(v_{x}\) and \(v_{y}\) are non-zero only for phases \(\phi\) incommensurate with \(2\pi/N\).
### First and second order contributions
The operator \(\hat{\bar{H}}_{\uparrow\downarrow}\) on the right-hand side of (13) induces the coupling between the Dicke and spin-wave state manifolds while the remaining ones directly couple the Dicke states and represent the first-order perturbation term
\[\hat{H}_{\rm eff}^{(1)}=v_{x}\hat{S}_{x}+v_{y}\hat{S}_{y}\,. \tag{15}\]
To generate spin squeezing one needs to take into account the second-order contribution induced by \(\hat{H}_{\uparrow\downarrow}\). It can be obtained via the Schrieffer-Wolff transformation [24, 22, 28, 29, 30, 31] leading to
\[\hat{H}_{\rm eff}^{(2)}=\hat{I}_{N/2}\hat{\bar{H}}^{\uparrow\downarrow}\hat{G }_{N/2-1}\hat{\bar{H}}^{\uparrow\downarrow}\hat{I}_{N/2}, \tag{16}\]
where \(\hat{I}_{N/2}=\sum_{m}|m\rangle\langle m|\) is the unit operator for projection onto the Dicke manifold, while \(\hat{G}_{N/2-1}=\sum_{q\neq 0,m}\frac{|m,q\rangle\langle m,q|}{-E_{q}}\) is an operator which sums projectors onto the spin-wave states manifold with the corresponding energy mismatch denominator \(-E_{q}\). The matrix elements of (16) are
\[\langle m^{\prime}|\hat{H}_{\rm eff}^{(2)}|m\rangle=\sum_{m^{ \prime},q}\frac{\langle m^{\prime}|\hat{\bar{H}}_{\uparrow\downarrow}|m^{ \prime\prime},q\rangle\langle m^{\prime\prime},q|\hat{\bar{H}}_{\uparrow \downarrow}|m\rangle}{-E_{q}}. \tag{17}\]
Details about the transformation and its application to the Heisenberg spin-1/2 chain with the spin-flip coupling can be found in the Supplementary Material of reference [24]. In the following, we focus on the derivation of the effective Hamiltonian \(\hat{\bar{H}}_{\rm eff}^{(2)}\) and its representation in terms of the collective spin operators.
Let us start with expressing the action of \(\hat{\bar{H}}_{\uparrow\downarrow}\) on Dicke states, namely
\[\hat{\bar{H}}_{\uparrow\downarrow}|m\rangle=\frac{\Omega}{2}| \Psi,m+1\rangle^{+}+\frac{\Omega}{2}|\Psi,m-1\rangle^{-}, \tag{18}\]
where states \(|\Psi,m\pm 1\rangle^{\pm}=\sum_{j}\alpha_{j}^{\pm}\hat{\bar{S}}_{j}^{\pm}|m\rangle\) can be expanded in terms of the spin-wave states \(|m\pm 1,q\rangle\) as
\[|\Psi,m\pm 1\rangle^{\pm}=\sqrt{N}c_{N/2,\pm m+1}\sum_{q}f_{q}^{ \pm}|m\pm 1,q\rangle. \tag{19}\]
Here, \(c_{N/2,m\pm 1}\) are given by Eq. (4) and
\[f_{q}^{\pm}=\sum_{j}p_{j}^{(q)}\alpha_{j}^{\pm}=\sum_{j}p_{j}^{(q)}e^{\pm i( \phi j-\phi_{0})}\,, \tag{20}\]
with \(f_{q}^{+}=(f_{q}^{-})^{*}\) because \(p_{j}^{(q)}\) is real. Note, the spin-flip term \(\hat{\bar{H}}_{\uparrow\downarrow}\) couples each Dicke state \(|m\rangle\) with a superposition of spin-wave states (19) characterized by energies \(E_{q}\). This is different from the PBC case where \(\hat{\bar{H}}_{\uparrow\downarrow}\) couples each Dicke state with a pair of spin-wave states of well-defined quantum numbers \(q=\pm\phi N/(2\pi)\) set by the coupling phase \(\phi\)[24]. An example of the amplitude of elementary couplings \(f_{q}^{+}\) to the \(|m,q\rangle\) states is presented in Fig. 2. We can see that, indeed, the coupling could be non-negligible even to the lowest state \(|m,q=1\rangle\). Therefore, the perturbative regime is defined by the smallest energy gap, namely \(\Omega\ll|E_{q=1}|=J_{\rm SE}|\cos(\pi/N)-1|\).
The relevant matrix elements of the second-order contribution can be written as
\[\langle m^{\prime\prime},q|\hat{\bar{H}}_{\uparrow\downarrow}|m\rangle =\frac{\Omega}{2}N^{-1/2}c_{N/2,m+1}^{-1}f_{q}^{+}\delta_{m^{ \prime\prime},m+1}\] \[+\frac{\Omega}{2}N^{-1/2}c_{N/2,-m+1}^{-1}f_{q}^{-}\delta_{m^{ \prime\prime},m-1}, \tag{21}\]
where the coefficients \(N^{-1/2}c_{N/2,\pm m+1}^{-1}\) come from the scalar product between the Dicke state \(|m\rangle\) and the states \(|\Psi,m\pm 1\rangle^{\pm}\). The non-zero matrix elements of the second-order term (17), namely \(H_{m^{\prime},m}=\langle m^{\prime}|\hat{\bar{H}}_{\rm eff}^{(2)}|m\rangle\), read
\[H_{m,m} =-(c_{N/2,m}^{-2}+c_{N/2,-m}^{-2})\,(N-1)\chi_{z}, \tag{22}\] \[H_{m,m-2} =c_{N/2,m-1}^{-1}c_{N/2,-(m-1)}^{-1}\,(N-1)\chi_{x},\] (23) \[H_{m,m+2} =c_{N/2,m+1}^{-1}c_{N/2,-(m+1)}^{-1}\,(N-1)\chi_{x}, \tag{24}\]
where
\[\chi_{z} =\frac{\Omega^{2}}{4NJ_{\rm SE}(N-1)}\sum_{q=1}^{N-1}\frac{f_{q} ^{+}f_{q}^{-}}{\cos(\frac{\pi}{N}q)-1}, \tag{25}\] \[\chi_{x} =\frac{\Omega^{2}}{4NJ_{\rm SE}(N-1)}\sum_{q=1}^{N-1}\frac{\left( f_{q}^{-}\right)^{2}}{\cos(\frac{\pi}{N}q)-1}. \tag{26}\]
Figure 2: The absolute values of the normalized coefficients \(|f_{q}^{+}|N^{-1/2}\) are shown by color versus the coupling phase \(\phi\in\mathbb{R}\) and the spin-wave quantum number \(q\in\mathbb{Z}\) for an arbitrary \(\phi_{0}\) when \(N=8\).

Comparing the matrix elements presented in Eqs. (22)-(24) with the matrix elements of the appropriate collective spin operators, the second-order perturbation contribution can be represented in the operator form as
\[\hat{H}_{\rm eff}^{(2)} =-2\chi_{z}\left(\hat{S}^{2}+\hat{S}_{z}^{2}\right)+{\rm Re}\left[ \chi_{x}\right]\left(\hat{S}_{+}^{2}+\hat{S}_{-}^{2}\right)\] \[+i{\rm Im}\left[\chi_{x}\right]\left(\hat{S}_{+}^{2}-\hat{S}_{-}^{ 2}\right), \tag{27}\]
as explained in Appendix B. The full effective Hamiltonian is a sum of the first- and second-order contributions:
\[\hat{H}_{\rm eff}^{(\phi_{0})}=\hat{H}_{\rm eff}^{(1)}+\hat{H}_{\rm eff}^{(2)}. \tag{28}\]
### Choosing the off-set phase
In what follows, we will take a value of the global coupling phase to be \(\phi_{0}=\phi(N+1)/2\), so that \(v_{y}\) entering Eqs. (13) and (15), as well as the imaginary part of \(\chi_{x}\) vanish, i.e. \(v_{y}={\rm Im}\left[\chi_{x}\right]=0\), see Appendix C. This simplifies the form of the effective model leading to
\[\hat{H}_{\rm eff}^{(\phi_{0})}=-2\chi_{z}\left(\hat{S}^{2}+\hat{S}_{z}^{2}- \eta\hat{S}_{x}^{2}+\eta\hat{S}_{y}^{2}+\gamma\hat{S}_{x}\right), \tag{29}\]
where \(\eta=\chi_{x}/\chi_{z}\) and \(\gamma=v_{x}/\chi_{z}\). This specific choice of phase \(\phi_{0}\) does not involve a loss of generality as the full effective Hamiltonian (28) containing \(\hat{H}_{\rm eff}^{(1)}\) and \(\hat{H}_{\rm eff}^{(2)}\) of Eqs. (15) and (27) is related to that given by Eq. (29) via a unitary transformation set by the global rotation around the \(z\) axis through the angle \(\phi_{0}\).
In Fig. 3 we show variation of the two parameters of the effective model (29), namely \(\eta\) and \(\gamma\), versus \(\phi\). The commensurate phases corresponding to \(\phi=2\pi n/N\) with \(n\in[1,N-1]\) are marked by open points in Fig. 3 for which one has \(\gamma=0\). In this case, we numerically observe that \(\eta=-1/2\) for \(\phi\neq\pi\), and \(\eta=-1\) for \(\phi=\pi\). In addition, we have also analytically found that
\[\chi_{z}=-\frac{\Omega^{2}}{4J_{\rm SE}(N-1)}\frac{2}{\cos(\phi) -1}, \tag{30}\] \[\chi_{x}=\frac{\Omega^{2}}{4J_{\rm SE}(N-1)}\frac{1}{\cos(\phi) -1}, \tag{31}\]
for commensurate phases \(\phi=2\pi n/N\) apart from \(\phi=\pi\) where
\[\chi_{z}=-\chi_{x}=-\frac{\Omega^{2}}{4J_{\rm SE}(N-1)}. \tag{32}\]
The derivation is presented in Appendix E. The non-commensurate coupling phases \(\phi\) result in both positive and negative values of the parameter \(\eta\) which is independent of \(J_{\rm SE}\), \(\Omega\), and \(N\). On the contrary, the coefficient \(\gamma\) depends on the system parameters, and scales as \(\gamma\propto NJ_{\rm SE}/\Omega\).
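For concreteness, the quantities entering the effective model can be evaluated numerically from Eqs. (20) and (25)-(26). The sketch below (ours) does so for the off-set phase \(\phi_{0}=\phi(N+1)/2\) used in the previous subsection, so that the behaviour of \(\eta=\chi_{x}/\chi_{z}\) and \(\gamma=v_{x}/\chi_{z}\) described above can be explored for commensurate and non-commensurate phases.

```python
import numpy as np

def effective_couplings(N, phi, Omega=0.1, J_SE=1.0):
    """Evaluate f_q (Eq. 20), chi_z and chi_x (Eqs. 25-26) and v_x for
    the off-set phase phi_0 = phi (N + 1) / 2."""
    phi0 = phi * (N + 1) / 2
    j = np.arange(1, N + 1)
    alpha_plus = np.exp(1j * (phi * j - phi0))
    pref = Omega ** 2 / (4 * N * J_SE * (N - 1))
    chi_z, chi_x = 0.0, 0.0 + 0.0j
    for q in range(1, N):
        p_q = np.sqrt(2.0 / N) * np.cos(np.pi * q * (j - 0.5) / N)
        f_plus = np.sum(p_q * alpha_plus)          # f_q^+ ; f_q^- is its conjugate
        denom = np.cos(np.pi * q / N) - 1.0
        chi_z += pref * abs(f_plus) ** 2 / denom
        chi_x += pref * np.conj(f_plus) ** 2 / denom
    v_x = 0.5 * Omega * np.mean(np.cos(phi * j - phi0))
    return chi_z, chi_x, v_x

N = 8
for phi in (np.pi, 2 * np.pi * 3 / N, 1.3):    # pi, another commensurate phase, a generic one
    chi_z, chi_x, v_x = effective_couplings(N, phi)
    eta, gamma = chi_x / chi_z, v_x / chi_z
    print(f"phi={phi:.3f}  eta={eta.real:+.3f}  Im(eta)={eta.imag:+.1e}  gamma={gamma:+.3f}")
```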
In this way, we derived the second-order contribution (27) and, consequently, the effective model (29), showing that the boundaries significantly modify the spin squeezing Hamiltonian with respect to PBC, for which one arrives at the effective Hamiltonian in the form of the OAT model, namely \(\hat{H}_{\rm eff}=-\chi_{\pi}\hat{S}_{x}^{2}\) for \(\phi=\pi\) and \(\hat{H}_{\rm eff}=\chi_{\phi}\hat{S}_{z}^{2}\) for \(\phi\neq\pi\)[24]. Therefore, it is not only the time scale that is changed due to OBC but the entire dynamics as well. This is a counter-intuitive result, as usually PBC describes the system well in the limit of large \(N\).
## 4 Spin squeezing for OBC
In this section, we analyze the unitary evolution of the spin squeezing parameter governed by the effective spin Hamiltonian (29). We distinguish two cases depending on the commensurability of the coupling phase \(\phi\). We demonstrate that if the coupling phase is commensurate, the resulting model (29) can be either OAT for \(\phi=\pi\) or non-isotropic TACT for \(\phi\neq\pi\). The most general case of non-commensurate phases also gives rise to squeezing dynamics, which, however, is not simulated by the conventional OAT and TACT twisting models.

Figure 3: The parameters \(\eta\) (top panel) and \(\gamma\) (bottom panel) of the effective model (29) versus the coupling phase \(\phi\) are marked by black and orange lines, respectively, for \(N=8\), \(\Omega=|E_{q=1}|/10\) and \(\phi_{0}=\phi(N+1)/2\). The values of \(\eta\) and \(\gamma\) for commensurate phases are marked by open circles. The regions shaded in blue present examples where \(\eta<0\), while the one shaded in red presents \(\eta>0\).
### Spin squeezing with commensurate phase
Tuning the value of the coupling phase \(\phi\) to an integer multiple of \(2\pi/N\) simplifies the problem. In particular, by taking \(\phi=\pi\) we have \(\eta=-1\) and the effective Hamiltonian (29) acquires the OAT form, namely
\[\hat{H}_{\rm eff}=4\chi_{z}\hat{S}_{y}^{2}, \tag{33}\]
where we omitted a term proportional to \(\hat{S}^{2}\), as it only shifts the origin of energy. Convenient initial spin coherent states are the ones polarized in the \(x-z\) plane, namely \(|\theta,\varphi=0\rangle\) for any \(\theta\). The best level of squeezing \(\xi_{\rm best}^{2}\approx N^{-2/3}\) is achievable for times \(t_{\rm best}\approx N^{-2/3}|4\chi_{z}|^{-1}\) in the large \(N\) limit, according to the OAT dynamics [32, 44]. Next, taking the analytical expression (32) for \(\chi_{z}\), we obtain \(t_{\rm best}\approx N^{1/3}J_{\rm SE}/\Omega^{2}\). Therefore, the twisting dynamics is essentially the same as for PBC [24]. The only difference is that for OBC the resulting time scale is four times shorter than in the PBC case when keeping the same perturbation level \(\Omega\). The acceleration of the best squeezing time takes place because a broader range of amplitudes \(p_{j}^{(q)}\) contributes to the generation of spin squeezing.
In another situation, when the coupling phase is not equal to \(\pi\) we have \(\eta=-1/2\) and \(\gamma=0\), so the effective Hamiltonian (29) reduces to
\[\hat{H}_{\rm eff}=2\chi_{z}\left(\hat{S}_{y}^{2}-\hat{S}_{z}^{2}/2\right), \tag{34}\]
where we omitted the term proportional to \(\hat{S}^{2}\). Equation (34) represents anisotropic TACT with the anisotropy equal to \(1/2\). It is worth stressing here that OBC provide anisotropic TACT without adding an extra atom-light coupling characterized by two different phases; in the case of PBC it was necessary to include two spin-flipping terms in order to simulate TACT [24]. Let us again take the initial state for the spin squeezing generation to be the spin coherent state polarized in the \(x-z\) plane, \(|\theta,\varphi=0\rangle\). The anisotropic TACT given by (34) generates the Heisenberg-limited level of squeezing \(\xi_{\rm best}^{2}\approx N^{-1}\) on the time scale \(t_{\rm best}\approx(2\chi_{z}N\sqrt{2})^{-1}\ln(N/2)\)[12]. Therefore, taking into account the system parameters and the relation for \(\chi_{z}\) given by (30), we have \(t_{\rm best}\approx J_{\rm SE}\ln(N/2)|\cos\phi-1|/(\sqrt{2}\Omega^{2})\), which depends only weakly on the system size \(N\).
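As a rough numerical illustration of the two time-scale estimates quoted above (not taken from the paper; the functions simply restate the formulas, and the parameter values \(J_{\rm SE}=1\), \(\Omega=|E_{q=1}|/10\) are the ones used in the figures):

```python
import numpy as np

def t_best_oat(N, J_SE, Omega):
    """OAT estimate: t_best ~ N**(-2/3) / |4*chi_z|, with chi_z from Eq. (32)."""
    chi_z = Omega**2 / (4 * J_SE * (N - 1))
    return N**(-2 / 3) / (4 * chi_z)

def t_best_tact(N, J_SE, Omega, phi):
    """Anisotropic-TACT estimate: t_best ~ ln(N/2)/(2*sqrt(2)*N*chi_z),
    with chi_z from Eq. (30)."""
    chi_z = Omega**2 / (2 * J_SE * (N - 1) * abs(np.cos(phi) - 1))
    return np.log(N / 2) / (2 * np.sqrt(2) * N * chi_z)

N, J_SE = 100, 1.0
Omega = 0.1 * abs(J_SE * (np.cos(np.pi / N) - 1))    # |E_{q=1}| from Eq. (51)
print(t_best_oat(N, J_SE, Omega))
print(t_best_tact(N, J_SE, Omega, phi=np.pi - 2 * np.pi / N))
```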
In Fig. 4 we show examples of the spin squeezing dynamics for different values of \(\Omega\). Perfect agreement with the effective model (29) is observed in the perturbative regime, when \(\Omega\ll|E_{q=1}|\).
Figure 4: Variation of spin squeezing parameter (12) in time for different values of \(\Omega\) when the initial state is \(|\theta=\pi/2,\varphi=0\rangle\), \(N=8\) and \(\phi=\pi-2\pi/N\), \(\phi_{0}=\phi(N+1)/2\). The result for the effective model (29) is marked by olive crosses while results for the coupled Heisenberg model (7) are shown with black lines for \(\Omega=|E_{q=1}|/10\) (solid), \(\Omega=|E_{q=1}|\) (dashed) and \(\Omega=2|E_{q=1}|\) (dotted).
Significant spin squeezing can also be generated beyond this regime, yet large discrepancies arise with respect to the TACT dynamics.
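The squeezing dynamics of Fig. 4 can be sketched with a small simulation of the effective model. The snippet below is illustrative rather than the authors' code: it evolves the initial coherent state \(|\theta=\pi/2,\varphi=0\rangle\) under Eq. (29) with \(\eta=-1/2\), \(\gamma=0\), and it assumes the Kitagawa-Ueda definition \(\xi^{2}=4\min\mathrm{Var}(S_{\perp})/N\) of the squeezing parameter, which may differ from the paper's Eq. (12) by a Wineland-type factor.

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(N):
    """Collective spin matrices in the S = N/2 sector (basis m = S, ..., -S)."""
    S = N / 2
    m = np.arange(S, -S - 1, -1)
    dim = len(m)
    Sz = np.diag(m).astype(complex)
    ap = np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1))
    Sp = np.zeros((dim, dim), dtype=complex)
    Sp[np.arange(dim - 1), np.arange(1, dim)] = ap
    return 0.5 * (Sp + Sp.T), -0.5j * (Sp - Sp.T), Sz

def xi2(psi, ops, N):
    """4*min Var(S_perp)/N, minimized over directions perpendicular to the
    mean spin (Kitagawa-Ueda convention; an assumption, see the text above)."""
    mean = np.array([np.real(psi.conj() @ O @ psi) for O in ops])
    n = mean / np.linalg.norm(mean)
    a = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(a) < 1e-12:
        a = np.cross(n, [1.0, 0.0, 0.0])
    a /= np.linalg.norm(a)
    b = np.cross(n, a)
    Sa = sum(c * O for c, O in zip(a, ops))
    Sb = sum(c * O for c, O in zip(b, ops))
    var = lambda O: np.real(psi.conj() @ O @ O @ psi) - np.real(psi.conj() @ O @ psi) ** 2
    cov = 0.5 * np.real(psi.conj() @ (Sa @ Sb + Sb @ Sa) @ psi) \
          - np.real(psi.conj() @ Sa @ psi) * np.real(psi.conj() @ Sb @ psi)
    va, vb = var(Sa), var(Sb)
    vmin = 0.5 * (va + vb) - 0.5 * np.hypot(va - vb, 2 * cov)
    return 4 * vmin / N

N, J_SE = 8, 1.0
phi = np.pi - 2 * np.pi / N                                  # as in Fig. 4
Omega = 0.1 * abs(J_SE * (np.cos(np.pi / N) - 1))            # |E_{q=1}|/10
chi_z = -Omega**2 / (2 * J_SE * (N - 1) * (np.cos(phi) - 1))  # Eq. (30)
Sx, Sy, Sz = spin_ops(N)
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
H = -2 * chi_z * (S2 + Sz @ Sz + 0.5 * Sx @ Sx - 0.5 * Sy @ Sy)  # Eq. (29)
psi0 = expm(-1j * (np.pi / 2) * Sy) @ np.eye(N + 1)[:, 0]        # |theta=pi/2, varphi=0>

t_best = np.log(N / 2) / (2 * np.sqrt(2) * N * chi_z)            # TACT estimate
for t in np.linspace(0.0, 1.5 * t_best, 7):
    psi = expm(-1j * H * t) @ psi0
    print(f"t = {t:10.1f}   xi^2 = {xi2(psi, (Sx, Sy, Sz), N):.4f}")  # <1 means squeezing
```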
It is also worth commenting here on the influence of the coupling phase \(\phi\) on the best squeezing time. The dependence on \(\phi\) is hidden in the function \(\chi_{z}\). In Fig. 5 we plot the variation of the best squeezing time with the phase \(\phi\). We can see that the time scale increases by orders of magnitude as the coupling phase varies from \(\phi=2\pi/N\) to \(\phi=\pi\), and then decreases symmetrically towards \(\phi=2\pi(N-1)/N\). Thus, in practical applications, optimization of the system parameters \(J_{\rm SE}\), \(\Omega\), \(\phi\) will be necessary to obtain the shortest possible time scale.
### Spin squeezing with non-commensurate phases
The resulting effective model (29), simulated by the coupled Heisenberg one (7), also gives rise to spin squeezing generation for non-commensurate coupling phases \(\phi\), i.e. phases that are not equal to integer multiples of \(2\pi/N\). In general, the results depend strongly on the chosen initial spin coherent state \(|\theta,\varphi\rangle\) and on the parameters \(\eta\) and \(\gamma\).
Let us discuss the situation when the initial spin coherent state is polarized along the \(z\) axis: \(|0,0\rangle=\bigotimes_{j=1}^{N}\left|\uparrow\right\rangle_{j}\). Examples of the best squeezing and the best squeezing times are shown in panels (a)-(d) of Fig. 6 for \(N=100\). A characteristic behavior is the OAT level of best squeezing for positive values of \(\eta\), which is demonstrated in panel (b). In the other cases, when \(\eta\) is negative, the OAT level is also achieved, mainly for \(\eta\) close to zero, see e.g. panels (c) and (d). It is possible to exceed the OAT level of squeezing when \(\eta\) approaches the local minimum, see panels (a), (c), and (d). Interestingly, the last term in the effective model (29), namely \(\gamma\hat{S}_{x}\), does not dominate the dynamics even if \(\gamma\) is orders of magnitude larger than \(\eta\). In Appendix D we show the corresponding results for two different initial states. The OAT level of squeezing can be achieved when the initial state is polarized along the \(y\)-axis, \(|\theta=\pi/2,\varphi=\pi/2\rangle\); the best squeezing and times are of the same level as the ones presented in Fig. 6. On the other hand, if the
Figure 6: The best squeezing \(\xi_{\rm best}^{2}\) (green points) and the best squeezing time \(t_{\rm best}\) (red points) are shown in panels (a)-(d) for different regions of \(\phi\). The numerical results for the effective model (29) with \(N=100,J_{\rm SE}=1\), \(\Omega=|E_{q=1}|/10\), \(\phi_{0}=\phi(N+1)/2\) and \(\eta>0\) as indicated by the red shadowing areas and \(\eta<0\) indicated by the blue ones. The numerical values of \(\eta\) and \(\gamma\) used in simulations are shown in the top panels. The two limit cases for the values of \(\xi_{\rm best}^{2}\), namely OAT and TACT for \(N=100\), are marked with horizontal green dotted dashed lines, respectively.
evolution starts with the state polarized along the \(x\)-axis, \(|\theta=\pi/2,\varphi=0\rangle\), the dominant Zeeman-like term \(\gamma\hat{S}_{x}\) in (29) freezes the dynamics of the spin state and only weak spin squeezing is generated for non-commensurate phases.
## 5 Conclusions and Summary
We studied in detail the effect of OBC on the generation of spin squeezing in one-dimensional isotropic Heisenberg spin-1/2 chains induced by the position-dependent spin-flip coupling. We extended the spin-wave theory to the case of OBC using the coordinate Bethe ansatz. We derived analytically the effective model in terms of the collective spin operators, which describes the squeezing dynamics in the weak coupling regime. The resulting effective model differs significantly from the one under PBC and, therefore, provides an example in which the boundaries significantly modify the dynamics of the system. To classify the squeezing scenarios, we distinguished two cases depending on the commensurability of the coupling phase \(\phi\). When the coupling phase is commensurate, the dynamics of spin squeezing is well captured by the non-isotropic TACT for \(\phi\neq\pi\) and by OAT for \(\phi=\pi\). The most general, non-commensurate phase still gives rise to the simulation of a squeezing model, although not a conventional one. Our analytical predictions were confirmed by full numerical simulations.
The results presented here show how to produce entangled states in the isotropic spin-1/2 Heisenberg chains with nearest-neighbor interactions. This is possible by the addition of the position-dependent spin-flip coupling that is weak enough to maintain the dynamics within the Dicke manifold and strong enough to excite spin waves that are extended over the entire system, allowing "effective" all-to-all interaction between the individual spins. It is also worth adding that the dynamics of generated spin-squeezed states can be frozen at a desired time just by turning off the spin-flipping term. The results obtained can be verified experimentally by current state-of-the-art experiments with ultra-cold atoms.
## Acknowledgments
We gratefully acknowledge discussions with B. B. Laburthe-Tolra, M. R. de Saint-Vincent and A. Sinatra. We thank O. Stachowiak for discussions and providing us Fig. 7. This work was supported by the European Social Fund (Project No. Nr 09.3.3-LMT-K-712-23-0035) under grant agreement with the Research Council of Lithuania (M.M.S.), the Polish National Science Centre project DEC-2019/35/O/ST2/01873 (T.H.Y.) and Grant No. 2019/32/Z/ST2/00016 through the project MAQS under QuantERA, which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement no 731473 (E.W.). M. P. acknowledges the support of the Polish National Agency for Academic Exchange, the Bekker program no: PPN/BEK/2020/1/00317, and ERC AdG NO-QIA; Ministerio de Ciencia y Innovation Agencia Estatal de Investigaciones (PGC2018-097027-B-I00/10.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI, QUANTERA MAQS PCI2019-111828-2, QUANTUME DYNAMITE PCI2022-132919, Proyectos de I+D+I "Retos Colaboracion" QUSPIN RTC2019-007196-7); MICIIN with funding from European Union NextGenerationEU(PRTR-C17.I1) and by Generalitat de Catalunya; Fundacio Cirbelacios Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT & U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2022-1-0042); EU Horizon 2020 FET-OPEN OPTologic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 -- NeQST), National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal "QuantumGaudi" project; European Union's Horizon 2020 research and innovation program under the Marie-Sklodowska-Curie grant agreement No 101029393 (STRECDH) and No 847648 ("La Caixa" Junior Leaders fellowships ID100010434: LCF/BQ/PI19/11690013, LCF/BQ/PI20/11760031, LCF/BQ/PR20/11770012, LCF/BQ/PR21/11840013). Views and opinions expressed in this work are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European
Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority. Neither the European Union nor any granting authority can be held responsible for them.
A part of the computations was carried out at the Centre of Informatics Tricity Academic Supercomputer & Network.
## Author contributions
THY performed many-body simulations and provided all numerical data presented in the paper.
## Appendix A Spin-waves states for OBC
In this section, we are interested in spin-wave states which are eigenstates of the isotropic Heisenberg model,
\[\hat{H}_{\rm SE}=J_{\rm SE}\sum_{j=1}^{N-1}\left(S_{j}^{z}S_{j+1}^{z}+S_{j}^{y }S_{j+1}^{y}+S_{j}^{x}S_{j+1}^{x}-\frac{1}{4}\right), \tag{35}\]
for \(N\) spins and open boundary conditions. In the following, we will show that the spin-wave states are given by Eq. (3) of the main text, namely
\[|m,q\rangle=\pm\sqrt{N}c_{N/2,\pm m}\sum_{j=1}^{N}p_{j}^{(q)}\hat{S}_{j}^{\pm} |m\mp 1\rangle. \tag{36}\]
In the above equation, the states \(|m\mp 1\rangle\) are Dicke states, while the use of the on-site raising and lowering operators \(\hat{S}_{j}^{\pm}\) corresponds to the two ways of defining the spin-wave states. Note that \(S_{z}|m,q\rangle=m|m,q\rangle\), as each term comprising the state-vector (36) is characterized by the same spin projection \(m\). Furthermore, \(\hat{S}^{2}|m,q\rangle=S(S+1)|m,q\rangle\), with \(S=N/2-1\). To see this, we notice that the states (36) are constructed in such a way that
\[|m,q\rangle\propto\hat{S}_{\pm}^{N/2-1\pm m}|q\rangle^{\pm}, \tag{37}\]
where the state-vector \(|q\rangle^{\pm}\equiv|\mp(N/2-1),q\rangle\) corresponds to the minimum and maximum value of the spin projection \(m=\mp(N/2-1)\). Since \([\hat{S}^{2},\hat{S}_{\pm}]=0\), then
\[\hat{S}^{2}|m,q\rangle\propto\hat{S}_{\pm}^{N/2-1\pm m}\hat{S}^{2}|q\rangle^{ \pm}, \tag{38}\]
Therefore, one needs to find the action of the operator \(\hat{S}^{2}\) on the state-vector \(|q\rangle^{\pm}\) which is
\[\hat{S}^{2}|q\rangle^{\pm}=\left(\hat{S}_{z}^{2}+\hat{S}_{z}+\hat{S}_{-}\hat{ S}_{+}\right)|q\rangle^{\pm}=\left[\left(\frac{N}{2}\right)^{2}-\frac{N}{2} \right]|q\rangle^{\pm}+\left(\sum_{j}p_{j}^{(q)}\right)\hat{S}_{\pm}|N/2,\mp N /2\rangle. \tag{39}\]
One can see that the state-vectors \(|q\rangle^{\pm}\) are eigenstates of the \(\hat{S}^{2}\) operator with the spin quantum number \(S=N/2-1\) if the last term in (39) is zero, i.e.
\[\sum_{j}p_{j}^{(q)}=0. \tag{40}\]
In that case the state-vectors \(|m,q\rangle\) with an arbitrary \(m\) are also the eigenstates of \(\hat{S}^{2}\) with the quantum number \(S=N/2-1\). Note that the explicit form of the coefficients \(p_{j}^{(q)}\) presented later in Eq.(50) do obey the condition (40).
We are looking for the spin-wave states \(|m,q\rangle\) which are eigenstates of the Hamiltonian (35). Since \([\hat{H}_{\text{SE}},\hat{S}_{\pm}]=0\), using Eq. (37), one can see that the eigenstates \(|m,q\rangle\) of the Hamiltonian \(\hat{H}_{\text{SE}}\) have eigenenergies \(E_{q}\) which do not depend on the quantum number \(m\). Therefore, by choosing the amplitudes \(p_{j}^{(q)}\) in such a way that \(|q\rangle^{\pm}\) are eigenstates of the spin exchange Hamiltonian (35), the states \(|m,q\rangle\) for any magnetization \(m\) are also its eigenstates with the same eigen-energies \(E_{q}\).
Below we show how to derive the form of \(p_{j}^{(q)}\) for \(|q\rangle^{+}\) using OBC. The equations for \(|q\rangle^{-}\) give the same expansion coefficients \(p_{j}^{(q)}\) and the same eigen-energies \(E_{q}\). Using the coordinate basis vectors:
\[|\tilde{l}\rangle\equiv\hat{S}_{l}^{+}|-N/2\rangle=\hat{S}_{l}^{+}\bigotimes_{ j=1}^{N}|\downarrow\rangle_{j}\,, \tag{41}\]
the spin wave states \(|q\rangle^{+}\) can be represented as
\[|q\rangle^{+}=\sum_{l=1}^{N}p_{l}|\tilde{l}\rangle. \tag{42}\]
The coefficients \(p_{l}\) are evaluated by considering the eigenvalue problem
\[(H-EI)\vec{p}=0, \tag{43}\]
where \(I\) is the identity matrix, \(\vec{p}=(p_{1},p_{2},...)\) and the matrix elements of \(H\) are \(H_{l^{\prime},l}=\langle\vec{l}^{\prime}|\hat{H}_{SE}|\tilde{l}\rangle\).
The matrix form of eigenproblem (43) leads to the set of equations
\[-\frac{J_{SE}}{2}p_{1}+\frac{J_{SE}}{2}p_{2} =Ep_{1}, \tag{44}\] \[\frac{J_{SE}}{2}p_{l-1}-J_{SE}p_{l}+\frac{J_{SE}}{2}p_{l+1} =Ep_{l},\ \text{for}\ l\in[2,N-1], \tag{45}\] \[-\frac{J_{SE}}{2}p_{N}+\frac{J_{SE}}{2}p_{N-1} =Ep_{N}, \tag{46}\]
where (44) and (46) are for the boundary sites of the lattice. We use the idea of Puszkarski [45] and add two virtual lattice sites \(p_{0}\) and \(p_{N+1}\) subject to the boundary constraints \(p_{0}=p_{1}\) and \(p_{N+1}=p_{N}\). In that case, the set of equations (44)-(46) becomes equivalent to the following set of bulk equations valid for any \(l\):
\[\frac{J_{SE}}{2}p_{l-1}-J_{SE}p_{l}+\frac{J_{SE}}{2}p_{l+1}=Ep_{l}. \tag{47}\]
The solution to Eq.(47) can be represented as
\[p_{l}=p\cos\left[k(l+u)\right], \tag{48}\]
Figure 7: (a) Energy spectrum \(E_{q}\) of the spin-wave states for open boundary conditions, numerical (black points) and analytical (red dashed line) results. (b) and (c) show eigenvectors \(p_{l}\) being solutions of (44)-(46) for open boundary conditions when \(q=15\) and \(q=2\), respectively. Analytical results are marked by lines while the numerical one are marked by points (orange dashed lines mark real parts of \(p_{l}\) while blue solid line are imaginary parts of \(p_{l}\)). An example for \(N=20\).
with the corresponding eigen-energies \(E=J_{SE}(\cos k-1)\). The boundary constraint \(p_{0}=p_{1}\) requires \(\cos(uk)=\cos(uk+k)\), which is fulfilled for \(u=-1/2\). The second constraint, \(p_{N+1}=p_{N}\), gives the requirement
\[\cos(kN+k+uk)=\cos(kN+uk), \tag{49}\]
which is fulfilled when \(k=q\pi/N\), with \(q=1,2,\cdots,N-1\) being an integer. Therefore, we arrive at the required expansion coefficients and the corresponding eigen-energies:
\[p_{l}^{(q)} =\sqrt{\frac{2}{N}}\cos\left[\frac{\pi}{N}\left(l-\frac{1}{2} \right)q\right], \tag{50}\] \[E_{q} =J_{SE}\left[\cos\!\left(\frac{\pi}{N}q\right)-1\right]. \tag{51}\]
Note that the value \(q=0\) is not included here, as in that case the coefficients \(p_{l}^{(q)}\) do not depend on \(l\) and thus do not obey the condition (40). Although such a state with \(q=0\) is an eigenstate of the Hamiltonian \(\hat{H}_{\rm SE}\), it belongs to the Dicke manifold and is characterized by the spin quantum number \(S=N/2\) and zero eigen-energy.
In Fig. 7 we show a comparison of the numerical solution of (44)-(46) with the analytical results; perfect agreement is observed.
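The comparison shown in Fig. 7 is easy to reproduce; the following sketch (illustrative, not the authors' script) builds the single-magnon matrix of Eqs. (44)-(46) and checks it against the analytical results (50)-(51).

```python
import numpy as np

# Single-magnon sector of the OBC Heisenberg chain, Eqs. (44)-(46).
N, J_SE = 20, 1.0
H = np.zeros((N, N))
for l in range(N):
    H[l, l] = -J_SE                         # bulk diagonal
    if l + 1 < N:
        H[l, l + 1] = H[l + 1, l] = J_SE / 2
H[0, 0] = H[-1, -1] = -J_SE / 2             # boundary sites, Eqs. (44) and (46)

E_num, P_num = np.linalg.eigh(H)

# Analytical spectrum, Eq. (51); q = 0 is the uniform Dicke-manifold state
# with zero eigen-energy (see the remark above).
q = np.arange(0, N)
E_ana = J_SE * (np.cos(np.pi * q / N) - 1)
print(np.allclose(np.sort(E_num), np.sort(E_ana)))        # True

# Analytical amplitudes, Eq. (50), for e.g. q = 5 (eigenvectors defined up to a sign).
l = np.arange(1, N + 1)
p_ana = np.sqrt(2 / N) * np.cos(np.pi / N * (l - 0.5) * 5)
idx = np.argmin(np.abs(E_num - E_ana[5]))
print(min(np.linalg.norm(P_num[:, idx] - p_ana),
          np.linalg.norm(P_num[:, idx] + p_ana)) < 1e-8)  # True
```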
## Appendix B Matrix representation of spin operators needed for effective model
In the following, we will present the matrix representation of various spin operators \(\hat{S}_{\sigma}\) with \(\sigma=z,\pm\), by using \(\hat{S}_{-}|S,m\rangle=A_{-}^{S,m}|S,m-1\rangle\), \(A_{-}^{S,m}=\sqrt{(S+m)(S-m+1)}\), \(\hat{S}_{+}|S,m\rangle=A_{+}^{S,m}|S,m+1\rangle\), \(A_{+}^{S,m}=\sqrt{(S-m)(S+m+1)}\).
The non-zero elements relevant for the relation of matrix representation with the corresponding spin operators, are
\[\langle N/2,m|\hat{S}_{-}^{2}|N/2,m+2\rangle =\sqrt{(\frac{N}{2}+m+2)(\frac{N}{2}-m-1)(\frac{N}{2}+m+1)(\frac{ N}{2}-m)} \tag{52}\] \[\langle N/2,m|\hat{S}_{+}^{2}|N/2,m-2\rangle =\sqrt{(\frac{N}{2}+m)(\frac{N}{2}-m+1)(\frac{N}{2}+m-1)(\frac{N} {2}-m+2)} \tag{53}\]
One can show that the right-hand side of Eq. (52) equals \((N-1)c_{N/2,m+1}^{-1}c_{N/2,-(m+1)}^{-1}\) and the right-hand side of Eq. (53) equals \((N-1)c_{N/2,m-1}^{-1}c_{N/2,-(m-1)}^{-1}\). In addition, \(\langle N/2,m|\hat{S}_{z}^{2}|N/2,m\rangle=m^{2}\) and \(\langle N/2,m|\hat{S}^{2}|N/2,m\rangle=\frac{N}{2}\left(\frac{N}{2}+1\right)\) while \((c_{N/2,m}^{-2}+c_{N/2,-m}^{-2})=\frac{2}{N-1}\left(m^{2}+\frac{N}{2}+\frac{N^{2}}{4}\right)\).
## Appendix C Effective model and off-set phase
The general form of the effective model including the first- and second-order perturbation terms is
\[\hat{H}_{\rm eff}=2\chi_{z}\left(\hat{S}^{2}+\hat{S}_{z}^{2}\right)-{\rm Re} \left[\chi_{x}\right]\left(\hat{S}_{+}^{2}+\hat{S}_{-}^{2}\right)-i{\rm Im} \left[\chi_{x}\right]\left(\hat{S}_{+}^{2}-\hat{S}_{-}^{2}\right)+v_{x}\hat{S} _{x}+v_{y}\hat{S}_{y} \tag{54}\]
which for \(\phi_{0}=\phi(N+1)/2\) leads to (29).
While the general form of the effective Hamiltonian (54) includes the mixed term \(\hat{S}_{+}^{2}-\hat{S}_{-}^{2}\propto\hat{S}_{x}\hat{S}_{y}+\hat{S}_{y}\hat{S} _{x}\) that complicates the effective model, it can be removed in general by a proper choice of the global phase factor in the atom-light coupling term. This is done by choosing a phase shift \(\phi_{0}\) so that \({\rm Im}\left[\chi_{x}\right]=0\). In fact, it is sufficient to fulfill \({\rm Im}[(f_{q}^{\pm})^{2}]=0\); \(\forall q\) since \({\rm Im}[\chi_{x}]\propto\sum_{q}\left({\rm Im}[(f_{q}^{\pm})^{2}]/E_{q}\right)\). By calculating explicitly
\[f_{q}^{\pm}=\sum_{j=1}^{N}p_{j}(q)\alpha_{j}^{\pm}=\frac{\sqrt{2}}{N}\sum_{j=1 }^{N}\cos\left[\frac{\pi}{N}q\left(j-\frac{1}{2}\right)\right]e^{i(\phi j-\phi _{0})}, \tag{55}\]
using the geometric series result
\[\sum_{j=1}^{N}r^{j}=\begin{cases}\frac{1-r^{N}}{r^{-1}-r}&\text{if }r\neq 1,\\ N&\text{if }r=1,\end{cases} \tag{56}\]
we obtain
\[f_{q}^{\pm}=\left\{\begin{array}{ll}\frac{e^{i\left(\frac{\phi}{2}-\phi_{0}\right)}}{\sqrt{2}}\left[\frac{e^{-i\pi\left(\frac{q}{2}-\frac{N\phi}{2\pi}\right)}}{N}g(q,-\phi)+\frac{e^{i\pi\left(\frac{q}{2}+\frac{N\phi}{2\pi}\right)}}{N}g(q,\phi)\right]&\text{if }\phi\neq\pm\frac{\pi}{N}q,\\ \frac{e^{i\left(\frac{\phi}{2}-\phi_{0}\right)}}{\sqrt{2}}&\text{if }\phi=\pm\frac{\pi}{N}q,\end{array}\right. \tag{57}\]
where \(g(q,\phi)=\frac{\sin\pi\left(\frac{q}{2}+\frac{N\phi}{2\pi}\right)}{\sin\frac{\pi}{N}\left(\frac{q}{2}+\frac{N\phi}{2\pi}\right)}\). This can also be written as
\[f_{q}^{\pm}=\left\{\begin{array}{ll}\frac{e^{i\left(\frac{N+1}{2}\phi-\phi_{0}\right)}}{\sqrt{2}}\frac{e^{i\pi q/2}}{N}\left[(-1)^{q}g(q,-\phi)+g(q,\phi)\right],&\text{if }\phi\neq\pm\frac{\pi}{N}q,\\ \frac{e^{i\left(\frac{\phi}{2}-\phi_{0}\right)}}{\sqrt{2}},&\text{if }\phi=\pm\frac{\pi}{N}q.\end{array}\right. \tag{58}\]
Then
\[\text{Im}[\left(f_{q}^{\pm}\right)^{2}]\propto\begin{cases}\sin\left((N+1) \phi-2\phi_{0}\right)&\text{if }\phi\neq\pm\frac{\pi}{N}q,\\ \sin(\phi-2\phi_{0})&\text{if }\phi=\pm\frac{\pi}{N}q,\end{cases} \tag{59}\]
for \(\text{Im}[\left(f_{q}^{\pm}\right)^{2}]=0\); \(\forall q\) it follows that
\[\phi_{0}=\begin{cases}\frac{N+1}{2}\phi+\frac{\pi}{2}n&\text{if }\phi\neq\pm\frac{\pi}{N}q,\\ \frac{\phi}{2}+\frac{\pi}{2}n&\text{if }\phi=\pm\frac{\pi}{N}q,\end{cases} \tag{60}\]
\(\forall n\in\mathbb{Z}\). Notice that we can write the second-case result in the form of the first one without any loss of generality by changing the variable \(n=q+n^{\prime}\). As such, \(\text{Im}[\chi_{x}]=0\) when
\[\phi_{0}=\frac{N+1}{2}\phi+\frac{\pi}{2}n;\quad\forall n\in\mathbb{Z}. \tag{61}\]
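This phase choice can be verified directly by evaluating Eq. (55) numerically; the short sketch below (illustrative only) confirms that \({\rm Im}[(f_{q}^{\pm})^{2}]\) vanishes for \(\phi_{0}=(N+1)\phi/2\).

```python
import numpy as np

def f_q(q, N, phi, phi0):
    """Direct evaluation of Eq. (55)."""
    j = np.arange(1, N + 1)
    p = np.cos(np.pi * q / N * (j - 0.5))   # cosine amplitudes of Eq. (50)
    return (np.sqrt(2) / N) * np.sum(p * np.exp(1j * (phi * j - phi0)))

N, phi = 8, 1.3                  # an arbitrary, non-commensurate illustrative phase
phi0 = (N + 1) * phi / 2         # Eq. (61) with n = 0
print(max(abs((f_q(q, N, phi, phi0) ** 2).imag) for q in range(1, N)))  # numerically zero
```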
## Appendix D Spin squeezing for incommensurate phase
We have showcased the best squeezing results for the initial coherent state \(\ket{\theta=0,\phi=0}=\bigotimes_{j}\ket{\uparrow}_{j}\) in subsection 4.2, Fig. 6. Here we show that other choices of the initial state can lead to different results. They are shown in Fig. 8 for the initial states \(\ket{\theta=\pi/2,\varphi=0}\) (middle panels) and \(\ket{\theta=\pi/2,\varphi=\pi/2}\) (bottom panels). The unitary evolution with the initial state being an eigenstate of \(\hat{S}_{x}\), i.e. \(\ket{\theta=\pi/2,\varphi=0}\), shows practically no squeezing except very close to the commensurate phases or when \(\gamma\) is very small, see panels (a)-(d) of Fig. 8. On the other hand, when the initial state is an eigenstate of \(\hat{S}_{y}\), i.e. \(\ket{\theta=\pi/2,\varphi=\pi/2}\), the squeezing dynamics is the same as for the initial state \(\ket{\theta=0,\phi=0}\) presented in Fig. 6. This is shown in panels (e)-(h) of Fig. 8.
## Appendix E Calculation of \(\eta\) for commensurate phases
For commensurate phase \(\phi=2\pi n/N\), it is possible to calculate \(\chi_{z}\) and \(\chi_{x}\) analytically. Consequently, one can obtain \(\eta\).
We make use of a method originally used in the study of random walks on lattices [46, 2] and also employed in the study of excitons in molecular aggregates [47].
Figure 8: The best squeezing \(\xi_{\rm best}^{2}\) (green points) and the best squeezing time \(t_{\rm best}\) (red points) are shown in panels (a)-(d) for initial state \(|\theta=\pi/2,\varphi=0\rangle\) and in panels (e)-(h) for initial state \(|\theta=\pi/2,\varphi=\pi/2\rangle\). The numerical results for the effective model (29) with \(N=100,J_{\rm SE}=1\), \(\Omega=|E_{q=1}|/10\), \(\phi_{0}=\phi(N+1)/2\) and \(\eta>0\) as indicated by the red shadowing areas and \(\eta<0\) indicated by the blue ones. The numerical values of \(\eta\) and \(\gamma\) used in simulations are shown in the top panels. The two limit cases for the values of \(\xi_{\rm best}^{2}\), namely OAT and TACT for \(N=100\), are marked with horizontal green dotted dashed lines, respectively.
For convenience, let us represent Eqs. (25) and (26) in the following way:
\[\chi_{z}=\frac{\Omega^{2}}{4J_{\rm SE}(N-1)}F_{\rm diag}^{(\phi)}, \tag{62}\]
\[\chi_{x}=\frac{\Omega^{2}}{4J_{\rm SE}(N-1)}F_{\rm off}^{(\phi)}, \tag{63}\]
where we have defined the dimensionless sums \(F_{\rm diag}^{(\phi)}\) and \(F_{\rm off}^{(\phi)}\):
\[F_{\rm diag}^{(\phi)}=\frac{1}{N}\sum_{j,l=1}^{N}E_{j,l}e^{{\rm i}\phi(j-l)}, \tag{64}\]
\[F_{\rm off}^{(\phi)}=\frac{1}{N}\sum_{j,l=1}^{N}E_{j,l}e^{{\rm i}\phi(j+l)-{ \rm i}2\phi_{0}}, \tag{65}\]
where
\[E_{j,l}=\frac{2}{N}\sum_{q=1}^{N}\frac{\cos\left[\frac{\pi q}{N}\left(j-\frac {1}{2}\right)\right]\cos\left[\frac{\pi q}{N}\left(l-\frac{1}{2}\right)\right] }{\cos(\pi q/N)-p}. \tag{66}\]
Here we added the \(q=N\) term which is zero, and introduced \(p=1+\epsilon\) to avoid divergences. The limit \(\epsilon\to 0^{+}\) will be taken at the end of calculations.
The main idea in finding this sum is to expand the denominator into a geometric series. To achieve this, one rewrites the denominator in the following way:
\[\cos(\pi q/N)-p=-\frac{b}{2}\left[1-b^{-1}e^{{\rm i}\pi q/N}\right]\left[1-b^ {-1}e^{-{\rm i}\pi q/N}\right], \tag{67}\]
where
\[b=p+\sqrt{p^{2}-1}. \tag{68}\]
By using the symmetry of the summand to expand the summation limits, one can rewrite \(E_{j,l}\) as
\[E_{j,l}=-C_{j+l-1}-C_{j-l}-\frac{1}{N}\frac{1}{1-p}, \tag{69}\]
where
\[C_{n}=\frac{1}{bN}\sum_{q=1-N}^{N}\frac{e^{{\rm i}\pi qn/N}}{\left[1-b^{-1}e^ {{\rm i}\pi q/N}\right]\left[1-b^{-1}e^{-{\rm i}\pi q/N}\right]} \tag{70}\]
with \(C_{-n}=C_{n}^{*}\). Note that the last term in Eq. (69) cancels the added \(q=0\) term in the summation.
Representing the denominator in terms of the geometric series, one has:
\[C_{n}=\frac{1}{bN}\sum_{q=1-N}^{N}\sum_{r=0}^{\infty}\sum_{s=0}^{\infty}e^{{ \rm i}\pi q(n+r-s)/N}b^{-(r+s)}. \tag{71}\]
Using
\[\frac{1}{N}\sum_{q=1-N}^{N}e^{{\rm i}\pi q(n+r-s)/N}=2\sum_{m=-\infty}^{ \infty}\delta_{n+r-s,2Nm}, \tag{72}\]
one obtains
\[C_{n}=\frac{2}{b}\sum_{m=-\infty}^{\infty}\sum_{r=0}^{\infty}\sum_{s=0}^{ \infty}b^{-(r+s)}\delta_{n+r-s,2Nm}.\]
Due to the Kronecker delta, the terms in the summation are non-zero only if \(s=r+n-2Nm\) or equivalently if \(r=s-n+2Nm\). Assuming that \(0\leq n<2N\), the integer \(s=r+n-2Nm\) is \(s\geq 0\) if \(m\leq 0\), whereas the integer \(r=s-n+2Nm\) is \(r\geq 0\) if \(m\geq 1\). Therefore it is convenient to split the summation over \(m\) into a part with \(m<1\) and that with \(m>0\), giving:
\[C_{n}= \,2b^{-1}\sum_{m=0}^{\infty}\sum_{r=0}^{\infty}b^{-(2r+2Nm+n)}+ \tag{73}\] \[\,2b^{-1}\sum_{m=1}^{\infty}\sum_{s=0}^{\infty}b^{-(2s+2Nm-n)}. \tag{74}\]
After evaluating the geometric sums, one arrives at
\[C_{n}=\frac{2}{b-b^{-1}}\frac{b^{-|n|}+b^{-2N+|n|}}{1-b^{-2N}} \tag{75}\]
where we have used the relation \(C_{-n}=C_{n}^{*}\).
Taking the limit \(\epsilon\to 0^{+}\), one obtains:
\[-3NE_{j,l}= \,1-3j+3j^{2}-3l+3l^{2}+3N-6\max\left(j,l\right)N+2N^{2}. \tag{76}\]
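As a quick sanity check (illustrative, not part of the paper), this closed form can be compared with a direct evaluation of the sum (66):

```python
import numpy as np

# Direct sum (66) evaluated at p = 1 (no regularization is needed for the
# finite sum: q = 0 is absent and the q = N numerator vanishes), compared
# with the closed form (76).
N = 10
q = np.arange(1, N + 1)[:, None, None]
j = np.arange(1, N + 1)[None, :, None]
l = np.arange(1, N + 1)[None, None, :]
E_sum = (2 / N) * np.sum(np.cos(np.pi * q / N * (j - 0.5))
                         * np.cos(np.pi * q / N * (l - 0.5))
                         / (np.cos(np.pi * q / N) - 1), axis=0)

J, L = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1), indexing="ij")
E_closed = -(1 - 3*J + 3*J**2 - 3*L + 3*L**2 + 3*N - 6*np.maximum(J, L)*N + 2*N**2) / (3*N)
print(np.max(np.abs(E_sum - E_closed)))   # close to machine precision
```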
Therefore, one can rewrite Eq. (64) in terms of a double summation over \(j>l\) and a single summation for \(j=l\):
\[F_{\text{diag}}^{(\phi)}=\frac{2}{N}\sum_{j=1}^{N}\sum_{l=1}^{j-1}E_{j,l}\,e^ {\mathrm{i}\phi(j-l)}+\frac{1}{N}\sum_{j=1}^{N}E_{j,j}. \tag{77}\]
Performing this summation, one obtains:
\[F_{\text{diag}}^{(\phi)}=-\csc^{2}\left(\frac{\pi n}{N}\right). \tag{78}\]
Remembering that \(\phi=2\pi n/N\), one can rewrite this into:
\[F_{\text{diag}}^{(\phi)}=\frac{2}{\cos\phi-1}, \tag{79}\]
thus proving the identity mentioned in the main text.
As for \(F_{\text{off}}^{(\phi)}\), the steps are analogous, first rewriting the sum (65):
\[F_{\text{off}}^{(\phi)}=\frac{2}{N}\sum_{j=1}^{N}\sum_{l=1}^{j-1}E_{j,l}\,e^{ \mathrm{i}\phi(j+l)-\mathrm{i}2\phi_{0}}+\frac{1}{N}\sum_{j=1}^{N}E_{j,j}\,e^ {\mathrm{i}2\phi j-\mathrm{i}2\phi_{0}}. \tag{80}\]
For the initial phase \(\phi_{0}=\phi\left(N+1\right)/2\), summation yields:
\[F_{\text{off}}^{(\phi)}=\frac{1}{2}\csc^{2}\left(\frac{\pi n}{N}\right), \tag{81}\]
or equivalently:
\[F_{\text{off}}^{(\phi)}=-\frac{1}{\cos\phi-1}, \tag{82}\]
as expected.
Having both \(F_{\text{diag}}^{(\phi)}\) and \(F_{\text{off}}^{(\phi)}\), one can confirm that:
\[\eta=\frac{\text{Re}\left[F_{\text{off}}^{(\phi)}\right]}{F_{\text{diag}}^{( \phi)}}=-\frac{1}{2}, \tag{83}\]
as clearly seen in Fig. 3.
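The identities (79), (82), and (83) can also be verified by brute force; a short illustrative sketch using the closed form (76) is given below.

```python
import numpy as np

# Brute-force check of Eqs. (79), (82), (83) for a commensurate phase.
N, n = 12, 3                        # illustrative values, phi = 2*pi*n/N != pi
phi = 2 * np.pi * n / N
phi0 = phi * (N + 1) / 2

J, L = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1), indexing="ij")
E = -(1 - 3*J + 3*J**2 - 3*L + 3*L**2 + 3*N - 6*np.maximum(J, L)*N + 2*N**2) / (3*N)

F_diag = np.sum(E * np.exp(1j * phi * (J - L))) / N
F_off = np.sum(E * np.exp(1j * (phi * (J + L) - 2 * phi0))) / N

print(F_diag.real, 2 / (np.cos(phi) - 1))     # Eq. (79)
print(F_off.real, -1 / (np.cos(phi) - 1))     # Eq. (82)
print((F_off / F_diag).real)                   # eta = -1/2, Eq. (83)
```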
The exceptional case of \(\phi=\pi\) must be considered separately giving for \(\phi_{0}=\phi(N+1)/2\):
\[F_{\text{diag}}^{(\pi)}=-F_{\text{off}}^{(\pi)}=1. \tag{84}\]
In general, for a commensurate phase \(\phi=2\pi n/N\) (apart from \(\phi=\pi\)) and an arbitrary \(\phi_{0}\), one has the following identities:
\[F_{\text{off}}^{(\phi)}=\frac{1}{2}\,e^{i\left(\frac{2\pi n}{N}-2\phi_{0}\right)}\csc^{2}\left(\frac{\pi n}{N}\right), \tag{85}\]
or equivalently:
\[F_{\text{off}}^{(\phi)}=-\frac{e^{i\left(\phi-2\phi_{0}\right)}}{\cos\phi-1}. \tag{86}\]
|
2307.05615
|
Laser light scattering (LLS) to observe plasma impact on the adhesion of
micrometer-sized particles to a surface
|
Laser Light Scattering (LLS) method, combined with a long-distance microscope
was utilized to detect micrometer-sized particles on a smooth substrate. LLS
was capable to detect individual particle release, shrink, or fragmentation
during exposure to a plasma or a gas jet. In-situ monitoring of hundreds of
particles was carried out to investigate the effect of hydrogen plasma exposure
on particle adhesion, morphology, and composition. LLS was calibrated with
monodisperse melamine resin spheres with known sizes of 2.14 um, 2.94 um, and
5.26 um in diameter. The lowest achievable noise level of approximately 3% was
demonstrated for counting 5.26 um spherical melamine particles. The accuracy
for melamine particle size measurements ranged from 50% for 2.14 um particles
to 10% for 5.26 um particles. This scatter was taken as the imprecision of the
method. Size distribution for polydisperse particles with known refractive
index was obtained by interpolating to an effective scattering cross-section of
a sphere using Mie theory. While the Abbe diffraction limit was about 2 um in
our system, the detection limit for Si particles in LLS according to Mie
approximation was assessed to about 3 um, given the limitations of the laser
flux, microscope resolution, camera noise, and particle composition.
Additionally, the gradual changes in forward scattering cross-sections for Si
particles during the exposure to the hydrogen plasma were consistent with Si
etching reported in the literature.
|
D. Shefer, A. Nikipelov, M. van de Kerkhof, V. Banine, J. Beckers
|
2023-07-11T01:40:20Z
|
http://arxiv.org/abs/2307.05615v1
|
Laser light scattering (LLS) to observe plasma impact on the adhesion of micrometer-sized particles to a surface
###### Abstract
The Laser Light Scattering (LLS) method, combined with a long-distance microscope, was utilized to detect micrometer-sized particles on a smooth substrate. LLS was capable of detecting individual particle release, shrinkage, or fragmentation during exposure to a plasma or a gas jet. In-situ monitoring of hundreds of particles was carried out to investigate the effect of hydrogen plasma exposure on particle adhesion, morphology, and composition. LLS was calibrated with monodisperse melamine resin spheres with known sizes of 2.14 \(\upmu\)m, 2.94 \(\upmu\)m, and 5.26 \(\upmu\)m in diameter. The lowest achievable noise level of approximately 3% was demonstrated for counting 5.26 \(\upmu\)m spherical melamine particles. The accuracy for melamine particle size measurements ranged from 50% for 2.14 \(\upmu\)m particles to 10% for 5.26 \(\upmu\)m particles. This scatter was taken as the imprecision of the method. The size distribution for polydisperse particles with known refractive index was obtained by interpolating to an effective scattering cross-section of a sphere using Mie theory. While the Abbe diffraction limit was about 2 \(\upmu\)m in our system, the detection limit for Si particles in LLS according to the Mie approximation was assessed to be about 3 \(\upmu\)m, given the limitations of the laser flux, microscope resolution, camera noise, and particle composition. Additionally, the gradual changes in forward scattering cross-sections for Si particles during the exposure to the hydrogen plasma were consistent with Si etching reported in the literature.
hydrogen plasma, particles, laser scattering, LLS, silicon
## I Introduction
Under some conditions, plasma exposure is known to cause the release of nanometer and micrometer-sized particles from surfaces.[1] Technologies sensitive to plasma-induced particle release are of special interest. For example, NASA's study of the lunar and Mars surfaces confirmed suspended dust without settling.[2; 3; 4] This effect is attributed to UV or plasma charging and may have a negative impact. For example, the mobility of micrometer-sized particles in plasma presents a challenge to solar panel longevity. In another example, a reticle (integrated circuit photo-mask), used in Extreme Ultraviolet (EUV) lithography is highly sensitive to contamination with particles of 20 and larger.[5; 6; 7] Such particles may deposit on reticles even in the extremely clean environments of an EUV scanner in the presence of EUV-induced plasma.[8] Finally, in nuclear fusion plasma vessels (e.g. in ITER), plasma-facing walls releasing particles may deteriorate the gas mix. Because of tritium gas held in wall materials, dust generation in ITER is a serious concern, both from an erosion aspect and due to possible impurity release into the plasma.[9; 10] With respect to all these applications, the study of the behavior of micrometer-sized particles attached to a surface and interacting with plasma is important. To enable further studies, the development of new in-situ diagnostic tools is highly relevant.
Traditionally used in the semiconductor industry, Laser Light Scattering (LLS) detects single particles on smooth or patterned substrates by analyzing light scattered into different angles from a relatively small illuminated spot (typically, around 10 \(\upmu\)m).[11; 12] Particles bigger than 1 \(\upmu\)m scatter most of the light in the forward direction. Hence, a reflective substrate is a convenient way to improve such particle visibility.
With respect to the system of a particle attached to a surface, the particle adheres due to the combination of electrical, van der Waals (vdW), and capillary forces, as well as due to the particle's chemical interaction with the surface. Adhesion depends on the particle's size, composition, and morphology. A change in one of these parameters also affects the forward-scattered light intensity; hence, this can be used as a diagnostic method. In our work, we apply the LLS method, combined with long-distance microscopy, to image micrometer-sized particles. It will be demonstrated that the LLS method can be adapted in order to in-situ observe micrometer-sized particles on a surface placed in plasma or in other stressed conditions such as those caused by a gas jet. The advantage of the LLS method over traditional SEM measurement used in morphological diagnostics is the non-invasive in-situ manner of measuring which directly shows the impact of plasma treatment on particles during exposure.
## II Apparatus and design
Particles were deposited on the metallic side of the substrates; substrates used in all experiments were 1 inch in diameter polished sapphire wafers with 100 nm chromium coating. The mirror-finished wafers enable LLS to be operated in the dark field mode. The chromium coating is known to be robust against hydrogen embrittlement[13] and electrically conductive. The latter is necessary for SEM imaging before or after plasma exposure. Silicon (Si) particles were chosen in this work for the demonstration of the method because of the abundance of scientific literature on silicon including its etching by hydrogen plasma.[14; 15; 16; 17] Melamine particles were selected because of their narrow standard deviation in size (when
purchased commercially from Sigma Aldrich) and matte surface. Properties of the particles used in the experiments are listed in table 1.
The chromium substrates were contaminated with micrometer-sized particles using a Branson sonifier SFX 150 (40 kHz actuated tip). The sonifier disaggregated large clusters of particles by bringing its tip in contact with the edge of contaminated wafers. The average distance between the particles significantly exceeded their size (see Fig. 3), which suppressed the effects of interference and simplified imaging, sizing of particles, and analysis of the interaction with plasma.
A schematic overview of the used setup is depicted in Figure 1. The setup comprised two vacuum chambers (a main chamber for the plasma and gas jet exposures and a load-lock chamber) separated by a VAT gate valve that remained closed during experiments. The main chamber was a 20x20x20 \(cm^{3}\) cube with one of the flanges used for connection to the plasma source and the gas supply. A second flange of this chamber had an integrated window with an anti-reflective coating for LLS imaging. A third flange of this chamber was equipped with Philips vacuum gauges (HPT 200 Pirani/Bayard-Alpert and PPT 200 AR) which were both hydrogen calibrated. The flange with the plasma head also held a stainless steel wafer holder and allowed the swapping of wafers via the load-lock. The ultimate pressure in the vacuum chamber, achieved by a turbo-molecular pump (Pfeiffer THU 200 MP) and a scroll dry pre-pump (Edwards XDS10), was \(10^{-4}\) Pa.
During the experiments with plasma exposures, hydrogen was supplied to the main chamber at 30 sccm, resulting in a steady state pressure in the range of 1-10 Pa (mostly 5 Pa) without throttling the turbo-pump. The hydrogen plasma was driven by an Electron Cyclotron Resonance (ECR) plasma source (Aura-Wave, Sairem) at 100 W of RF power providing \(T_{e}\simeq\) 5 eV, \(E_{i}\simeq\) 15 eV, and ion flux toward the wafer of about \(F\simeq\) 1 A/\(m^{2}\) according to Shirai et al [18]. Under these conditions, the induced hydrogen radical (\(H^{*}\)) flux is expected to be 10 to 100 times higher than the \(H^{+}\) flux due to a \(\sim\)10% chance of \(H^{*}\) association at the stainless steel walls of the main vacuum chamber compared to the 100% chance of \(H^{+}\) ion neutralization at the walls.[19] Moreover, recombination of \(H^{+}_{3}\) ions results in the generation of \(\sim\)2 radicals per event.[20] The selected conditions in this study featured a hundredfold more intense flux and approximately 5 times higher energy of ions compared to EUV-induced plasma[21]. Hence, the exhibited results may be considered as the exposure to EUV plasma afterglow, accelerated at around 100 times.[22; 23]
For typical experiments a sample with particles was brought through the load-lock chamber to the middle of the main chamber (using a manipulator) and mounted vertically, facing the window with an anti-reflecting coating. A pulsed laser (EverGreen EVG00200, 70-200 mJ and 10 ns long pulses at 532 nm with 100-1000x attenuation by a grey filter), illuminated the wafer with a repetition rate of 0.71 Hz (1.4s between pulses). The laser beam, guided by mirrors, was expanded to 0.5 cm in diameter by two plano-convex lenses and entered the chamber through the window at about 10\({}^{\circ}\), reflected from the metal surface of the wafer, exited the chamber at 10\({}^{\circ}\), and was finally directed to a beam dump. The light scattered by particles on the surface was collected by a long-distance microscope (Distamax K2) with a working distance of 180 mm and a fully open aperture (diameter of 5 cm) with a CMOS camera (FLIR Grasshopper3) mounted to it. Pulsed laser illumination was chosen instead of illumination by a CW laser to reduce the blurriness caused by the vacuum pump-induced vibrations transferred to the microscope.
The camera shutter was synchronized (Fig. 2) with the laser pulse by a signal delay generator (Model 577, BNC). Relatively short (140 \(\upmu\)s) camera exposures helped to reduce the impact of the light from the plasma on the image background signal. The camera was configured to save 24-bit images with a resolution of 4,096 x 2,160 pixels. The pixel size was 3.45 x 3.45 \(\upmu\)m\({}^{2}\), the quantum efficiency was 64%, and the dynamic range was 65.15 dB. The maximal camera noise was 40.3 dB. The CMOS matrix size in combination with magnification by Distamax K2 and the distance to the sample (around 18 cm) produced a field of view (FoV) of 3 x 2 mm. This microscope FoV with a fully opened diaphragm was aligned with the illumination laser spot and the contaminated center of the wafer. The following camera settings were used: gain 48, gamma 0, black level 0, balance ratio 1.14, digital zoom - off, picture enhancer - off, full automatic control - off, auto exposure - off, auto white balance - off, black & white compensation - off. The camera's gain had the greatest influence on the recognition of particles in post
Figure 1: Schematic illustration of the used setup.
Figure 2: Schematic illustration of the synchronization timing between the camera and the laser system. The red parts represent the laser pulse durations and the orange parts represent the open time of the camera’s shutter. Numbers 0, 1, 2..N indicate the laser pulses.
processing steps.
The acquired images were analyzed by a self-developed Python script. This script extracted the number of particles, their coordinates, and their total integrated intensities and sizes. The way the size distribution of the particles was found using Mie theory is discussed below. To minimize the impact of laser beam power density fluctuations, the script applied a running average of 5 over the images, which was found to be an optimal value for the trade-off between the noise level and the time resolution achieved. The averaged total integrated scattering (TIS) of an image was computed by the script by summing the intensities of all pixels.
The main chamber was also equipped with a flushing jet, which exhausted nitrogen gas pulses through a 4 mm tube placed at a 5 mm distance from the wafer and facing its center at 45\({}^{\circ}\). This flushing could be used to remove loosely bound particles from the substrate when the shear force exceeds the vdW force with which the particles are bound to the surface. The pulsed flushing was realized through a quick valve (DVI 005 M Pfeiffer) and a calibrated orifice (1.016 mm, Swagelok) that limited the flow. The pressure in the nitrogen line was measured by a Pfeiffer gauge (CPT 200 DN). The "flushing" jet could reach up to 6 nlm at the peak of the pulse. The main chamber had a bypass line to a volume extension vessel of 100 liters, separated from the main chamber by a VAT HV gate valve. During the flushing experiments, the turbo-pump was switched off and the bypass line was open. During plasma experiments, however, the bypass line remained closed. The extended vessel had its own pre-pump (Leybold SCROLLVAC 10). The combined productivity of the two pre-pumps for flushing experiments resulted in about 5 l/s at 100 Pa. The flushing pulses of 100 ms to 20 s were limited by the pre-pump productivity: long flushing pulses increased the pressure in the main chamber at the rate of 10 Pa in 10 s.
In addition, to verify the accuracy of the LLS setup calibration for measuring the sizes of silicon particles, a sample with silicon particles was also qualified using SEM (measured on a similar, but not the same, sample). The size distribution diagram obtained by SEM in a scanned area of 3x3 mm and analyzed by self-developed software was compared with the size distribution diagram obtained by LLS.
## III Setup calibration
The LLS technique enables monitoring of changes in the number of attached particles (Fig. 3), as well as changes in the size distribution during exposure to plasma and flushing. The Figure shows the stages of image processing of Si particles before and after 6h of exposure to hydrogen plasma. The image clearly shows a change in the number of particles. In order to demonstrate the stability of the optical system, a seven-hour measurement of fixed-size particles (melamine) is used to calibrate the counting of particle numbers (see section III.1). Furthermore, a calibration for obtaining particle size distributions is performed based on Mie theory with a correction for the refractive index (see sections III.2 and III.3). Finally, in section III.4 the calibration of the total substrate scattering will be demonstrated.
### Particle number evaluation
Evaluating the number of particles on the surface is challenging. For example, the resolution of the long-distance microscope is limited by the Abbe diffraction limit determined by the closest distance at which two separate sources of light can be distinguished from one another. This limit is expressed by [24]
\[d\approx\frac{\lambda}{2NA} \tag{1}\]
where \(d\) is the minimum resolvable distance between two sources of scattered light, \(\lambda\) is the wavelength of the laser light (532 nm) and NA is the numerical aperture (which in our configuration equals 0.137). Therefore, the resolution of our system is limited to approximately 1.9 \(\upmu\)m.
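For convenience, Eq. (1) with the stated numbers can be evaluated as follows (a trivial check, not part of the original text):

```python
wavelength = 532e-9        # m, the laser wavelength
NA = 0.137                 # numerical aperture of the long-distance microscope
d = wavelength / (2 * NA)  # Eq. (1)
print(f"Abbe limit: {d * 1e6:.2f} um")   # about 1.94 um
```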
The imaging of particles is limited not only by Abbe diffraction but also by the physical vibrations of the optical system, and variations of the particle shape and composition. In our experiments, the influence of camera noise, intensity fluctuations of the laser beam, and laser multimodality were also noted. Due to the limited coverage of these effects in the literature, comparisons were not made. Experimental uncertainties can be evaluated from measurements of scattering light from a stationary sample without disturbances. To enable this evaluation, a 7-hour-long imaging experiment of highly monodisperse 5.26 \(\upmu\)m melamine spheres (see Table 1 with samples) was conducted. Note that in this experiment no flushing or plasma exposure was applied. The results (Fig. 4), demonstrate high laser stability and low counting uncertainty. In this experiment, the laser illumination and camera settings were identical to the experiments with plasma and flushing. It was shown that the dispersion of the number of detected particles was about 3% (which is the lowest achievable noise level) with no long-term trends.
### Size distribution of particles in LLS
Knowing the size distribution of processed particles is important. For instance, if large particles are more susceptible to external adhesion-lowering stress factors, such as those induced by exposure to plasma or a gas jet, the size distribution could shift toward smaller sizes. In another example, if exposure to plasma were to lead to a developed surface and, thus, to a higher reflection coefficient of the incident light, the particles under the detection limit would become visible again, and the particles that were already above the detection limit would shift toward larger sizes.
The determination of the particle size distribution is even more complicated than the counting of particles. As generally known, CCD and CMOS cameras can be subjected to an effect called "blooming". [25] This blooming means that oversaturated pixels leak excess charge to their neighboring pixels. This
process propagates until it reaches the edge, visibly and virtually enlarging the particle. Imaging of the entire particle requires sufficient illumination, and most of the particles under study scatter light in the flat Top-Hat regime, which means oversaturation of the pixels' capacity. Hence, the detected particle size, taken as the number of bright pixels above the threshold, is not consistent with the true particle size. A 2 um particle occupied around 50 bright pixels (about 7 pixels in diameter) on the camera when in the FoV. The only invariant in this problem is the integral of the photo-induced electrons in the camera's matrix or, in other words, the scattering efficiency of individual particles.
Additional filtering must be applied before integrating the intensities of the pixels imaging the particles. After averaging the intensities of 5 images of 5 laser shots and applying the threshold value, the script filters tiny features (below 10 bright pixels in size). There are two reasons for this filtering. The first reason is that the high camera gain (max value, 48), used for high sensitivity, produces a few hot pixels that occur even without laser illumination and do not correspond to an actual signal. These hot pixels must be removed. The second reason relates to the presence of particles with sizes close to the detection limit. Due to the fluctuating laser intensity, these detections can appear and disappear from the detection region, significantly enhancing the noise level. Thus, by removing them, we focus on the residual population of particles that can always be identified with high confidence.
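The counting procedure outlined above (averaging, thresholding, and removal of features smaller than 10 pixels) can be sketched as follows. This is only an illustration with assumed names and synthetic data, not the authors' actual Python script; it uses a standard connected-component labelling step.

```python
import numpy as np
from scipy import ndimage

def detect_particles(frames, threshold, min_area=10):
    """Illustrative counting sketch: average 5 frames, threshold, drop
    features smaller than min_area pixels, and integrate each feature.

    frames : (5, H, W) array of camera images
    returns: number of particles and their integrated intensities
    """
    avg = frames.mean(axis=0)                     # running average of 5 shots
    mask = avg > threshold                        # remove background/camera noise
    labels, num = ndimage.label(mask)             # connected bright regions
    sizes = np.bincount(labels.ravel())           # pixels per labelled region
    keep = [i for i in range(1, num + 1) if sizes[i] >= min_area]
    intensities = ndimage.sum(avg, labels, keep)  # integrated intensity per particle
    return len(keep), np.atleast_1d(intensities)

# Synthetic example: a dark frame with two bright blobs.
rng = np.random.default_rng(0)
frames = rng.normal(5, 1, size=(5, 200, 200))
frames[:, 50:58, 50:58] += 200
frames[:, 120:127, 80:87] += 150
n, I = detect_particles(frames, threshold=50)
print(n, I)    # 2 particles and their total integrated scattering
```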
\begin{table}
\begin{tabular}{|c|l|l|l|} \hline \(N\)\({}^{\text{a}}\) & Material & Size (μm) & SD (μm) \\ \hline
1 & melamine resin & 2.15 & 0.04 \\
2 & melamine resin & 2.94 & 0.05 \\
3 & melamine resin & 5.26 & 0.08 \\
4 & silicon & 5.00 & - \\ \hline \end{tabular}
\end{table}
Table 1: Samples of particles used in the calibrations and experiments. Size means the diameter of the particle, and SD is short for standard deviation. Melamine particles were purchased from micro-Particles GmbH; silicon particles were purchased from US Research Nanomaterials, Inc.
Figure 3: The acquired images of Si particles from the camera (virgin on the top and exposed to a hydrogen plasma for 6 hours on the bottom) and applied recognition filters.
The correct approach would be to look at the scattering intensity of individual particles. As is generally known, particles of several micrometers in size obey Mie scattering theory.[26] The algorithm processing the collected images worked as follows. First, the script averaged the intensities of 5 captured frames. Second, after applying the threshold, the intensities of images of the particles with an area larger than 10 pixels were integrated. Third, the scattering cross-section of the particle was calculated by multiplying the total intensity by a constant, which is a fitting parameter of this model (see Eq. 2). Finally, an equivalent sphere with the same scattering cross-section and a given refractive index was calculated using Mie theory, from which the size of the sphere/particle was derived. Therefore, measured scattering cross-sections can be translated into actual particle sizes using the Mie model for the light scattering by an individual particle. For this, a Mie calculator[27] was used to evaluate the effective cross-sections of the particles for different particle sizes (from 0.1 to 7 \(\upmu\)m). The absorption of light by the particles was not taken into account in the calculations due to a lack of available data. The results of the calculations for particles with a variety of refractive indices \(n\) from 1.87 to 4.15 and the light collected in the NA corresponding to the microscope are plotted in Figure 5.
In the Mie model, a spherical particle is situated in vacuum and emits light in all directions. Particles whose sizes are several times larger than the wavelength of the incident radiation predominantly scatter light forward and backward. We considered a model in which particles are positioned on a reflecting substrate, thus collecting only a portion of the forward and backward scattering into the NA of the microscope (NA = 0.137 for an objective lens with a diameter of 5 cm and a distance of 18 cm from the particles). It is worth noting that near-field effects due to reflection from the substrate were not taken into account. All calculations were performed assuming an isolated particle in vacuum with scattering confined to the chosen NA of the microscope.
This graph shows that the particle's composition (i.e. the particles' refractive index) is more important for bigger sizes. Smaller particles are more sensitive to shape alterations. Our approach is to measure the scattering efficiency for the particles of known size and composition (in our case, monodisperse melamine spheres) as calibration. After this, for any material (i.e. refractive index) of interest, the cross-section of each particle can be translated into the size using the corresponding calibration curve from Figure 5.
### Effective scattering cross-section calibration
In order to use the curves from Figure 5, they have to be calibrated. The measured intensities were fitted with the Mie curve. The results of this fit can be seen in Figure 6. The arrows indicate the measured cross-sections. The blue dashed line indicates the \(I_{o}\) value and can be considered as the detection limit of this method (it is attributed to the camera noise, whose contribution is comparable to the signal of the smallest detected particles). The sizes of the particles were declared rather monodisperse by the manufacturer, with only a small standard deviation (see table 1), while the measured intensities had some uncertainty. The scattering cross-sections of the melamine particles were fitted using the formula
\[I_{ec}=(1/\alpha)\cdot A\cdot I_{m}+I_{o} \tag{2}\]
where \(I_{ec}\) is the effective scattering cross-section and \(I_{m}\) is the particle intensity measured by LLS. The constant \(A\) equals 700 and is related to the conversion of the laser intensity to the camera counts (or pixel counts). The constant \(\alpha\) is the intensity correction factor. The applied laser intensity changed from 1x to 14x and to 20x depending on the size of the particles, i.e. 2.14, 2.94, and 5.26 \(\upmu\)m particles respectively. Therefore,
Figure 4: Calculated number of particles for a 7-hour-long camera baseline recorded for 5 m melamine spheres (see table 1).
Figure 5: Plot of the results from the Mie scattering model as a function of different sizes and refractive indices (\(n\)) scattered in the selected NA (corresponding to our optical system). The selected \(n\) varied from 1.87 (melamine) to 4.15 (silicon).
for the purpose of laser intensity normalization, the intensity factor \(\alpha\) was taken equal to 1, 14, and 20 for measurements on 5.26, 2.94, and 2.15 \(\upmu\)m-particles respectively. The parameter \(I_{o}\) remained constant for all fits and was taken equal to 8.5 \(\mu m^{2}\). Physically it can be attributed to the losses of higher orders of diffraction, reflections from substrate asperities, and camera noise.
The uncertainty of the cross-sections (and the related size uncertainty specified by the supplier) can be considered as the error bars of the method. For example, the determination of the size of the 2.14 \(\upmu\)m particles has an uncertainty of about \(\pm 1\)\(\upmu\)m, which is 50% of their size. This explains why 2.14 and 2.94 \(\upmu\)m particles appear to have the same scattering cross-sections. At the same time, the determination of the size of the 5.26 \(\upmu\)m particles has an uncertainty of about \(\pm 0.5\)\(\upmu\)m, which is only 10% of their size.
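To illustrate how Eq. (2) and a Mie calibration curve combine into a size estimate, a schematic sketch is given below. The calibration-curve numbers are placeholders and the helper function is hypothetical; they do not reproduce the actual curves behind Figs. 5 and 6.

```python
import numpy as np

A, I_o = 700.0, 8.5     # fit constants of Eq. (2); I_o in um^2
alpha = 1.0             # laser-intensity correction factor

# Placeholder calibration curve: effective scattering cross-section (um^2)
# versus diameter (um); in practice this comes from a Mie calculation.
diam_um = np.linspace(0.5, 7.0, 200)
cross_section_um2 = np.interp(diam_um, [0.5, 2.14, 2.94, 5.26, 7.0],
                                        [1.0, 12.0, 13.0, 55.0, 120.0])

def diameter_from_intensity(I_m):
    """Hypothetical helper: map a measured integrated intensity I_m to an
    equivalent-sphere diameter via Eq. (2) and the calibration curve."""
    I_ec = (1 / alpha) * A * I_m + I_o          # Eq. (2)
    return np.interp(I_ec, cross_section_um2, diam_um)

print(diameter_from_intensity(0.05))   # illustrative value only
```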
### Calibration of the total substrate scattering
In addition to the measurements of the number of particles and the particle size (distribution), another possibility is to look at the total integrated scattering from the field of view of the microscope. Technically, the summed and averaged intensity of all pixels is like an analog signal and, therefore, is more reliable as it avoids any image processing other than thresholding for noise removal.
As mentioned, particles of several micrometers in size - as is the case here - obey Mie scattering theory: the scattered intensity is proportional to the particle cross-section (or to \(r^{2}\) of the particle, where \(r\) is the radius) and depends on multiple parameters such as \(n\), \(k\) and \(D/\lambda\), and the polarization of the incident and collected light.[27] For instance, melamine resins have n = 1.872, k = 0 (extinction coefficient is approximately zero for melamine-based materials in the visible range of wavelengths[28]), \(D/\lambda\) is equal to 4.0, 5.5, 9.9 (for 2.14, 2.94 and 5.26 \(\upmu\)m particles respectively). The incident light in our experiments was polarised perpendicular to the plane made up by the incoming beam, the reflecting beam, and the camera. The reflected light was not measured but expected to remain unchanged for particles significantly exceeding the wavelength of the radiation. A change in one of these parameters can be diagnosed by the TIS approach.
The resolution limit of the TIS can be derived by matching it, again, with the Mie calculations for the given size, refractive index, and NA. The amount of scattering by a single particle was obtained by dividing the TIS by the number of detected particles of fixed size (melamine samples in table 1). The sizes of the particles were taken according to the values declared by the manufacturer. The results of this calibration (Fig. 7) show a perfect match with the previously calibrated scattering cross-sections, which proves that the filtering, thresholding, and image processing used in the previous subsection do not contribute significantly to the uncertainty in size determination. The good match is explained by testing monodisperse spheres with low standard deviation; when applying the TIS signal to polydisperse particles, the match is expected to be worse. Therefore, it can be concluded that the resolution of the TIS measurements and that of the effective scattering cross-section of individual particles are the same.
## IV Results for LLS measurements of silicon particles exposed to flushing and plasma
Silicon particles were exposed to a series of external stress factors such as flushing and plasma. The sequence of flushing-1 (10 min), plasma exposure (24 h), and flushing-2 (10 min) was applied to a wafer contaminated with Si particles. The flushing power was selected based on the following considerations. The flow must be strong enough to remove a noticeable amount of particles (exceeding the noise level of about 3% as obtained in the calibration section). Physically, this implies that the flushing shear force and the average adhesion force are comparable. Flushing removes particles, while adhesion keeps them in place. If a particle remains on the substrate after flushing, it means the adhesion force is equal to or greater than the removal force. The flushing (using nitrogen gas) used in the sequence consisted of 3-second-long pulsed exhausts (6 nlm flow) at a frequency of 0.01 Hz (every 100 s). Each flushing campaign lasted 10 min. Between the two flushing campaigns, the samples were exposed to the hydrogen ECR plasma with the parameters described before. The quantification of the results used the calibrations described in the previous section.
The top graph in Figure 8 shows the derived number of particles recorded over the experiment. The types of exposures (flushing or plasma) are mapped in different colors. Baselines (no exposures, only pressure changes) are shown in red, flushing campaigns are shown in green and the plasma exposure is shown in yellow. The plot shows that a significant amount of particles was flushed after the first few pulses. Further flushing appears to be ineffective, meaning that the
Figure 6: Calculated Mie scattering cross-section for a single melamine (n = 1.872) particle depending on its size (black line). The arrows indicate the fitted measurements of the calibrated melamine particles from table 1 using formula 2. The horizontal blue dashed line indicates the \(I_{0}\) value which is equal to 8.5 \(\mu m^{2}\) and can be considered as the detection limit of this method.
remaining particles are attached with a force exceeding the applied shear force. The intermediate part of the experiment, during plasma exposure, clearly shows that the number of particles decays monotonically over the exposure, which indicates the effect of plasma exposure on the particles' adhesion. This quantifies the impact visible in the captured camera images (Fig. 3). The bottom graph in Figure 8 shows the TIS signal, which correlates with the top graph and confirms that the intensity drop follows the number of scattering centers. The more rapid decay of the TIS signal compared to that of the number of particles during the first hour of plasma exposure needs more investigation. Hypothetically, however, this effect could be explained by the presence of a native oxide shell or an adsorbed water layer around the particles, which has different \(n\) and \(k\) (i.e. lower scattering) and disappears after the first exposure to hydrogen plasma. After this phase, the scattering is proportional to the number of particles.
The gradual decrease in the number of Si particles during plasma exposure can be interpreted as follows. First, upon plasma impact, a particle may develop asperities across its surface, which reduce the effective vdW force and, in turn, promote the release of that particle.[29] An alternative could be the weakening of the interfacing (binding) atomic layers, e.g. the removal by the plasma of intermediate adsorbate layers or of water forming hydrogen bridges.[30] Another possible explanation is the etching of the particles' material. The silane molecule \(SiH_{4}\) is a formation product of sputtered Si atoms reacting with free hydrogen radicals, and it is volatile under our conditions. If the particles - due to this etching - shrink below the detection limit, they disappear from the sub-set of particles detected by the script, and the number of particles is reduced. The second flushing campaign was not necessary due to the lack of remaining measurable particles. Overall, these measurements show that the particles whose adhesion force exceeded the shear force during the first flushing campaign became loose due to plasma exposure. The results are consistent with literature data about the etching of silicon in hydrogen plasma.[14; 15; 16; 17]
The histograms in Figure 9 show the comparison of the size distributions of Si particles (black bins) after deposition (on the left), after the flushing (in the middle), and after 6 h of \(H_{2}\) plasma exposure (on the right). In addition, the size histogram obtained from SEM measurements on a similar (but not the same) sample with virgin Si particles (scanned over an area of 3 mm x 3 mm) is shown in purple on the left ("as-deposited") histogram for comparison. The particle size distribution histograms generated from in-situ laser light scattering (LLS) measurements were derived using the calibration procedure described above. The recorded intensities of Si particles were recalculated into sizes using the black curve from Figure 5 corresponding to silicon. The uncertainty of the method for these particles is the same as for melamine particles. The blue dashed line indicates the detection limit of the system, which depends on \(n\). In fact, the detection limit is determined by the size at which the constant \(I_{0}\) intersects the Mie calculation curve. For Si particles with \(n\) = 4.15, the detection limit is around 3 \(\upmu\)m.
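A minimal sketch of this intensity-to-size conversion is given below; it assumes that the Mie calibration curve has been smoothed to a monotonic lookup table (the raw Mie curve oscillates), and the variable names are illustrative rather than taken from our processing scripts.

```python
import numpy as np

def sizes_from_cross_sections(measured_sigma, calib_diameters, calib_sigma, i0=8.5):
    """Map measured effective cross-sections (um^2) to equivalent diameters (um).

    `calib_diameters` / `calib_sigma` form the (smoothed, monotonically
    increasing) Mie calibration curve for the relevant refractive index.
    Values at or below the offset `i0` are below the detection limit and
    are returned as NaN.
    """
    measured_sigma = np.asarray(measured_sigma, dtype=float)
    sizes = np.interp(measured_sigma, calib_sigma, calib_diameters)
    sizes[measured_sigma <= i0] = np.nan
    return sizes
```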
The histogram of as-deposited particles demonstrates that the mean values of the calibrated LLS measurements match the size histograms obtained using SEM well. The slight deviation in sizes is explained by the fact that the SEM measurements were carried out on a similar, but not the same, sample with Si particles (to prevent carbonization of the particles in the SEM and its influence on the LLS measurements). It can also be seen from the plot that, after the first flushing, a small fraction of the detected particles was removed with no measurable difference in size distribution. Despite the fact that flushing scales as \(d^{2}\) and adhesion should scale as \(d\), we
Figure 8: Calculated number of Si particles (top) and TIS signal (bottom) over the exposure time. Each dot is the average of 5 captured frames. Only particles above the detection limit are counted.
Figure 7: Calculated Mie scattering cross-section (black line) for a single melamine (\(n\) = 1.872) particle, depending on its size. The red dots (pointed by black arrows) indicate the averaged TIS measurements divided by the number of detected 2.14, 2.94, and 5.26 \(\upmu\)m melamine particles from table 1. The Mie scattering cross-section was fitted using the Eqn. 1 with the same constants. The blue dashed line indicates the fitted constant \(I_{0}\) which is equal to 8.5 \(\upmu\)m\({}^{2}\) and can be considered as the detection limit of this method.
did not see preferential removal of the bigger particles, which can be attributed to the importance of other factors, such as size, shape, and roughness. As was mentioned before, this result indicates that the remaining particles have an adhesion force to the surface that exceeds the shear force exerted by the flushing. As already shown in Figures 3 and 8, the number of particles decays over the duration of the hydrogen plasma exposure, while the histograms in Figure 9 show that the particle size distribution has shifted down and toward smaller sizes (together with the mean value, shown as a red dashed line). As soon as a particle shrinks to the size indicated by the blue line (i.e. the detection limit), it is no longer detected and disappears from the histogram and from the script's output.
Therefore, the reliability of the recognition software has been tested based on 3 types of measurements:
1. The stability of the number of particle detections was demonstrated in Figure 4 for non-disturbed particles (without stressors like flushing or plasma) on a substrate.
2. The reliability of the obtained size distribution is shown in Figure 9, where the LLS measurements were compared to the SEM data (black bins vs purple bins).
3. The average scattering cross-section of a melamine particle obtained from the TIS signal was compared to that of individually detected particles and demonstrated a good match in Figures 6 and 7. The TIS was treated as an analog signal for the changing scattering efficiency of the particles.
The obtained size histograms indicate that etching, with particles shrinking below the detection limit, is the dominant mechanism of Si particle interaction with \(H_{2}\) plasma. As can be seen from the middle and right histograms, the highest percentage reduction was for the largest particles, and the percentage gradually decreased toward the smallest particles. There are two reasons for that: 1) bigger particles shrink and take the place of smaller particles (hence, a relatively constant amount of small particles remained); 2) etching of Si by chemical sputtering with hydrogen radicals is only possible when accompanied by energetic electrons and ions from the plasma breaking Si--Si bonds.[31] Consequently, the etching occurs where the particles interact with ions; hence, the particles are etched more from the top than from the sides (as has also been demonstrated in AFM measurements[32]). This explains why the entire histogram does not shift toward smaller sizes as a whole.
## V Conclusions
The present study demonstrates the application of LLS, combined with long-distance microscopy, to characterize in situ the response of micrometer-sized silicon particles on a smooth substrate to hydrogen plasma exposure or to a flushing gas jet. The number of particles, the particle size distribution, and the total integrated scattering (TIS) measured by laser light scattering (LLS) were calibrated with monodisperse melamine resin spheres. The results indicate that the counting accuracy was approximately 3% for 5.26 \(\upmu\)m melamine spheres. Furthermore, the observed inconsistency in relating the count of bright pixels to the particle size was attributed to the blooming effect. Therefore, Mie theory was applied to convert the calibrated effective particle scattering cross-sections to a size equivalent. The accuracy of the LLS size measurement was found to be between 50% for 2.14 \(\upmu\)m particles and 10% for 5.26 \(\upmu\)m particles.
Surface-deposited silicon particles were employed for the LLS measurements in order to demonstrate the effectiveness of the method as an in-situ diagnostic for visualizing the effect of plasma exposure. The effect of plasma on Si particles is complex and may involve particle size and shape evolution due to chemical or physical sputtering. The in-situ measured count and size evolution show that etching of Si is the dominant mechanism during exposure to \(H_{2}\) plasma. The etching is mostly driven by hydrogen ions. This is consistent with literature data obtained from SEM measurements. Additionally, SEM measurements conducted on virgin silicon particles demonstrated a high degree of concordance with the size distribution calculated from LLS using Mie theory.
In conclusion, LLS can be useful as a tool for the in-situ measurement of the flushing, fragmentation, or etching of micrometer-sized particles under plasma exposure or a gas jet, providing a statistical description of adhesion for many (100s-1000s of) particles exposed to the same stressor.
###### Acknowledgements.
The assistance of P. Sanders, A. B. Schrader, J. T. Kohlhepp, and P. Minten in assembling the setup, as well as ASML in financial and scientific support, is gratefully acknowledged.
Figure 9: The black bins of the histograms indicate the number of Si particles measured by LLS after their deposition (on the left), after the first flushing campaign (in the middle), and after 6h of exposure to hydrogen plasma (on the right). The purple bins indicate the size distribution of Si particles measured in SEM after their deposition. The red dashed line indicates the mean value of the black bins. The blue dashed line indicates the detection limit of LLS in our system.
|
2305.04372
|
Searching for dark jets with displaced vertices using weakly supervised
machine learning
|
If "dark quarks" from a confining hidden sector are produced at the LHC, they
will shower and hadronize to dark sector hadrons, which may decay back to
Standard Model particles within the detector, possibly resulting in a
collimated spray of particles resembling a QCD jet. In this work we address
scenarios in which dark hadrons decay with a measurable small displacement,
such that the relevant background is dominated by heavy-flavor jets. Since dark
sector parameters are largely unconstrained, and the precise properties of a
dark QCD-like theory are difficult to compute or simulate reliably in any case,
model-independent, data-based searches for such scenarios are desirable. We
explore a search strategy employing weakly supervised machine learning to
search for anomalous jets with displaced vertices. The method is tested on
several toy signals, demonstrating the feasibility of such a search. Our
approach has potential to outperform simple cut-based methods in some cases and
has the advantage of being more model-independent.
|
Debjyoti Bardhan, Yevgeny Kats, Noam Wunch
|
2023-05-07T20:31:49Z
|
http://arxiv.org/abs/2305.04372v2
|
# Searching for dark jets with displaced vertices using weakly supervised machine learning
###### Abstract
If "dark quarks" from a confining hidden sector are produced at the LHC, they will shower and hadronize to dark sector hadrons, which may decay back to Standard Model particles within the detector, possibly resulting in a collimated spray of particles resembling a QCD jet. In this work we address scenarios in which dark hadrons decay with a measurable small displacement, such that the relevant background is dominated by heavy-flavor jets. Since dark sector parameters are largely unconstrained, and the precise properties of a dark QCD-like theory are difficult to compute or simulate reliably in any case, model-independent, data-based searches for such scenarios are desirable. We explore a search strategy employing weakly supervised machine learning to search for anomalous jets with displaced vertices. The method is tested on several toy signals, demonstrating the feasibility of such a search. Our approach has potential to outperform simple cut-based methods in some cases and has the advantage of being more model-independent.
* 1 Introduction
* 2 Weakly supervised machine learning
* 3 Proposed search
* 3.1 Strategy
* 3.2 Event selection
* 3.2.1 Primary selection
* 3.2.2 Displaced object selection
* 3.3 Standard Model background
* 3.4 Jet features used for classification
* 4 Benchmark datasets
* 4.1 Benchmark hidden sectors
* 4.2 Benchmark datasets
* 5 Example search
* 5.1 Weak jet classifier
* 5.2 Weakly supervised event classifier
* 5.3 Identifying and quantifying an excess
* 6 Summary and conclusions
* A Event generation
* B Neural network architecture
* C Feature distributions
* C.1 \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(5\ {\rm GeV},\ 0.1\ {\rm mm})\)
* C.2 \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(5\ {\rm GeV},\ 0.2\ {\rm mm})\)
* C.3 \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(5\ {\rm GeV},\ 0.3\ {\rm mm})\)
* C.4 \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ {\rm GeV},\ 0.1\ {\rm mm})\)
* C.5 \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ {\rm GeV},\ 0.2\ {\rm mm})\)
* C.6 \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ {\rm GeV},\ 0.3\ {\rm mm})\)
* D Fit procedure
## 1 Introduction
The diversity of particles and interactions in the Standard Model (SM), along with the multiple questions that the SM leaves unanswered, make it plausible for additional sectors of particles to exist in nature. One simple possibility is an exotic QCD-like sector [1]. The fermions and gauge bosons of this sector can be charged under a new ("dark") gauge group and neutral under the SM. They are termed dark quarks and dark gluons, in analogy with QCD. If the dark gauge group confines at low energies, the spectrum of this sector will contain composite states neutral under the new gauge group - dark hadrons. One motivation for these models is the possibility that dark hadron species that are stable on cosmological scales may account for dark matter [2; 3; 4; 5]. However, such models are interesting also if they do not play this role.
If some portal (for example, a heavy mediator) couples the SM with the hidden sector, dark quarks can potentially be produced at the LHC. If dark quarks are produced, they will undergo parton showering and hadronization in the dark sector, similar to QCD quarks. Species of dark hadrons that are stable on detector scales will escape the detector leaving a trail of missing energy. On the other hand, some species may be unstable, decaying back to the SM within the detector and forming a peculiar jet of SM particles. In this work we aim to obtain better sensitivity to such types of objects, known as _dark jets_.
The collider signature of dark jets (see ref. [6] for a review) is greatly influenced by dark sector specifics. Many typical models would contain light dark pions \(\pi^{\prime}\), and dark vector mesons \(\rho^{\prime}\) and other hadrons with masses of order the dark sector confinement scale \(\Lambda_{\rm QCD^{\prime}}\). In such scenarios, dark jets can be coarsely characterized by three parameters: the average fraction of momentum carried by stable (invisible) dark hadrons \(r_{\rm inv}\), dark pion mass \(m_{\pi^{\prime}}\), and dark pion lifetime \(\tau_{\pi^{\prime}}\).
In cases with sizable \(r_{\rm inv}\), a key signature will be missing energy \(E\!\!\!/_{T}\) aligned with a jet. This scenario of _semi-visible jets_ was analyzed in refs. [7; 8], where a search program for such models was proposed. Refs. [9; 10; 11] suggested to also make use of jet substructure variables. A search for resonant production of semi-visible jet pairs was conducted by CMS in ref. [12]. It employed the cuts motivated by ref. [8] to probe for models with intermediate \(r_{\rm inv}\) and promptly decaying visible dark hadrons. This search also used a boosted decision tree with jet substructure inputs motivated by refs. [9; 10]. ATLAS has performed a search for non-resonant production of semi-visible jets in ref. [13]. Potential use of supervised deep neural networks (NN) for the classification of prompt, semi-visible jets was studied in ref. [14], while the use of an unsupervised NN, an autoencoder, was considered in ref. [15].
Other types of dark jet scenarios, where missing energy is no longer a dominant signature, are also possible. For example, dark pions with macroscopic flight distances, \(c\tau_{\pi^{\prime}}\) of order \(1-10\) cm, will manifest as highly displaced objects within the jet. A novel reconstruction object, Emerging Jet, has been proposed for the classification of such jets in ref. [16]. A search for such objects, which are jets with few or no tracks originating from the primary vertex, was later conducted by CMS [17]. This search was sensitive to scenarios with large dark pion flight distance, where QCD background is scarce.
Overall, the large number of unknown dark sector parameters (gauge group, confinement
scale, number of dark quark flavors and their masses, additional interactions within the dark sector, type of couplings to the SM and their strength), combined with the difficulty to simulate dark sector showering and hadronization reliably, call for model-independent and simulation-independent searches for anomalous jets. Machine learning (ML), and in particular weakly supervised ML, is a natural tool for such a task.
In this work we propose to employ weakly supervised ML for a largely model-independent, data-based search that would be sensitive to anomalous jets (such as dark jets) containing mildly displaced decays, so that the background is dominated by heavy-flavor jets. We will not assume the anomalous jets to contain missing energy since that case has already been explored a lot in the literature, but will instead rely on the presence of displaced objects. We will assume the anomalous jets to be pair-produced in a decay of a heavy resonance.
The rest of the paper is organized as follows. In section 2 we review the relevant ideas of weakly supervised machine learning. In section 3 we describe the proposed search, examine the most important backgrounds and define the features that will be made available to the NN. In section 4 we define the datasets of signals and background that we use to simulate the search. We present the search simulation in section 5. We discuss the results and state our conclusions in section 6. Appendix A describes the event generation. The details of the neural network classifier are provided in appendix B. Jet feature distributions of all benchmark signals compared with the background distributions are presented in appendix C. The fit procedure used for estimating bump significance is described in appendix D.
## 2 Weakly supervised machine learning
While the most traditional ML approach, that of _fully supervised learning_, can provide very powerful classifiers, using it to search for physics beyond the Standard Model (BSM) requires specifying the exact BSM scenario that is being searched for (and being able to simulate it reliably). This makes fully supervised methods very model-specific. In recent years, methods have been developed which lessen signal model dependence for selecting the test statistic. These methods provide different amounts of model independence, with the common trade-off of model independence vs. signal sensitivity.
An example of a completely model independent test statistic is the output of an autoencoder trained on data (e.g., refs. [18; 19; 20; 21]). This _unsupervised learning_ method, while being completely model agnostic, lacks sensitivity to many signals [22; 23].
A more moderate approach, in the realm of _weakly supervised learning_, requires knowledge of class proportions. In fully supervised training the true class of each training example, e.g. signal/background, is known and provided to the NN. Knowledge of class proportions means only knowing what fraction of training examples belong to each class. Using class proportions alone, a classifier can learn to distinguish between classes, while training directly on the mixed data. In ML literature this method goes by the name Learning from Label Proportions [24; 25]. It was shown to be effective in quark/gluon discrimination, where calculation of flavor proportions is possible [26; 27].
This was extended to cases where the label proportions are unknown, with the sole requirement of two event groups that have different signal proportions - this was termed _Classification Without Labels (CWoLa)_[28; 29]. To implement it in a search, one must separate the data into signal-rich and background-rich groups, based on some property of the signal model. In the case of a signal resonant in some parameter, the signal-rich sample can be obtained by selecting events near the resonance. This method goes by the name _Extended Bump Hunt_[30] and was implemented by ATLAS in ref. [31].
Another approach, _Tag N' Train_[32], suggests using the signal dijet topology to obtain the mixed samples. This method uses _co-training_, with the dataset of dijet events being split into two _views_ of each event, one containing first-jet features and the other containing second-jet features. A classifier is trained to discriminate signal-like first-jets from background-like first-jets. A second classifier is similarly trained on second-jet features. Finally, these classifier predictions are combined, amounting to an event classifier. Each of the jet classifiers is trained using CWoLa, where signal- and background-rich labels are obtained from some criterion on the other jet in the event. In ref. [32] this criterion was taken to be a cut on the output of an autoencoder trained on the jet. It was shown in [32] that Tag N' Train and the Extended Bump Hunt can be effectively combined in searches for a dijet resonant signal. In the current work, we adapt this last approach to suggest a new search for dark jets.
## 3 Proposed search
We propose a largely model-independent, data-based search that would be sensitive to resonantly pair-produced anomalous jets containing mildly displaced decays.
### Strategy
We first select for dijet events with displaced objects, as will be described in section 3.2. Next we define signal and background regions in dijet invariant mass based on a mediator mass and resonance width hypothesis.
An event classifier is obtained according to the following procedure. As in Tag N' Train [32], each of the two leading jets in each event may be assigned a signal- or background-rich _weak label_ according to some condition on the "other jet" (among the two) in the event. In Tag N' Train, the "other jet" condition was based on an autoencoder output, which is fully signal model independent. We propose to use some model assumption, namely the fact that dark jets will often have more constituents than SM jets.1 Therefore, we choose jet constituent count, \(n_{\rm obj}\), as our weak classifier. The two jets are ordered by descending \(p_{T}\) and labeled \(j_{1}\) and \(j_{2}\). Signal-rich labels are assigned to jets within the signal region for which the other jet constituent count is greater than some chosen threshold \(n_{\rm obj}^{S}\). Background-rich labels are assigned to jets coming from the entire mass range (signal region and sidebands) for which the other jet constituent count is smaller than some chosen threshold \(n_{\rm obj}^{B}\). Using these S/B-rich labels, two classifiers are trained, one on \(j_{1}\)s and the
other on \(j_{2}\)s. The product of the two jet classifier predictions is used as a final event classifier. To avoid inference of events used for training, the data should be split into \(k\)-folds. The preceding steps should be repeated \(k\) times, each time leaving a different fold out of training. The event classifier not trained on a given fold is used to classify the fold events.
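As a concrete illustration of the weak-label assignment described above, the following sketch uses NumPy/pandas; the column names, thresholds, and signal-region boundaries are placeholders for whatever the actual analysis uses.

```python
import numpy as np
import pandas as pd

def assign_weak_labels(other_jet_nobj, in_signal_region, n_obj_s, n_obj_b):
    """Weak labels for one of the two jet classifiers.

    Returns 1 (signal-rich) for jets whose *other* jet has more than
    `n_obj_s` constituents and whose event lies in the signal region,
    0 (background-rich) for jets whose other jet has fewer than
    `n_obj_b` constituents (any mass), and NaN for jets left unlabeled.
    """
    labels = np.full(len(other_jet_nobj), np.nan)
    labels[other_jet_nobj < n_obj_b] = 0.0
    labels[(other_jet_nobj > n_obj_s) & in_signal_region] = 1.0
    return labels

# Hypothetical usage for the j1 classifier (labels come from j2 multiplicity):
# events = pd.DataFrame(...)  # columns "n_obj_j1", "n_obj_j2", "m_jj", ...
# sr = (events["m_jj"] >= 1600) & (events["m_jj"] <= 2000)
# y_weak_j1 = assign_weak_labels(events["n_obj_j2"].to_numpy(), sr.to_numpy(),
#                                n_obj_s=n_s_threshold, n_obj_b=n_b_threshold)
```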
The classifier is applied to both signal-region and sideband events, and a cut with efficiency \(\epsilon_{D}\) for the data in that entire mass range is applied on the classifier output. The optimal value of \(\epsilon_{D}\), i.e. most sensitive to signal, is model dependent and therefore several values should be used. After applying the cut, the invariant mass distribution in the sidebands is interpolated into the signal region. The expected event count in the signal region, based on the interpolation, is compared to the measured number of events in the signal region. The significance of the excess is estimated based on Poisson statistics and systematic uncertainties of the interpolation.
The search is to be conducted in the form of a bumphunt in dijet invariant mass, i.e. each mediator mass hypothesis, \(m_{Z^{\prime}}\), is considered separately. Resonance width can either be determined from simulation or scanned over (e.g. as in the BumpHunter[33]).
### Event selection
Event selection for the proposed analysis is performed in two steps: a primary selection for dijet events adhering with trigger limitations, and a more tailored selection for events with displaced objects. Event selection requirements are summarised in table 1.
#### 3.2.1 Primary selection
The main motivation for the primary selection is to adhere with trigger limitations. To ensure this, we follow the cuts of an ATLAS dijet resonance search [34]. First, the two jets are required to have \(p_{T}>150\) GeV and \(|\eta|<2\). Two more event level cuts are applied. The first is based on half the rapidity separation of the leading jets, \(y^{*}=(y_{1}-y_{2})/2\). The absolute value of this observable tends to be smaller for \(s\)-channel processes, such as our resonant signal. To increase signal purity we therefore require \(|y^{*}|<0.8\). The second requirement is a minimal azimuthal separation between leading jets, \(\Delta\phi(j_{1},j_{2})=|\phi_{1}-\phi_{2}|>1\), to prevent excessive overlap between the jets. Finally, a lower bound of 1133 GeV on dijet invariant mass \(m_{jj}\) is required to ensure compliance with the trigger [34].
#### 3.2.2 Displaced object selection
We wish to select for dijet events with displaced objects. The criterion we chose for such events is that at least 20% of the jet transverse momentum should be carried by tracks that are associated with reconstructed displaced vertices. To suppress contributions from long-lived SM hadrons, vertices with 2 tracks and vertex mass close to the \(\Lambda\) or \(K_{S}^{0}\) masses (computed with the appropriate particle identity assumptions for the products) are discarded. A summary of event requirements is given in table 1.
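A schematic version of this selection, assuming per-jet kinematics and per-jet displaced-vertex \(p_T\) sums are already available, could look as follows (the dictionary keys and the handling of the azimuthal wrap-around are our own conventions, not taken from the analysis code):

```python
import numpy as np

def passes_selection(j1, j2, m_jj):
    """Dijet selection of table 1; j1, j2 are dicts with keys "pt", "eta",
    "y", "phi", and "disp_vertex_pt_sum" (sum of pT of tracks in displaced
    vertices, after dropping K0S/Lambda-like two-track vertices)."""
    for j in (j1, j2):
        if j["pt"] <= 150.0 or abs(j["eta"]) >= 2.0:
            return False
        if j["disp_vertex_pt_sum"] / j["pt"] <= 0.2:
            return False
    y_star = 0.5 * (j1["y"] - j2["y"])
    dphi = abs(j1["phi"] - j2["phi"])
    dphi = min(dphi, 2.0 * np.pi - dphi)   # wrap the azimuthal difference into [0, pi]
    return (m_jj > 1133.0) and (abs(y_star) < 0.8) and (dphi > 1.0)
```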
### Standard Model background
Displaced vertices are primarily a signature of events containing heavy flavor (_b_ or _c_) quarks. We therefore expect the leading SM background for our analysis to be dijet events
where both leading jets are of a heavy flavor. To estimate the background magnitude and composition we generated events in four groups: one for each heavy flavor channel \(pp\to bb/cc/bc\) (where each quark may also be an antiquark) and a fourth group containing all other QCD dijet channels. Event generation, including details about detector simulation and vertexing, is described in appendix A. The total background cross section after the selection described in section 3.2 is \(\sim 0.13\) pb. The leading contribution (\(\sim 50\%\)) comes from \(bb\) events. The next dominant background (\(\sim 37\%\)) comes from light or semi-light QCD events, i.e. with less than two final state heavy quarks at the parton level. This group is dominated by events with gluons splitting to heavy quarks, a process that is significant in the hard events under consideration [35]. The remaining groups, \(bc\) and \(cc\), account for 10% and 3% of the events, respectively. The selection efficiencies of the different groups are summarized in table 2.
### Jet features used for classification
The co-training step of the search requires a choice of jet classification model and jet representation. We chose to represent each jet as a list of high-level features as input to a dense NN model. A complete description of the NN model we used is provided in
\begin{table}
\begin{tabular}{|c c c c c c|} \hline group & \(N_{\rm prim}\) & \(N_{\rm pass}\) & \(\epsilon_{\rm DO}\) & \(\sigma_{\rm prim}\) (pb) & \(\sigma\) (pb) \\ \hline \hline \(bb\) & 1066652 & 100551 & 0.094 & 0.71 & 0.067 \\ \hline \(jj\) (“other”) & 671729 & 62 & \(9.2\cdot 10^{-5}\) & 530 & 0.049 \\ \hline \(cc\) & 2059665 & 27069 & 0.013 & 0.98 & 0.013 \\ \hline \(bc\) & 577405 & 9163 & 0.016 & 0.24 & 0.0038 \\ \hline \end{tabular}
\end{table}
\end{table} Table 2: Selection efficiencies and magnitudes of different SM channels. The cross section after the primary selection, \(\sigma_{\rm prim}\), is derived from the generation level cross section of group events obtained from MadGraph at LO times the efficiency of the primary selection. Displaced object efficiency is presented with respect to events after primary selection, namely \(\epsilon_{\rm DO}=\frac{N_{\rm pass}}{N_{\rm prim}}\) (where \(N_{\rm prim}\) and \(N_{\rm pass}\) refer to the numbers of our MC events). The final available cross section is determined according to \(\sigma=\epsilon_{\rm DO}\sigma_{\rm prim}\).
\begin{table}
\begin{tabular}{|c c|} \hline \(p_{T}^{\rm jet}\) & \(>150\) GeV \\ \(|\eta|^{\rm jet}\) & \(<2\) \\ \(m_{jj}\) & \(>1133\) GeV \\ \(|y^{*}|\) & \(<0.8\) \\ \(\Delta\phi(jj)\) & \(>1\) \\ \(\sum\limits_{\rm disp.vert.}p_{T}^{\rm vertex}/p_{T}^{\rm jet}\) & \(>0.2\) \\ \hline \end{tabular}
\end{table}
\end{table} Table 1: Event selection summary. Both leading jets (highest \(p_{T}\)) must satisfy \(p_{T}\), \(\eta\), and displaced vertex requirements.
appendix B. More complex representations and architectures, such as an LSTM on lists of vertex features, were also considered. In our testing, these were outperformed by the simple dense architecture and therefore abandoned. This could change as the amount of analysis data grows since more data often favors more complex networks.
Jet features include vertex features chosen to represent the properties of displaced objects within a jet and general jet features that encode complementary jet information. We consider the following vertex features: vertex mass, vertex transverse displacement \(D_{0}\) divided by the boost factor \(\gamma\beta_{T}\), fraction of jet's transverse momentum carried by the vertex tracks, and vertex track count. For the features above, in the case of more than one reconstructed vertex, the median value across reconstructed vertices is used. The boost factor, \(\gamma\beta_{T}\), is computed according to
\[\gamma\beta_{T}=\frac{p_{T}^{\rm vertex}}{m_{\rm vertex}} \tag{1}\]
where \(p_{T}^{\rm vertex}\) is the magnitude of the vector sum of \({\bf p}_{T}\)s of tracks associated to the vertex. Vertex mass is calculated according to
\[m_{\rm vertex}^{2}=\left(\sum_{\rm tracks}\sqrt{{\bf p}_{\rm track}^{2}+m_{ \pi^{\pm}}^{2}}\right)^{2}-\left(\sum_{\rm tracks}{\bf p}_{\rm track}\right)^ {2} \tag{2}\]
i.e. the tracks are assigned the charged pion mass for the estimation of their energy, and the sum is over all tracks associated with the vertex. We also supply the total number of reconstructed vertices in the jet (excluding the primary vertex) and the number of particle-flow objects in the jet, \(n_{\rm obj}\).
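A sketch of how these per-jet inputs could be assembled from reconstructed secondary vertices is given below, following eqs. (1) and (2); the data layout (each vertex as a dict of track momenta and transverse displacement) is our own illustrative choice, not the actual analysis code.

```python
import numpy as np

M_PION = 0.13957  # GeV, charged-pion mass assigned to every track

def vertex_features(track_momenta, d0):
    """Features of one displaced vertex from its (n_tracks, 3) track momenta [GeV]."""
    p = np.asarray(track_momenta, dtype=float)
    energies = np.sqrt((p ** 2).sum(axis=1) + M_PION ** 2)
    p_sum = p.sum(axis=0)
    m_vertex = np.sqrt(max(energies.sum() ** 2 - (p_sum ** 2).sum(), 0.0))   # eq. (2)
    pt_vertex = np.hypot(p_sum[0], p_sum[1])
    gamma_beta_t = pt_vertex / m_vertex if m_vertex > 0 else np.inf          # eq. (1)
    return {"m_vertex": m_vertex, "pt_vertex": pt_vertex,
            "d0_over_boost": d0 / gamma_beta_t,
            "n_tracks": len(p)}

def jet_features(vertices, jet_pt, n_obj):
    """Median vertex features plus the vertex count and constituent count."""
    feats = [vertex_features(v["tracks"], v["d0"]) for v in vertices]
    med = lambda k: float(np.median([f[k] for f in feats])) if feats else 0.0
    return {"m_vertex": med("m_vertex"),
            "d0_over_boost": med("d0_over_boost"),
            "pt_frac": med("pt_vertex") / jet_pt,
            "n_tracks": med("n_tracks"),
            "n_vertices": len(feats),
            "n_obj": n_obj}
```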
In our toy dark sector models that will be described in the next section, the discrimination power of each of these features varies with dark sector parameters. The dark pion mass \(m_{\pi^{\prime}}\) directly affects \(m_{\rm vertex}\). Increasing dark pion mass also decreases the number of vertices per jet and increases the number of tracks per vertex. The dark pion lifetime \(\tau_{\pi^{\prime}}\) directly affects \((D_{0}/\gamma\beta_{T})^{\rm vertex}\) and also indirectly affects the number of vertices. For larger dark pion lifetime, more displaced vertices are distinguished from the primary vertex and therefore the number of displaced vertices increases.
## 4 Benchmark datasets
While the search is intended to be largely model-independent, it is useful to test the strategy on some examples. Since the detailed physics of confining hidden sectors is not known well and is very model-dependent, and the simulation tools are limited too, we consider a set of simplistic toy models, defined as follows.
### Benchmark hidden sectors
We base our toy models on the scenario that is obtained in the Pythia8 Hidden Valley module for an \(SU(3)\) gauge group with a single quark flavour. We consider fully visible jets, i.e. \(r_{\rm inv}=0\), which manifests as no excessive MET. We consider six combinations of the remaining two parameters, with values \(m_{\pi^{\prime}}\in\{5\;{\rm GeV},10\;{\rm GeV}\}\) and \(c\tau_{\pi^{\prime}}\in\)
\(\{0.1\;{\rm mm},0.2\;{\rm mm},0.3\;{\rm mm}\}\). Other mass parameters of the dark sector -- confinement scale, constituent dark quark mass, and vector meson mass -- were scaled with \(m_{\pi^{\prime}}\), starting at \(\Lambda_{\rm QCD^{\prime}}=5\;{\rm GeV},\;m_{q^{\prime}}=5\;{\rm GeV}\) and \(m_{\rho^{\prime}}=10.5\;{\rm GeV}\) for \(m_{\pi^{\prime}}=5\;{\rm GeV}\). The probability for creating dark vector mesons is kept at its default value of 0.75. We assume the dark vector mesons to decay promptly and exclusively to dark pion pairs: \(\rho^{\prime}\to\pi^{\prime}\pi^{\prime}\). Our simulated dark pions decay exclusively to SM down quark-antiquark pairs: \(\pi^{\prime}\to d\bar{d}\). Decays to heavier flavor quarks are in principle possible for \(m_{\pi^{\prime}}\) values considered and perhaps even motivated by helicity suppression. However, we found such scenarios less interesting as they produce many additional displaced vertices from the decays of the heavy flavor quarks, amounting to a signal too distinct from QCD background. The benchmark Hidden Valley parameters are summarized in table 3. We test the sensitivity of our proposed search to resonant dijet events produced via \(pp\to Z^{\prime}\to q^{\prime}\bar{q}^{\prime}\) with \(m_{Z^{\prime}}=2\;{\rm TeV}\).
### Benchmark datasets
Our background dataset contains \(106k\) events that passed the full event selection in the dijet invariant mass range \(1400\;{\rm GeV}\leq m_{jj}\leq 2400\;{\rm GeV}\). The majority, \(89k\), are \(bb\) events and the rest, \(17k\), are \(cc\) events. As can be seen in table 2, these two channels combined account for the majority of QCD events that pass selection. We would ideally simulate the entire QCD dijet sample (rather than only \(bb\) and \(cc\)), however this is too computationally expensive for us due to low selection efficiencies of the displaced objects cut. If we rescale the cross section so that the total background cross section is correct, then based on the analysis of section 3.3 this example corresponds to an integrated luminosity of \(\sim 800\;{\rm fb}^{-1}\) available for the analysis.
We will analyze in detail the example of a narrow-width \(Z^{\prime}\) with mass \(m_{Z^{\prime}}=2\;{\rm TeV}\). Motivated by the shape of the resulting dijet invariant mass distribution (see, e.g., figure 5a), whose width is not very model dependent since it is dominated by the experimental resolution, we define the signal region to be the invariant mass range \(m_{jj}\in[1600,2000]\;{\rm GeV}\). We define the sidebands as \(m_{jj}\in[1400,1600)\cup(2000,2400]\;{\rm GeV}\). These boundaries are chosen such that the sidebands and the signal region contain comparable numbers of background events. Approximately 20% of the background events are in the
\begin{table}
\begin{tabular}{|c c|} \hline gauge group & SU(3) \\ \(\Lambda_{\rm QCD^{\prime}}\) & 5 / 10 GeV \\ \(n_{q^{\prime}}\) & 1 \\ \(m_{q^{\prime}}\) & 5 / 10 GeV \\ \(m_{\pi^{\prime}}\) & 5 / 10 GeV \\ \(m_{\rho^{\prime}}\) & 10.5 / 21 GeV \\ \(c\tau_{\pi^{\prime}}\) & 0.1 / 0.2 / 0.3 mm \\ \(r_{\rm inv}\) & 0 \\ \hline \end{tabular}
\end{table}
Table 3: Hidden Valley parameters used for the six benchmark signal configurations.
signal region. Signals, one of the six hidden sector configurations described in section 4.1, are injected into this background. The signal size, which we will vary, will be presented in terms of the signal fraction \(f_{S}=N_{S}/(N_{B}+N_{S})\), where \(N_{B}\) and \(N_{S}\) are the background and signal event counts in the entire mass range (signal region and sidebands) after event selection.
The feature distributions for the different benchmark signals (and the background) are provided in appendix C. The most discriminating features for this set of benchmarks are object multiplicity, vertex count, and transverse momentum fraction. One can also see that as dark pion displacement increases, from 0.1 mm to 0.3 mm, the number of signal vertices increases because more dark pions decay outside the primary vertex resolution. Vertex mass is a stronger discriminator for the 10 GeV dark pion mass in comparison to the 5 GeV case.
## 5 Example search
In this section we present results of an example search conducted on a simulated benchmark dataset. We provide a detailed account for the case of a dark sector with \(m_{\pi^{\prime}}=10\) GeV and \(c\tau_{\pi^{\prime}}=0.2\) mm with a signal fraction \(f_{S}=0.5\%\), where the number of signal events is \(N_{S}=N_{B}\cdot\frac{f_{S}}{1-f_{S}}=530\) events. We provide aggregated results for different signal fractions of all other benchmark signals.
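The bookkeeping between signal fraction and injected event count is a one-liner; the sketch below just restates \(N_{S}=N_{B}\,f_{S}/(1-f_{S})\), with the background count rounded for illustration.

```python
def n_signal_to_inject(n_background, f_s):
    """Number of signal events giving a signal fraction f_S = N_S / (N_B + N_S)."""
    return int(round(n_background * f_s / (1.0 - f_s)))

# With roughly 106k background events after selection, f_S = 0.5% corresponds to
# about 530 injected events, consistent with the number quoted above.
print(n_signal_to_inject(106_000, 0.005))
```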
### Weak jet classifier
We used the number of particle-flow objects, \(n_{\rm obj}\), of each jet to assign a signal-rich or background-rich (weak) label to the other jet, as described in section 3.1. The background-like threshold, \(n_{\rm obj}^{B}\), was taken to be the lower 25% quantile of the number of particle-flow objects. The signal-like threshold, \(n_{\rm obj}^{S}\), was taken such that the upper 75% of jets in the number of particle-flow objects pass it (amounting to \(n_{\rm obj}^{B}=n_{\rm obj}^{S}\)). The softer cut on signal event multiplicity is complemented by the invariant mass region selection so that after both cuts the signal- and background-rich labels are approximately balanced. The thresholds were chosen after trying a number of alternatives and finding that the results are not very sensitive to this choice. Using tighter signal- and background-rich thresholds amounts to a higher effective signal fraction for training. This comes at the cost of less data available for training. A quantification of this tradeoff is left for future work.
Constituent count thresholds corresponding to the chosen quantiles are \(n_{\rm obj}^{\rm thresh}=24\) for cuts on \(j_{1}\) multiplicity and \(n_{\rm obj}^{\rm thresh}=25\) for cuts on \(j_{2}\) multiplicity. The difference stems from a slightly higher object multiplicity for second jets. These values were unaffected by the small signal fractions considered and are therefore the same for all signals and all signal fractions. From cutting on \(j_{1}\) and \(j_{2}\) multiplicities, and requiring that signal-rich jets come from signal-region events, we obtain two background-rich and two signal-rich samples.
In the 0.5% signal fraction case of the \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ {\rm GeV},\ 0.2\ {\rm mm})\) signal, these cuts leave 26292 (23906) background-rich jets and 27824 (28432) signal-rich jets from cuts on \(j_{2}\) (\(j_{1}\)) multiplicities. From an initial 0.5% signal fraction in the entire dataset, the enriched signal fractions are 1.59% (1.56%) in the signal-rich samples and 0% (0%) in the background-rich samples from cuts on \(j_{2}\) (\(j_{1}\)) multiplicities.
### Weakly supervised event classifier
After applying the weak cuts to the 0.5% signal fraction case of the signal with \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ \mathrm{GeV},\ 0.2\ \mathrm{mm})\), 51% (49%) of jets were assigned weak labels according to \(n_{\mathrm{obj}}^{j_{1}}\) (\(n_{\mathrm{obj}}^{j_{2}}\)). Of these events, 10% are put aside for validation to avoid over-fitting. A classifier, described in appendix B, is trained to distinguish between the remaining 46% (44%) of events using \(j_{1}\) (\(j_{2}\)) features and the weak labels. After weak-label assignment and setting aside the validation set, 48704 jets and 47104 jets are available for training the \(j_{1}\) and \(j_{2}\) classifiers, respectively. The classifiers were trained for 100 epochs. Learning curves are presented in figure 9. To evaluate the classifiers' performance, a new dataset of \(35k\) signal and \(35k\) background events was generated. This is meant to represent the classifier performance on the original event sample trained with \(k\)-folding, which we do not simulate explicitly. NN outputs and ROCs for the 0.5% signal fraction case of the \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ \mathrm{GeV},\ 0.2\ \mathrm{mm})\) signal are shown in figures 1 and 2. ROCs comparing the discrimination of classifiers trained on different signal fractions are shown in figure 3. As expected, the classifier performance deteriorates as \(f_{S}\) is decreased. Still, even at \(f_{S}=0.1\%\), the classifier is very powerful. However, going to much lower signal fractions is not relevant because they will not be detectable eventually in the bump hunt procedure that is discussed in the next subsection. ROCs comparing the outcomes for the different benchmark signals with 0.5% signal fraction are shown in figure 4.
### Identifying and quantifying an excess
Our null hypothesis, which we will confirm in the following, is that the dijet invariant mass distribution of the background after the NN cut is still well described by a smoothly decreasing function. We construct the following test statistic to probe for deviations from this hypothesis due to a possible signal. We bin the events (with a bin size of 50 GeV in our example) and fit the sidebands to the following three-parameter function
\[\frac{dN}{dm_{jj}}=p_{0}\frac{(1-m_{jj}/\sqrt{s})^{p_{1}}}{(m_{jj}/\sqrt{s})^{ p_{2}}}, \tag{1}\]
Figure 1: NN output distributions for \(j_{1}\), \(j_{2}\), and combined classifiers, for the scenario with \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ \mathrm{GeV},\ 0.2\ \mathrm{mm})\), \(f_{S}=0.5\%\).
also used in ATLAS [36] and CMS [37]. The fit parameters \(p_{i}\) were constrained to positive values. We estimate the number of expected events in the signal region using the fit and compare it to the measured number of events in the signal region. Our test statistic is the excess
\[t=\frac{N_{\rm meas}^{\rm sig.reg.}-N_{\rm exp}^{\rm sig.reg.}}{\sqrt{\sigma_{ \rm meas}^{2}+\sigma_{\rm exp}^{2}}}\;. \tag{23}\]
The uncertainty in the measured counts is estimated by Poisson statistics as \(\sigma_{\rm meas}^{2}=N_{\rm exp}^{\rm sig.reg.}\). The uncertainty in the expected counts is obtained by linearly propagating parameter fit uncertainties. Further details of this procedure are provided in appendix D. We obtain test statistic values for different cut efficiencies of the NN. To avoid training many classifiers, as would be done in the \(k\)-fold procedure described in section 3.1, we use the entire \(106k\) event dataset for semi-supervised training and continue with inference on a new (same size) dataset. This is similar to the \(k\)-fold procedure for a large enough \(k\).
Let us now exemplify the search with one realization of a background and signal
Figure 2: Solid curves are ROCs for NN jet classifiers trained using co-training, and the event classifier obtained from their product, for the scenario with \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ {\rm GeV},\ 0.2\ {\rm mm})\), \(f_{S}=0.5\%\). Dashed curves are ROCs for the weak jet classifiers (constituent count of each jet) and the event classifier obtained from their sum. Shaded areas signify \(1\sigma\) statistical uncertainty, where for a cut leaving \(N_{B}\) background events out of a total \(N_{B}^{0}\) background events, we used \(\sigma=N_{B}^{0}/N_{B}^{2}\cdot\sqrt{N_{B}}\). The curves were terminated at \(N_{B}<10\ (1/\epsilon_{B}=3500)\).
sample. Invariant mass distributions subject to NN cuts of varying efficiency are presented in figure 5 for the \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10~{}{\rm GeV},~{}0.2~{}{\rm mm})\) signal with \(f_{S}=0.5\%\). The invariant mass spectrum of the entire dataset after the event selection described in section 3.2 is shown in figure 4(a). The test statistic significance prior to any further cut is \(-0.72\sigma\). The invariant mass spectra after applying the NN cuts with \(\epsilon_{D}=2\%\), \(1\%\), \(0.6\%\) are shown in figures 4(b), 4(c), 4(d), respectively. Apart from the significances obtained with the sidebands fit, the information tables at the bottom of these plots show also the significance estimates that would be obtained from the naive calculation of \(n_{S}/\sqrt{n_{B}}\) in the signal region. As expected, since it does not account for the statistical fluctuations in the sidebands and for the potential contributions in the sidebands due to the signal tails, the naive estimate of the significance is unrealistic, and it is crucial to simulate the sidebands fit like we did.
\(\epsilon_{D}\) considered, which reassures us that no large, spurious, bump is carved in the analysis. However, a similar significance trend was observed in a second background realization we tested, suggesting that some \(\sim+1\sigma\) bias exists in our significance estimation. A more detailed study, which would involve generating a large number of background realizations, will be needed to quantify the size of this apparent bias more precisely. Additionally, one could explore whether the bias could be reduced by using a different fitting function. More sophisticated methods to reduce sculpting (e.g., along the lines of ref. [38]) could also be explored.
While the weakly supervised machine learning method outlined here is working well, it is interesting to ask how it performs relative to simpler methods. An obvious comparison in the case of our benchmark models is to cutting on the jet constituent multiplicity variable. Instead of the loose cut on the multiplicity that was used in the ML approach for producing the weak labels, we apply tight cuts on the sum of the multiplicities of the two jets and use the same sidebands fit procedure. Table 4 summarises the bump significance obtained when cutting on the output of the weakly supervised event classifier and when cutting on the sum of object multiplicities of both jets, for all benchmark signals. The \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(5\ {\rm GeV},\ 0.3\ {\rm mm})\) signal was discovered at \(f_{S}=0.25\%\) and all the rest were discovered at \(f_{S}=0.1\%\). For these signal fractions it was usually the case that cutting on multiplicity slightly outperformed the NN. (Note, however, that significance values like \(5\sigma\) and higher
Figure 4: ROCs comparing classifier discrimination for different benchmark signals. All classifiers were trained with \(f_{S}=0.5\%\).
are somewhat uncertain because they assume the fluctuations due to the fit uncertainties to remain Gaussian far on the tails. Also, the bias discussed in the previous paragraph needs to be quantified for both methods. Therefore, small differences should not be taken too seriously.) The exception is the \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(5\text{ GeV},\ 0.3\text{ mm})\) signal that was discovered with higher significance by the NN at \(f_{S}=0.25\%\).
Figure 5: Invariant mass spectrum of events passing NN cut with varying selection efficiency \(\epsilon_{D}\). The sidebands are shaded. The significance in the “true” rows corresponds to true \(n_{S}/\sqrt{n_{B}}\) within the signal region. The significance in the “sideband fit” rows is obtained from the fit parameters according to eq. (10).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(f_{S}\) & \(m_{\pi^{\prime}}\) & \(c\tau_{\pi^{\prime}}\) & \(\max\sigma_{\rm NN}\) & \(\max\sigma_{n_{\rm obj}}\) \\ \hline
0.1\% & 5 GeV & 0.1 mm & 5.4 \(\sigma\) & 7.2 \(\sigma\) \\
0.1\% & 5 GeV & 0.2 mm & 5.2 \(\sigma\) & 5.9 \(\sigma\) \\
0.1\% & 5 GeV & 0.3 mm & 4.3 \(\sigma\) & 4.8 \(\sigma\) \\
0.25\% & 5 GeV & 0.3 mm & 9.6 \(\sigma\) & 8.6 \(\sigma\) \\
0.1\% & 10 GeV & 0.1 mm & 5.4 \(\sigma\) & 6.8 \(\sigma\) \\
0.1\% & 10 GeV & 0.2 mm & 4.9 \(\sigma\) & 7.2 \(\sigma\) \\
0.1\% & 10 GeV & 0.3 mm & 4 \(\sigma\) & 5.6 \(\sigma\) \\ \hline \end{tabular}
\end{table}
Table 4: Comparison between object multiplicity cut and weakly supervised NN cut. The columns \(\max\sigma_{\rm NN}\) and \(\max\sigma_{n_{\rm obj}}\) correspond with the maximum bump significance across selection efficiencies \(\epsilon_{D}\) for cuts on NN output and cuts on the sum of object multiplicities of both jets, respectively. For this we consider 10 values of \(\epsilon_{D}\) spaced log-uniformly in the range \([0.001,0.1]\).
Figure 6: Bump significance as a function of selection efficiency at four signal fractions for \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10~{}{\rm GeV},~{}0.2~{}{\rm mm})\) signal.
## 6 Summary and conclusions
A hidden ("dark") confining sector may reveal itself at the LHC in the form of anomalous jets, dubbed dark jets, whose properties are very model dependent. In this work we considered dark sectors with dark hadron lifetimes similar to those of heavy-flavor QCD hadrons. A main feature of jets arising from such a sector is displaced vertices from the decays of dark hadrons. We propose using the features of reconstructed vertices to further capture the properties of the displaced objects. The dark sector scenarios we consider are complementary to the ones considered in most of the papers on the subject, which assume the presence of missing energy or very large vertex displacements or don't take advantage of displaced vertices.
The wealth of data collected at the LHC offers an opportunity to harness machine learning to discriminate BSM from SM signatures. A traditional approach to doing so is using MC simulations of signal (or a mix of signals) events and of SM events to train a NN. This paradigm has drawbacks. There are large uncertainties in simulating events, introduced by modeling uncertainties of nonperturbative QCD processes (and in our case also those of the dark confining sector) as well as detector modeling. Another drawback is a lack of generality which translates to reduced sensitivity (if any) to signals not used for training. This is a problem when sensitivity to a wide range of signals is required. Dark sector details are largely unconstrained, allowing for a wide range of dark jet signatures.
Figure 7: Bump significance as a function of selection efficiency for the six benchmark signals at \(f_{S}=0.5\%\).
In this work we propose using Tag N' Train, a weakly supervised method for obtaining a dijet event classifier, in searches for dark jets with displaced vertices. The procedure starts with a weak jet classifier. We propose using a cut on jet constituent multiplicity for this stage. This choice makes use of the fact that many dark sector models produce high-multiplicity jets. Using the weak labels obtained from the weak classifier, two classifiers are trained, one for each of the two leading jets in the event. We use a dense NN supplied with displaced-vertex features, including: the number of displaced vertices, the vertex transverse displacement, the vertex mass, the number of associated tracks, and the fraction of the jet transverse momentum carried by the vertex. Jet constituent multiplicity was also supplied.
We tested this procedure on simulated events with a set of toy dark sector scenarios. We showed that the vertex features can be good discriminators between heavy flavor quark jets and dark jets. We demonstrated a concrete example of a search for resonant dark jet pairs with displaced vertices. The search is conducted in the form of a bumphunt where different mass hypotheses are tested separately. We presented a detailed analysis of the example of a 2 TeV resonance. The resonance mass hypothesis was incorporated in the weak classifier - only jets coming from events within the signal region in invariant mass were candidates to be assigned the signal-rich label. After training the NNs and applying them to simulated data, the significance of the bump was estimated for different NN selection efficiencies. The semi-supervised classifier succeeded in learning from auxiliary features specific to the signal that was present in the data for signal fractions as small as 0.1%.
However, at least for the range of examples we examined, the sensitivity of our machine learning method turned out to be comparable (with the details of the comparison depending on the model) to what can be achieved by using the object multiplicity variable, which by itself is a search that has never been done and is worth pursuing. One cause of the NN not offering a big advantage is the low signal fractions. The discrimination power of CWoLa often deteriorates with decreasing signal fraction while a cut on multiplicity is unaffected. The effective signal fraction can always be increased by tightening the thresholds of the weak classifier. However, this comes at the cost of less events available for training the NN. Therefore, this method might improve as more data is collected and available for analysis.
It could be interesting to extend this method to dark sectors with promptly decaying dark hadrons, where a very different set of features and different backgrounds will be relevant. Another interesting direction would be to consider non-resonant dijet production, where the Tag N' Train method naturally remains applicable.
###### Acknowledgments.
We have greatly benefited from numerous conversations with Hugues Beauchesne, whose insights have contributed significantly to this work. This research was supported in part by the Israel Science Foundation (grants no. 780/17 and 1666/22) and the United States - Israel Binational Science Foundation (grant no. 2018257). The work of DB was also supported by the Kreitman Postdoctoral Fellowship and National Postdoctoral Fellowship (NPDF), SERB, PDF/2021/002206, Government of India.
## Appendix A Event generation
Parton level events at collider energy of 13 TeV were generated using MadGraph5[39] with the NN23LO1[40] PDF set. A massive \(Z^{\prime}\) mediator with couplings to SM quarks and dark quarks was implemented using the UFO files from [8]. Some parton-level cuts, softer than the eventual selection cuts of section 3.2, were applied in MadGraph to save computation time for the background: jet \(p_{T}>100\) GeV, \(m_{jj}>1\) TeV, jet \(|\eta|<3\), and \(\Delta R(j_{1},j_{2})>1\). Showering and hadronization were simulated using Pythia8[41]. Dark-sector showering was done using Pythia8's Hidden Valley module [42]. Detector simulation was conducted with Delphes 3[43] using ATLAS detector card with added track smearing according to [44]. Jets were reconstructed from calorimeter deposits using the anti-\(k_{T}\) algorithm [45] with jet radius \(R=0.7\). Particle-Flow2 constituents were then assigned to jets based on their angular distance (\(\Delta R\)) from the axes of the reconstructed jets. Vertices were reconstructed with Adaptive Vertex Fitting algorithm (AVR) [46] using default parameters (\(\sigma_{\rm cut,p}=2\), \(\sigma_{\rm cut,s}=6\), and \(w_{\rm min}=0.5\)), implemented in the RAVE toolkit [47]. All event tracks were used for primary vertex reconstruction while only tracks belonging to a given jet were used to find secondary vertices.
Footnote 2: Particle Flow is an algorithm to reconstruct track and calorimeter tower measurements into a list of electrons, muons, charged hadrons, neutral hadrons and photons.
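To make the reconstruction step above concrete, the following sketch outlines anti-\(k_{T}\) clustering with \(R=0.7\) and a \(\Delta R\)-based assignment of constituents to the jet axes. It assumes the classic Python bindings of FastJet; the function names, input format, and \(p_{T}\) threshold are illustrative placeholders and not the actual analysis code.

```python
import fastjet  # assumes the classic SWIG-based FastJet Python bindings

R = 0.7
jet_def = fastjet.JetDefinition(fastjet.antikt_algorithm, R)

def cluster_jets(towers, pt_min=100.0):
    """towers: list of (px, py, pz, E) tuples from the calorimeter."""
    pseudojets = [fastjet.PseudoJet(*t) for t in towers]
    cs = fastjet.ClusterSequence(pseudojets, jet_def)
    # keep cs alive as long as the jets are used
    return fastjet.sorted_by_pt(cs.inclusive_jets(pt_min)), cs

def assign_constituents(jets, candidates):
    """Attach each particle-flow candidate to the closest jet axis in Delta R
    (the Delta R < R requirement is an illustrative assumption)."""
    assignment = {i: [] for i in range(len(jets))}
    for cand in candidates:
        pj = fastjet.PseudoJet(*cand)
        dr, best = min((jet.delta_R(pj), i) for i, jet in enumerate(jets))
        if dr < R:
            assignment[best].append(cand)
    return assignment
```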
## Appendix B Neural network architecture
We use a dense neural network architecture built and trained using Keras [48] with a TensorFlow [49] backend. The network has 4 hidden layers with 32, 16, 16, and 4 nodes, respectively. These parameters were coarsely optimized to avoid overfitting or underfitting. The first hidden layer uses a Leaky Rectified Linear Unit (LeakyReLU) activation; the remaining three layers use Exponential Linear Units (ELU). A sigmoid function is applied to the output. Each hidden layer except the last is followed by a dropout layer with a rate of 0.1. A network summary is provided in figure 8. A binary cross-entropy loss function and the Adam optimizer were used for training. Each input feature was globally shifted and scaled according to the sample mean and standard deviation of the training data; these scale and shift values are saved, and new data passed to the classifier is scaled and shifted by the same values. Examples of learning curves are shown in figure 9.
Figure 8: NN summary.
Figure 9: Learning curve of \(j_{1}\) (left) and \(j_{2}\) (right) classifiers training on \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ \mathrm{GeV},\ 0.2\ \mathrm{mm})\) signal with \(f_{S}=0.5\%\). Since the validation set is evaluated without use of the dropout layers it is not surprising the validation set loss is smaller than the training set loss.
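For reference, the sketch below reproduces the architecture described above in Keras; the layer widths, activations, dropout rate, loss, and optimizer follow the text, while the variable names and the standardization helper are illustrative.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(n_features):
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        layers.Dense(32), layers.LeakyReLU(), layers.Dropout(0.1),
        layers.Dense(16, activation="elu"), layers.Dropout(0.1),
        layers.Dense(16, activation="elu"), layers.Dropout(0.1),
        layers.Dense(4, activation="elu"),       # last hidden layer: no dropout
        layers.Dense(1, activation="sigmoid"),   # signal-rich vs background-rich label
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def standardize(x_train, x_new):
    # shift and scale with the training-sample statistics; reuse the same
    # values when the trained network is applied to new data
    mean, std = x_train.mean(axis=0), x_train.std(axis=0)
    return (x_train - mean) / std, (x_new - mean) / std
```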
## Appendix C Feature distributions
This appendix presents the feature distributions for the benchmark signals and the background based on events in the mass region \(m_{jj}\in[1400,2400]\) GeV after event selection.
### \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(5\ \mathrm{GeV},\ 0.1\ \mathrm{mm})\)
Figure 10: Distributions of: vertex displacement \(D_{0}/\gamma\beta_{T}\), vertex mass \(m_{\mathrm{vertex}}\), vertex transverse momentum fraction \(p_{T}^{\mathrm{vertex}}/p_{T}^{\mathrm{jet}}\), number of tracks associated to vertex \(n_{\mathrm{tracks}}\), total number of jet constituents \(n_{\mathrm{obj}}\), and total number of reconstructed displaced vertices \(n_{\mathrm{vertices}}\) (not including primary vertex). If more than one vertex is reconstructed in a given jet the median value for vertex features is taken. The steps in the distribution of \(p_{T}^{\mathrm{vertex}}/p_{T}^{\mathrm{jet}}\) are an artifact of requiring the sum of this variable over all jet vertices to be greater than 0.2. Jets with only one displaced vertex, which are the majority of background jets, are constrained to values greater than 0.2. Jets with two displaced vertices are constrained to a median greater than 0.1, etc.
### \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(5\ {\rm GeV},\ 0.2\ {\rm mm})\)
Figure 11: Distributions of: vertex displacement \(D_{0}/\gamma\beta_{T}\), vertex mass \(m_{\rm vertex}\), vertex transverse momentum fraction \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\), number of tracks associated to vertex \(n_{\rm tracks}\), total number of jet constituents \(n_{\rm obj}\), and total number of reconstructed displaced vertices \(n_{\rm vertices}\) (not including primary vertex). If more than one vertex is reconstructed in a given jet the median value for vertex features is taken. The steps in the distribution of \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\) are an artifact of requiring the sum of this variable over all jet vertices to be greater than 0.2. Jets with only one displaced vertex, which are the majority of background jets, are constrained to values greater than 0.2. Jets with two displaced vertices are constrained to a median greater than 0.1, etc.
### \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(5\ {\rm GeV},\ 0.3\ {\rm mm})\)
Figure 12: Distributions of: vertex displacement \(D_{0}/\gamma\beta_{T}\), vertex mass \(m_{\rm vertex}\), vertex transverse momentum fraction \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\), number of tracks associated to vertex \(n_{\rm tracks}\), total number of jet constituents \(n_{\rm obj}\), and total number of reconstructed displaced vertices \(n_{\rm vertices}\) (not including primary vertex). If more than one vertex is reconstructed in a given jet the median value for vertex features is taken. The steps in the distribution of \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\) are an artifact of requiring the sum of this variable over all jet vertices to be greater than 0.2. Jets with only one displaced vertex, which are the majority of background jets, are constrained to values greater than 0.2. Jets with two displaced vertices are constrained to a median greater than 0.1, etc.
### \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ {\rm GeV},\ 0.1\ {\rm mm})\)
Figure 13: Distributions of: vertex displacement \(D_{0}/\gamma\beta_{T}\), vertex mass \(m_{\rm vertex}\), vertex transverse momentum fraction \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\), number of tracks associated to vertex \(n_{\rm tracks}\), total number of jet constituents \(n_{\rm obj}\), and total number of reconstructed displaced vertices \(n_{\rm vertices}\) (not including primary vertex). If more than one vertex is reconstructed in a given jet the median value for vertex features is taken. The steps in the distribution of \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\) are an artifact of requiring the sum of this variable over all jet vertices to be greater than 0.2. Jets with only one displaced vertex, which are the majority of background jets, are constrained to values greater than 0.2. Jets with two displaced vertices are constrained to a median greater than 0.1, etc.
### \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ {\rm GeV},\ 0.2\ {\rm mm})\)
Figure 14: Distributions of: vertex displacement \(D_{0}/\gamma\beta_{T}\), vertex mass \(m_{\rm vertex}\), vertex transverse momentum fraction \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\), number of tracks associated to vertex \(n_{\rm tracks}\), total number of jet constituents \(n_{\rm obj}\), and total number of reconstructed displaced vertices \(n_{\rm vertices}\) (not including primary vertex). If more than one vertex is reconstructed in a given jet the median value for vertex features is taken. The steps in the distribution of \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\) are an artifact of requiring the sum of this variable over all jet vertices to be greater than 0.2. Jets with only one displaced vertex, which are the majority of background jets, are constrained to values greater than 0.2. Jets with two displaced vertices are constrained to a median greater than 0.1, etc.
### \((m_{\pi^{\prime}},c\tau_{\pi^{\prime}})=(10\ {\rm GeV},\ 0.3\ {\rm mm})\)
Figure 15: Distributions of: vertex displacement \(D_{0}/\gamma\beta_{T}\), vertex mass \(m_{\rm vertex}\), vertex transverse momentum fraction \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\), number of tracks associated to vertex \(n_{\rm tracks}\), total number of jet constituents \(n_{\rm obj}\), and total number of reconstructed displaced vertices \(n_{\rm vertices}\) (not including primary vertex). If more than one vertex is reconstructed in a given jet the median value for vertex features is taken. The steps in the distribution of \(p_{T}^{\rm vertex}/p_{T}^{\rm jet}\) are an artifact of requiring the sum of this variable over all jet vertices to be greater than 0.2. Jets with only one displaced vertex, which are the majority of background jets, are constrained to values greater than 0.2. Jets with two displaced vertices are constrained to a median greater than 0.1, etc.
## Appendix D Fit procedure
The sidebands were fit using scipy.curve_fit, SciPy's [50] implementation of a non-linear least-squares fit. The fit minimizes the cost function \(L=\mathbf{r}^{T}\mathbf{r}\), where \(\mathbf{r}_{i}\) is the residual in the \(i\)'th bin divided by the uncertainty in the measured bin count. The bin-count uncertainty in a bin with \(n\) counts was taken to be \(\sqrt{n}\) according to Poisson statistics. The statistical uncertainty of the expected counts in the signal region was estimated according to
\[\begin{split}\sigma_{\text{exp}}^{2}=\text{Var}\left(\sum_{x\in \text{sig.reg.}}N(x,\mathbf{p})\right)\approx\text{Var}\left(\sum_{x\in\text{ sig.reg.}}\frac{dN}{d\mathbf{p}}(x,\hat{\mathbf{p}})\cdot(\mathbf{p}-\hat{ \mathbf{p}})\right)\\ =\left(\sum_{x\in\text{sig.reg.}}\frac{dN}{d\mathbf{p}}\right)^{T }\mathbf{Cov}\left(\sum_{x\in\text{sig.reg.}}\frac{dN}{d\mathbf{p}}\right), \end{split} \tag{10}\]
where \(N(x,\mathbf{p})\) is the fit function from eq. (10) multiplied by the bin size, \(\mathbf{p}\) is a random variable vector of fit function parameters, and \(\mathbf{\hat{p}}\) are the estimated parameters. The covariance matrix for the parameters is estimated by
\[\mathbf{Cov}=\frac{L}{m-n}(\mathbf{J}^{T}\mathbf{J})^{-1}, \tag{11}\]
where \(L\) is the cost function at \(\mathbf{\hat{p}}\), \(m\) is the number of points used for the fit, \(n\) is the number of parameters (\(=3\)), and \(\mathbf{J}\) is the Jacobian of \(\mathbf{r}\) with respect to the parameters, evaluated at \(\mathbf{\hat{p}}\).
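As an illustration of this procedure, the sketch below performs the Poisson-weighted sideband fit with scipy.curve_fit and propagates the parameter covariance to the expected signal-region counts as in the formulas above. The background parametrization fit_func is only a stand-in for the three-parameter fit function of the paper, and the Jacobian is evaluated numerically.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_func(x, p0, p1, p2):
    # placeholder three-parameter smooth background shape (not the paper's function)
    return p0 * np.exp(-p1 * x - p2 * x**2)

def expected_signal_region(x_side, n_side, x_sig, bin_width):
    sigma = np.sqrt(n_side)                         # Poisson uncertainty per sideband bin
    p_hat, cov = curve_fit(fit_func, x_side, n_side,
                           p0=[n_side.max(), 1.0, 0.1], sigma=sigma)
    # with the default absolute_sigma=False, cov is already scaled by L/(m - n)
    n_exp = bin_width * fit_func(x_sig, *p_hat).sum()

    # numerical derivative of the summed prediction w.r.t. the parameters
    grad = np.empty(len(p_hat))
    for i in range(len(p_hat)):
        dp = np.zeros(len(p_hat))
        dp[i] = 1e-6 * max(abs(p_hat[i]), 1e-12)
        grad[i] = (bin_width * fit_func(x_sig, *(p_hat + dp)).sum() - n_exp) / dp[i]

    sigma_exp = np.sqrt(grad @ cov @ grad)          # propagated statistical uncertainty
    return n_exp, sigma_exp
```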
|
2304.13527
|
Pulsed CW laser for long-term spectroscopic measurements at high power
in deep-UV
|
We present a novel technique for in-vacuum cavity-enhanced UV spectroscopy
that allows nearly continuous measurements over several days, minimizing mirror
degradation caused by high-power UV radiation. Our method relies on pulsing of
the cavity's internal power, which increases the UV intensity to maximum only
for short periods when the studied atom is within the cavity mode volume while
keeping the average power low to prevent mirror degradation. Additionally, this
method significantly decreases laser-induced background on charged particle
detectors. The described 244 nm laser system is designed for 1S-2S two-photon
CW spectroscopy of muonium in the Mu-MASS project. It was tested to provide
intracavity powers above 20 W, requiring maintenance only a few times a day.
The pulsing technique demonstrates minimal impact on the radiation frequency,
with no observed shifts exceeding 15 kHz. Our approach represents a promising
new technique for high-precision spectroscopy of atoms in harsh UV environments
and demonstrates the feasibility of CW spectroscopy of muonium.
|
Nikita Zhadnov, Artem Golovizin, Irene Cortinovis, Ben Ohayon, Lucas de Sousa Borges, Gianluca Janka, Paolo Crivelli
|
2023-04-26T13:02:35Z
|
http://arxiv.org/abs/2304.13527v1
|
# Pulsed CW laser for long-term spectroscopic measurements at high power in deep-UV
###### Abstract
We present a novel technique for in-vacuum cavity-enhanced UV spectroscopy that allows nearly continuous measurements over several days, minimizing mirror degradation caused by high-power UV radiation. Our method relies on pulsing of the cavity's internal power, which increases the UV intensity to maximum only for short periods when the studied atom is within the cavity mode volume while keeping the average power low to prevent mirror degradation. Additionally, this method significantly decreases laser-induced background on charged particle detectors. The described 244 nm laser system is designed for 1S-2S two-photon CW spectroscopy of muonium in the Mu-MASS project. It was tested to provide intracavity powers above 20 W, requiring maintenance only a few times a day. The pulsing technique demonstrates minimal impact on the radiation frequency, with no observed shifts exceeding 15 kHz. Our approach represents a promising new technique for high-precision spectroscopy of atoms in harsh UV environments and demonstrates the feasibility of CW spectroscopy of muonium.
## 1 Introduction
High intensity deep-ultraviolet (deep-UV) continuous wave (CW) lasers open up new opportunities for light-matter interaction for testing fundamental physical theories and developments in applied science [1, 2]. Outperforming pulsed lasers in frequency stability, they are an essential tool for precision spectroscopy of transitions from the ground state in hydrogen [3, 4, 5] and its isotopes [6], antihydrogen [7] and muonium [8, 9, 10]. Deep-UV CW lasers allow laser cooling of hydrogen [11], mercury ions and atoms [12, 13, 14], cadmium [15], AIF [16], AlCl [17] and HgF [18]. Progress of UV laser technologies is beneficial for development of new optical atomic clocks [13, 15, 19, 20] and single photon excitation of Rydberg states [21, 22]. Collinear resonance ionization spectroscopy using CW lasers [23, 24] is at the forefront of measuring nuclear properties, such as shape, spin, and moments, through hyperfine structure analysis. Studying certain atomic species with this technique requires the use of deep-UV laser light for their excitation [25, 26].
Currently, the highest power deep-UV laser systems are capable of generating up to 2 W of CW light [17, 27]. Moreover, a number of experiments such as the laser cooling of hydrogen [11] and muonium spectroscopy [10] require tens of watts of such radiation, which can be obtained by amplification in an optical cavity. A big challenge that appears at such intensities of UV light, especially in vacuum, is the degradation of optical components: mirrors, crystals, lenses, and windows [28, 29, 30].
Cavity mirrors exposed to maximum power suffer the most from degradation. The rate of degradation is known to increase with the UV power to which the optical surfaces are exposed. Previous research [10] demonstrated that an ultra-high vacuum (UHV, \(10^{-8}\) mbar) Fabry-Perot cavity, with mirrors carrying oxide multi-layer coatings (HfO\({}_{2}\)/Al\({}_{2}\)O\({}_{3}\)) on a SiO\({}_{2}\) substrate, experiences more than a two-fold decrease in finesse within one hour at 5 W of intracavity power. Fluoride-coated mirrors (MgF\({}_{2}\)/LaF\({}_{3}\)) on CaF\({}_{2}\) substrates demonstrate much more stable behavior: despite slightly worse optical performance, they can maintain up to 10 W of laser power in UHV on one-hour timescales. Nevertheless, some applications require even more than these record-breaking characteristics. One possible way to maintain the optical quality of the mirrors is to operate them in an oxygen atmosphere. This approach demonstrated continuous operation for more than 4 hours at a power of 16 W under \(10^{-3}\) mbar of oxygen [10]. However, most precision spectroscopy applications demand UHV conditions, which are extremely hard to combine with this technique. A compromise solution may be periodic conditioning of the mirrors with oxygen in the presence of UV light. This requires breaks in the experiment but allows the optical characteristics to be restored between measurement periods. The most likely cause of the degradation of fluoride mirrors is hydrocarbon contamination. During recovery, UV light generates ozone and atomic oxygen and decomposes the hydrocarbon contaminants into fragments, which react with oxygen atoms to form simpler volatile molecules that desorb from the surface. Unfortunately, working with more than 10 W of intracavity power accelerates degradation and requires spending more time on mirror recovery than on actual measurements.
Another important issue relates to resonance ionization spectroscopy [8]. In such experiments, charged particles (ions, muons) are most often detected using microchannel plates (MCPs). These devices are sensitive to deep-UV photons. Since it is very difficult to completely shield an MCP from scattered light without reducing its sensitive area, this effect inevitably leads to a decrease in the signal-to-noise ratio.
The laser system presented in this article was developed for the Mu-MASS project aiming at high-precision CW laser spectroscopy of 1S-2S transition in muonium. In this experiment, muons, coming from the low energy muons (LEM) beamline at PSI [31], hit a mesoporous thin film silica target. Around 60 % are converted into muonium atoms and 40 % diffuse back into vacuum before decaying [32]. Some of those atoms pass through a standing wave of 244 nm light created with an enhancement cavity and can be excited to the 2S state. The 2S atoms are then photoionised with a 355 nm pulsed laser and the resulting muon is detected by an MCP (for details see [33]). Due to this transition's low two-photon excitation cross-section value and the small number of available muonium atoms, having an excitation rate of 1 event per hour requires 25 W of laser power on resonance. Therefore, to collect a statistically significant dataset, the high-power laser has to be stable for several days. The possibility of continuous conditioning of the mirrors for the Mu-MASS project is limited by the fact that oxygen contamination induces degradation of the LEM moderator [34]. Even though differential pumping was implemented between the cavity mirrors and the moderator zone, a reduction of the muon flux by a factor of two was measured in a timescale of a few minutes. Overcoming this obstacle would require impractical and complicated additions to the Mu-MASS vacuum system, involving multiple stages of differential pumping. This article introduces a new approach to enable long-term, high-power laser spectroscopic measurements in the deep-UV region for Mu-MASS and similar experiments requiring clean UHV conditions.
## 2 Methods
The experiment was conducted using the Mu-MASS laser setup, as shown in Fig. 1. The setup consists of several key components, including a high-power infrared Yb fiber amplifier, a CLBO crystal-based UV second harmonic generator, and a vacuum enhancement cavity. These components have been previously described in [10, 11, 27, 35]. For this experiment, fluoride mirrors were chosen for both the input and output couplers of the enhancement cavity due to their durability under UV irradiation. To the best of our knowledge, this laser system represents the most powerful 244 nm CW laser source currently available.
Spectroscopic measurements often work in repetitive mode: the atoms or molecules of interest are prepared, interact with laser radiation and one measures their excitation efficiency. Usually, it requires the laser light to be available on demand or periodically [3, 8, 24]. Therefore, to slow
down the effect of UV-induced degradation without sacrificing the ultimate laser power one can deliver the light only for the necessary measurement period, avoiding constant irradiation of optical components. A similar technique was used for visible laser for the CRIS (collinear resonance ionization spectroscopy) project [24], where the laser power was pulsed with a Pockels cell. As CRIS works with a bunched beam, the pulsing of CW light is used to obtain higher spectroscopic resolution while suppressing the background. The use of cavities for amplification and second harmonic generation creates challenges for the pulsing technique: it does not allow decreasing the laser power below the level necessary to maintain the cavities' length in resonance with the laser.
A laser power control system was implemented using a quartz-crystal-based AOM (Fig. 1) that is designed for high light intensities. Pulsing the AOM RF amplitude lets us quickly adjust the zero-order beam intensity at 488 nm, with a base-to-peak power ratio of more than 10. By aligning the AOM to maximize the first diffraction order (with a diffraction efficiency above \(90\%\)), we direct the zero order to the UV SHG and stop the first order with a beam dump. Bragg diffraction of light on a sound wave degrades the zero-order mode shape by "eating away" more intensity in the central part of the beam than at the edges. This effect is advantageous in reducing light coupling to the SHG not only through the direct power reduction but also through the worse mode matching. Using this technique, and taking into account the nonlinear efficiency of second harmonic generation, we are able to maintain a baseline UV power of just a few mW, which peaks at up to 1.2 W when the AOM is not active. The optimal pulse duration was determined by computer simulation of muonium formation and its escape from the target, so as to maximize the probability of 1S-2S excitation. The resulting value is around 1 microsecond. To keep the SHG and the enhancement cavity in resonance with the laser wavelength, the Pound-Drever-Hall locking scheme [36] is used. The bandwidths of the feedback loops are constrained by the first resonance frequencies of the piezoelectric transducers used to regulate the positions of the mirrors. The enhancement cavity uses a center-of-mass piezo transducer mount [37], which provides the widest feedback bandwidth, though still less than 100 kHz. As these cavity-locking systems are too slow to react to 1 \(\mu\)s-scale pulsing of the laser power, they act as low-pass filters for such fast disturbances, allowing the cavities to remain locked throughout the measurement. In the described laser system, even pulses a few tens of microseconds long do not cause the cavities to unlock.

Figure 1: Experimental setup for muonium spectroscopy. ECDL - external cavity diode laser, TA - tapered amplifier, SHG - second harmonic generator, LBO - lithium triborate, PBS - polarising beam splitter, AOM - acousto-optic modulator, BD - beam dump, EOM - electro-optic modulator, L1-L5 - mode matching lenses, PD - photodetector, PM - power monitor, CLBO - caesium lithium borate, IC - input coupler, OC - output coupler, HR - high reflector, PZT - piezoelectric transducer stack. The Toptica laser assembly generates 4.5 W of 488 nm light, which passes through the AOM and is then converted to 244 nm using an SHG cavity with a CLBO crystal. The resulting UV light, reaching up to 1.2 W, is coupled to an enhancement cavity inside the ultrahigh vacuum chamber.
The accurate control of delays in the optical and radio wave lines is crucial for the successful implementation of the described method. Figure 2 illustrates a single laser power pulse at three different points in the optical setup: at the zero-order of the AOM, after the UV SHG, and inside the enhancement cavity (see Figure 1). Intracavity power was determined from the transmitted light power measurement, taking into account the transmission coefficient of the output coupler. To minimize the AOM delay, we positioned the laser beam as close as possible to the ultrasonic emitter on the edge of the AOM crystal, right up to the point where the beam starts to cut into the crystal. Due to the delay and the time constants of the cavities, it takes 1 \(\mu\)s for the UV light to reach 95% of its peak power level after the AOM RF amplitude pulse. Much shorter delays, rise times and higher extinction ratios can be achieved with the use of AOM [24]. In that case, the power rise time would be limited by the cavities.

Figure 2: The laser power pulse is measured at three positions in the optical scheme: (a) in the zero-order right after the AOM, (b) incoming to the enhancement cavity, and (c) after transmission through the enhancement cavity. The RF pulse, which triggers the AOM, occurs at \(t=0\) s, and the delay of the laser pulse is caused by the travel time of the sound wave in the AOM crystal and the exponential behavior of power increase after passing through the SHG cavity and the enhancement cavity, with finesses of 150 and 200 (non-degraded), respectively. Eventually, the intracavity UV power response is characterized by a delay of 460 ns, followed by an exponential rise with a time constant of 190 ns.
A triggering rate of 5 kHz (typical for the LEM muon beam at PSI), results in an average duty cycle of \(5\times 10^{-3}\) for 1 \(\mu\)s long laser pulses. With a base UV power of 5 mW and a peak power of 1.2 W, this enables the average power to be reduced by more than 100 times compared to the peak power. In UHV conditions, even this huge reduction in the UV load on the mirrors does not allow overcoming the degradation effect completely, but should dramatically reduce its rate. A second interesting feature, which should be provided by fast AOM power control, is the opportunity to suppress photon-induced background signals on MCP detectors.
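The quoted reduction factor follows from a short back-of-the-envelope estimate (the numbers are those given above; the weighted average is an illustration):

```python
rate    = 5e3     # trigger rate [Hz]
t_pulse = 1e-6    # pulse duration [s]
p_peak  = 1.2     # peak UV power [W]
p_base  = 5e-3    # base UV power [W]

duty  = rate * t_pulse                          # 5e-3 duty cycle
p_avg = p_base * (1 - duty) + p_peak * duty     # ~11 mW average power
print(f"duty={duty}, P_avg={p_avg*1e3:.1f} mW, reduction={p_peak/p_avg:.0f}x")
```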
## 3 Results and discussion
The performance of the laser system was monitored during the first attempt of muonium CW spectroscopy. The lasers and the vacuum chamber were transported from ETH Zurich to PSI. The task for the laser system during the measurements was to work stably at high peak power for several days.
To maintain a high enough enhancement factor in the course of UV laser light operation, the cavity mirrors need recovery from hydrocarbon contamination [10]. During the recovery process, the vacuum chamber of LEM was shut off to protect the moderator, and oxygen was pumped in through the needle valves near the mirrors (Fig. 1) up to the pressure of \(10^{-2}\) mbar. At an intracavity power of about 1 W the enhancement recovery process has a time constant of 2.5 minutes (Fig. 3). The entire procedure including oxygen evacuation down to \(10^{-6}\) mbar was taking almost half an hour, which is fast enough to not significantly affect the duration of the measurements.
Figure 3: The cavity's enhancement factor during oxygen conditioning was determined by measuring the cavity transmission. The conditioning was performed with 1 W of intracavity UV light and an oxygen pressure of \(10^{-2}\) mbar. The recovery process had a time constant of 2.5 minutes. The fluctuations on the graph are due to the influence of the oxygen flow from the needle valves.

The UV SHG output radiation was operated at the base power of 5 mW, responding to each trigger signal with a pulsed increase in power up to more than 1 W. The average trigger rate during the experiment was 2.5 kHz. The incoming and transmitted laser peak power were logged during the whole measurement to monitor the real-time cavity enhancement. A fragment of these measurements is presented in Fig. 4. The experiment showed that with an initial intracavity pulse power of 25 watts, the resonator degrades to 20 watts in approximately 5 hours and requires recovery (Fig. 4). A notable observation is that there were no signs of degradation on the output coupler of the UV SHG during the pulsing regime, which had been a concern in the past.
We should mention that the enhancement factor and the UV SHG output power in [10] are larger than in this work. During the measurement, the enhancement was degraded by vibrations that disturbed the coupling of the laser light into the cavity mode. The piezo's ability to suppress vibrations will be improved by shortening the cavity and mounting it vertically. The second harmonic generator suffered from dust contamination and misalignment due to temperature changes, showing slightly worse than optimal behavior. Dust protection and periodic realignment help to maintain its stable operation.
The laser system was operating for about 80 hours during five days of measurements. Breaks in operation were mainly due to interruptions of the muon beam, background noise tests, and other measurements in which the laser was not involved. It is worth noting that the installation of the laser setup at the LEM beamline took around 40 hours, and misalignments caused by transporting the laser were still being corrected over the course of the measurement days.
Figure 4: Fragment of intracavity and UV SHG output peak power measurements. At \(t=2\,\mathrm{h}\) the quality of the enhancement cavity lock was improved, and at \(t=8\,\mathrm{h}\) mirror recovery started. In the first graph, the percentage of uptime is shown. Fluctuations of the peak power are caused by imperfect cavity lock quality and vibrations. The descending intracavity power is due to mirror degradation and a decrease of UV generation efficiency.

During propagation through a transparent medium, a laser pulse can acquire a chirp due to the effects of chromatic dispersion, nonlinearities, or rapid, laser-induced changes of the refractive index. To analyze this effect, we assembled an additional setup for UV light frequency measurement. This includes an alternative beam path where the laser light does not experience pulsing. One watt of blue light produced by the Toptica box is split off with a polarizing beamsplitter (see Fig. 1 after LBO SHG) and directed to another AOM, which provides detuning, and then to a single-pass BBO crystal to generate 244 nm light. The two beams are combined and directed to a photodiode, which produces a beat signal at a doubled detuning frequency. In our experiment, this was \(124.445\pm 0.002\,\mathrm{MHz}\). Since we are interested in frequency changes smaller than the muonium 1S-2S transition linewidth (144 kHz), a single laser pulse waveform does not provide enough data due to the spectral limit set by its short duration. Furthermore, the silicon photodetector used in the deep-UV region has a very moderate sensitivity, and the second harmonic produced by the BBO crystal has low intensity, which reduced the signal-to-noise ratio. To collect a statistically significant sample, we analyzed 2000 waveforms. We took the first 750 ns of each pulse, filtered them with a band-pass Fourier filter, and calculated the average frequency by measuring the difference between neighboring zero-crossings of the sine signal at the beat frequency. This gave us a normally-distributed sample of two thousand instantaneous frequencies, which were used to obtain an average value of \(124.445\pm 0.006\,\mathrm{MHz}\). This allows us to conclude with a 95% confidence level that no frequency shift above 15 kHz is introduced by the pulsing scheme.
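A simplified sketch of this zero-crossing analysis is given below; the sampling rate, the filter bandwidth, and the array names are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np

def beat_frequency(waveform, fs, f0=124.445e6, bw=5e6):
    """Average beat frequency of one filtered waveform segment."""
    n = len(waveform)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(waveform - np.mean(waveform))
    spec[np.abs(freqs - f0) > bw] = 0.0            # band-pass Fourier filter around f0
    filt = np.fft.irfft(spec, n)

    # linearly interpolated zero crossings of the filtered sine
    s = np.signbit(filt)
    idx = np.where(s[:-1] != s[1:])[0]
    t_zero = (idx - filt[idx] / (filt[idx + 1] - filt[idx])) / fs
    return 0.5 / np.mean(np.diff(t_zero))          # half a period between crossings

# e.g. average over the 2000 recorded pulses, using only the first 750 ns:
# f_est = [beat_frequency(w[: int(750e-9 * fs)], fs) for w in waveforms]
# f_mean, f_err = np.mean(f_est), np.std(f_est) / np.sqrt(len(f_est))
```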
The control of the AOM power not only enables pulsing of the UV intensity but also allows an almost complete turn-off of the UV light by maximizing the AOM diffraction efficiency (Fig. 5(a)). This means that the UV light intensity inside the vacuum chamber can be decreased by at least a factor of 10 relative to the base level for a few microseconds after the pulse, creating a reduced-background time window for less noisy particle detection. To reach the MCP detector, which is located half a meter from the enhancement cavity, photons need to pass through a 90-degree bend in the vacuum tube [33]. For extra protection of the detectors from scattered laser radiation, we installed four light-absorbing screens (Acktar MaxiBlack foil) with axial holes in the vacuum chamber. These screens isolate the mirror area from the central chamber (Fig. 1). The signal of the MCP was measured for 12 hours with 1 \(\mu\)s pulses at an intracavity UV peak power of 800 mW, each followed by a 3 \(\mu\)s long minimum-power time window. Turning the radiation off made it possible to reduce the background rate by a factor of two, as shown in Figure 5(b): from \(7.5\pm 0.3\) Hz to \(3.1\pm 0.4\) Hz.

Figure 5: (a) Power pulse inside the enhancement cavity followed by a dip. (b) The temporal distribution of the number of counts detected by the MCP detector. The region where the radiation power is minimal is highlighted. For the description of the Mu-MASS experiment see [33].
## 4 Conclusion
In summary, this study introduces a new and innovative method for precision continuous wave laser ultraviolet spectroscopy that addresses the challenge of optical surface degradation commonly encountered in such experiments. On-demand CW UV laser light pulsing enables the use of cavity-enhanced radiation in ultrahigh vacuum conditions, with a power output range of \(20-25\) W. Spectroscopic measurements with this technique can last at least a week. To maintain a high enhancement coefficient, mirror recovery with oxygen conditioning is only required 4-5 times a day, which has minimal impact on the measurement process. Precise control over UV radiation power is beneficial for resonance ionization spectroscopy, as it allows for rapid turn-offs of the laser radiation to minimize UV pickup by MCP detectors during the registration time window.
The presented laser system was designed for 1S-2S spectroscopy of muonium in the Mu-MASS project. With minor improvements, the pulsed CW laser system demonstrates the feasibility of muonium continuous-wave spectroscopy and paves the way for further studies of this unique atomic system.
Funding.This work is supported by the ERC consolidator grant 818053-Mu-MASS (P.C.) and the Swiss National Science Foundation under grant 197346 (PC). B.O. acknowledges support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 101019414.
Acknowledgments.We would like to acknowledge Zak Burkley for his essential contribution of the laser system. We would like to thank Dylan Yost for the very useful discussions.
Disclosures.The authors declare no conflicts of interest.
Data Availability Statement.Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2302.07804
|
Uncertainty quantification in coastal aquifers using the multilevel
Monte Carlo method
|
We consider a class of density-driven flow problems. We are particularly
interested in the problem of the salinization of coastal aquifers. We consider
the Henry saltwater intrusion problem with uncertain porosity, permeability,
and recharge parameters as a test case. The reason for the presence of
uncertainties is the lack of knowledge, inaccurate measurements, and inability
to measure parameters at each spatial or time location. This problem is
nonlinear and time-dependent. The solution is the salt mass fraction, which is
uncertain and changes in time. Uncertainties in porosity, permeability,
recharge, and mass fraction are modeled using random fields. This work
investigates the applicability of the well-known multilevel Monte Carlo (MLMC)
method for such problems. The MLMC method can reduce the total computational
and storage costs. Moreover, the MLMC method runs multiple scenarios on
different spatial and time meshes and then estimates the mean value of the mass
fraction. The parallelization is performed in both the physical space and
stochastic space. To solve every deterministic scenario, we run the parallel
multigrid solver ug4 in a black-box fashion. We use the solution obtained from
the quasi-Monte Carlo method as a reference solution.
|
Alexander Litvinenko, Dmitry Logashenko, Raul Tempone, Ekaterina Vasilyeva, Gabriel Wittum
|
2023-02-13T00:12:55Z
|
http://arxiv.org/abs/2302.07804v1
|
# Uncertainty quantification in coastal aquifers using the multilevel Monte Carlo method
###### Abstract
We consider a class of density-driven flow problems. We are particularly interested in the problem of the salinization of coastal aquifers. We consider the Henry saltwater intrusion problem with uncertain porosity, permeability, and recharge parameters as a test case. The reason for the presence of uncertainties is the lack of knowledge, inaccurate measurements, and inability to measure parameters at each spatial or time location. This problem is nonlinear and time-dependent. The solution is the salt mass fraction, which is uncertain and changes in time. Uncertainties in porosity, permeability, recharge, and mass fraction are modeled using random fields. This work investigates the applicability of the well-known multilevel Monte Carlo (MLMC) method for such problems. The MLMC method can reduce the total computational and storage costs. Moreover, the MLMC method runs multiple scenarios on different spatial and time meshes and then estimates the mean value of the mass fraction. The parallelization is performed in both the physical space and stochastic space. To solve every deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion. We use the solution obtained from the quasi-Monte Carlo method as a reference solution.
**Keywords:** uncertainty quantification, ug4, multigrid, density-driven flow, reservoir, groundwater, salt formations
## 1 Introduction
Saltwater intrusion occurs when sea levels rise and saltwater moves onto the land. Usually, this occurs during storms, high tides, droughts, or when saltwater penetrates freshwater aquifers and raises the groundwater table. Since groundwater is an essential nutrition and irrigation resource, its salinization may lead to catastrophic consequences. Many acres of farmland may be lost because they can become too wet or salty to grow crops. Therefore, accurate modeling of different scenarios of saline flow is essential [1, 65] to help farmers and researchers develop strategies to improve the soil quality and decrease saltwater intrusion effects.
Saline flow is density-driven and described by a system of time-dependent nonlinear partial differential equations (PDEs). It features convection dominance and can demonstrate very complicated behavior [74].
As a specific model, we consider a Henry-like problem with uncertain permeability and porosity. These parameters may strongly affect the flow and transport of salt. The original Henry saltwater intrusion problem was introduced by H.R. Henry in the 1960s (cf. [35]). The Henry problem became a benchmark for numerical solvers for groundwater flow (cf. [74, 67, 66, 18]. In [61], the authors use the generalized polynomial chaos expansion approximation to investigate how incomplete knowledge of the system properties influences the assessment of global quantities. Particularly, they estimated the propagation of input uncertainties into a few dimensionless scalar parameters.
The hydrogeological formations typically have complicated and heterogeneous structures. These formations may consist of a few layers of porous media with various porosity and permeability coefficients (cf. [59, 64]). Measurements of the layer positions and their thicknesses are only possible up to some error, and for the materials inside the layers, the average parameters are typically assumed. Thus, these layers are excellent
candidates to be modeled by random fields. Further, due to the nonlinearities in the problem, averaging the parameters does not necessarily lead to the correct mathematical expectation of the solution.
To model uncertainties, we use random fields. Uncertainties in the input data propagate through the model and make the solution (e.g., the mass fraction) uncertain. An accurate estimation of the output uncertainties can facilitate a better understanding of the problem, better decisions, and improved control and design of the experiment.
The following questions can be answered:
1. How long can a particular drinking water well be used (i.e., when will the mass fraction of the salt exceed a critical threshold)?
2. What regions have especially high uncertainty?
3. What is the probability that the salt concentration is higher than a threshold at a certain spatial location and time point?
4. What is the average scenario (and its variations)?
5. What are the extreme scenarios?
6. How do the uncertainties change over time?
Many techniques can quantify uncertainties. A classical method is Monte Carlo (MC) sampling. Although it is dimension-independent, it converges very slowly and requires many samples. This method may not be affordable for time-consuming simulations. Nevertheless, even up-to-date techniques, such as surrogate models and stochastic collocation, require a few hundred to a few thousand time-consuming simulations and assume a certain smoothness of the quantity of interest (QoI).
Another class of methods is the class of perturbation methods [17]. The idea is to decompose the QoI with respect to (w.r.t.) random parameters in a Taylor series. The higher-order terms can be neglected for small perturbations, simplifying the analysis and numerics. These methods assume that random perturbations are small (e.g., up to 5% of the mean, depending on the problem). For larger perturbations, these methods usually do not work.
There are quite a few studies where authors model uncertainties in reservoirs (cf. [10, 72]). Reconnecting stochastic methods with hydrogeological applications was accomplished in [7], where the authors analyzed a collaboration between academics and water suppliers in Germany and made recommendations regarding optimization and risk assessment. The fundamentals of stochastic hydrogeology and an overview of stochastic tools and accounting for uncertainty are described in [63].
The review [70] deals with hydrogeologic applications of recent advances in uncertainty quantification, probabilistic risk assessment, and decision-making under uncertainty. The author reviewed probabilistic risk assessment methods in hydrogeology under parametric, geologic, and model uncertainties. Density-driven vertical transport of saltwater through the freshwater lens on the island of Baltrum (Germany) is modeled in [57].
In [39], the authors examined the implications of transgression for a range of seawater intrusion scenarios based on simplified coastal freshwater aquifer settings. They stated that vertical intrusion during transgressions could involve density-driven convective processes, causing substantially greater amounts of seawater to enter the aquifer and create more extensive intrusion than horizontal seawater intrusion in the absence of transgression.
The methods to compute the desired statistics of the QoI are direct integration methods, such as the MC, quasi-MC (QMC) and collocation methods and surrogate-based (generalized polynomial chaos approximation and stochastic Galerkin [21, 3, 29, 22]) methods. Direct methods compute statistics directly by sampling uncertain input coefficients and solving the corresponding PDEs, whereas the surrogate-based method computes a cheap functional (polynomial, exponential, or trigonometrical) approximation of the QoI. Examples of the surrogate-based methods are radial basis functions [45, 8, 46, 30], sparse polynomials [12, 6, 19], and polynomial chaos expansion [49, 15, 75]. Sparse grid methods to integrate high-dimensional integrals are considered in [68, 9, 31, 38, 51, 26, 52, 15, 56]. An idea to generate goal-oriented adaptive spatial grids and use them in the multilevel MC (MLMC) framework was presented in [20, 5].
The quantification of uncertainties in stochastic PDEs can be a significant challenge due to a) the potentially large number of involved random variables and b) the high cost of each deterministic solution of the governing PDE. The MC quadrature and its variance-reduced variants have a dimension-independent error convergence rate \(\mathcal{O}(N^{-\frac{1}{2}})\), and the QMC has the worst-case rate \(\mathcal{O}(\log^{M}(N)N^{-1})\), where \(N\) is the number of samples, and \(M\) indicates the dimension of the stochastic space [47]. In contrast to collocation on sparse or full grids [2, 50], the MC method is not affected by the dimension of the integration domain. A numerical comparison of other QMC sequences is presented in [58].
Construction of a cheap generalized polynomial chaos expansion-based surrogate model [76, 42, 41] is an alternative to the MC method. Some well-known functions, such as the multivariate Legendre, Hermite, Chebyshev, or Laguerre functions, have been taken as a basis [53, 76]. Surrogate models have pros and cons. The pros are that the model can be easily sampled once constructed, and all samples are almost free (much cheaper than sampling the original stochastic PDE). For some problem settings, sampling is unnecessary because the solution can be computed analytically (e.g., computing an integral of a polynomial). The nontrivial part of surrogate models is to define how many coefficients are needed and how accurately they should be computed. Another difficulty is that not every function can be approximated well by a polynomial. The MLMC methods do not have such limitations.
This work is structured as follows. Section 2 describes the Henry problem and numerical methods to solve it. The well-known MLMC method is reviewed in Section 3. Next, Section 4 details the numerical results, which include the numerical analysis of the Henry problem, computing different statistics, the performance of the MLMC method, and the practical performance of the parallel ug4 solver for the Henry problem [35, 67] with uncertain coefficients. Finally, we conclude this work with a discussion in Section 5.
**Our contribution:** We investigate the propagation of uncertainties in the Henry-like problem. Assuming the porosity, permeability, and recharge are uncertain, we estimate the uncertainties in the density-driven flow. To reduce the high computational complexity, we apply the existing MLMC technique. We use the multigrid ug4 software library as a black-box solver, allowing us to solve the Henry problem and others (see more in [65]). We run all MLMC random simulations in parallel. To the best of our knowledge, there are no other studies in which Henry's problem [35, 67] was solved using MLMC methods with uncertain porosity, permeability, and recharge parameters.
## 2 Henry Problem with Uncertain Porosity and Permeability
### Problem setting
In coastal aquifers, salty seawater intruding on the formation on one side (the seaside) displaces the pure water due to water recharge from land sources and precipitation from the other side. Due to its higher density, seawater mainly penetrates along the bottom of the aquifer. This process can achieve a steady state but may be time-dependent due to the periodicity of the recharge or controlling the pumping rate from the wells. An accurate simulation of the salinization is vital for the prediction of water resource availability. However, the accuracy of such predictions strongly depends on the hydrogeological parameters of the formation and the geometry of the computational domain, denoted by \(\mathcal{D}\).
The aquifer \(\mathcal{D}\subset\mathbb{R}^{d}\), \(d\in\{2,3\}\), can be modeled as an immobile porous matrix filled with liquid phase--a solution of salt in water. Due to the nonhomogeneous density distribution, gravitation induces the motion of the liquid phase. This motion transports the salt, which is otherwise subject to molecular diffusion.
A straightforward but very demonstrative model of coastal aquifers is the so-called Henry problem, first considered in [35]. In this two-dimensional setting, the aquifer is represented by a rectangular domain \(\mathcal{D}=[0,2]\times[-1,0]\) [m\({}^{2}\)] entirely saturated with the liquid phase (Fig. 1). The salty seawater intrudes from the right
side, and pure water is recharged from the left. The top and bottom are considered impermeable. Analogous settings with partially saturated domains are considered in [69].
The mass conservation laws for the entire liquid phase and salt yield the following equations
\[\partial_{t}(\phi\rho) + \nabla\cdot(\rho\mathbf{q})=0, \tag{1}\] \[\partial_{t}(\phi\rho c) + \nabla\cdot(\rho c\mathbf{q}-\rho\mathbf{D}\nabla c)=0, \tag{2}\]
where \(\phi:\mathcal{D}\rightarrow\mathbb{R}\) denotes the porosity, \(\mathbf{K}:\mathcal{D}\rightarrow\mathbb{R}^{d\times d}\) represents the permeability, \(c(t,\mathbf{x}):[0,+\infty)\times\mathcal{D}\rightarrow[0,1]\) is the mass fraction of the salt (or of the brine) in the solution, \(\rho=\rho(c)\) indicates the density of the liquid phase, and \(\mathbf{D}(t,\mathbf{x}):[0,+\infty)\times\mathcal{D}\rightarrow\mathbb{R}^ {d\times d}\) denotes the molecular diffusion and mechanical dispersion tensor. For the velocity \(\mathbf{q}(t,\mathbf{x}):[0,+\infty)\times\mathcal{D}\rightarrow\mathbb{R}^ {d}\), we assume Darcy's law:
\[\mathbf{q}=-\frac{\mathbf{K}}{\mu}(\nabla p-\rho\mathbf{g}), \tag{3}\]
where \(p=p(t,\mathbf{x}):[0,+\infty)\times\mathcal{D}\rightarrow\mathbb{R}\) is the hydrostatic pressure, \(\mu=\mu(c)\) denotes the viscosity of the liquid phase, and \(\mathbf{g}=(0,\ldots,0,-g)^{T}\in\mathbb{R}^{d}\) represents the gravity vector. Inserting (3) into (1-2) results in a system of two time-dependent PDEs in the unknowns \(c\) and \(p\). This system should be closed with boundary conditions for \(c\) and \(p\) and an initial condition for \(c\).
Following the classical setting in [35], for this variant of the Henry problem, we set
\[\rho(c)=\rho_{0}+(\rho_{1}-\rho_{0})c,\qquad\qquad\mu=\text{const} \tag{4}\]
and
\[\mathbf{D}=\phi D\mathbf{I} \tag{5}\]
with a constant scalar \(D\in\mathbb{R}\), and the identity matrix \(\mathbf{I}\in\mathbb{R}^{d\times d}\). Furthermore, we assume the isotropic permeability
\[\mathbf{K}=K\mathbf{I},\qquad K\in\mathbb{R}.\]
This setting is consistent with the problem setting in [74]. However, we do not assume the Boussinesq approximation and keep the density variable for all terms. For the initial conditions, we set
\[c|_{t=0}=0. \tag{6}\]
The boundary conditions are presented in Fig. 1(a). On the right side of the domain, we impose Dirichlet conditions for the \(c\) and \(p\) variables that represent the adjacent seawater aquifer:
\[c|_{x=2}=1,\qquad p|_{x=2}=-\rho_{1}gy. \tag{7}\]
On the left side, we prescribe the inflow of fresh water:
\[c|_{x=0}=0,\qquad\left.\rho\mathbf{q}\cdot\mathbf{e}_{x}\right|_{x=0}=\hat{q} _{\text{in}}, \tag{8}\]
where \(\mathbf{e}_{x}=(1,0)^{\top}\), and \(\hat{q}_{\text{in}}\) is a constant. For the classical formulation of the Henry problem, this value was set to \(\hat{q}_{\text{in}}=6.6\cdot 10^{-2}\) kg/s in [74] or \(\hat{q}_{\text{in}}=3.3\cdot 10^{-2}\) kg/s in [67, 66]. The Neumann zero boundary conditions are imposed on the upper and lower sides of \(\mathcal{D}\).
| Parameter | Values and Units | Description |
| --- | --- | --- |
| \(\mathbb{E}\left[\phi\right]\) | \(0.35\) [-] | mean value of porosity |
| \(D\) | \(18.8571\cdot 10^{-6}\) [m\({}^{2}\cdot\) s\({}^{-1}\)] | diffusion coefficient in the medium |
| \(\mathbf{K}\) | \(1.020408\cdot 10^{-9}\) [m\({}^{2}\)] | permeability of the medium |
| \(g\) | \(9.8\) [m\(\cdot\) s\({}^{-2}\)] | gravity |
| \(\rho_{0}\) | \(1000\) [kg\(\cdot\) m\({}^{-3}\)] | density of pure water |
| \(\rho_{1}\) | \(1024.99\) [kg\(\cdot\) m\({}^{-3}\)] | density of brine |
| \(\mu\) | \(10^{-3}\) [kg\(\cdot\) m\({}^{-1}\cdot\) s\({}^{-1}\)] | viscosity |

Table 1: Parameters of the considered density-driven flow problem

An example of \(c(t,\mathbf{x})\) and \(\mathbf{q}(t,\mathbf{x})\) for the parameters from Table 1 is presented in Fig. 1(right). The dark red color corresponds to \(c=1\), and dark blue corresponds to \(c=0\). Due to its higher density, the saltwater intrudes into the aquifer in the lower right part. It is pushed back by the lighter pure water coming from the left. This process induces a vortex in the flow in the lower right corner of the domain. The saltwater flows in at the lower part of the right boundary and deviates to the top and right, back to the seaside, forming a salt triangle. This flow does not transport the salt to the left part of the domain. The salt propagates further to the left due to diffusion and dispersion and is washed out by the recharge. In the classical formulation, this salt triangle initially increases over time but achieves a steady state (cf. [74, 67, 66]). However, the initial nonstationary phase may take significant time. Investigating this phase is especially important to understand the system behavior when changing the recharge. For this, in addition to the mean and variance, we consider the mass fraction at 12 points (listed below) and an integral value--the total amount of pure water (as in Eq. 23). The list of chosen points follows:
\[\{(x,y)_{i=1,\ldots,12}\}=\{(1.10,-0.95),(1.35,-0.95),(1.60,-0.95), (1.85,-0.95),(1.10,-0.75),(1.35,-0.75), \tag{9}\] \[\qquad\qquad\qquad(1.60,-0.75),(1.85,-0.75),(1.10,-0.50),(1.35,-0.50),(1.60,-0.50),(1.85,-0.50).\}\]
The motivation is to consider points where the concentration variation is considerable. In addition, the mass fraction \(c\) at each point \(\mathbf{x}\) is a function of time.
These spatial points may help track salinity changes over time in groundwater wells and understand which areas in the aquifer are most vulnerable. Farmers can use this information to take action, such as decreasing salinity or adapting strategies by planting salt-tolerant crops.
### Modeling porosity, permeability, and mass fraction
The primary sources of uncertainty are the hydrogeological properties of the porous medium--porosity (\(\phi\)) and permeability (\(\mathbf{K}\)) fields of the solid phase--and the freshwater recharge flux \(\hat{q}_{x}\) through the left boundary. The QoIs are related to the mass fraction \(c\), a function of \(\phi\), \(\mathbf{K}\), and the recharge. We model the uncertain \(\phi\) using a random field and assume \(\mathbf{K}\) to be isotropic and dependent on \(\phi\):
\[\mathbf{K}=K\mathbf{I},\qquad K=K(\phi)\in\mathbb{R}. \tag{10}\]
The distribution of \(\phi(\mathbf{x},\boldsymbol{\xi})\), \(\mathbf{x}\in\mathcal{D}\), is determined by a set of stochastic parameters \(\boldsymbol{\xi}=(\xi_{1},\ldots,\xi_{M},...)\). Each component \(\xi_{i}\) is a random variable depending on a random event \(\omega\). For concision, we skip \(\omega\) and write \(\boldsymbol{\xi}:=\boldsymbol{\xi}(\omega)\).
The dependence in Eq. (10) is specific for every material. We refer to [54, 55, 16] for a detailed discussion. In the proposed model, we use a Kozeny-Carman-like dependence
\[K(\phi)=\kappa_{KC}\cdot\frac{\phi^{3}}{1-\phi^{2}}, \tag{11}\]
where the scaling factor \(\kappa_{KC}\) is chosen to satisfy the equality \(K(\mathbb{E}\left[\phi\right])\mathbf{I}=E(\mathbf{K})\), resembling the parameters of the standard Henry problem. The inflow flux is kept constant across the left boundary but depends on the stochastic variable \(q_{\mathrm{in}}\). We also assume that the inflow flux is independent of \(\phi\) and \(\mathbf{K}\).
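A small sketch of this calibration, written with the dependence exactly as in Eq. (11) and the mean values of Table 1, is given below; the perturbed porosity sample at the end is purely illustrative, since the actual random-field model for \(\phi\) is specified separately.

```python
import numpy as np

phi_mean = 0.35           # E[phi] from Table 1
K_mean   = 1.020408e-9    # E[K] in m^2 from Table 1

def kozeny_carman_like(phi, kappa):
    # dependence written exactly as in Eq. (11)
    return kappa * phi**3 / (1.0 - phi**2)

# calibrate kappa_KC so that K(E[phi]) equals the mean permeability
kappa_KC = K_mean / (phi_mean**3 / (1.0 - phi_mean**2))

# illustrative only: evaluate K for a perturbed porosity sample
phi_sample = np.clip(np.random.normal(phi_mean, 0.05, size=1000), 0.05, 0.95)
K_sample   = kozeny_carman_like(phi_sample, kappa_KC)
```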
### Numerical methods for the deterministic problem
The system (1-2) is numerically solved in the domain \(\mathcal{D}\times[0,T]\), where the symbol \(\times\) denotes the Cartesian product. After the discretization of \(\mathcal{D}\) using quadrilaterals of size \(h\), we obtain \(\mathcal{D}_{h}\). Equations (1-2) are discretized using a vertex-centered finite-volume scheme with a “consistent velocity” for the approximation of Darcy’s law (3), as presented in [23, 24, 25]. The degrees of freedom associated with \(\mathcal{D}_{h}\) are denoted by \(n\). There are two degrees of freedom per grid vertex: one for the mass fraction and another for the pressure. We use the implicit Euler method with a fixed time step \(\tau\) for time discretization. The number of the computed time steps is \(r=T/\tau\).

Figure 1: (left) Computational domain \(\mathcal{D}:=[0,2]\times[-1,0]\). (Right) One realization of the mass fraction \(c(t,\mathbf{x})\) and the streamlines of the velocity field \(\mathbf{q}\) for the undisturbed Henry problem at \(t=6016\) s.
We use partial upwind for the convective terms (cf. [23]). Therefore, the discretization error is of the second order w.r.t. the spatial mesh size \(h\). However, the diffusion in (2) is minimal compared with the velocity. For the grids in the numerical experiments, the observed reduction of the discretization error after grid refinement corresponds to the first order. Thus, we assume the first-order dependence of the discretization error w.r.t. \(h\), which is consistent with the numerical experiments. Furthermore, the Euler method provides the first-order dependence of the discretization error w.r.t. \(\tau\).
The implicit time-stepping scheme provides unconditional stability but requires the solution to an extensive nonlinear algebraic system of the discretized equations with \(n\) unknowns in every time step. The Newton method is used to solve this system. Linear systems inside the Newton iteration are solved using the BiCGStab method (cf. [4]) preconditioned with the geometric multigrid method (V-cycle, cf. [32]). In the multigrid cycle, the ILU\({}_{\beta}\)-smoothers [33] and Gaussian elimination are used as the coarse grid solver.
To construct the spatial grid hierarchy \(\mathcal{D}_{0},\mathcal{D}_{1},\ldots,\mathcal{D}_{L}\), we start with a coarse grid consisting of \(512\) grid elements (quadrilaterals) and \(n_{0}=1122\) degrees of freedom. This grid is regularly refined to obtain all other grid levels. After every spatial grid refinement, the number of grid elements is multiplied by a factor of four. Consequently, the number of degrees of freedom is increased by a factor of four (i.e., \(n_{\ell}\approx n_{0}\cdot 2^{d\ell}\), \(d=2\); see Table 2). This hierarchy is used in the geometric multigrid preconditioner and MLMC method. We also construct the temporal grid hierarchy \(\mathcal{T}_{0},\mathcal{T}_{1},\ldots,\mathcal{T}_{L}\). The time step on each temporal grid is denoted by \(\tau_{\ell}\) with \(\tau_{\ell+1}=\frac{1}{2}\tau_{\ell}\). The number of time steps on the \(\ell\)th grid (level) is \(r_{\ell+1}=2r_{\ell}\), i.e., \(r_{\ell}=r_{0}2^{\ell}\), where \(r_{0}\) is the number of grid points on \(\mathcal{T}_{0}\). On the \(\ell\)th level, the MLMC uses the grid \(\mathcal{D}_{\ell}\times\mathcal{T}_{\ell}\). Up to six spatial and time grids were used in the numerical experiments.
In the context of this work, it is critical to estimate the numerical complexity of the deterministic solver w.r.t. the grid level \(\ell\). The most time-consuming part of the simulation is the solution of the discretized nonlinear system. Typically, it is challenging to predict the number of Newton iterations in every time step, but in the numerical experiments, two iterations were sufficient to achieve the prescribed accuracy. Accordingly, the linear solver was called at most two times per time step. Furthermore, the convergence rate of the geometric multigrid method does not depend on the mesh size (cf. [33]). Hence, the computation complexity of one time step is \(\mathcal{O}(n_{\ell})\), where \(n_{\ell}\) is the number of the degrees of freedom on the grid level \(\ell\). Therefore, the overall numerical cost of the computation of one scenario on grid level \(\ell\) for \(r_{\ell}\) time steps is
\[s_{\ell}=\mathcal{O}(n_{\ell}r_{\ell}),\quad s_{\ell}\propto s_{\ell-1}2^{(d +1)},\quad d=2. \tag{12}\]
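For illustration, the following minimal sketch tabulates how \(n_{\ell}\), \(r_{\ell}\), and the relative cost \(n_{\ell}r_{\ell}/(n_{0}r_{0})\) grow with the level, assuming the coarse-grid values \(n_{0}=1122\) and \(r_{0}=188\) from Table 2 and exact refinement factors; the relative cost grows by the factor \(2^{d+1}=8\) per level.

```python
# Minimal sketch of the cost scaling (12); n0 and r0 are the coarse-grid
# values from Table 2, d = 2 is the spatial dimension.
n0, r0, d = 1122, 188, 2

for level in range(6):
    n_l = n0 * 4**level   # degrees of freedom: roughly a factor of four per refinement
    r_l = r0 * 2**level   # time steps: a factor of two per refinement
    cost = n_l * r_l      # s_l = O(n_l * r_l)
    print(level, n_l, r_l, cost // (n0 * r0))
```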
## 3 Multilevel Monte Carlo
Various numerical methods can be used to quantify uncertainty, and every method has pros and cons. For example, the classical MC method converges slowly and requires numerous samples. To reduce the total computing cost, we apply the MLMC method, which is a natural choice because the deterministic solver already uses a multigrid hierarchy (see Section 2.3). The MLMC method efficiently combines samples from various levels. Below, we recall the main idea of the MLMC method; a more in-depth description of these techniques is found in [13, 14, 27, 28, 34, 71, 44].
We let \(\boldsymbol{\xi}(\omega)\) and \(g(\boldsymbol{\xi})=g(\boldsymbol{\xi}(\omega))\) represent a vector of random variables and the QoI, respectively, where \(\omega\) is a random event. The MLMC method aims to approximate the expected value \(\mathbb{E}\left[g\right]\) with an optimal computational cost. In this work, \(g\) could be \(c(t,\mathbf{x},\boldsymbol{\xi})\) in the whole domain or at a point \((t,\mathbf{x})\) or an integral over a subdomain. The MLMC method constructs a telescoping sum, defined over a sequence of spatial and temporal meshes, \(\ell=0,\ldots,L\), as described next, to achieve this goal. Moreover, \(g\), numerically evaluated on level \(\ell\), is denoted by \(g_{h_{\ell},\tau_{\ell},\ell}\) or, for simplicity, by just \(g_{\ell}\), where \(h_{\ell}\) and \(\tau_{\ell}\) are the discretization steps in space and time on level \(\ell\). Further, we assume that \(\mathbb{E}\left[g_{h,\tau}\right]\rightarrow\mathbb{E}\left[g\right]\) as \(h\to 0\) and \(\tau\to 0\).
Furthermore, \(s_{0}\) is the computing cost of evaluating one realization of \(g_{0}\) (the most expensive one among all realizations). Similarly, \(s_{\ell}\) denotes the computing cost of evaluating \(g_{\ell}-g_{\ell-1}\); for simplicity, we assume that this cost is almost the same as that of evaluating \(g_{\ell}\) alone. The number of Newton iterations varies; thus, the cost of computing a sample of \(g_{\ell}-g_{\ell-1}\) may fluctuate between realizations.
For a better understanding, we consider a two-level MLMC (cf. [28]) and estimate the optimal number of needed samples on both levels. The two-level MLMC has only two meshes: a coarse one and a fine one. The QoI \(\mathbb{E}\left[g\right]\) can be approximated on the fine mesh by \(\mathbb{E}\left[g_{1}\right]\) and on the coarse mesh by \(\mathbb{E}\left[g_{0}\right]\). Furthermore,
\[\mathbb{E}\left[g_{1}\right]=\mathbb{E}\left[g_{0}\right]+\mathbb{E}\left[g_{1 }-g_{0}\right]\approx m_{0}^{-1}\sum_{i=1}^{m_{0}}g_{0}^{(i)}+m_{1}^{-1}\sum_{j= 1}^{m_{1}}(g_{1}^{(j)}-g_{0}^{(j)}), \tag{13}\]
where \(g_{1}^{(j)}-g_{0}^{(j)}:=g_{1}(\boldsymbol{\xi}_{j})-g_{0}(\boldsymbol{\xi}_{j})\), \(\boldsymbol{\xi}_{j}\) is a random vector, and \(m_{0}\) and \(m_{1}\) represent the numbers of quadrature points (numbers of samples/realizations) on the coarse and fine meshes, respectively. The total computational
cost of evaluating (13) is \(m_{0}s_{0}+m_{1}s_{1}\). The variances of \(g_{0}\) and \(g_{1}-g_{0}\) are denoted by \(V_{0}\) and \(V_{1}\), and the total variance is \(V_{0}/m_{0}+V_{1}/m_{1}\), assuming that \(g_{0}^{(i)}\) and \(g_{1}^{(j)}-g_{0}^{(j)}\) use independent samples. Solving an auxiliary minimization problem shows that, for a fixed total cost, the variance is minimal if \(m_{1}=m_{0}\cdot\frac{\sqrt{V_{1}/s_{1}}}{\sqrt{V_{0}/s_{0}}}\). Thus, with estimates of the variances and \(m_{0}\), we can estimate \(m_{1}\).
The idea presented above can be extended to a case with multiple levels. Thus, we can find (quasi-) optimal numbers of samples \(m_{0},m_{1},\ldots,m_{L}\). The MLMC method calculates \(\mathbb{E}\left[g_{L}\right]\approx\mathbb{E}\left[g\right]\) using the following telescopic sum:
\[\mathbb{E}\left[g_{L}\right] =\mathbb{E}\left[g_{0}\right]+\sum_{\ell=1}^{L}\mathbb{E}\left[g _{\ell}-g_{\ell-1}\right] \tag{14}\] \[\approx m_{0}^{-1}\sum_{i=1}^{m_{0}}g_{0}^{(0,i)}+\sum_{\ell=1} ^{L}\left(m_{\ell}^{-1}\sum_{i=1}^{m_{\ell}}(g_{\ell}^{(\ell,i)}-g_{\ell-1}^{( \ell,i)})\right). \tag{15}\]
In the above equation, level \(\ell\) in the superscript \((\ell,i)\) indicates that independent samples are used at each correction level. As \(\ell\) increases, the variance of \(g_{\ell}-g_{\ell-1}\) decreases. Thus, the total computational cost can be reduced by taking fewer samples on finer meshes.
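A minimal, self-contained sketch of the estimator (15) is given below. The function `g_level` is a purely illustrative stand-in for one deterministic solver run (its bias decays like \(2^{-\ell}\)); within each correction level the same random input is used for the fine and the coarse evaluation, while different levels use independent samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_level(xi, level):
    # Illustrative stand-in for the level-l approximation g_l of g(xi) = exp(xi):
    # the "discretization" bias decays like h_l = 2**(-level).
    h = 2.0 ** (-level)
    return np.exp(xi) * (1.0 + 0.1 * h)

def mlmc_estimate(samples_per_level):
    # Telescoping sum (15): independent samples on every correction level,
    # but the same xi for the fine and coarse evaluations within a level.
    estimate = 0.0
    for level, m in enumerate(samples_per_level):
        xi = rng.uniform(-1.0, 1.0, size=m)
        fine = g_level(xi, level)
        coarse = g_level(xi, level - 1) if level > 0 else 0.0
        estimate += np.mean(fine - coarse)
    return estimate

# Few samples suffice on the fine levels; the exact value of E[g] is sinh(1) ~ 1.1752.
print(mlmc_estimate([4000, 1000, 250, 60, 15]))
```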
We recall that \(h_{\ell}=h_{0}\cdot 2^{-2\ell}\) and \(\tau_{\ell}=\tau_{0}\cdot 2^{-\ell}\). We assume that the average cost of generating one sample of \(g_{\ell}\) (the cost of one deterministic simulation for one random realization) is
\[s_{\ell}=\mathcal{O}(n_{\ell}r_{\ell})=\mathcal{O}(h_{\ell}^{-1}\tau_{\ell}^{ -1})=\mathcal{O}\left(\frac{1}{h_{0}\tau_{0}}2^{2\ell}2^{\ell}\right)=\mathcal{ O}\left(\frac{1}{h_{0}\tau_{0}}2^{3\ell}\right)=\mathcal{O}\left(\frac{1}{h_{0} \tau_{0}}2^{(d+1)\ell\gamma}\right), \tag{16}\]
where \(d=2\) is the spatial dimension, and \(\gamma=1\) is determined by the computational complexity of the deterministic solver (ug4).
We let \(V_{\ell}\) be the variance of one sample of \(g_{\ell}-g_{\ell-1}\). Then, the total cost and variance of the multilevel estimator in Eq. (14) are \(\sum_{\ell=0}^{L}m_{\ell}s_{\ell}\) and \(\sum_{\ell=0}^{L}V_{\ell}/m_{\ell}\), respectively. For a fixed variance, the cost is minimized by choosing \(m_{\ell}\) to minimize the following functional for some value of the Lagrange multiplier \(\mu^{2}\):
\[F(m_{0},\ldots,m_{L}):=\sum_{\ell=0}^{L}m_{\ell}s_{\ell}+\mu^{2}\frac{V_{\ell} }{m_{\ell}}. \tag{17}\]
To determine \(m_{\ell}\), we take the derivatives w.r.t. \(m_{\ell}\) and set them equal to zero:
\[\frac{\partial F(m_{0},\ldots,m_{L})}{\partial m_{\ell}}:=s_{\ell}-\mu^{2} \frac{V_{\ell}}{m_{\ell}^{2}}=0.\]
After solving the obtained equations, we obtain
\[m_{\ell}^{2}=\mu^{2}\frac{V_{\ell}}{s_{\ell}}\quad\text{and}\quad m_{\ell}= \mu\sqrt{\frac{V_{\ell}}{s_{\ell}}}.\]
To achieve an overall variance of \(\varepsilon^{2}\), that is,
\[\sum_{\ell=0}^{L}V_{\ell}/m_{\ell}=\varepsilon^{2},\]
we substitute \(m_{\ell}\) with the computed \(m_{\ell}=\mu\sqrt{\frac{V_{\ell}}{s_{\ell}}}\), and obtain
\[\sum_{\ell=0}^{L}\frac{V_{\ell}}{\mu\sqrt{\frac{V_{\ell}}{s_{\ell}}}}= \varepsilon^{2}.\]
From the last equation, we obtain
\[\mu=\varepsilon^{-2}\sum_{\ell=0}^{L}\sqrt{V_{\ell}s_{\ell}},\quad\text{and}\]
\[m_{\ell}=\varepsilon^{-2}\sqrt{\frac{V_{\ell}}{s_{\ell}}}\sum_{i=0}^{L}\sqrt{ V_{i}s_{i}}. \tag{18}\]
The total computational cost is \(S:=\varepsilon^{-2}\left(\sum_{\ell=0}^{L}\sqrt{V_{\ell}s_{\ell}}\right)^{2}\) (for further analysis of this sum, see [28], p.4).
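The following sketch evaluates the allocation (18) and the resulting total cost, using as input the per-level variances \(V_{\ell}\) and average costs \(s_{\ell}\) listed in Table 3 below and rounding up to at least one sample per level; it reproduces the sample numbers reported in Table 3.

```python
import math

def optimal_samples(V, s, eps):
    # Eq. (18): m_l = eps**-2 * sqrt(V_l / s_l) * sum_i sqrt(V_i * s_i),
    # rounded up to at least one sample per level.
    factor = sum(math.sqrt(v * c) for v, c in zip(V, s)) / eps**2
    return [max(1, math.ceil(factor * math.sqrt(v / c))) for v, c in zip(V, s)]

# Per-level variances V_l and average costs s_l as in Table 3 below.
V = [1.4e-5, 0.2e-5, 0.5e-6, 0.1e-6, 0.5e-7, 1e-7]
s = [1.156, 4.113, 20.382, 139.0, 993.0, 8053.0]

for eps2 in [5e-6, 1e-6, 5e-7, 1e-7]:
    m = optimal_samples(V, s, math.sqrt(eps2))
    total_cost = sum(mi * si for mi, si in zip(m, s))
    print(eps2, m, round(total_cost))
```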
**Definition 1**: _We let_
\[\mathbb{E}\left[Y_{\ell}\right]:=\begin{cases}\mathbb{E}\left[g_{0} \right],&\ell=0\\ \mathbb{E}\left[g_{\ell}-g_{\ell-1}\right],&\ell>0\end{cases}. \tag{19}\]
_In addition, \(Y:=\sum_{\ell=0}^{L}Y_{\ell}\) denotes a multilevel estimator of \(\mathbb{E}\left[g\right]\) based on \(L+1\) levels and \(m_{\ell}\) independent samples on level \(\ell\), where \(\ell=0,\ldots,L\). Moreover, \(Y_{\ell}=m_{\ell}^{-1}\sum_{i=1}^{m_{\ell}}(g_{\ell}^{(\ell,i)}-g_{\ell-1}^{(\ell,i)})\), where \(g_{-1}\equiv 0\)._

_The standard theory indicates that \(\mathbb{E}\left[Y\right]=\mathbb{E}\left[g_{L}\right]\), \(\mathbb{V}\left[Y\right]=\sum_{\ell=0}^{L}m_{\ell}^{-1}V_{\ell}\), and \(V_{\ell}\equiv\mathbb{V}\left[g_{\ell}-g_{\ell-1}\right]\)._
The mean squared error (MSE) is used to measure the quality of the multilevel estimator:
\[\text{MSE}:=\mathbb{E}\left[\left(Y-\mathbb{E}\left[g\right] \right)^{2}\right]=\mathbb{V}\left[Y\right]+\left(\mathbb{E}\left[Y\right]- \mathbb{E}\left[g\right]\right)^{2}. \tag{20}\]
To obtain an MSE smaller than \(\varepsilon^{2}\), we ensure that both \(\mathbb{V}\left[Y\right]\) and \(\left(\mathbb{E}\left[Y\right]-\mathbb{E}\left[g\right]\right)^{2}=\left( \mathbb{E}\left[g_{L}-g\right]\right)^{2}\) are smaller than \(\varepsilon^{2}/2\). Combining this idea with a geometric sequence of levels in which the cost increases exponentially with the level while the weak error \(\mathbb{E}\left[g_{L}-g\right]\) and multilevel correction variance \(V_{\ell}\) decrease exponentially leads to the following theorem (cf. Theorem 1, p. 6 in [28]):
**Theorem 2**: _We let \(d\) denote the problem dimension. Suppose positive constants \(\alpha,\beta,\gamma>0\) exist such that \(\alpha\geq\frac{1}{2}\text{min}(\beta,\gamma d)\), and_
\[\left|\mathbb{E}\left[g_{\ell}-g\right]\right|\leq c_{1}2^{-\alpha\ell}, \tag{21a}\]
\[V_{\ell}\leq c_{2}2^{-\beta\ell}, \tag{21b}\]
\[s_{\ell}\leq c_{3}2^{d\gamma\ell}. \tag{21c}\]
_Then, for any accuracy \(\varepsilon<e^{-1}\), a constant \(c_{4}>0\) and a sequence of realizations \(\{m_{\ell}\}_{\ell=0}^{L}\) exist, such that_
\[\text{MSE}:=\mathbb{E}\left[\left(Y-\mathbb{E}\left[g\right] \right)^{2}\right]<\varepsilon^{2},\]
_and the computational cost is_
\[S=\begin{cases}\mathcal{O}(\varepsilon^{-2}),&\beta>d\gamma\\ \mathcal{O}\left(\varepsilon^{-2}\left(\log\varepsilon\right)^{2}\right),&\beta=d\gamma\\ \mathcal{O}\left(\varepsilon^{-\left(2+\frac{d\gamma-\beta}{\alpha}\right)}\right),&\beta<d\gamma.\end{cases} \tag{22}\]
This theorem (see also [37, 36, 11, 13, 27]) indicates that, even in the worst-case scenario, the MLMC algorithm has a lower computational cost than that of the traditional (single-level) MC method, which scales as \(\mathcal{O}(\varepsilon^{-2-d\gamma/\alpha})\). Furthermore, in the best-case scenario presented above, the computational cost of the MLMC algorithm scales as \(\mathcal{O}\left(\varepsilon^{-2}\right)\).
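The case distinction in (22) can be made explicit as follows; the parameter values in the example call are purely illustrative and not estimates from this study.

```python
def mlmc_cost_regime(alpha, beta, gamma, d):
    # Asymptotic cost in eps according to Eq. (22), together with the
    # single-level MC cost O(eps**-(2 + d*gamma/alpha)) for comparison.
    if beta > d * gamma:
        mlmc = "O(eps^-2)"
    elif beta == d * gamma:
        mlmc = "O(eps^-2 (log eps)^2)"
    else:
        mlmc = f"O(eps^-{2 + (d * gamma - beta) / alpha:g})"
    mc = f"O(eps^-{2 + d * gamma / alpha:g})"
    return mlmc, mc

# Purely illustrative parameters: first-order weak convergence (alpha = 1),
# second-order variance decay (beta = 2), gamma = 1, and d = 2.
print(mlmc_cost_regime(alpha=1.0, beta=2.0, gamma=1.0, d=2))
```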
Using preliminary tests, we can estimate the convergence rates \(\alpha\) for the mean (the so-called weak convergence) and \(\beta\) for the variance (the so-called strong convergence). The rate \(\alpha\) is closely tied to the order of the discretization error (see Section 2.3), which equals \(1\). Precise estimates of \(\alpha\) and \(\beta\) are crucial for distributing the computational effort optimally.
## 4 Numerical Experiments
The goal is to reduce the total computational cost of stochastic simulations. We use the MLMC method to compute the mean value of various QoIs, such as \(c\) in the whole domain, \(c\) at a point, or an integral value (we call it the freshwater integral):
\[Q_{FW}(t,\omega):=\int_{\mathbf{x}\in\mathcal{D}}I(c(t,\mathbf{x },\omega)\leq 0.012178)d\mathbf{x}, \tag{23}\]
where \(I(\cdot)\) is the indicator function of the subdomain \(\left\{\mathbf{x}:\ c(t,\mathbf{x},\omega)\leq 0.012178\right\}\), so that \(Q_{FW}\) measures the amount of fresh water at time \(t\). Each simulation may contain up to \(n=0.5\cdot 10^{6}\) spatial mesh points and a few thousand time steps (\(r=6016\) on the finest mesh).
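A minimal sketch of the discrete counterpart of (23) on a uniform cell grid is shown below; the mass-fraction field in the example is synthetic and only serves to illustrate the indicator integral.

```python
import numpy as np

def freshwater_integral(c, dx, dy, threshold=0.012178):
    # Discrete version of (23): area of the subdomain where c <= threshold.
    return float(np.sum(c <= threshold) * dx * dy)

# Synthetic field on [0, 2] x [-1, 0]: fresh water in the left half (c = 0),
# salt water towards the right boundary.
nx, ny = 200, 100
dx, dy = 2.0 / nx, 1.0 / ny
x = np.linspace(0.0, 2.0, nx)
c = np.tile(np.clip(x - 1.0, 0.0, 1.0), (ny, 1))
print(freshwater_integral(c, dx, dy))   # roughly 1.0, i.e. half of the domain
```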
**Software and parallelization:** The computations presented in this work were performed using the ug4 simulation software toolbox ([https://github.com/ug4/ughub.wiki.git](https://github.com/ug4/ughub.wiki.git)) [60, 73]. This software has been applied for subsurface flow simulations of real-world aquifers (cf. [65]). The toolbox was parallelized using MPI, and the presented results were obtained on the Shaheen II cluster provided by the King Abdullah University of Science and Technology. Every sample was computed on 32 cores of a separate cluster node. Each simulation
(scenario) was localized to one node to reduce the communication time between nodes. All scenarios were concurrently computed on different nodes. A similar approach was used in [41; 42]. Simulations were performed on different meshes; thus, the computation time of each simulation varied over a wide range (see Table 2).
**Porosity and recharge:** We assume two horizontal layers: \(y\in(-0.75,0]\) (the upper layer) and \(y\in[-1,-0.75]\) (the lower layer). The porosity inside each layer is uncertain and is modeled as in Eq. (24):
\[\phi(\mathbf{x},\boldsymbol{\xi}) =0.35\cdot(1+0.15(\xi_{2}\cos(\pi x/2)+\xi_{2}\sin(2\pi y)+\xi_{1 }\cos(2\pi x)))\cdot C_{0}(\xi_{1}), \tag{24}\] \[\text{where}\;\;C_{0}(\xi_{1}) =\left\{\begin{array}{ll}1+0.2\xi_{1}&\text{if}\;y<-0.75\\ 1-0.2\xi_{1}&\text{if}\;y\geq-0.75,\end{array}\right. \tag{25}\]
Additionally, the recharge flux is also uncertain and is equal to
\[\hat{q}_{in}=-6.6\cdot 10^{-2}(1+0.5\cdot\xi_{3}), \tag{26}\]
where \(\xi_{1}\), \(\xi_{2}\), and \(\xi_{3}\) are sampled independently and uniformly in \([-1,1]\). Figure 2 depicts a random realization of the porosity random field \(\phi(\boldsymbol{\xi})\) (left) and the corresponding solution \(c(t,\mathbf{x},\boldsymbol{\xi})=c(t,\phi(\boldsymbol{\xi}))\) at \(t=T\) (right). Additionally, four isolines \(\{\mathbf{x}:\;|c(t,\phi(\boldsymbol{\xi}))-\overline{c}(t)|=0.1\cdot i\}\), \(i=1,2,3,4\), are presented on the right. The isolines demonstrate the absolute value of the difference between the computed realization \(c(t,\phi(\boldsymbol{\xi}))\) and the expected value \(\overline{c}(t)\). These computations were performed for \(\boldsymbol{\xi}=\boldsymbol{\xi}^{*}=(-0.5898,-0.7257,-0.9616)\) and \(t=T=6016\) s.
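The following sketch draws one realization of the random inputs (24)-(26); the grid resolution and the random seed are illustrative.

```python
import numpy as np

def porosity(x, y, xi):
    # Layered porosity field of Eqs. (24)-(25); xi = (xi_1, xi_2, xi_3).
    xi1, xi2, _ = xi
    c0 = np.where(y < -0.75, 1.0 + 0.2 * xi1, 1.0 - 0.2 * xi1)
    trend = (xi2 * np.cos(np.pi * x / 2) + xi2 * np.sin(2 * np.pi * y)
             + xi1 * np.cos(2 * np.pi * x))
    return 0.35 * (1.0 + 0.15 * trend) * c0

def recharge(xi):
    # Uncertain recharge flux of Eq. (26).
    return -6.6e-2 * (1.0 + 0.5 * xi[2])

rng = np.random.default_rng(42)
xi = rng.uniform(-1.0, 1.0, size=3)        # xi_1, xi_2, xi_3 ~ U(-1, 1), independent
x, y = np.meshgrid(np.linspace(0.0, 2.0, 129), np.linspace(-1.0, 0.0, 65))
phi = porosity(x, y, xi)
print(phi.min(), phi.max(), recharge(xi))
```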
The mean and variance of the mass fraction are provided in Fig. 3 on the left and right, respectively. The expectation takes values from \([0,1]\), and the variance range is \([0,0.05]\). The areas with high variance (dark red) indicate regions with high variability/uncertainty. Such regions may need additional attention from specialists (e.g., placement of additional sensors). Additionally, the right image displays five contour lines \(\{\mathbf{x}:\;\text{Var}[c](t,\mathbf{x})=0.01\cdot i\}\), \(i=1..5\), \(t=T=6016\).
We observed that the variability (uncertainty) of the mass fraction may differ from one grid point to another. At some points (dark blue regions), the solution does not change at all; at other points, the variability is very low (white-yellow regions) or high (dark red regions). In regions with high uncertainty, refining the mesh and applying the MLMC method make sense.
Before we run the MLMC method, we first examine the solution \(c(t,\mathbf{x})\) at 12 preselected points (see Eq. (9)). Figure 4 includes 12 subfigures. Each subfigure presents 600 QMC realizations of \(c(t,\mathbf{x})\) and five quantiles
depicted by dotted lines; from bottom to top, these are the \(0.025\), \(0.25\), \(0.50\), \(0.75\), and \(0.975\) quantiles. We observe that \(c\) at the final time \(t=T\) varies considerably.
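The quantile curves of Fig. 4 are obtained directly from the stored realizations; the sketch below does this for synthetic data of the same shape (600 realizations of 48 time steps).

```python
import numpy as np

# Synthetic stand-in for 600 stored realizations of c(t, x) at one point,
# arranged as (number of realizations, number of time steps).
rng = np.random.default_rng(1)
realizations = np.clip(np.cumsum(rng.normal(0.02, 0.02, size=(600, 48)), axis=1), 0.0, 1.0)

levels = [0.025, 0.25, 0.50, 0.75, 0.975]
quantile_curves = np.quantile(realizations, levels, axis=0)  # shape (5, 48)
print(quantile_curves[:, -1])  # the five quantiles at the final time step
```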
**Example.** In Fig. 5, we demonstrate the probability density function (pdf) of \(t^{*}(\omega)=\min_{t}\{t:\ Q_{FW}(t,\omega)<1.2\}\) (left), and the pdf of \(t^{*}(\omega)=\min_{t}\{t:\ Q_{FW}(t,\omega)<1.7\}\) (right). On average, after approximately 29 time steps (on the left) and six time steps (on the right), the volume of the fresh water becomes less than 1.2 and 1.7, respectively. The initial volume of the fresh water was 2.0.
All 600 realizations of \(Q_{FW}(t)\) are depicted in Fig. 6. Time is along the \(x\)-axis, \(t\in[\tau,48\tau]\). Additionally, five quantiles are represented by dotted curves; from bottom to top, these are 0.025, 0.25, 0.50, 0.75, and 0.975.
**Example.** Figure 7 (left) displays the evolution of the pdf of \(c(t,\mathbf{x},\omega)\) at a fixed point \(\mathbf{x}=(1.85,-0.95)\) for \(t=\{3\tau,\ldots,48\tau\}\). From left to right, the farthest left (blue) pdf corresponds to \(t=3\tau\), the second curve from the left (red) corresponds to \(t=4\tau\), and so on. At the beginning, \(t=3\tau\), the mass fraction \(c\) is low, about 0.15 on average. Then, with time, \(c\) increases and, at \(t=T=48\tau\), is approximately equal to 1.
**Example.** The next QoI is the earliest time point when \(c(t,\mathbf{x})\), at fixed \(\mathbf{x}=(1.85,-0.95)\), becomes smaller than the threshold value 0.9 (the maximum is 1.0). Figure 7 (right) presents its pdf. On average, after \(t\approx 10\) time steps, the mass fraction becomes smaller than 0.9, but 40 time steps are needed in some scenarios.
Figure 5: The pdf of the earliest time point when the freshwater integral \(Q_{FW}\) becomes smaller than 1.2 (left) and 1.7 (right). The \(x\)-axis represents time points.
Figure 6: Six hundred realizations of \(Q_{FW}(t)\). The \(x\)-axis represents time \(t=1\tau,\ldots,48\tau\); dotted curves denote five quantiles: 0.025, 0.25, 0.50, 0.75, and 0.975 from the bottom to the top.
Next, we investigate how \(g_{\ell}-g_{\ell-1}\) depends on time and on the level. All panels in Fig. 8 display 100 realizations of the differences between solutions computed on two neighboring meshes for every time point \(t_{i}\), \(i=1\ldots 48\) (along the \(x\)-axis). The top left panel shows the differences between the mass fractions computed on Levels 1 and 0. The other panels show the same, but for Levels 2 and 1, 3 and 2, 4 and 3, and 5 and 4, respectively. The largest value decreases from \(2.5\cdot 10^{-2}\) (top left) to \(5\cdot 10^{-4}\). Considerable variability is observed for \(t\in[3,7]\) and \(t\in[8,25]\). Starting with \(t\approx 30\), the variability between solutions decreases and stabilizes. From these five panels, we can estimate that the maximal amplitude roughly halves from level to level, taking the values 0.015, 0.008, 0.004, 0.0015, and 0.0008. However, it is difficult to make a similar statement about each individual time point \(t\), which complicates the correct estimation of the weak and strong convergence rates and of the optimal number of samples on each mesh level: they are different for each time \(t\) (and for each \(\mathbf{x}\)). For some time points, the solution is smooth and requires only a few levels and a few samples on each level; for other points with substantial dynamics, the numbers of levels and samples are higher.
Figure 7: (left) Evolution of the pdf of \(c(t,\mathbf{x})\) for \(t=\{3\tau,\ldots,48\tau\}\). (right) The pdf of the earliest time point when \(c(t,\mathbf{x})<0.9\) (\(\mathbf{x}=(1.85,-0.95)\) is fixed).
Figure 8: Differences between mass fractions \(c\) computed at the point \((1.60,-0.95)\) on levels a) 1 and 0, b) 2 and 1 (first row), c) 3 and 2, d) 4 and 3 (second row), and e) 5 and 4 (third row) for 100 realizations (\(x\)-axis represents time).
Because \(g_{\ell}-g_{\ell-1}\) is random, we visualize its mean and variance. Figure 9 shows the mean (left) and variance (right) of the differences in the mass fraction \(g_{\ell}-g_{\ell-1}\), \(\ell=1,\ldots,5\). On the left, the amplitude decreases as \(\ell\) increases. A slight exception is the blue line for \(t\approx 9,10,11\) (right); a possible explanation is that the solutions \(g_{0}\) or \(g_{1}\) are insufficiently accurate. The right image shows how the amplitude of the variances decays; this decay is necessary for the MLMC method to be efficient. We also observe a potential issue: the weak and strong convergence rates vary across time points \(t\). Thus, only a suboptimal number of samples \(m_{\ell}\) can be determined for each level.
At the beginning, \(t=0\), the variability is zero; it then increases, changes over a certain time interval, and stabilizes after approximately 45 time steps. From level to level, the variability either stays the same or decreases.
Table 2 contains the average computing times, which are needed to estimate the number of samples \(m_{\ell}\) at each level \(\ell\); its last three columns contain the average, shortest, and longest computing times. The computing time of each simulation varies depending on the number of iterations, which in turn depends on the porosity and permeability. We observed that, after \(\approx 6016\) s, the solution is almost unchanging; thus, we restrict the simulations to \(t\in[0,T]\), where \(T=6016\). For example, if the number of time steps is \(r_{\ell}=188\) (Level 0 in Table 2), then the time step is \(\tau=\frac{T}{r_{\ell}}=\frac{6016}{188}=32\) s.
The time step \(\tau\) depends on the level, ranging from \(\tau=\frac{6016}{188}=32\) s (coarsest mesh) to \(\tau=\frac{6016}{6016}=1\) s (finest mesh). Starting with level \(\ell=2\), the average time increases by a factor of eight. These numerical tests confirm the theory in Eq. (12), stating that the numerical solver is linear w.r.t. \(n_{\ell}\) and \(r_{\ell}\).
| Level \(\ell\) | \(n_{\ell}\) | \(r_{\ell}\) | \(\tau_{\ell}\) (s) | avg. \(s_{\ell}\) | min. \(s_{\ell}\) | max. \(s_{\ell}\) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | 1122 | 188 | 32 | 1.15 | 0.88 | 1.33 |
| 1 | 4290 | 376 | 16 | 4.1 | 3.4 | 4.87 |
| 2 | 16770 | 752 | 8 | 19.6 | 17.6 | 22 |
| 3 | 66306 | 1504 | 4 | 136.0 | 128 | 144 |
| 4 | 263682 | 3008 | 2 | 1004.0 | 891 | 1032 |
| 5 | 1051650 | 6016 | 1 | 8138.0 | 6430 | 8480 |

Table 2: Number of degrees of freedom \(n_{\ell}\), number of time steps \(r_{\ell}\), time step \(\tau_{\ell}\), and the average, minimal, and maximal computing times \(s_{\ell}\) on each level \(\ell\).
Figure 9: (left) Mean and (right) variance of the differences \(g_{\ell}-g_{\ell-1}\) vs. time, computed on various levels at the point \((1.60,-0.95)\).
With these per-level estimates, we can estimate the rates \(\alpha\) and \(\beta\) (Eqs. (21a)-(21b)) of the weak and strong convergence.
The slopes in Fig. 10 can be used to estimate the rates of the weak (left) and strong (right) convergence. The level differences are indicated on the horizontal axis.
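Such a fit can be performed by linear regression in the \(\log_{2}\) scale; the sketch below uses synthetic per-level means and variances with built-in rates \(\alpha=1\) and \(\beta=2\), and the inputs would in practice be the level-wise statistics of \(g_{\ell}-g_{\ell-1}\).

```python
import numpy as np

def estimate_rates(mean_diff, var_diff):
    # Fit |E[g_l - g_{l-1}]| ~ 2**(-alpha*l) and V_l ~ 2**(-beta*l),
    # cf. (21a)-(21b), by linear regression in log2 scale over l = 1, ..., L.
    levels = np.arange(1, len(mean_diff) + 1)
    alpha = -np.polyfit(levels, np.log2(np.abs(mean_diff)), 1)[0]
    beta = -np.polyfit(levels, np.log2(var_diff), 1)[0]
    return alpha, beta

# Synthetic per-level data with built-in rates alpha = 1 and beta = 2.
mean_diff = 0.01 * 2.0 ** (-1.0 * np.arange(1, 6))
var_diff = 1e-5 * 2.0 ** (-2.0 * np.arange(1, 6))
print(estimate_rates(mean_diff, var_diff))   # approximately (1.0, 2.0)
```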
We use computed variances \(V_{\ell}\) and computing times (work) \(s_{\ell}\) from Table 2 to estimate the optimal number of samples \(m_{\ell}\) and compute the telescopic sum from Eq. (15) to approximate the expectation.
Table 3 lists \(m_{\ell}\) for a given total variance \(\varepsilon^{2}\):
After the telescopic sum is computed, we can compare the results with the QMC results. Figure 11 depicts the decay of the absolute (left) and relative (right) errors vs. levels along the \(x\)-axes. The 'true' solution was computed using the QMC method on the finest mesh level \(L=5\).
Figure 11: Decay of the absolute (left) and relative (right) errors between the mean values computed on a fine mesh via QMC and on a hierarchy of meshes via MLMC at the fixed point \((t,x,y)=(12,1.60,-0.95)\). \(x\)-axis contains the mesh levels.
Figure 10: Weak (left) and strong (right) convergences computed for Levels 1 and 0, 2 and 1, 3 and 2, 4 and 3, and 5 and 4 (horizontal axis) at the fixed point \((t,x,y)=(14,1.60,-0.95)\).
| Level \(\ell\) | 0 | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- | --- |
| \(s_{\ell}\) | 1.156 | 4.113 | 20.382 | 139.0 | 993.0 | 8053.0 |
| \(V_{\ell}\) | 1.4e-5 | 0.2e-5 | 0.5e-6 | 0.1e-6 | 0.5e-7 | 1e-7 |
| \(m_{\ell}\) (\(\epsilon^{2}=\)5e-6) | 35 | 7 | 2 | 1 | 1 | 1 |
| \(m_{\ell}\) (\(\epsilon^{2}=\)1e-6) | 172 | 35 | 8 | 2 | 1 | 1 |
| \(m_{\ell}\) (\(\epsilon^{2}=\)5e-7) | 343 | 69 | 16 | 3 | 1 | 1 |
| \(m_{\ell}\) (\(\epsilon^{2}=\)1e-7) | 1714 | 344 | 78 | 14 | 4 | 2 |

Table 3: Per-level average costs \(s_{\ell}\) and variances \(V_{\ell}\), and the number of samples \(m_{\ell}\) computed using Eq. (18) as a function of the total variance \(\epsilon^{2}\).
## 5 Conclusion
We investigated the applicability and efficiency of the MLMC approach for the Henry-like problem with uncertain porosity, permeability, and recharge. These uncertain parameters were modeled by random fields depending on three independent random variables. The numerical solution for each random realization was obtained using the well-known ug4 parallel multigrid solver. The number of required random samples on each level was estimated from the decay of the variances and the computational cost per level; these estimates follow from minimizing the functional in Eq. (17).
We also computed the expected value and variance of the mass fraction in the whole domain, the evolution of the pdfs, the solutions at a few preselected points \((t,\mathbf{x})\), and the time evolution of the freshwater integral value. We have found that some QoIs require only 2-3 of the coarsest mesh levels, and samples from finer meshes would not significantly improve the result. Note that a different type of porosity in Eq. (24) may lead to a different conclusion.
The results show that the MLMC method is faster than the QMC method at the finest mesh. Thus, sampling at different mesh levels makes sense and helps to reduce the overall computational cost.
**Limitations.** 1. It may happen that the QoIs computed on different grid levels are the same (for the given random input parameters). In this case the standard (Q)MC on a coarse mesh will be sufficient. 2. The time dependence is challenging. The optimal number of samples depends on the point \((t,\mathbf{x})\) and may be small for some points and large for others. 3. Twenty-four hours may not be sufficient to compute the solution at the sixth mesh level.
**Future work.** Our model of porosity in Eq. (24) is quite simple. It would be beneficial to consider a more complicated, multiscale, or more realistic model with more random variables. A more advanced version of MLMC may give better estimates of the number of levels \(L\) and of the number of samples \(m_{\ell}\) on each level. Another promising direction is data assimilation and the identification of unknown parameters [40, 43, 48, 62]; experimental data and measurements of porosity, permeability, velocity, or mass fraction could be used to reduce the uncertainties.
## Acknowledgments
We thank the KAUST HPC support team for assistance with Shaheen II. This work was supported by the Alexander von Humboldt foundation.
|
2310.08926
|
Singular integrals in uniformly convex spaces
|
We consider the action of finitely truncated singular integral operators on
functions taking values in a Banach space. Such operators are bounded for any
Banach space, but we show a quantitative improvement over the trivial bound in
any space with an equivalent uniformly convex norm. This answers a question
asked by Naor and the author, who previously proved the result in the important
special case of the finite Hilbert transforms.
The proof, which splits the operator into a cancellative part and two
paraproducts, follows the broad outline of similar results for genuinely
singular (non-truncated) operators in the narrower class of UMD spaces. Thus we
revisit and survey the recent techniques behind such results, but our precise
setting, the main theorem, and some aspects of its proof, are new.
Curiously, it turns out that the paraproducts admit somewhat better bounds
than the full operator. In a large class of spaces other than UMD, they remain
bounded even without the finite truncations.
|
Tuomas Hytönen
|
2023-10-13T07:50:05Z
|
http://arxiv.org/abs/2310.08926v1
|
# Singular integrals in uniformly convex spaces
###### Abstract.
We consider the action of finitely truncated singular integral operators on functions taking values in a Banach space. Such operators are bounded for any Banach space, but we show a quantitative improvement over the trivial bound in any space with an equivalent uniformly convex norm. This answers a question asked by Naor and the author, who previously proved the result in the important special case of the finite Hilbert transforms.
The proof, which splits the operator into a cancellative part and two paraproducts, follows the broad outline of similar results for genuinely singular (non-truncated) operators in the narrower class of UMD spaces. Thus we revisit and survey the recent techniques behind such results, but our precise setting, the main theorem, and some aspects of its proof, are new.
Curiously, it turns out that the paraproducts admit somewhat better bounds than the full operator. In a large class of spaces other than UMD, they remain bounded even without the finite truncations.
Key words and phrases: Singular integral, uniformly convex space, martingale type.

2020 Mathematics Subject Classification: Primary 46E40; Secondary 42B20, 60G46.

The author was supported by the Academy of Finland through project No. 346314 (Finnish Centre of Excellence in Randomness and Structures "FiRST").
of \(\mathcal{X}\). However, an intermediate behaviour \(\|Hf\|_{\ell_{s}^{N}(\mathcal{X})}\lesssim(\log N)^{\theta}\|f\|_{\ell_{s}^{N}( \mathcal{X})}\) is also possible; this phenomenon was found, and its geometric implications explored, by Naor and the author [11].
Motivated by this, in the present work, our aim is to obtain similar improvements over the trivial bound for a general class of singular integrals of finite type when acting on functions taking values in a space that admits a uniformly convex norm. Actually, we will not make direct use of uniform convexity as such, but rather resort to the rich theory of its equivalent formulations. Recall that the spaces with an equivalent uniformly convex norm are precisely those with an equivalent uniformly smooth norm, and further the same as the super-reflexive ones [6]. They can further be given an equivalent \(p\)-uniformly smooth or \(q\)-uniformly convex norm [17], or, what is most relevant for the present needs, they satisfy related martingale inequalities known as martingale type and cotype [17, 19]:
**1.1 Definition**.: A Banach space \(\mathcal{X}\) is said to have _martingale type_\(p\in(1,2]\), if for some (equivalently, all) \(s\in(1,\infty)\) and for every probability space (equivalently, every \(\sigma\)-finite measure space) \((E,\mathcal{E},\mu)\), every martingale \((f_{n})_{n=0}^{N}\) of arbitrary finite length in \(L_{s}(\mu;\mathcal{X})\) satisfies
\[\|f_{N}\|_{L_{s}(\mu;\mathcal{X})}\lesssim\left\|\left(\|f_{0}\|_{\mathcal{X} }^{p}+\sum_{n=1}^{N}\|f_{n}-f_{n-1}\|_{\mathcal{X}}^{p}\right)^{1/p}\right\|_ {L_{s}(\mu)},\]
where the implied constant may depend at most on \(\mathcal{X}\), \(p\), and \(s\).
A Banach space \(\mathcal{X}\) is said to have _martingale cotype_\(q\in[2,\infty)\) if, for the same quantities as above, there holds
\[\left\|\left(\|f_{0}\|_{\mathcal{X}}^{q}+\sum_{n=1}^{N}\|f_{n}-f_{n-1}\|_{ \mathcal{X}}^{q}\right)^{1/q}\right\|_{L_{s}(\mu)}\lesssim\|f_{N}\|_{L_{s}(\mu ;\mathcal{X})}.\]
In [19, p. 221], the case \(s=p\) is taken as the definition for martingale type \(p\), and the case \(s=q\) for martingale cotype \(q\), but the equivalence with other values of \(s\) is observed immediately afterwards. See also [20, Chapter 10] for more information on these and related conditions. The definition is usually formulated for probability spaces, but the equivalence with any \(\sigma\)-finite measure space follows easily (approximating arbitrary martingales by ones supported on a set of finite measure, and multiplying the measure by a constant to achieve a probability space). See [13, Chapter 3] for a systematic development of martingale theory in the \(\sigma\)-finite setting, and in particular [13, Section 3.5.d] for martingale type and cotype.
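For instance, if \(\mathcal{X}=H\) is a Hilbert space, the martingale differences are pairwise orthogonal in \(L_{2}(\mu;H)\), so that for \(s=2\) one has the identity

\[\|f_{N}\|_{L_{2}(\mu;H)}^{2}=\|f_{0}\|_{L_{2}(\mu;H)}^{2}+\sum_{n=1}^{N}\|f_{n}-f_{n-1}\|_{L_{2}(\mu;H)}^{2},\]

and \(H\) has both martingale type \(2\) and martingale cotype \(2\), with constant \(1\).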
Representations of singular integrals with the help of martingales are known in various contexts, and this provides the link to our present aims. Our main result can be formulated as follows. We refer the reader to Section 2 for relevant definitions.
**1.2 Theorem**.: _Let \((E,d,\mu)\) be a doubling metric measure space, and let \(Tf(u)=\int_{E}K(u,v)f(v)d\mu(v)\) be an integral operator whose kernel \(K\) satisfies the Calderon-Zygmund standard estimates and the additional finiteness property_
\[K(u,v)=0\quad\text{unless}\quad r\leq d(u,v)<R.\]
_Let \(n:=1+\log(R/r)\)._
1. _If_ \(\mathcal{X}\) _is any Banach space and_ \(s\in[1,\infty]\)_, then_ \(\|Tf\|_{L_{s}(\mu;\mathcal{X})}\lesssim n\|f\|_{L_{s}(\mu;\mathcal{X})}\)
2. _If_ \(s\in(1,\infty)\) _and_ \(T\) _is bounded on the scalar-valued_ \(L_{s}(\mu)\) _with_ \(\|Tf\|_{L_{s}(\mu)}\lesssim\|f\|_{L_{s}(\mu)}\)_, and if_ \(\mathcal{X}\) _has an equivalent uniformly convex norm, then for some_ \(\theta\in[0,1)\)_, we have_ \(\|Tf\|_{L_{s}(\mu;\mathcal{X})}\lesssim n^{\theta}\|f\|_{L_{s}(\mu;\mathcal{X})}\)_._
3. _If_ \(1<p\leq s\leq q<\infty\) _and_ \(\mathcal{X}\) _has martingale type_ \(p\) _and martingale cotype_ \(q\)_, then one can take_ \(\theta=1/p-1/q\)_._
Theorem 1.2 applies in particular to the case when \(E=\{1,\ldots,N\}\), \(d\) is the usual distance, \(\mu\) is the counting measure, and \(T\) is the finite Hilbert transform. Then one can take \(r=1\) and \(R=N\). This important case of Theorem 1.2 was obtained in [11], where it is also shown that (3) is the best possible statement in general. The question, answered by Theorem 1.2, of extending this particular case to general Calderon-Zygmund operators was also raised in [11].
The proof of this special case used a transference between the finite Hilbert transform on \(\{1,\ldots,N\}\) and truncations of the Hilbert transform on \(\mathbb{R}\), as well as Petermichl's dyadic martingale representation of the latter [16]. The possibility of working directly with the original operator, making use of the more general representation theorems of singular integrals developed in [14] and, in the context of abstract spaces, in [15], was observed there as an alternative. The present paper implements this programme in detail. By now, several variants of the dyadic representation theorem of [14] are available in the literature. The present approach is perhaps closest to the one designed in [8].
## 2. Set-up
A metric measure space \((E,d,\mu)\) is said to be doubling if \(\mu\) is a Borel measure on \(E\) and the balls satisfy
\[\mu(B(u,2t))\lesssim\mu(B(u,t)).\]
We denote \(V(u,t):=\mu(B(u,t))\) and \(V(u,v):=V(u,d(u,v))\); the same quantity is also written \(\lambda(u,t)\) in some of the estimates below. It follows from doubling that \(V(u,v)\asymp V(v,u)\).
Let \(\dot{E}^{2}:=\{(u,v)\in E\times E:u\neq v\}\). Let \(\omega:[0,1]\to[0,\infty)\) be continuous, non-decreasing and doubling in the sense that \(\omega(2t)\lesssim\omega(t)\). A function \(K:\dot{E}^{2}\to\mathbb{C}\) is called an \(\omega\)-standard kernel if
\[|K(u,v)|\lesssim\frac{1}{V(u,v)}\]
for all \((u,v)\in\dot{E}^{2}\), and
\[|K(u,v)-K(u,w)|+|K(v,u)-K(w,u)|\lesssim\omega\Big{(}\frac{d(v,w)}{d(u,v)} \Big{)}\frac{1}{V(u,v)}\]
for all \((u,v)\in\dot{E}^{2}\) and \(w\in E\) with \(d(v,w)\leq\frac{1}{2}d(u,v)\) (hence also \((u,w)\in\dot{E}^{2}\)).
Some control of \(\omega\) is usually required, and we define
\[\|\omega\|_{\mathrm{Dini}^{\nu}}:=\int_{0}^{1}\omega(t)\Big{(}1+\log\frac{1}{t }\Big{)}^{\nu}\frac{dt}{t}.\]
Much of the classical theory of singular integrals is valid under the standard Dini condition with \(\nu=0\), but slightly stronger conditions on this scale are also often required. In Theorem 1.2 (and many other results in the area), it suffices to take \(\nu=1\). In the course of the proof, we will see that somewhat less is actually enough, but we will not insist too much in this at this point. Many results in the literature
are formulated for \(\omega(t)=t^{\delta}\), which are easily seen to satisfy the Dini conditions with any \(\nu\geq 0\).
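Indeed, for \(\omega(t)=t^{\delta}\) with \(\delta>0\), the substitution \(t=e^{-u}\) gives

\[\|\omega\|_{\mathrm{Dini}^{\nu}}=\int_{0}^{1}t^{\delta}\Big{(}1+\log\frac{1}{t}\Big{)}^{\nu}\frac{dt}{t}=\int_{0}^{\infty}e^{-\delta u}(1+u)^{\nu}\,du<\infty\]

for every \(\nu\geq 0\).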
In this paper, we will impose the following additional assumption: for some \(0<r<R<\infty\),
\[K(u,v)=0\quad\text{unless}\quad r\leq d(u,v)<R. \tag{2.1}\]
This assumption effectively kills the singularity of the kernel \(K(u,v)\), making the \(L_{s}(\mu)\) boundedness of the related integral operator
\[Tf(u)=\int_{E}K(u,v)f(v)d\mu(v)\]
qualitatively trivial:
**2.2 Proposition**.: _Under the above assumptions,_
\[\int_{E}|K(u,v)|d\mu(u)+\int_{E}|K(v,u)|d\mu(u)\lesssim 1+\log\frac{R}{r}=:n,\]
_hence_
\[\|Tf\|_{L_{s}(\mu;\mathcal{X})}\lesssim n\cdot\|f\|_{L_{s}(\mu;\mathcal{X})} \quad\text{for all}\quad s\in[1,\infty],\quad f\in L_{s}(\mu;\mathcal{X}), \tag{2.3}\]
_where \(\mathcal{X}\) is an arbitrary Banach space._
Proof.: \[\int_{E}|K(u,v)|d\mu(u)\leq\sum_{j:r\leq 2^{j}\leq\frac{1}{2}R}\int_{2^{j} \leq|u-v|<2^{j+1}}|K(u,v)|d\mu(u),\]
where
\[|K(u,v)|\leq\frac{1}{\lambda(u,d(u,v))}\lesssim\frac{1}{\lambda(v,d(u,v))} \leq\frac{1}{\lambda(v,2^{j})},\]
thus
\[\int_{2^{j}\leq|u-v|<2^{j+1}}|K(u,v)|d\mu(u)\lesssim\frac{\mu(B(v,2^{j+1}))}{ \lambda(v,2^{j})}\leq\frac{\lambda(v,2^{j+1})}{\lambda(v,2^{j})}\lesssim 1,\]
and hence
\[\int_{E}|K(u,v)|d\mu(u)\lesssim\sum_{j:r\leq 2^{j}\leq\frac{1}{2}R}1\lesssim 1+ \log\frac{R}{r}.\]
The estimate of \(K(v,u)\) follows by symmetry of the assumptions, and the estimate of \(Tf\) is then standard.
But the point we wish to address is under which conditions we can beat the trivial bound (2.3).
The notation
\[n:=1+\log\frac{R}{r}\]
is motivated by the key example of \(E=\{1,\ldots,2^{n}\}\). Since \(1\leq d(u,v)<2^{n}\) for all \((u,v)\in\dot{E}^{2}\), the assumption (2.1) is automatically satisfied with \(r=1\), \(R=2^{n}\), hence \(1+\log R/r=1+n\log 2\asymp n\). Another basic example arises from usual (smooth) truncations of a standard kernel.
## 3. Dyadic cubes
A construction of dyadic cubes in general doubling metric spaces is first due to Christ [5]. Rather than just one system, we will require an ensemble of dyadic systems equipped with a suitable probability measure, leading to a notion of a random dyadic system. In the generality of doubling metric (even quasi-metric) spaces, a first such construction is from [10] with subsequent variants in [1, 9, 12, 15]. We will quote some results from [1], specialising them to a metric space (thus taking \(A_{0}=1\) for the quasi-triangle constant featuring in [1].)
There are reference points \(x_{\alpha}^{k}\), where \(k\in\mathbb{Z}\) and \(\alpha\in\mathscr{A}_{k}\), some countable index sets. There is a parameter space \(\Omega=\prod_{k\in\mathbb{Z}}\Omega_{k}\), where each \(\Omega_{k}\) is a copy of the same finite set \(\Omega_{0}\). Thus, there is a natural probability measure on \(\Omega\).
For every \(\omega\in\Omega\), there is a partial order relation \(\leq_{\omega}\) among pairs \((k,\alpha)\) and \((\ell,\gamma)\). Within a fixed level, we declare that \((k,\alpha)\leq_{\omega}(k,\beta)\) if and only if \(\alpha=\beta\). The restriction of the relation \(\leq_{\omega}\) between levels \(k\) and \(k+1\) depends only on the component \(\omega_{k}\). The relation between arbitrary levels \(\ell\geq k\) is obtained via extension by transitivity.
There are also new random reference points \(z_{\alpha}^{k}=z_{\alpha}^{k}(\omega_{k})\) such that \(d(z_{\alpha}^{k}(\omega),x_{\alpha}^{k})<2\delta^{k}\). (This follows from [1, Eq. (2.1)] and the definition of \(z_{\alpha}^{k}\) further down the same page.)
By [1, Theorem 2.11], there are open and closed sets \(\tilde{Q}_{\alpha}^{k}(\omega)\) and \(\bar{Q}_{\alpha}^{k}(\omega)\) that satisfy

\[B(z_{\alpha}^{k},\tfrac{1}{6}\delta^{k})\subseteq\tilde{Q}_{\alpha}^{k}(\omega)\subseteq\bar{Q}_{\alpha}^{k}(\omega)\subseteq B(z_{\alpha}^{k},6\delta^{k}).\]
We refer to these sets as "(dyadic) cubes". Here \(\delta\in(0,1)\) is a small parameter that indicates the ratio of the scales of two consecutive generations of the dyadic system. In \(\mathbb{R}^{d}\), one typically takes \(\delta=\tfrac{1}{2}\). These cubes satisfy the disjointness
\[\bar{Q}_{\alpha}^{k}(\omega)\cap\bar{Q}_{\beta}^{k}(\omega)=\varnothing\quad (\alpha\neq\beta)\]
and the covering properties
\[E=\bigcup_{\alpha}\bar{Q}_{\alpha}^{k}(\omega),\quad\bar{Q}_{\alpha}^{k}(\omega)=\bigcup_{\beta:(k+1,\beta)\leq_{\omega}(k,\alpha)}\bar{Q}_{\beta}^{k+1}(\omega).\]
While these properties are valid for each fixed \(\omega\in\Omega\), we additionally have the following probabilistic properties:
\[\begin{split}\mathbb{P}_{\omega}\Big{(}u\in\bigcup_{\alpha} \partial_{\varepsilon}Q_{\alpha}^{k}(\omega)\Big{)}&\leq C \varepsilon^{\eta},\\ \partial_{\varepsilon}Q_{\alpha}^{k}(\omega)&:=\{u\in \bar{Q}_{\alpha}^{k}(\omega):d(u,(\bar{Q}_{\alpha}^{k}(\omega))^{c})< \varepsilon\delta^{k}\},\end{split} \tag{3.1}\]
for some fixed \(0<\eta\leq 1\leq C<\infty\) and all \(\varepsilon>0\), and in particular the negligible boundary:
\[\mathbb{P}_{\omega}\Big{(}x\in\bigcup_{k,\alpha}\partial Q_{\alpha}^{k}(\omega)\Big{)}=0,\quad\partial Q_{\alpha}^{k}(\omega):=\bar{Q}_{\alpha}^{k}(\omega)\setminus\tilde{Q}_{\alpha}^{k}(\omega).\]
The sets \(\tilde{Q}_{\alpha}^{k}(\omega)\) and \(\bar{Q}_{\alpha}^{k}(\omega)\) (and hence also \(\partial_{\varepsilon}Q_{\alpha}^{k}(\omega)\) and \(\partial Q_{\alpha}^{k}(\omega)\)) depend only on \((\omega_{i})_{i=k}^{\infty}\). We will also need to understand the following event:
**3.2 Lemma**.: _Let us fix \(\varepsilon>0\) so small that the bound in (3.1) satisfies \(C\varepsilon^{\eta}\leq\tfrac{1}{2}\). Then there exists an \(m_{0}\in\mathbb{Z}_{+}\) such that we have the following: For any
\(k\in\mathbb{Z}\) and \(m\geq m_{0}\), let \(x_{\beta}^{k+m},x_{\gamma}^{k+m}\) be two reference points at level \(k+m\) with \(d(x_{\beta}^{k+m},x_{\gamma}^{k+m})\leq\frac{1}{2}\varepsilon\delta^{k}\). Consider the random event_
\[A :=\{\omega\in\Omega:\exists\alpha\text{ such that }(k+m,\beta)\leq_{ \omega}(k,\alpha)\text{ and }(k+m,\gamma)\leq_{\omega}(k,\alpha)\}\] \[=\{\omega\in\Omega:\exists\alpha\text{ such that }\bar{Q}_{\beta}^{k+m}( \omega)\subseteq\bar{Q}_{\alpha}^{k}(\omega)\text{ and }\bar{Q}_{\gamma}^{k+m}(\omega)\subseteq\bar{Q}_{ \alpha}^{k}(\omega)\}\]
_Then_
1. \(A\) _depends only on the components_ \((\omega_{k+i})_{i=0}^{m-1}\)_, and_
2. \(\mathbb{P}(A)\geq\frac{1}{2}\)_,_
Proof.: Claim (1) is immediate from the facts that the restriction of the relation \(\leq_{\omega}\) between levels \(k\) and \(k+1\) depends only on the component \(\omega_{k}\), and that in general it is defined via extension by transitivity.
When \(\varepsilon>0\) is chosen as stated, with probability at least \(\frac{1}{2}\), the point \(x_{\beta}^{k+m}\) is not contained in any \(\partial_{\varepsilon}Q_{\alpha}^{k}(\omega)\). In particular, choosing \(\alpha\) so that \(x_{\beta}^{k+m}\in\bar{Q}_{\alpha}^{k}(\omega)\) (which must exist by the covering property), it follows that \(d(x_{\beta}^{k+m},(\bar{Q}_{\alpha}^{k}(\omega))^{c})\geq\varepsilon\delta^{k}\). Now suppose that \(d(x_{\gamma}^{k+m},x_{\beta}^{k+m})\leq\frac{1}{2}\varepsilon\delta^{k}\). Then
\[d(z_{\gamma}^{k+m}(\omega),(\tilde{Q}_{\alpha}^{k}(\omega))^{c}) \geq d(x_{\beta}^{k+m},(\tilde{Q}_{\alpha}^{k}(\omega))^{c})-d(z_{\gamma}^{k+m}(\omega),x_{\gamma}^{k+m})\] \[\qquad-d(x_{\gamma}^{k+m},x_{\beta}^{k+m})\] \[>\varepsilon\delta^{k}-2\delta^{k+m}-\frac{1}{2}\varepsilon\delta^{k}\geq(\tfrac{1}{2}\varepsilon-2\delta^{m_{0}})\delta^{k}>0\]
provided that \(m_{0}\) is large enough. Hence \(z_{\gamma}^{k+m}(\omega)\in\tilde{Q}_{\alpha}^{k}(\omega)\), and thus \((k+m,\gamma)\leq_{\omega}(k,\alpha)\).
In the sequel, we will drop the somewhat heavy notation above, and denote a typical \(Q_{\alpha}^{k}(\omega)\) simply by \(Q\). We let \(\mathscr{D}_{k}=\mathscr{D}_{k}(\omega):=\{Q_{\alpha}^{k}(\omega):\alpha\in \mathscr{A}_{k}\}\) be the dyadic cubes of generation \(k\). We also write \(\ell(Q):=\delta^{k}\) if \(Q\in\mathscr{D}_{k}\). We denote by \(B_{Q}:=B(z_{\alpha}^{k},6\delta^{k})\) the (non-random!) ball that contains \(Q=Q_{\alpha}^{k}(\omega)\).
## 4. A dyadic decomposition of finite kernels
With the dyadic cubes defined above, we have the corresponding averaging (or conditional expectation; hence the notation) operators
\[\langle f\rangle_{Q}:=\frac{1}{\mu(Q)}\int_{Q}fd\mu,\quad\mathbb{E}_{Q}f:=1_{ Q}\langle f\rangle_{Q},\quad\mathbb{E}_{i}f:=\sum_{Q\in\mathscr{D}_{i}}\mathbb{E}_{Q}f\]
and their differences
\[\mathbb{D}_{i}f:=\mathbb{E}_{i+1}f-\mathbb{E}_{i}f=\sum_{Q\in\mathscr{D}_{i}} \mathbb{D}_{Q}f,\quad\mathbb{D}_{Q}f=\sum_{P\in\operatorname{ch}(Q)}\mathbb{E }_{P}f-\mathbb{E}_{Q}f,\]
where, for \(Q\in\mathscr{D}_{i}\), we denote by
\[\operatorname{ch}(Q):=\{P\in\mathscr{D}_{i+1}:P\subseteq Q\}\]
the collection of its children, which has some cardinality between \(1\) and a fixed finite \(M\) depending only on the geometry of \((E,d)\).
For any \(-\infty<\sigma<\tau<\infty\), we can decompose
\[f=\mathbb{E}_{\sigma}f+\sum_{i=\sigma}^{\tau}\mathbb{D}_{i}f+\mathbb{F}_{\tau}f,\quad\mathbb{F}_{\tau}f:=f-\mathbb{E}_{\tau}f=\sum_{Q\in\mathscr{D}_{\tau}}1_{Q}(f-\langle f\rangle_{Q})\]
**4.1 Lemma**.: _If \(\tau\) is large enough so that \(\delta^{\tau}\ll r\), then_
\[\|T\mathbb{F}_{\tau}f\|_{L_{s}(\mu)}\lesssim\|\omega\|_{\mathrm{Dini}}\|f\|_{L_{ s}(\mu)},\qquad\|\omega\|_{\mathrm{Dini}}:=\int_{0}^{1}\omega(t)\frac{dt}{t}.\]
Proof.: Since \(\int_{Q}(f(v)-\langle f\rangle_{Q})d\mu(v)=0\), we have
\[T\mathbb{F}_{\tau}f(u) =\sum_{Q\in\mathscr{D}_{\tau}}\int_{Q}K(u,v)(f(v)-\langle f\rangle _{Q})d\mu(v)\] \[=\sum_{Q\in\mathscr{D}_{\tau}}\int_{Q}(K(u,v)-K(u,z_{Q}))(f(v)- \langle f\rangle_{Q})d\mu(v).\]
If \(d(u,v)\leq r\) for all \(v\in Q\) (thus also for \(v=z_{Q}\)), this \(Q\) does not contribute to the sum above. If \(d(u,v)>r\) for some \(v\in Q\), then \(d(u,z_{Q})\geq d(u,v)-d(v,z_{Q})\gtrsim r\) if \(\ell(Q)=\delta^{\tau}\ll r\). Then
\[\int_{Q} |(K(u,v)-K(u,z_{Q}))(f(v)-\langle f\rangle_{Q})|d\mu(v)\] \[\lesssim\int_{Q}\omega\Big{(}\frac{d(v,z_{Q})}{d(u,z_{Q})}\Big{)} \frac{1}{\lambda(u,d(u,z_{Q}))}|f(v)-\langle f\rangle_{Q}|d\mu(v)\] \[\lesssim\omega\Big{(}\frac{r}{d(u,z_{Q})}\Big{)}\frac{1}{\lambda (u,d(u,z_{Q}))}\int_{Q}|f(v)|d\mu(v)\] \[\lesssim\int_{Q}\omega\Big{(}\frac{r}{d(u,v)+r}\Big{)}\frac{1}{ \lambda(u,d(u,v)+r)}|f(v)|d\mu(v)\] \[=:\int_{Q}K_{r}(u,v)|f(v)|d\mu(v)\]
Hence, summing over \(Q\in\mathscr{D}_{\tau}\),
\[|T\mathbb{F}_{\tau}f(u)|\lesssim\int_{E}K_{r}(u,v)|f(v)|d\mu(v).\]
The kernel here satisfies
\[\int_{E}K_{r}(u,v)d\mu(v)\] \[\leq\int_{B(u,r)}\omega(1)\frac{1}{\lambda(u,r)}d\mu(v)+\sum_{j=0 }^{\infty}\int_{2^{j}r\leq|u-v|<2^{j+1}r}\omega(2^{-j})\frac{1}{\lambda(u,2^{ j}r)}d\mu(v)\] \[\leq\omega(1)\frac{1}{\lambda(u,r)}\mu(B(u,r))+\sum_{j=0}^{\infty }\omega(2^{-j})\frac{1}{\lambda(u,2^{j}r)}\mu(B(u,2^{j+1}r))\] \[\lesssim\sum_{j=0}^{\infty}\omega(2^{-j})\asymp\int_{0}^{1}\omega (t)\frac{dt}{t},\]
and a similar bound for \(\int_{E}K_{r}(v,u)d\mu(v)\) by symmetry; hence
\[\Big{\|}u\mapsto\int_{E}K_{r}(u,v)|f(v)|d\mu(v)\Big{\|}_{L_{s}(\mu)}\lesssim\| \omega\|_{\mathrm{Dini}}\|f\|_{L_{s}(\mu)}\]
by standard considerations.
**4.2 Lemma**.: _If \(\sigma\) is small enough so that \(\delta^{\sigma}\gtrsim R\), then_
\[\|T\mathbb{E}_{\sigma}f\|_{L_{s}(\mu;\mathcal{X})}\lesssim\sup_{P\in\mathscr{D}_{ \sigma}}\frac{\|T(1_{P})\|_{L_{s}(\mu)}}{\mu(P)^{1/s}}\|f\|_{L_{s}(\mu;\mathcal{ X})}.\]
_Thus \(T\mathbb{E}_{\sigma}\) is bounded on \(L_{s}(\mu;\mathcal{X})\) assuming the cube testing condition that the supremum above is finite._
Proof.: If \(v\in Q\in\mathscr{D}_{\sigma}\) and \(u\in E\), then \(K(u,v)\) is non-zero only if \(d(u,v)<R\lesssim\ell(Q)\), and hence only if \(u\in cB_{Q}\) for some constant \(c\). Thus
\[T\mathbb{E}_{\sigma}f(u)=\sum_{Q\in\mathscr{D}_{\sigma}}\int_{Q}K(u,v)d\mu(v) \langle f\rangle_{Q}=\sum_{Q\in\mathscr{D}_{\sigma}}1_{cB_{Q}}(u)T(1_{Q})(u) \langle f\rangle_{Q}.\]
By the geometric doubling property, the balls \(cB_{Q}\) have bounded overlap, and hence
\[\|T\mathbb{E}_{\sigma}f\|_{L_{s}(\mu)} \lesssim\Big{(}\sum_{Q\in\mathscr{D}_{\sigma}}\|1_{cB_{Q}}T(1_{Q} )\langle f\rangle_{Q}\|_{L_{s}(\mu;\mathcal{X})}^{s}\Big{)}^{1/s}\] \[\leq\sup_{P\in\mathscr{D}_{\sigma}}\frac{\|T(1_{P})\|_{L_{s}(\mu )}}{\mu(P)^{1/s}}\Big{(}\sum_{Q\in\mathscr{D}_{\sigma}}\mu(Q)\|\langle f \rangle_{Q}\|_{\mathcal{X}}^{s}\Big{)}^{1/s}\] \[\leq\sup_{P\in\mathscr{D}_{\sigma}}\frac{\|T(1_{P})\|_{L_{s}(\mu )}}{\mu(P)^{1/s}}\|f\|_{L_{s}(\mu;\mathcal{X})}.\]
To prove the boundedness of \(T\) on \(L_{s}(\mu;\mathcal{X})\), we consider the pairing
\[\langle Tf,g\rangle,\quad f\in L_{s}(\mu;\mathcal{X}),\quad g\in L_{s^{\prime} }(\mu;\mathcal{X}^{\prime}).\]
This can be expanded as
\[\langle Tf,g\rangle=\langle T\mathbb{E}_{\sigma}f,g\rangle+\sum_{i=\sigma}^{ \tau}\langle T\mathbb{D}_{i}f,g\rangle+\langle T\mathbb{F}_{\tau}f,g\rangle.\]
We already dealt with the first and the last terms in Lemmas 4.1 and 4.2. In the sum, expanding also \(g\), we obtain
\[\sum_{i=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,g\rangle=\sum_{i=\sigma}^{\tau }\langle T\mathbb{D}_{i}f,\mathbb{E}_{\sigma}g\rangle+\sum_{i,j=\sigma}^{\tau }\langle T\mathbb{D}_{i}f,\mathbb{D}_{j}g\rangle+\sum_{i=\sigma}^{\tau} \langle T\mathbb{D}_{i}f,\mathbb{F}_{\tau}g\rangle \tag{4.3}\]
The double sum in (4.3) can be reorganised as
\[\sum_{i,j=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{D}_{j}g\rangle=\sum_{\begin{subarray}{c}i,j=\sigma\\ i=j\end{subarray}}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{D}_{j}g\rangle+\sum_{\begin{subarray}{c}i,j=\sigma\\ i<j\end{subarray}}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{D}_{j}g\rangle+\sum_{\begin{subarray}{c}i,j=\sigma\\ i>j\end{subarray}}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{D}_{j}g\rangle\] \[=\sum_{i=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{D}_{i}g\rangle+\sum_{j=\sigma}^{\tau}\langle T(\mathbb{E}_{j}f-\mathbb{E}_{\sigma}f),\mathbb{D}_{j}g\rangle+\sum_{i=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{E}_{i}g-\mathbb{E}_{\sigma}g\rangle\] \[=\sum_{i=\sigma}^{\tau}\Big{(}\langle T\mathbb{D}_{i}f,\mathbb{D}_{i}g\rangle+\langle T\mathbb{E}_{i}f,\mathbb{D}_{i}g\rangle+\langle T\mathbb{D}_{i}f,\mathbb{E}_{i}g\rangle\Big{)}\] \[\qquad\quad-\sum_{i=\sigma}^{\tau}\langle T\mathbb{E}_{\sigma}f,\mathbb{D}_{i}g\rangle-\sum_{i=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{E}_{\sigma}g\rangle.\]
Substituting back to (4.3), the last term right above cancels out with the first term in (4.3), leaving
\[\sum_{i=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,g\rangle=\sum_{i=\sigma}^{\tau}\left(\langle T\mathbb{D}_{i}f,\mathbb{D}_{i}g\rangle+\langle T\mathbb{E}_{i}f,\mathbb{D}_{i}g\rangle+\langle T\mathbb{D}_{i}f,\mathbb{E}_{i}g\rangle\right)\] \[\qquad\qquad\qquad\qquad-\sum_{i=\sigma}^{\tau}\langle T\mathbb{E}_{\sigma}f,\mathbb{D}_{i}g\rangle+\sum_{i=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{F}_{\tau}g\rangle.\]
Here
\[\sum_{i=\sigma}^{\tau}\langle T\mathbb{E}_{\sigma}f,\mathbb{D}_{i}g\rangle=\langle T\mathbb{E}_{\sigma}f,\mathbb{E}_{\tau}g-\mathbb{E}_{\sigma}g\rangle\]
is controlled by Lemma 4.2 and the contractivity of \(\mathbb{E}_{i}\), while
\[\sum_{i=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{F}_{\tau}g\rangle=\langle(\mathbb{E}_{\tau}-\mathbb{E}_{\sigma})f,T^{*}\mathbb{F}_{\tau}g\rangle\]
is controlled by an application of Lemma 4.1 on the dual side, observing the symmetry of the assumptions.
Altogether, we find that
\[\left|\langle Tf,g\rangle-\sum_{i=\sigma}^{\tau}\left(\langle T \mathbb{D}_{i}f,\mathbb{D}_{i}g\rangle+\langle T\mathbb{E}_{i}f,\mathbb{D}_{i }g\rangle+\langle T\mathbb{D}_{i}f,\mathbb{E}_{i}g\rangle\right)\right|\] \[\qquad\lesssim\Big{(}\sup_{P\in\mathscr{D}_{\sigma}}\frac{\|T(1 _{P})\|_{L_{s}(\mu)}}{\mu(P)^{1/s}}+\|\omega\|_{\rm{Dini}}\Big{)}\|f\|_{L_{s}( \mu;\mathcal{X})}\|g\|_{L_{s^{\prime}}(\mu;\mathcal{X}^{\prime})}.\]
This is valid for any Banach space \(\mathcal{X}\), and hence the core is estimating the sum on the left. For the previous estimate, we need \(\delta^{\tau}\ll r\) and \(\delta^{\sigma}\gtrsim R\), where the implied constants are absolute. Thus we may take \(\tau\asymp\log\frac{1}{r}\) and \(\sigma\asymp\log\frac{1}{R}\) so that the length of the sum is
\[1+(\tau-\sigma)\asymp 1+\log\frac{R}{r}=n.\]
Proving a uniform bound for each term and applying the triangle inequality would recover the trivial estimate, which is linear in \(n\), and which we already obtained by more elementary considerations. We wish to make use of the above decomposition to beat this trivial bound.
## 5. Estimates for Haar coefficients
The difference operators \(\mathbb{D}_{i}\) have the piecewise local expansion
\[\mathbb{D}_{i}f=\sum_{Q\in\mathscr{D}_{i}}\mathbb{D}_{Q}f,\]
where each \(\mathbb{D}_{Q}\), in turn, can be expanded in terms of a bounded number of "Haar" functions associated with \(Q\); they are constant on the dyadic children of \(Q\):
\[\mathbb{D}_{Q}f=\sum_{\alpha=1}^{m_{Q}-1}\langle f,h_{Q}^{\alpha}\rangle h_{Q}^{\alpha}.\]
Here \(m_{Q}\in\{1,\ldots,M\}\) is the number of dyadic children of \(Q\). In general it can happen that \(m_{Q}=1\), in which case the sum above is empty, and \(\mathbb{D}_{Q}=0\). Denoting \(h_{Q}^{0}:=\mu(Q)^{-1/2}1_{Q}\), we obtain similar formulas for the averaging operators
\[\mathbb{E}_{i}f=\sum_{Q\in\mathscr{D}_{i}}\mathbb{E}_{Q}f,\quad\mathbb{E}_{Q}f =\langle f\rangle_{Q}1_{Q}=\langle f,h_{Q}^{0}\rangle h_{Q}^{0}.\]
Thus each of the three types of terms \(\langle T\mathbb{D}_{i}f,\mathbb{D}_{i}g\rangle\), \(\langle T\mathbb{E}_{i}f,\mathbb{D}_{i}g\rangle\), \(\langle T\mathbb{D}_{i}f,\mathbb{E}_{i}g\rangle\) can be expanded in terms of a bounded number of sums of the type
\[\sum_{P,Q\in\mathscr{D}_{i}}\chi_{\alpha,\beta}\langle Th_{P}^{\alpha},h_{Q}^{ \beta}\rangle\langle f,h_{P}^{\alpha}\rangle\langle g,h_{Q}^{\beta}\rangle,\]
where \(\chi_{\alpha,\beta}\in\{0,1\}\). Since there is at most one \(\mathbb{E}_{i}\), at most one of \(\alpha\) and \(\beta\) can be \(0\) in any given sum of this type.
Estimates for the Haar coefficients \(\langle Th_{P}^{\alpha},h_{Q}^{\beta}\rangle\) are well known in the Euclidean setting. The extension to doubling metric spaces depends on the fact that a certain Hardy inequality remains valid for the dyadic cubes in this situation, as observed by Auscher and Routin [2]. We only need the following special case of their [2, Lemma 2.4]: If \(P,Q\in\mathscr{D}\) are disjoint, then
\[\int_{P}\int_{Q}\frac{1}{V(u,v)}d\mu(u)d\mu(v)\lesssim\mu(P)^{1/s}\mu(Q)^{1/s^{ \prime}} \tag{5.1}\]
for \(s\in(1,\infty)\). Then
\[\langle Th_{P}^{\alpha},h_{Q}^{\beta}\rangle=\sum_{\begin{subarray}{c}P^{ \prime}\in\operatorname{ch}(P)\\ Q^{\prime}\in\operatorname{ch}(Q)\end{subarray}}\langle T1_{P^{\prime}},1_{Q^ {\prime}}\rangle\langle h_{P}^{\alpha}\rangle_{P^{\prime}}\langle h_{Q}^{ \beta}\rangle_{Q^{\prime}}.\]
If \(\ell(P)=\ell(Q)\) and \(P^{\prime}\neq Q^{\prime}\), then \(P^{\prime}\cap Q^{\prime}=\varnothing\), and (5.1) applies to give
\[|\langle T1_{P^{\prime}},1_{Q^{\prime}}\rangle| =\Big{|}\int_{P^{\prime}}\int_{Q^{\prime}}K(u,v)dudv\Big{|}\] \[\lesssim\int_{P^{\prime}}\int_{Q^{\prime}}\frac{1}{V(u,v)}dudv \lesssim\mu(P^{\prime})^{1/2}\mu(Q^{\prime})^{1/2}.\]
Thus
\[\Big{|}\sum_{\begin{subarray}{c}P^{\prime}\in\operatorname{ch}(P )\\ Q^{\prime}\in\operatorname{ch}(Q)\\ P^{\prime}\neq Q^{\prime}\end{subarray}}\langle T1_{P^{\prime}},1_{Q^{\prime}} \rangle\langle h_{P}^{\alpha}\rangle_{P^{\prime}}\langle h_{Q}^{\beta}\rangle _{Q^{\prime}}\Big{|} \tag{5.2}\] \[\lesssim\sum_{P^{\prime}\in\operatorname{ch}(P)}\mu(P^{\prime})^ {1/2}|\langle h_{P}^{\alpha}\rangle_{P^{\prime}}|\sum_{Q^{\prime}\in \operatorname{ch}(Q)}\mu(Q^{\prime})^{1/2}|\langle h_{Q}^{\beta}\rangle_{Q^{ \prime}}|,\]
where
\[\sum_{P^{\prime}\in\operatorname{ch}(P)}\mu(P^{\prime})^{1/2}| \langle h_{P}^{\alpha}\rangle_{P^{\prime}}| \leq M^{1/2}\Big{(}\sum_{P^{\prime}\in\operatorname{ch}(P)}\mu(P ^{\prime})|\langle h_{P}^{\alpha}\rangle_{P^{\prime}}|^{2}\Big{)}^{1/2}\] \[=M^{1/2}\|h_{P}^{\alpha}\|_{L_{2}(\mu)}=M^{1/2},\]
and similarly for the sum over \(Q^{\prime}\in\operatorname{ch}(Q)\). Thus (5.2) \(\lesssim 1\).
If \(P,Q\in\mathscr{D}_{i}\) are different, then all their children \(P^{\prime},Q^{\prime}\) satisfy \(P^{\prime}\neq Q^{\prime}\), and we have proved that
\[|\langle Th_{P}^{\alpha},h_{Q}^{\beta}\rangle|\lesssim 1,\quad P,Q\in\mathscr{D}_{i}, \quad P\neq Q.\]
If \(P=Q\), then we need to deal in addition with the sum
\[\Big{|}\sum_{P^{\prime}\in\operatorname{ch}(P)}\langle T1_{P^{\prime }},1_{P^{\prime}}\rangle\langle h^{\alpha}_{P}\rangle_{P^{\prime}}\langle h^{ \beta}_{P}\rangle_{P^{\prime}}\Big{|}\] \[\qquad\leq\sup_{Q\in\mathscr{B}}\frac{|\langle T1_{Q},1_{Q} \rangle|}{\mu(Q)}\sum_{P^{\prime}\in\operatorname{ch}(P)}\mu(P^{\prime})| \langle h^{\alpha}_{P}\rangle_{P^{\prime}}\langle h^{\beta}_{P}\rangle_{P^{ \prime}}|,\]
where
\[\sum_{P^{\prime}\in\operatorname{ch}(P)}\mu(P^{\prime})|\langle h ^{\alpha}_{P}\rangle_{P^{\prime}}\langle h^{\beta}_{P}\rangle_{P^{\prime}}|\] \[\qquad\leq\Big{(}\sum_{P^{\prime}\in\operatorname{ch}(P)}\mu(P^{ \prime})|\langle h^{\alpha}_{P}\rangle_{P^{\prime}}|^{2}\Big{)}^{1/2}\Big{(} \sum_{P^{\prime}\in\operatorname{ch}(P)}\mu(P^{\prime})|\langle h^{\beta}_{P }\rangle_{P^{\prime}}|^{2}\Big{)}^{1/2}\] \[\qquad=\|h^{\alpha}_{P}\|_{L_{2}(\mu)}\|h^{\beta}_{P}\|_{L_{2}( \mu)}=1.\]
The finiteness of
\[\|T\|_{\operatorname{wbp}(\mathscr{B})}:=\sup_{Q\in\mathscr{B}}\frac{|\langle T 1_{Q},1_{Q}\rangle|}{\mu(Q)}\]
is the so-called weak boundedness property. It follows from the finiteness of the testing quantity
\[\|T\|_{\operatorname{test}^{*}(\mathscr{B})}:=\sup_{Q\in\mathscr{B}}\frac{\|T 1_{Q}\|_{L_{s}(\mu)}}{\mu(Q)^{1/s}},\]
which in turn is clearly dominated by the operator norm of \(T\) on \(L_{s}(\mu)\).
If \(P\) and \(Q\) are well separated, we need and have a better estimate. Assuming for instance that \(\alpha\neq 0\) (the case \(\beta\neq 0\) being symmetric), we have
\[\langle Th^{\alpha}_{P},h^{\beta}_{Q}\rangle =\int_{P}\int_{Q}K(u,v)h^{\alpha}_{P}(v)h^{\beta}_{Q}(u)d\mu(u)d \mu(v)\] \[=\int_{P}\int_{Q}[K(u,v)-K(u,z_{P})]h^{\alpha}_{P}(v)h^{\beta}_{Q} (u)d\mu(u)d\mu(v).\]
If \(d(P,Q)\gg\ell(P)=\ell(Q)\), then \(v\in P\) and \(u\in Q\) satisfy \(d(v,z_{P})\lesssim\ell(P)\ll d(u,v)\), and hence
\[|\langle Th^{\alpha}_{P},h^{\beta}_{Q}\rangle| \lesssim\int_{P}\int_{Q}\omega\Big{(}\frac{\ell(P)}{d(u,z_{P})} \Big{)}\frac{1}{V(u,z_{P})}|h^{\alpha}_{P}(v)h^{\beta}_{Q}(u)|d\mu(u)d\mu(v)\] \[\lesssim\omega\Big{(}\frac{\ell(P)}{d(z_{Q},z_{P})+\ell(P)}\Big{)} \frac{\|h^{\alpha}_{P}\|_{L_{1}(\mu)}\|h^{\beta}_{Q}\|_{L_{1}(\mu)}}{V(z_{Q}, d(z_{Q},z_{P})+\ell(P))} \tag{5.3}\] \[\lesssim\omega\Big{(}\frac{\ell(P)}{d(z_{Q},z_{P})+\ell(P)}\Big{)} \frac{\sqrt{\mu(P)\mu(Q)}}{V(z_{Q},d(z_{Q},z_{P})+\ell(P))}.\]
Actually, this final bound is valid also for \(d(P,Q)\lesssim\ell(P)=\ell(Q)\). In this case, \(d(z_{Q},z_{P})+\ell(P)\asymp\ell(P)\), so that the argument of \(\omega\) is roughly \(1\), and
\[V(z_{Q},d(z_{Q},z_{P})+\ell(P))\asymp\mu(Q)\asymp\mu(P).\]
Thus, when \(d(P,Q)\lesssim\ell(P)=\ell(Q)\), the bound (5.3) reduces to the uniform bound obtained earlier.
## 6. Extracting paraproducts
In the two sums
\[\sum_{i=\sigma}^{\tau}\langle T\mathbb{E}_{i}f,\mathbb{D}_{i}g\rangle,\quad\sum_ {i=\sigma}^{\tau}\langle T\mathbb{D}_{i}f,\mathbb{E}_{i}g\rangle,\]
we need to force some cancellation as follows. Let us deal with the first one, the second being symmetric. A typical term in this sum is
\[\langle T\mathbb{E}_{i}f,\mathbb{D}_{i}g\rangle=\sum_{P,Q\in\mathscr{D}_{i}} \langle T1_{P},\mathbb{D}_{Q}g\rangle\langle f\rangle_{P}.\]
Writing
\[\langle f\rangle_{P}=(\langle f\rangle_{P}-\langle f\rangle_{Q})+\langle f \rangle_{Q},\]
we have introduced some cancellation into the first term. On the other hand, in the sum involving the second term, the only factor depending on \(P\) is the indicator \(1_{P}\) in the pairing \(\langle T1_{P},\mathbb{D}_{Q}g\rangle\). Thus, summing over \(P\in\mathscr{D}_{i}\) first, we obtain by linearity
\[\sum_{P,Q\in\mathscr{D}_{i}}\langle T1_{P},\mathbb{D}_{Q}g\rangle\langle f \rangle_{Q}=\sum_{Q\in\mathscr{D}_{i}}\Big{\langle}T\sum_{P\in\mathscr{D}_{i }}1_{P},\mathbb{D}_{Q}g\Big{\rangle}\langle f\rangle_{Q}=\sum_{Q\in\mathscr{D }_{i}}\langle T1,\mathbb{D}_{Q}g\rangle\langle f\rangle_{Q}.\]
For the finite singular integrals that we consider, there is no trouble in making sense of the expression "\(T1\)" above, as the action of the integral operator \(T\) on bounded functions is well defined. Denoting (as usual) \(b:=T1\), and using the self-adjointness of \(\mathbb{D}_{Q}\), we find that
\[\sum_{i=\sigma}^{\tau}\sum_{Q\in\mathscr{D}_{i}}\langle b,\mathbb{D}_{Q}g \rangle\langle f\rangle_{Q}=\Big{\langle}\sum_{i=\sigma}^{\tau}\sum_{Q\in \mathscr{D}_{i}}\mathbb{D}_{Q}b\langle f\rangle_{Q},g\Big{\rangle}=:\Big{\langle} \Pi_{b}^{\sigma,\tau}f,g\Big{\rangle}.\]
Here
\[\Pi_{b}^{\sigma,\tau}f:=\sum_{i=\sigma}^{\tau}\sum_{Q\in\mathscr{D}_{i}} \mathbb{D}_{Q}b\langle f\rangle_{Q}=\sum_{i=\sigma}^{\tau}\mathbb{D}_{i}b \mathbb{E}_{i}f\]
is a truncated version of a _dyadic paraproduct_. We can estimate it as follows. Noting that \(\mathbb{D}_{i}\) are martingale differences, the assumption that \(\mathcal{X}\) has martingale type \(p\in[1,2]\) implies that
\[\|\Pi_{b}^{\sigma,\tau}f\|_{L_{s}(\mu;\mathcal{X})} \lesssim\Big{\|}\Big{(}\sum_{i=\sigma}^{\tau}|\mathbb{D}_{i}b|^{ p}\|\mathbb{E}_{i}f\|_{\mathcal{X}}^{p}\Big{)}^{1/p}\Big{\|}_{L_{s}(\mu; \mathcal{X})}\] \[\leq n^{1/p-1/2}\Big{\|}\Big{(}\sum_{i=\sigma}^{\tau}|\mathbb{D} _{i}b|^{2}\|\mathbb{E}_{i}f\|_{\mathcal{X}}^{2}\Big{)}^{1/2}\Big{\|}_{L_{s}( \mu;\mathcal{X})}\] \[\leq n^{1/p-1/2}\Big{\|}\Big{(}\sum_{i=\sigma}^{\tau}\sum_{Q\in \mathscr{D}_{i}}|\mathbb{D}_{Q}b|^{2}\langle\|f\|_{\mathcal{X}}\rangle_{Q}^{ 2}\Big{)}^{1/2}\Big{\|}_{L_{s}(\mu;\mathcal{X})}.\]
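To make the operators \(\mathbb{E}_{i}\), \(\mathbb{D}_{i}\) and the truncated paraproduct concrete, here is a minimal numerical sketch (our own illustration, not part of the proof): functions on \([0,1)\) with Lebesgue measure are represented by their values on the \(2^{N}\) finest dyadic intervals, and the function names and the array-based discretisation are assumptions of the sketch.

```python
# Minimal sketch of the truncated dyadic paraproduct
# Pi_b^{sigma,tau} f = sum_{i=sigma}^{tau} (D_i b)(E_i f) on [0,1).
import numpy as np

def E(f, i):
    """Conditional expectation E_i f: average of f over each of the
    2**i dyadic intervals of generation i."""
    f = np.asarray(f, dtype=float)
    block = f.reshape(2**i, -1)               # one row per generation-i cube
    return np.repeat(block.mean(axis=1), block.shape[1])

def D(f, i):
    """Martingale difference D_i f = E_{i+1} f - E_i f
    (the sum of D_Q f over the generation-i cubes Q)."""
    return E(f, i + 1) - E(f, i)

def paraproduct(b, f, sigma, tau):
    """Truncated dyadic paraproduct Pi_b^{sigma,tau} f."""
    return sum(D(b, i) * E(f, i) for i in range(sigma, tau + 1))

# sanity check of the martingale-difference decomposition f = E_0 f + sum_i D_i f
rng = np.random.default_rng(0)
N = 8
f, b = rng.standard_normal(2**N), rng.standard_normal(2**N)
assert np.allclose(E(f, 0) + sum(D(f, i) for i in range(N)), f)
print(paraproduct(b, f, 0, N - 1)[:4])
```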
Next, we make a stopping time construction as follows. The initial layer of stopping cubes consist of all \(Q\in\mathscr{D}_{\sigma}\). Assuming that some stopping cube \(S\in\mathscr{D}\) has been picked, we look for its maximal dyadic subcubes \(S^{\prime}\subsetneq S\) such that either
\[\langle\|f\|_{\mathcal{X}}\rangle_{S^{\prime}}>4\langle\|f\|_{\mathcal{X}} \rangle_{S},\]
or
\[\sum_{\begin{subarray}{c}Q\in\mathscr{D}\\ S^{\prime}\subsetneq Q\subsetneq S\end{subarray}}|D_{Q}b(u)|^{2}>\lambda^{2} \quad\text{for all}\quad u\in S^{\prime},\]
where \(\lambda>0\) will be specified shortly. Note that the left-hand side of the stopping criterion is constant on \(S^{\prime}\), since \(D_{Q}b\) is constant on the dyadic children of \(Q\), so the condition "for all \(u\in S^{\prime}\)" could be equivalently replaced by "for some \(u\in S^{\prime}\)". Let us refer to the stopping cubes arising from the first or the second criterion as being of the first or the second kind, respectively.
For the stopping cubes of the first kind (pairwise disjoint by their maximality), it is immediate from the stopping criterion that
\[\sum_{S^{\prime}}\mu(S^{\prime})\leq\sum_{S^{\prime}}\frac{1}{4\langle\|f\|_{ \mathcal{X}}\rangle_{S}}\int_{S^{\prime}}\|f\|_{\mathcal{X}}d\mu\leq\frac{1}{ 4\langle\|f\|_{\mathcal{X}}\rangle_{S}}\int_{S}\|f\|_{\mathcal{X}}d\mu=\frac{ 1}{4}\mu(S).\]
To derive a similar estimate for the stopping cubes of the second kind, let us consider the dyadic square function and its truncations
\[\mathfrak{S}b:=\Big{(}\sum_{Q\in\mathscr{D}}|D_{Q}b|^{2}\Big{)}^{1/2},\quad \mathfrak{S}_{P}b:=\Big{(}\sum_{\begin{subarray}{c}Q\in\mathscr{D}\\ Q\subseteq P\end{subarray}}|D_{Q}b|^{2}\Big{)}^{1/2}=\mathfrak{S}(1_{P}(b- \langle b\rangle_{P})).\]
If \(S^{\prime}\subsetneq S\) is one of the stopping cubes of the second kind, then
\[\mathfrak{S}_{S}b(u)^{2}\geq\sum_{\begin{subarray}{c}Q\in\mathscr{D}\\ S^{\prime}\subsetneq Q\subseteq S\end{subarray}}|D_{Q}b(u)|^{2}>\lambda^{2} \quad\text{for all}\quad u\in S^{\prime}.\]
Thus
\[\mu\Big{(}\bigcup S^{\prime}\Big{)}\leq\mu(\mathfrak{S}_{S}b>\lambda) \leq\frac{1}{\lambda^{2}}\int_{E}(\mathfrak{S}_{S}b)^{2}d\mu\] \[=\frac{1}{\lambda^{2}}\int_{S}|b-\langle b\rangle_{S}|^{2}d\mu \leq\frac{1}{\lambda^{2}}\|b\|^{2}_{\operatorname{BMO}^{2}(\mathscr{D})}\mu(S),\]
where
\[\|b\|_{\operatorname{BMO}^{2}(\mathscr{D})}:=\sup_{Q\in\mathscr{D}}\Big{(} \frac{1}{\mu(Q)}\int_{Q}|b-\langle b\rangle_{Q}|^{2}d\mu\Big{)}^{1/2}\]
is the dyadic BMO norm based on \(L_{2}\) averages. (The different BMO norms are equivalent by the John-Nirenberg inequality, which remains valid in this generality.)
If we choose some \(\lambda\gg\|b\|_{\operatorname{BMO}^{2}(\mathscr{D})}\), then \(\mu(\bigcup S^{\prime})\ll\mu(S)\). Denoting by \(\mathscr{S}\) the collection of all stopping cubes of both kinds, and
\[E_{S}:=S\setminus\bigcup_{\begin{subarray}{c}S^{\prime}\in\mathscr{S}\\ S^{\prime}\subsetneq S\end{subarray}}S^{\prime},\]
these subsets \(E_{S}\subseteq S\) are pairwise disjoint, and
\[\mu(E_{S})\gtrsim\mu(S).\]
The existence of such subsets is referred to as the _sparseness_ of the collection \(\mathscr{S}\).
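As an illustration of the stopping-time construction, the following is a simplified sketch of ours on dyadic intervals of \([0,1)\) with normalised counting measure (not the metric-space construction of the proof); the function name and the index-range representation of cubes are assumptions of the sketch.

```python
# Stopping-cube selection driven by the two criteria above:
# <|f|>_{S'} > 4<|f|>_S, or the square function accumulated over cubes
# strictly between S' and S exceeding lam**2 on S'.
import numpy as np

def stopping_cubes(f, b, lam, N):
    absf = np.abs(np.asarray(f, dtype=float))
    b = np.asarray(b, dtype=float)
    avg = lambda g, lo, hi: g[lo:hi].mean()

    collection = [(0, 2**N)]              # initial layer: the top cube
    queue = [(0, 2**N)]
    while queue:
        slo, shi = queue.pop()
        if shi - slo == 1:
            continue                      # nothing below the finest level
        base = avg(absf, slo, shi)        # <|f|>_S
        mid = (slo + shi) // 2
        stack = [(slo, mid, 0.0), (mid, shi, 0.0)]   # the two children of S
        while stack:
            lo, hi, acc = stack.pop()
            if avg(absf, lo, hi) > 4 * base or acc > lam**2:
                collection.append((lo, hi))   # maximal cube where a criterion holds
                queue.append((lo, hi))        # restart the construction inside it
                continue
            if hi - lo == 1:
                continue
            m = (lo + hi) // 2
            mean = avg(b, lo, hi)
            # D_Q b is constant on each child of Q = [lo, hi); accumulate |D_Q b|^2
            stack.append((lo, m, acc + (avg(b, lo, m) - mean) ** 2))
            stack.append((m, hi, acc + (avg(b, m, hi) - mean) ** 2))
    return collection

rng = np.random.default_rng(0)
print(len(stopping_cubes(rng.standard_normal(2**8), rng.standard_normal(2**8), 1.0, 8)))
```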
Let us now return to the estimation of \(\Pi^{\sigma,\tau}_{b}f\). For each \(Q\in\mathscr{D}_{i}\) for some \(i\in[\sigma,\tau]\), let \(\pi Q\in\mathscr{S}\) denote the minimal stopping cube that contains \(Q\). Regrouping the
summation under these stopping parents, we have
\[\sum_{i=\sigma}^{\tau}\sum_{Q\in\mathscr{D}_{i}}|\mathbb{D}_{Q}b|^{2} \langle\|f\|_{\mathcal{X}}\rangle_{Q}^{2} \leq\sum_{S\in\mathscr{S}}\sum_{\begin{subarray}{c}Q\in\mathscr{D} \\ \pi Q=S\end{subarray}}|\mathbb{D}_{Q}b|^{2}\langle\|f\|_{\mathcal{X}}\rangle_{Q}^ {2}\] \[\lesssim\sum_{S\in\mathscr{S}}\Big{(}\sum_{\begin{subarray}{c}Q \in\mathscr{D}\\ \pi Q=S\end{subarray}}|\mathbb{D}_{Q}b|^{2}\Big{)}\langle\|f\|_{\mathcal{X}} \rangle_{S}^{2}\] \[\lesssim\sum_{S\in\mathscr{S}}\|b\|_{\mathrm{BMO}^{2}(\mathscr{D })}^{2}1_{S}\langle\|f\|_{\mathcal{X}}\rangle_{S}^{2}.\]
Hence
\[\Big{\|}\Big{(}\sum_{i=\sigma}^{\tau}\sum_{Q\in\mathscr{D}_{i}}| \mathbb{D}_{Q}b|^{2}\langle\|f\|_{\mathcal{X}}\rangle_{Q}^{2}\Big{)}^{1/2} \Big{\|}_{L_{s}(\mu)}\] \[\qquad\lesssim\|b\|_{\mathrm{BMO}^{2}(\mathscr{D})}\Big{\|} \Big{(}\sum_{S\in\mathscr{S}}1_{S}\langle\|f\|_{\mathcal{X}}\rangle_{S}^{2} \Big{)}^{1/2}\Big{\|}_{L_{s}(\mu)}\]
Here
\[\Big{\|}\Big{(}\sum_{S\in\mathscr{S}}1_{S}\langle\|f\|_{\mathcal{X}}\rangle_{ S}^{2}\Big{)}^{1/2}\Big{\|}_{L_{s}(\mu)}\lesssim\Big{\|}\sum_{S\in\mathscr{S}}1 _{S}\langle\|f\|_{\mathcal{X}}\rangle_{S}\Big{\|}_{L_{s}(\mu)}.\]
Dualising with \(h\in L_{s^{\prime}}(\mu)\) and using sparseness and the boundedness of the dyadic maximal operator
\[M_{\mathscr{D}}\phi(u):=\sup_{Q\in\mathscr{D}}1_{Q}(u)\langle|\phi|\rangle_{Q},\]
we find that
\[\int_{E}\Big{(}\sum_{S\in\mathscr{S}}1_{S}\langle\|f\|_{\mathcal{ X}}\rangle_{S}\Big{)}hd\mu =\sum_{S\in\mathscr{S}}\langle\|f\|_{\mathcal{X}}\rangle_{S} \langle h\rangle_{S}\mu(S)\lesssim\sum_{S\in\mathscr{S}}\langle\|f\|_{ \mathcal{X}}\rangle_{S}\langle h\rangle_{S}\mu(E_{S})\] \[\leq\sum_{S\in\mathscr{S}}\int_{E_{S}}M_{\mathscr{D}}(\|f\|_{ \mathcal{X}})(u)M_{\mathscr{D}}h(u)d\mu(u)\] \[\leq\int_{E}M_{\mathscr{D}}(\|f\|_{\mathcal{X}})(u)M_{\mathscr{D }}h(u)d\mu(u)\] \[\leq\big{\|}M_{\mathscr{D}}(\|f\|_{\mathcal{X}})\big{\|}_{L_{s}( \mu)}\|M_{\mathscr{D}}h\|_{L_{s^{\prime}}(\mu)}\] \[\lesssim\big{\|}\|f\|_{\mathcal{X}}\big{\|}_{L_{s}(\mu)}\|h\|_{L_ {s^{\prime}}(\mu)}=\|f\|_{L_{s}(\mu;\mathcal{X})}\|h\|_{L_{s^{\prime}}(\mu)}.\]
Thus
\[\Big{\|}\sum_{S\in\mathscr{S}}1_{S}\langle\|f\|_{\mathcal{X}}\rangle_{S}\Big{\|} _{L_{s}(\mu)}\lesssim\|f\|_{L_{s}(\mu;\mathcal{X})},\]
hence
\[\Big{\|}\Big{(}\sum_{i=\sigma}^{\tau}\sum_{Q\in\mathscr{D}_{i}}|\mathbb{D}_{Q }b|^{2}\langle\|f\|_{\mathcal{X}}\rangle_{Q}^{2}\Big{)}^{1/2}\Big{\|}_{L_{s}( \mu)}\lesssim\|b\|_{\mathrm{BMO}^{2}(\mathscr{D})}\|f\|_{L_{s}(\mu;\mathcal{X})}\]
and therefore
\[\|\Pi_{b}^{\sigma,\tau}f\|_{L_{s}(\mu;\mathcal{X})}\lesssim n^{1/p-1/2}\|b\|_ {\mathrm{BMO}^{2}(\mathscr{D})}\|f\|_{L_{s}(\mu;\mathcal{X})},\]
provided that \(\mathcal{X}\) has martingale type \(p\in[1,2]\).
Recall that \(b=T1\). That this belongs to BMO is a well known necessary condition for the boundedness of \(T\) on \(L_{s}(\mu)\). The standard argument becomes
particularly simple in the present finite case, since \(T1\) is a pointwise well-defined function (not an equivalence class modulo constants as in the general theory). We have
\[T1=T(1_{cB_{Q}})+T(1_{(cB_{Q})^{c}})\]
and hence
\[\Big{(}\int_{Q} |T1-\langle T1\rangle_{Q}|^{s}d\mu\Big{)}^{1/s}\leq 2\inf_{z} \Big{(}\int_{Q}|T1-z|^{s}d\mu\Big{)}^{1/s}\] \[\leq 2\Big{(}\int_{Q}|T1-T(1_{(cB_{Q})^{c}})(z_{Q})|^{s}d\mu \Big{)}^{1/s}\] \[\leq 2\Big{(}\int_{Q}|T(1_{cB_{Q}})|^{s}d\mu\Big{)}^{1/s}+2\Big{(} \int_{Q}|T(1_{(cB_{Q})^{c}})-T(1_{(cB_{Q})^{c}})(z_{Q})|^{s}d\mu\Big{)}^{1/s}.\]
The first term is dominated by the ball testing condition
\[\|T\|_{\text{test}^{s}(\mathscr{B})}:=\sup_{B\in\mathscr{B}}\frac{\|T1_{B}\|_ {L_{s}(\mu)}}{\mu(B)^{1/s}},\]
where \(\mathscr{B}\) is the family of all balls \(B(u,t)\subset E\); namely
\[\Big{(}\int_{Q}|T(1_{cB_{Q}})|^{s}d\mu\Big{)}^{1/s}\leq\|T\|_{\text{test}^{s} (\mathscr{B})}\mu(cB_{Q})^{1/s}\lesssim\|T\|_{\text{test}^{s}(\mathscr{B})}\mu (Q)^{1/s}.\]
For the remaining integral, we observe that
\[\int_{Q} |T(1_{(cB_{Q})^{c}})(u)-T(1_{(cB_{Q})^{c}})(z_{Q})|^{s}d\mu(u)\] \[\leq\int_{Q}\int_{(cB_{Q})^{c}}|K(u,v)-K(z_{Q},v)|d\mu(v)d\mu(u)\] \[\lesssim\int_{Q}\int_{(cB_{Q})^{c}}\omega\Big{(}\frac{d(u,z_{Q})} {d(v,z_{Q})}\Big{)}\frac{1}{V(v,z_{Q})}d\mu(v)d\mu(u).\]
The inner integral is bounded by a constant by computations like those in the proof of Lemma 4.1, and the outer integral over \(Q\) then gives simply \(\mu(Q)\).
## 7. Reorganisation of the remaining parts
After the extraction of the paraproducts, we are left with estimating sums of the form
\[\sum_{i=\sigma}^{\tau}\sum_{P,Q\in\mathscr{D}_{i}}T(P,Q),\quad T(P,Q)\in\begin{cases}\langle T\mathbb{D}_{P}f,\mathbb{D}_{Q}g\rangle,\\ (\langle f\rangle_{P}-\langle f\rangle_{Q})\langle T1_{P},\mathbb{D}_{Q}g\rangle,\\ \langle T\mathbb{D}_{P}f,1_{Q}\rangle(\langle g\rangle_{Q}-\langle g\rangle_{P}),\end{cases} \tag{7.1}\]
where we note that the last two "products" are actually duality pairings between e.g. \((\langle f\rangle_{P}-\langle f\rangle_{Q})\in\mathcal{X}\) and \(\langle T1_{P},\mathbb{D}_{Q}g\rangle\in\mathcal{X}^{\prime}\).
Dropping for the moment the summands, and recalling the parameter \(m_{0}\) guaranteed by Lemma 3.2, we further reorganise the summation as
\[\sum_{P,Q\in\mathscr{D}_{i}}=\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ d(x_{P},x_{Q})\leq\frac{1}{2}\varepsilon\delta^{-m_{0}}\ell(P)\end{subarray}}+\sum_{m=m_{0}+1}^{\infty}\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ \frac{1}{2}\varepsilon\delta^{1-m}\ell(P)<d(x_{P},x_{Q})\leq\frac{1}{2}\varepsilon\delta^{-m}\ell(P)\end{subarray}}.\]
In the \(m\)-th sum, the Haar coefficients (appearing when expanding the summands \(T(P,Q)\)) can be estimated by
\[|\langle Th_{P}^{\alpha},h_{Q}^{\beta}\rangle|\lesssim\omega(\delta^{m})\frac{ \sqrt{\mu(P)\mu(Q)}}{V(z_{P},\delta^{-m}\ell(P))}, \tag{7.2}\]
which depends on the lower bound on \(d(x_{P},x_{Q})\). The case \(m=m_{0}\) is included, in which case this bound reduces to just \(O(1)\), regarding \(m_{0}\) as fixed.
Let us observe that we can also truncate the summation over \(m\) from above. Due to the property that \(K(u,v)\neq 0\) only if \(d(u,v)<R\), a pairing like \(\langle T\mathbb{D}_{P}f,\mathbb{D}_{Q}g\rangle\) (or one with \(\mathbb{D}_{P}f\) replaced by \(1_{P}\), or \(\mathbb{D}_{Q}g\) replaced by \(1_{Q}\)) can only be non-zero if \(d(P,Q)<R\) and hence also \(d(x_{P},x_{Q})\lesssim R\). But in the \(m\)-th term we also have the lower bound \(d(x_{P},x_{Q})>\frac{1}{2}\varepsilon\delta^{1-m}\ell(P)\gtrsim\frac{1}{2} \varepsilon\delta^{1-m}r\). Hence this term gives no contribution unless \(\delta^{-m}\lesssim R/r\), thus \(m\lesssim n\).
To proceed further, we will need to make use of a random selection of our dyadic systems \(\mathscr{D}\). We can write the decomposition of the pairing \(\langle Tf,g\rangle\) relative to any such dyadic system, and then take the expectation over the choice of \(\mathscr{D}\). By linearity, we can move the expectation inside the summations. If we assume that \(f\) and \(g\) are boundedly supported (such functions are in any case dense in the spaces that we consider), we entirely avoid any issues of convergence: The summation over \(i\) is finite already, and for each \(i\), the bounded support of \(f\) can only intersect finitely many of the cubes \(P,Q\in\mathscr{D}_{i}\).
Let \(A_{m}(P,Q)\) denote the random event--as in Lemma 3.2, only with a slightly different notation--that the two (random) cubes \(P,Q\in\mathscr{D}_{i}\) share a common ancestor in the (random) dyadic system \(\mathscr{D}_{i-m}\). By the summation condition that \(d(x_{P},x_{Q})\leq\frac{1}{2}\varepsilon\delta^{-m}\ell(P)\), Lemma 3.2 guarantees that \(\mathbb{P}(A_{m}(P,Q))\in[\frac{1}{2},1]\). Moreover, Lemma 3.2 also says that \(A_{m}(P,Q)\) depends only on \((\omega_{j})_{i-m\leq j<i}\) (observe the difference in indexing compared to Lemma 3.2). On the other hand, the cubes \(P,Q\), as well as their dyadic children, and hence quantities like \(\mathbb{D}_{P}f\) and eventually \(T(P,Q)\), depend only on \((\omega_{j})_{j\geq i}\). Thus
\[T(P,Q)\text{ and }A_{m}(P,Q)\text{ are independent.}\]
We can then manipulate
\[\begin{split}&\mathbb{E}\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ \frac{1}{2}\varepsilon\delta^{1-m}\ell(P)<d(x_{P},x_{Q})\\ \leq\frac{1}{2}\varepsilon\delta^{-m}\ell(P)\end{subarray}}T(P,Q)\\ &=\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ \frac{1}{2}\varepsilon\delta^{1-m}\ell(P)<d(x_{P},x_{Q})\\ \leq\frac{1}{2}\varepsilon\delta^{-m}\ell(P)\end{subarray}}\mathbb{E}T(P,Q)\frac{\mathbb{E}1_{A_{m}(P,Q)}}{\mathbb{P}(A_{m}(P,Q))}\\ &=\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ \frac{1}{2}\varepsilon\delta^{1-m}\ell(P)<d(x_{P},x_{Q})\\ \leq\frac{1}{2}\varepsilon\delta^{-m}\ell(P)\end{subarray}}\mathbb{E}[T(P,Q)1_{A_{m}(P,Q)}]\frac{1}{\mathbb{P}(A_{m}(P,Q))}\\ &=\mathbb{E}\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ \frac{1}{2}\varepsilon\delta^{1-m}\ell(P)<d(x_{P},x_{Q})\\ \leq\frac{1}{2}\varepsilon\delta^{-m}\ell(P)\end{subarray}}T(P,Q)1_{A_{m}(P,Q)}\frac{1}{\mathbb{P}(A_{m}(P,Q))}\\ &=:\mathbb{E}\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ \frac{1}{2}\varepsilon\delta^{1-m}\ell(P)<d(x_{P},x_{Q})\\ \leq\frac{1}{2}\varepsilon\delta^{-m}\ell(P)\end{subarray}}\tilde{T}_{m}(P,Q).\end{split}\]
Thus, at the small cost of multiplying the summands by numerical factors in the range \([1,2]\), we could insert the extra summation condition \(A_{m}(P,Q)\) that \(P,Q\in\mathscr{D}_{i}\) share a common ancestor \(S\in\mathscr{D}_{i-m}\).
We can now reorganise the sums under these common ancestors. Thus, abbreviating the summation condition
\[\begin{cases}d(x_{P},x_{Q})\leq\frac{1}{2}\varepsilon\delta^{-m_{0}}\ell(P),&\text{if }m=m_{0},\\ \frac{1}{2}\varepsilon\delta^{1-m}\ell(P)<d(x_{P},x_{Q})\leq\frac{1}{2}\varepsilon\delta^{-m}\ell(P),&\text{if }m>m_{0},\end{cases}\]
simply as \(\sum^{m}\), we have
\[\begin{split}\mathbb{E}\sum_{i=\sigma}^{\tau}\sum_{P,Q\in\mathscr{D}_{i}}T(P,Q)&=\mathbb{E}\sum_{m=m_{0}}^{\infty}\sum_{i=\sigma}^{\tau}\sum_{P,Q\in\mathscr{D}_{i}}^{m}\tilde{T}_{m}(P,Q)\\ &=\mathbb{E}\sum_{m=m_{0}}^{\infty}\sum_{i=\sigma}^{\tau}\sum_{S\in\mathscr{D}_{i-m}}\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ P,Q\subseteq S\end{subarray}}^{m}\tilde{T}_{m}(P,Q).\end{split}\]
Let us investigate
\[A_{S}^{m}(f,g):=\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ P,Q\subseteq S\end{subarray}}^{m}\tilde{T}_{m}(P,Q),\]
where \(\tilde{T}_{m}(P,Q)\) is just \(T(P,Q)\) multiplied by a number in \(\{0\}\cup[1,2]\), and \(T(P,Q)\) is as in (7.1).
### Cancellation properties
We see that both \(f\) and \(g\) are acted on by operators that annihilate constants and only observe these functions on \(P,Q\subseteq S\). Thus,
we may replace \(f\) by \(1_{S}(f-\langle f\rangle_{S})\), and similarly with \(g\). Moreover, in the case of \(\mathbb{D}_{P}f\), we may replace \(f\) by
\[\mathbb{D}_{S}^{m}f:=\sum_{\begin{subarray}{c}S^{\prime}\in\mathscr{D},\ S^{\prime}\subseteq S\\ \ell(S^{\prime})=\delta^{m}\ell(S)\end{subarray}}\mathbb{D}_{S^{\prime}}f,\]
and in the case of \(\langle f\rangle_{P}-\langle f\rangle_{Q}\), by
\[\mathbb{E}_{S}^{m}f:=\sum_{\begin{subarray}{c}S^{\prime}\in\mathscr{D},\ S^{\prime}\subseteq S\\ \ell(S^{\prime})=\delta^{m}\ell(S)\end{subarray}}\mathbb{E}_{S^{\prime}}f,\]
as this operator only observes the averages of \(f\) on the cubes \(P\) and \(Q\). Combining these observations, we can actually replace \(f\), in the latter case, by
\[1_{S}(\mathbb{E}_{S}^{m}f-\langle\mathbb{E}_{S}^{m}f\rangle_{S})=\sum_{\begin{subarray}{c}S^{\prime}\in\mathscr{D},\ S^{\prime}\subseteq S\\ \ell(S^{\prime})>\delta^{m}\ell(S)\end{subarray}}\mathbb{D}_{S^{\prime}}f=:\mathbb{D}_{S}^{[0,m)}f.\]
Similar observations apply on the \(g\) side. This describes the cancellation properties present in \(A_{S}^{m}(f,g)\).
### Size properties
Noting that \(\ell(S)=\delta^{-m}\ell(P)\) and \(z_{P}\in S\), we can simplify the denominator in (7.2) as
\[V(z_{P},\delta^{-m}\ell(P))=V(z_{P},\ell(S))\asymp\mu(S).\]
In the case of \(\mathbb{D}_{P}\) and \(\mathbb{D}_{Q}\) on both sides, we have
\[A_{S}^{m}(f,g)=\iint_{S}a_{S}^{m}(u,v)f(v)g(u)d\mu(v)d\mu(u),\]
where
\[a_{S}^{m}(u,v)=\sum_{\alpha,\beta}\sum_{\begin{subarray}{c}P,Q\in\mathscr{D} _{i}\\ P,Q\subseteq S\end{subarray}}^{m}\chi_{\alpha,\beta}^{m}\langle Th_{P}^{ \alpha},h_{Q}^{\beta}\rangle h_{P}^{\alpha}(v)h_{Q}^{\beta}(u)\]
satisfies
\[|a_{S}^{m}(u,v)| \lesssim\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ P,Q\subseteq S\end{subarray}}^{m}\omega(\delta^{m})\frac{\sqrt{\mu(P)\mu(Q)}}{ \mu(S)}\frac{1_{P}(v)}{\mu(P)^{1/2}}\frac{1_{Q}(u)}{\mu(Q)^{1/2}}\] \[=\omega(\delta^{m})\frac{1}{\mu(S)}\sum_{\begin{subarray}{c}P,Q \in\mathscr{D}_{i}\\ P,Q\subseteq S\end{subarray}}^{m}1_{P}(v)1_{Q}(u)\leq\omega(\delta^{m})\frac{ 1}{\mu(S)}1_{S\times S}(u,v).\]
In the case of \(\langle f\rangle_{P}-\langle f\rangle_{Q}\), we get
\[a_{S}^{m}(u,v)=\sum_{\beta}\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ P,Q\subseteq S\end{subarray}}^{m}\chi_{\alpha,\beta}^{m}\langle T1_{P},h_{Q}^{ \beta}\rangle\Big{(}\frac{1_{P}(v)}{\mu(P)}-\frac{1_{Q}(v)}{\mu(Q)}\Big{)}h_ {Q}^{\beta}(u),\]
which satisfies
\[|a_{S}^{m}(u,v)| \lesssim\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ P,Q\subseteq S\end{subarray}}^{m}\omega(\delta^{m})\frac{\mu(P)\sqrt{\mu(Q)}}{\mu(S)}\Big{(}\frac{1_{P}(v)}{\mu(P)}+\frac{1_{Q}(v)}{\mu(Q)}\Big{)}\frac{1_{Q}(u)}{\mu(Q)^{1/2}}\] \[=\omega(\delta^{m})\frac{1}{\mu(S)}\sum_{\begin{subarray}{c}P,Q\in\mathscr{D}_{i}\\ P,Q\subseteq S\end{subarray}}^{m}\Big{(}1_{P}(v)1_{Q}(u)+\mu(P)\frac{1_{Q}(v)1_{Q}(u)}{\mu(Q)}\Big{)}\] \[\leq\omega(\delta^{m})\Big{(}\frac{1}{\mu(S)}1_{S\times S}(u,v)+\sum_{\begin{subarray}{c}Q\subseteq S\\ \ell(Q)=\delta^{m}\ell(S)\end{subarray}}\frac{1_{Q\times Q}(u,v)}{\mu(Q)}\Big{)}.\]
The case of \(\langle g\rangle_{Q}-\langle g\rangle_{P}\) is of course symmetric.
Note that \(1_{Q\times Q}(u,v)/\mu(Q)\) is the kernel of the averaging operator \(\mathbb{E}_{Q}\).
Altogether, we find that \(A_{S}^{m}\) takes one of the following forms:
\[A_{S}^{m}=\omega(\delta^{m})\times\begin{cases}\mathbb{D}_{S}^{m}\dot{A}_{S}^ {m}\mathbb{D}_{S}^{m},\\ \mathbb{D}_{S}^{[0,m)}\dot{A}_{S}^{m}\mathbb{D}_{S}^{m},\\ \mathbb{D}_{S}^{m}\dot{A}_{S}^{m}\mathbb{D}_{S}^{[0,m)},\end{cases}\]
where
\[\|\dot{A}_{S}^{m}f\|_{\mathcal{X}}\lesssim\mathbb{E}_{S}\|f\|_{\mathcal{X}}+ \mathbb{E}_{S}^{m}\|f\|_{\mathcal{X}}.\]
## 8. Final estimates
We are left with estimating, e.g.,
\[\sum_{m=m_{0}}^{\infty}\omega(\delta^{m})\sum_{i=\sigma}^{\tau}\langle \mathbb{D}_{i-m}^{[0,m)}\dot{A}_{i-m}^{m}\mathbb{D}_{i-m}^{m}f,g\rangle, \tag{8.1}\]
where
\[G_{i}=\sum_{S\in\mathscr{D}_{i}}G_{S},\quad G\in\{\mathbb{D}^{[0,m)},\dot{A}^{m},\mathbb{D}^{m}\},\]
as well as two similar sums with either the positions of \(\mathbb{D}_{S}^{[0,m)}\) and \(\mathbb{D}_{S}^{m}\) interchanged, or both replaced by \(\mathbb{D}_{S}^{m}\).
Let us estimate the term that we wrote down above.
**8.2 Lemma**.: _Let \(1<p\leq s\leq q<\infty\) and suppose that \(\mathcal{X}\) has martingale type \(p\) and martingale cotype \(q\). Then_
\[\Big{|}\sum_{i=\sigma}^{\tau}\langle\mathbb{D}_{i-m}^{[0,m)}\dot{A}_{i-m}^{m} \mathbb{D}_{i-m}^{m}f,g\rangle\Big{|}\lesssim n^{1/p-1/q}m^{1/p^{\prime}}\|f \|_{L_{s}(\mu;\mathcal{X})}\|g\|_{L_{s}(\mu;\mathcal{X}^{\prime})}.\]
Proof.: By self-adjointness of the difference operators,
\[|\langle\mathbb{D}_{i}^{[0,m)}\dot{A}_{i}^{m}\mathbb{D}_{i}^{m} f,g\rangle| =|\langle\dot{A}_{i}^{m}\mathbb{D}_{i}^{m}f,\mathbb{D}_{i}^{[0,m)} g\rangle|\] \[\leq\langle(\mathbb{E}_{i}+\mathbb{E}_{i+m})\|\mathbb{D}_{i}^{m} f\|_{\mathcal{X}},\|\mathbb{D}_{i}^{[0,m)}g\|_{\mathcal{X}^{\prime}}\rangle.\]
Hence
\[\sum_{i=\sigma}^{\tau} |\langle\mathbb{D}_{i-m}^{[0,m)}\dot{A}_{i-m}^{m}\mathbb{D}_{i-m}^{m }f,g\rangle|=\sum_{i=\sigma-m}^{\tau-m}|\langle\mathbb{D}_{i}^{[0,m)}\dot{A}_{i }^{m}\mathbb{D}_{i}^{m}f,g\rangle|\] \[\leq\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\big{[}(\mathbb{E}_{i }+\mathbb{E}_{i+m})\|\mathbb{D}_{i}^{m}f\|_{\mathcal{X}}]^{q}\Big{)}^{1/q} \Big{\|}_{L_{s}(\mu)}\] \[\qquad\times\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\big{[}\| \mathbb{D}_{i}^{[0,m)}g\|_{\mathcal{X}}]^{q^{\prime}}\Big{)}^{1/q^{\prime}} \Big{\|}_{L_{s^{\prime}}(\mu)}\]
for any \(q,s\in(1,\infty)\). In the first term, we can pull out the conditional expectations (even after a crude estimate \((\mathbb{E}_{i}+\mathbb{E}_{i+m})\phi\leq\sup_{j\in\mathbb{Z}}\mathbb{E}_{j}\phi\) if we wish) by the vector-valued Doob maximal inequality [13, Theorem 3.2.7]:
\[\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\big{[}(\mathbb{E}_{i}+\mathbb{E}_{i +m})\|\mathbb{D}_{i}^{m}f\|_{\mathcal{X}}]^{q}\Big{)}^{1/q}\Big{\|}_{L_{s}( \mu)}\lesssim\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\|\mathbb{D}_{i}^{m}f\| _{\mathcal{X}}^{q}\Big{)}^{1/q}\Big{\|}_{L_{s}(\mu)}.\]
If \(\mathcal{X}\) has martingale cotype \(q\), then
\[\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\|\mathbb{D}_{i}^{m}f\|_{\mathcal{X} }^{q}\Big{)}^{1/q}\Big{\|}_{L_{s}(\mu)}\lesssim\|f\|_{L_{s}(\mu;\mathcal{X})},\]
essentially by definition, since \(\mathbb{D}_{i}^{m}f\) are martingale differences.
The dual side is slightly trickier, since the compound differences
\[\mathbb{D}_{i}^{[0,m)}=\sum_{k=0}^{m-1}\mathbb{D}_{i+k}\]
involve overlapping scales. For the optimal estimate, we need to be somewhat clever with the use of the triangle inequality. This is accomplished as follows: We now assume that \(p\leq s\leq q\); hence \(q^{\prime}\leq s^{\prime}\leq p^{\prime}\), where \(\mathcal{X}\) has martingale type \(p\) and martingale cotype \(q\). Then
\[\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\|\mathbb{D}_{i}^{[0,m)}g\|_{\mathcal{X}^{\prime}}^{q^{\prime}}\Big{)}^{1/q^{\prime}}\Big{\|}_{L_{s^{\prime}}(\mu)}\] \[\qquad\leq n^{1/q^{\prime}-1/s^{\prime}}\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\|\mathbb{D}_{i}^{[0,m)}g\|_{\mathcal{X}^{\prime}}^{s^{\prime}}\Big{)}^{1/s^{\prime}}\Big{\|}_{L_{s^{\prime}}(\mu)}\] \[\qquad=n^{1/q^{\prime}-1/s^{\prime}}\Big{(}\sum_{j=0}^{m-1}\Big{\|}\Big{(}\sum_{\begin{subarray}{c}i=\sigma-m\\ i\equiv j\mod m\end{subarray}}^{\tau-m}\|\mathbb{D}_{i}^{[0,m)}g\|_{\mathcal{X}^{\prime}}^{s^{\prime}}\Big{)}^{1/s^{\prime}}\Big{\|}_{L_{s^{\prime}}(\mu)}^{s^{\prime}}\Big{)}^{1/s^{\prime}}.\]
The inner summation contains every \(m\)-th term among a sequence of length \(n\). Recalling that \(m\lesssim n\), the number of such terms is \(O(n/m)\). (There could of course be a single term even if \(m\gg n\), in which case the estimate would no longer be valid.) Thus, by Holder's inequality, we can continue with
\[\lesssim n^{1/q^{\prime}-1/s^{\prime}}(n/m)^{1/s^{\prime}-1/p^{\prime}}\Big{(}\sum_{j=0}^{m-1}\Big{\|}\Big{(}\sum_{\begin{subarray}{c}i=\sigma-m\\ i\equiv j\mod m\end{subarray}}^{\tau-m}\|\mathbb{D}_{i}^{[0,m)}g\|_{\mathcal{X}^{\prime}}^{p^{\prime}}\Big{)}^{1/p^{\prime}}\Big{\|}_{L_{s^{\prime}}(\mu)}^{s^{\prime}}\Big{)}^{1/s^{\prime}}.\]
For each fixed \(j\), the sequence \((\mathbb{D}_{i}^{[0,m)}g)_{i\equiv j\mod m}\) consists of martingale differences. If \(\mathcal{X}\) has martingale type \(p\), its dual \(\mathcal{X}^{\prime}\) has martingale cotype \(p^{\prime}\), and we can further continue the estimate
\[\lesssim n^{1/q^{\prime}-1/s^{\prime}}(n/m)^{1/s^{\prime}-1/p^{\prime}}\Big{(}\sum_{j=0}^{m-1}\|g\|_{L_{s^{\prime}}(\mu;\mathcal{X}^{\prime})}^{s^{\prime}}\Big{)}^{1/s^{\prime}}\] \[=n^{1/q^{\prime}-1/s^{\prime}}(n/m)^{1/s^{\prime}-1/p^{\prime}}m^{1/s^{\prime}}\|g\|_{L_{s^{\prime}}(\mu;\mathcal{X}^{\prime})}\] \[=n^{1/q^{\prime}-1/p^{\prime}}m^{1/p^{\prime}}\|g\|_{L_{s^{\prime}}(\mu;\mathcal{X}^{\prime})}=n^{1/p-1/q}m^{1/p^{\prime}}\|g\|_{L_{s^{\prime}}(\mu;\mathcal{X}^{\prime})}.\]
By symmetry, we infer that
\[\Big{|}\sum_{i=\sigma}^{\tau}\langle\mathbb{D}_{i-m}^{m}\dot{A}_{i-m}^{m} \mathbb{D}_{i-m}^{[0,m)}f,g\rangle\Big{|}\lesssim n^{1/p-1/q}m^{1/q}\|f\|_{L_{ s}(\mu;\mathcal{X})}\|g\|_{L_{s}(\mu;\mathcal{X}^{\prime})}.\]
In the case of \(\mathbb{D}_{i-m}^{m}\) on both sides, the estimation is more straightforward, and we simply estimate
\[\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\|\mathbb{D}_{i}^{m}g\|_{\mathcal{X}^{\prime}}^{q^{\prime}}\Big{)}^{1/q^{\prime}}\Big{\|}_{L_{s^{\prime}}(\mu)}\] \[\qquad\leq n^{1/q^{\prime}-1/p^{\prime}}\Big{\|}\Big{(}\sum_{i=\sigma-m}^{\tau-m}\|\mathbb{D}_{i}^{m}g\|_{\mathcal{X}^{\prime}}^{p^{\prime}}\Big{)}^{1/p^{\prime}}\Big{\|}_{L_{s^{\prime}}(\mu)}\lesssim n^{1/q^{\prime}-1/p^{\prime}}\|g\|_{L_{s^{\prime}}(\mu;\mathcal{X}^{\prime})}\]
and hence
\[\Big{|}\sum_{i=\sigma}^{\tau}\langle\mathbb{D}_{i-m}^{m}\dot{A}_{i-m}^{m} \mathbb{D}_{i-m}^{m}f,g\rangle\Big{|}\lesssim n^{1/p-1/q}\|f\|_{L_{s}(\mu; \mathcal{X})}\|g\|_{L_{s}(\mu;\mathcal{X}^{\prime})}.\]
The largest of the different bounds that we have obtained for the \(m\)-th term is hence
\[n^{1/p-1/q}m^{\max(1/p^{\prime},1/q)}\|f\|_{L_{s}(\mu;\mathcal{X})}\|g\|_{L_{s }(\mu;\mathcal{X}^{\prime})}.\]
Recalling from (8.1) that we still need to sum over \(m\), we finally obtain the bound
\[n^{1/p-1/q}\sum_{m=m_{0}}^{\infty}\omega(\delta^{m})m^{\max(1/p^{\prime},1/q)} \asymp n^{1/p-1/q}\int_{0}^{1}\omega(t)(1+\log\frac{1}{t})^{\max(1/p^{\prime},1/q)}\frac{dt}{t}\] \[=:n^{1/p-1/q}\|\omega\|_{\operatorname{Dini}^{\max(1/p^{\prime},1/q)}}.\]
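As a quick numerical illustration of this comparison (our own check, not part of the argument; the modulus \(\omega(t)=t^{0.3}\), the value \(\delta=1/2\) and the exponent \(\alpha=1/2\) are assumptions), the dyadic sum and the logarithmically weighted Dini integral are indeed of the same order:

```python
# Compare sum_m omega(delta**m) * m**alpha with the weighted Dini integral.
import numpy as np

alpha, delta = 0.5, 0.5                       # alpha plays the role of max(1/p', 1/q)
omega = lambda t: t ** 0.3                    # an assumed Dini modulus of continuity

dyadic_sum = sum(omega(delta ** m) * m ** alpha for m in range(1, 400))

dt = 1e-6
t = np.arange(dt / 2, 1.0, dt)                # midpoint rule on (0, 1)
integral = np.sum(omega(t) * (1 + np.log(1.0 / t)) ** alpha / t) * dt
print(dyadic_sum, integral)                   # comparable up to constants
```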
## 9. Concluding remarks
We have now concluded the (unavoidably somewhat lengthy) proof of Theorem 1.2. In the course of the proof, after some preliminary reductions, the main part of the operator was decomposed into two paraproducts and the cancellative part. Recall that we assume that our target Banach space has martingale type \(p\) and martingale cotype \(q\).
For the paraproduct, we found the estimate \(O(n^{1/p-1/2})\). By symmetry, the dual paraproduct then satisfies the bound \(O(n^{1/q^{\prime}-1/2})=O(n^{1/2-1/q})\). Finally, the cancellative part of the operator had the bound \(O(n^{1/p-1/q})\), provided that the kernel satisfies a Dini condition of order \(\max(1/p^{\prime},1/q)\).
It might at first seem counterintuitive that the order of the required Dini condition increases with improving martingale type or cotype. On the other hand, the
dependence on the finiteness parameter \(n\) of the kernel decreases at the same time. A way to think of this is as follows: If the space is better, then we are able to make better use of the properties of the kernel, whereas a poor space only observes a Dini condition of low order, even if more would be available.
If (and only if) \(\mathcal{X}\) is (isomorphic to) a Hilbert space, then one can take \(p=q=2\), and all bounds become \(O(1)\) in terms of \(n\), allowing one to dispense with the truncations and deal with genuine singular integrals. In this case, one needs a Dini condition of order \(1/2\) to run the argument. It seems to be open whether this can be relaxed, even in the scalar case.
In fact, with the present argument, the only way to get \(O(1)\) in terms of \(n\) in the cancellative term is to have \(p=q=2\). This is due to the fact that necessarily \(p\leq 2\) and \(q\geq 2\); hence \(1/p-1/q=(1/p-1/2)+(1/2-1/q)\) can only be zero if both terms vanish. On the other hand, for the paraproducts, it suffices to have just one of \(p\) or \(q\) equal to \(2\), which provides a much larger class of examples.
The boundedness of the dyadic paraproduct \(\Pi_{b}\) with a scalar-valued symbol \(b\in\mathrm{BMO}\) is well known on \(L_{s}(\mu;\mathcal{X})\) in the case that \(\mathcal{X}\) is a UMD space. The result is attributed to Bourgain, and written down in [7]. Our considerations in Section 6 show that such paraproducts are also bounded on \(L_{s}(\mu;\mathcal{X})\) if \(\mathcal{X}\) has martingale type \(2\). These two classes (UMD and martingale type \(2\)) are not mutually comparable. This raises the interesting question about the maximal class of spaces \(\mathcal{X}\) such that the dyadic paraproduct induces a bounded operator on \(L_{s}(\mu;\mathcal{X})\).
**Acknowledgments.** I would like to thank the anonymous referees for their constructive comments on the manuscript.
|
2306.04622
|
Yet Another Algorithm for Supervised Principal Component Analysis:
Supervised Linear Centroid-Encoder
|
We propose a new supervised dimensionality reduction technique called
Supervised Linear Centroid-Encoder (SLCE), a linear counterpart of the
nonlinear Centroid-Encoder (CE) \citep{ghosh2022supervised}. SLCE works by
mapping the samples of a class to its class centroid using a linear
transformation. The transformation is a projection that reconstructs a point
such that its distance from the corresponding class centroid, i.e.,
centroid-reconstruction loss, is minimized in the ambient space. We derive a
closed-form solution using an eigendecomposition of a symmetric matrix. We did
a detailed analysis and presented some crucial mathematical properties of the
proposed approach. %We also provide an iterative solution approach based
solving the optimization problem using a descent method. We establish a
connection between the eigenvalues and the centroid-reconstruction loss. In
contrast to Principal Component Analysis (PCA) which reconstructs a sample in
the ambient space, the transformation of SLCE uses the instances of a class to
rebuild the corresponding class centroid. Therefore the proposed method can be
considered a form of supervised PCA. Experimental results show the performance
advantage of SLCE over other supervised methods.
|
Tomojit Ghosh, Michael Kirby
|
2023-06-07T17:52:29Z
|
http://arxiv.org/abs/2306.04622v1
|
Yet Another Algorithm for Supervised Principal Component Analysis: Supervised Linear Centroid-Encoder
###### Abstract
We propose a new supervised dimensionality reduction technique called Supervised Linear Centroid-Encoder (SLCE), a linear counterpart of the nonlinear Centroid-Encoder (CE) (Ghosh and Kirby, 2022). SLCE works by mapping the samples of a class to its class centroid using a linear transformation. The transformation is a projection that reconstructs a point such that its distance from the corresponding class centroid, i.e., centroid-reconstruction loss, is minimized in the ambient space. We derive a closed-form solution using an eigendecomposition of a symmetric matrix. We did a detailed analysis and presented some crucial mathematical properties of the proposed approach. We establish a connection between the eigenvalues and the centroid-reconstruction loss. In contrast to Principal Component Analysis (PCA) which reconstructs a sample in the ambient space, the transformation of SLCE uses the instances of a class to rebuild the corresponding class centroid. Therefore the proposed method can be considered a form of supervised PCA. Experimental results show the performance advantage of SLCE over other supervised methods.
Supervised Linear Centroid-Encoder, Centroid-Encoder, Principal Component Analysis (PCA), Supervised PCA, Linear Dimensionality Reduction.
## 1 Introduction
Historically, dimensionality reduction (DR) is an integral component of the machine learning workflow for high-dimensional data sets; see, e.g., (Duda and Hart, 1973; Kirby, 2001; Chepushtanova et al., 2020a). Recent technological advancements in data acquisition, and storage along with the wider accessibility of High Performance Computing (HPC), have increased the need for a broad range of tools capable of high-dimensional data analytics (Hayden, 2015). For example, bioinformaticians seek to understand _omics_ data such as the gene expression levels measured by microarrays or next-generation sequencing techniques where samples consist of 20,000-50,000 measurements (Reuter et al., 2015). Often these high-dimensional features may be noisy, redundant, missing, or irrelevant (Jing et al.,
2015), which has the potential to degrade the performance of machine learning tasks (Shen et al., 2022). Further, the number of samples available is often so small as to preclude the quality training of nonlinear methods (Ghosh and Kirby, 2022; Aminian et al., 2021).
In general, a DR technique is applied as a pre-processing step to reduce data dimension to facilitate visualization, clustering, or classification (Li et al., 2021). The requirements of a DR technique typically include preserving one or more of the intrinsic properties of interest in the embedding space. The intrinsic property can be statistical (Jolliffe, 1986; Van der Maaten and Hinton, 2008), topological (Kohonen, 1993), or geometrical (J. B. Tenenbaum and Langford, 2000; Becht et al., 2019); in the presence of labels, the intrinsic property may consist of discriminating features to assign class membership (Fisher, 1936).
Traditionally, principal component analysis (Jolliffe, 1986) (PCA) is the most widely used DR technique, a projection method that minimizes reconstruction loss. Statistically, the projection is built to capture the maximum variance from the sample covariance matrix. Despite its easy implementation and simple geometric interpretation, PCA often produces ambiguous results, sometimes collapsing distinct classes into overlapping regions in the setting where class labels are available. This is of course due to the fact PCA does not explicitly exploit label information. In contrast, Linear Discriminant Analysis (LDA) uses class labels to avoid class overlap by maximizing the class separation and minimizing the class scatter, thus creating a better embedding than PCA (Duda et al., 2006). This motivates us (and others) to find a supervised PCA algorithm in our quest to obtain the best of both worlds.
It is, in general, possible to add labels to unsupervised methods to create their supervised analogs. A heuristic-based supervised PCA model first selects important features by calculating correlation with the response variable and then applies standard PCA to the chosen feature set (Bair et al., 2006). Another supervised PCA technique, proposed by (Barshan et al., 2011), uses the Hilbert-Schmidt independence criterion to compute the principal components which have maximum dependence on the labels. The Supervised Probabilistic PCA also uses the class labels to create the low-dimensional embedding (Yu et al., 2006). Note that all these supervised formulations of PCA work better than the unsupervised one for data where labels are available.
In this paper, we propose a linear dimensionality reduction technique that explicitly utilizes the class labels to create a low-dimensional embedding. This algorithm is effectively a linearization of the nonlinear Centroid-Encoder (Ghosh and Kirby, 2022). Being a linear model, SLCE is less prone to overfitting high-dimensional data sets, requires less data for learning, and at the same time is easy to interpret geometrically. Here we summarize the main contributions of our work.
* We have proposed a linear dimensionality reduction technique called Supervised Linear Centroid-Encoder (SLCE) that uses class label information. The proposed method doesn't use the response variable; instead, it uses the class centroids to impose supervision during learning.
* We have shown the connection between PCA and SLCE and argued that SLCE is a form of supervised PCA.
* Unlike Centroid-Encoder, which uses nonlinear mapping, the proposed technique creates a projection by mapping a sample to its class centroid.
* We proposed a closed-form solution using the eigendecomposition of a symmetric matrix.
* We provide an upper bound on the number of eigenvectors with positive eigenvalues that comprise the SLCE embedding.
* We have shown how the eigenvalues are connected to the final objective, i.e., the centroid-reconstruction loss.
This paper is organized as follows: In Section 2, we review the related literature on supervised DR techniques. In Section 3, we present Supervised Linear Centroid-Encoder, including its derivation and properties. Section 4 presents comparative experimental results on five benchmarking data sets. We conclude in Section 5.
## 2 Related Work
Dimensionality reduction has a long history and is still an active area of research; see, e.g., (Chepushtanova et al., 2020; Van Der Maaten et al., 2009) and references therein. A variety of techniques with a spectrum of optimization problems and heuristics have been developed over the past decades. However, considering that our proposed method is supervised and linear, we will briefly describe this class of algorithms.
Fisher's linear discriminant analysis (LDA) is historically one of the most widely used supervised dimensionality reduction techniques (Fisher, 1936; Duda and Hart, 1973). LDA reduces the dimension by minimizing the class scatter and maximizing the class separation in the reduced space. For data with \(C\) classes, LDA creates the mapping using a \(C-1\)-dimensional subspace. Although PCA is an unsupervised technique, several attempts have been made to incorporate the label information into the model. One such method is Bair's supervised principal component analysis (Bair's SPCA), which is a heuristic-based approach. It is similar to PCA but uses a subset of features that have the maximum dependence on the response variable, i.e., the class label. The feature dependence on the class label is calculated by the standard regression coefficient, which is defined below
\[w_{j}=\frac{x_{j}^{T}y}{\sqrt{x_{j}^{T}x_{j}}} \tag{1}\]
where \(x_{j}\) is the \(j^{th}\) variable and \(y\) is the response variable. After calculating the \(w_{j}\), i.e., the importance of each feature, a threshold \(\theta\) is used to select the most important ones, and finally, standard PCA is applied to the selected features. The proposed method is a two-step process, and the authors used cross-validation to pick an optimal \(\theta\). Notice that \(\theta\) is data set dependent, and searching for an optimum value using cross-validation is computationally expensive. These shortcomings were addressed by Piironen et al., who proposed an iterative supervised principal component (ISPC) method (Piironen and Vehtari, 2018). ISPC doesn't use cross-validation for feature screening and can be used for multiple classes.
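A tiny sketch of Bair's two-step procedure (our own illustration, not the authors' code; the function name, the row-wise feature convention, and a hand-picked \(\theta\) instead of cross-validation are assumptions) could look as follows:

```python
# Screen features by |w_j| from Equation (1), then run ordinary PCA on the survivors.
import numpy as np

def bair_spca(X, y, theta, k):
    """X: d x n data (rows = features), y: length-n response, theta: threshold."""
    w = (X @ y) / np.sqrt(np.sum(X * X, axis=1))   # Equation (1), one w_j per feature
    keep = np.abs(w) > theta                       # feature screening
    Xs = X[keep]
    Xs = Xs - Xs.mean(axis=1, keepdims=True)       # centre, then standard PCA via SVD
    U, _, _ = np.linalg.svd(Xs, full_matrices=False)
    return keep, U[:, :k]                          # selected features and PC directions
```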
Barshan et al. (Barshan et al., 2011) formulated a supervised PCA using reproducing Kernel Hilbert Space to maximize the dependency of a sample in the low-dimensional space on the outcome measurement. Let there be \(n\)\(p\)-dimensional samples stacked in a \(p\times n\) data
matrix \(X\), and \(Y\) be a matrix of outcome measurements. Given a transformation matrix \(U\), the model finds the solution by maximizing the dependency of the projected data \(U^{T}X\) on \(Y\). The Hilbert-Schmidt independence criterion measures the dependence between \(U^{T}X\) and \(Y\). Given that \(K\) is the kernel of \(U^{T}X\) (e.g., \(X^{T}UU^{T}X\)), \(L\) is the kernel of \(Y\) (e.g., \(Y^{T}Y\)), and \(H:=I-n^{-1}ee^{T}\), the problem is posed as a constrained optimization shown below:
\[\begin{split}\underset{U}{argmax}&tr(U^{T}XHLHX^{T}U)\\ &subject\;to\;U^{T}U=I\end{split} \tag{2}\]
Notice the matrix \(Q=XHLHX^{T}\) is real and symmetric, and the eigendecomposition of \(Q\) will provide the solution (\(U\)).
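A minimal sketch of this eigendecomposition (our own illustration, not the authors' code; numpy and a one-hot label matrix \(Y\), so that \(L=Y^{T}Y\), are assumptions) is:

```python
# Top-k eigenvectors of Q = X H L H X^T solve the constrained problem in Equation (2).
import numpy as np

def hsic_spca(X, y, k):
    """X: d x n data matrix, y: length-n labels; returns U with k orthonormal columns."""
    n = X.shape[1]
    Y = (np.unique(y)[:, None] == y[None, :]).astype(float)  # classes x n one-hot labels
    H = np.eye(n) - np.ones((n, n)) / n                       # centring matrix
    Q = X @ H @ (Y.T @ Y) @ H @ X.T                           # Q = X H L H X^T
    _, vecs = np.linalg.eigh(Q)                               # ascending eigenvalues
    return vecs[:, -k:]
```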
The supervised probabilistic PCA (SPPCA) (Yu et al., 2006) is a generative model that uses latent variables to generate the original data and class label using EM learning. The generation of observed data (\(x\in R^{M}\)) and labels (\(y\in R^{L}\)) from the latent variables (\(z\in R^{K}\)) takes the following form,
\[\begin{split} x=W_{x}z+\mu_{x}+\epsilon_{x}\\ y=W_{y}z+\epsilon_{y}\end{split} \tag{3}\]
where \(W_{x}\in R^{M\times K}\), \(W_{y}\in R^{L\times K}\) are the linear coefficient matrices for data and labels respectively, \(\mu_{x}\) is the data mean, and \(\epsilon_{x},\epsilon_{y}\) are isotropic Gaussian noise model for \(x,y\) respectively. The latent variables are also assumed to follow a Gaussian distribution with zero mean and unit variance and are shared by both inputs and outputs. To generate the class labels, the model uses \(L\), where \(L\) is the dimension of outcomes, deterministic functions that are assumed to be linear. The learning happens in two steps; in the first step, the expected distribution of \(z\) is calculated while fixing the model parameters. In the next step, the log-likelihood of the data is maximized by keeping the distribution of \(z\) unchanged. These two steps are repeated until the model reaches a local minimum.
Li et al. formulated an SVD (singular value decomposition) based supervised PCA, which they call SupSVD (Li et al., 2016). The technique recovers a low-rank approximation of the data matrix \(X\) with the help of a supervision matrix \(Y\) as shown below
\[\begin{split} X=UV^{T}+E,\\ U=YB+F\end{split} \tag{4}\]
where \(X\in R^{p\times n}\), \(Y\in R^{q\times n}\), \(U\in R^{n\times r}\) the latent score matrix, \(V\in R^{p\times r}\) the full-rank loading matrix, \(B\in R^{q\times r}\) the coefficient matrix, and \(E\in R^{n\times p}\), \(F\in R^{n\times r}\) are two error matrices. The first part of Equation 4 extracts a low-rank approximation of data matrix \(X\), and the second part uses multivariate linear regression to impose the supervision effect of \(Y\) on \(U\).
Ritchie et al. proposed another supervised PCA (Ritchie et al., 2019), which minimizes the traditional PCA cost along with a regression loss on the class labels. Their optimization problem is then
\[\begin{split}\underset{L,\beta}{minimize}&\|Y-XL^{T} \beta\|_{F}^{2}+\lambda\|X-XL^{T}L\|_{F}^{2}\\ subject&\;to\;LL^{T}=I_{k}\end{split} \tag{5}\]
where \(X\in R^{n\times p}\) and \(Y\in R^{n\times q}\) are the data matrix and label respectively, \(L\in R^{k\times p}\) is the basis for learned subspace, and \(\beta\in R^{k\times q}\) is the learned coefficients for prediction. The first term is the conventional regression loss calculated on the reduced space (\(XL^{T}\)) and the regression coefficient (\(\beta\)) is the standard least square solution for a fixed \(L\). The authors used a gradient-based iterative method to solve the problem over the Grassmannian manifold. Note that the approach doesn't offer a closed-form solution, and the number of principal components is also user-defined.
In contrast to the above-mentioned supervised PCA technique, which can be applied to discrete data points, supervised functional principal component analysis (SFPCA), proposed by Nie et al. (Nie et al., 2018), works on functional data analysis. The method finds the functional principal components (FPCs) by maximizing the quantity
\[\begin{split} Q(\xi)=\frac{\theta\langle\xi,\mathcal{L}\xi \rangle+(1-\theta)cov^{2}(Y,\langle X,\xi\rangle)}{\|\xi\|^{2}}\\ subject\;\;to\;\;\langle\xi_{i},\xi_{i}\rangle=1,\;\;\langle\xi_{i},\xi_{j}\rangle=0\end{split} \tag{6}\]
where \(X,Y\) are data and labels respectively, \(\xi\) the functional principal component, \(0\leq\theta\leq 1\), \(\mathcal{L}\) is the empirical covariance operator, \(\langle\cdot,\cdot\rangle\) is the usual inner product space. Notice that the first term in the numerator is the unsupervised FPCA and the second term captures the squared covariance between FPC scores \(\langle X,\xi\rangle\) and the response variable \(Y\). The hyperparameter \(\theta\) balance the unsupervised and supervised terms.
## 3 Supervised Linear Centroid-Encoder (SLCE)
Let \(X\in\mathbb{R}^{d\times n}\) be a data matrix where \(n\) is the total number of samples and \(d\) is the dimension of each sample \(x_{i}\in\mathbb{R}^{d}\). Assume the columns of \(X\) each belongs to one of \(M\) classes \(\{C_{j}\}_{j=1}^{M}\) where the set of pattern indices of class \(C_{j}\) is denoted by \(I_{j}\). The centroid of each class is defined as
\[c_{j}=\frac{1}{|C_{j}|}\sum_{i\in I_{j}}x_{i} \tag{7}\]
where \(|C_{j}|\) is the cardinality of class \(C_{j}\). Define a matrix of class means \(\tilde{C}\in\mathbb{R}^{d\times n}\) where the \(i\)'th column of \(\tilde{C}\) is the centroid associated with the class of the \(i\)'th column of \(X\). Note \(\tilde{C}\) will have non-unique entries as long as \(M<n\). For example, consider the data set \(X=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}\) which has two classes \(C_{1},C_{2}\) where \(I_{1}=\{1,3,5\}\) and \(I_{2}=\{2,4\}\). Taking \(c_{1},c_{2}\) as the corresponding centroids we have \(\tilde{C}=\{c_{1},c_{2},c_{1},c_{2},c_{1}\}\). With this set up, we present the formulation of Supervised Linear Centroid-Encoder (SLCE).
### Formulation with Orthogonality Constraint
The goal of SLCE is to provide the orthogonal projection of the data to \(k\) dimensions that best approximates the class centroids. This derivation proceeds one dimension at a time and can be shown to be equivalent to computing the optimal rank \(k\) projection. The projection onto the best one-dimensional space spanned by the unknown vector \(\mathbf{a}\in\mathbb{R}^{d}\) may be determined by the optimization problem
\[\underset{a}{minimize}\;\;\|\tilde{C}-aa^{T}X\|_{F}^{2}\;\;\;subject\;to\;a^{ T}a=1 \tag{8}\]
The Lagrangian of Equation (8) is
\[\mathcal{L}(a,\lambda)=\|\tilde{C}-aa^{T}X\|_{F}^{2}-\lambda(a^{T}a-1) \tag{9}\]
where \(\lambda\) is the Lagrangian multiplier. Notice that, setting \(\frac{\partial\mathcal{L}}{\partial\lambda}=0\) implies \(a^{T}a=1\). Taking the derivative of Equation 9 w.r.t. \(a\) and setting it to 0 gives,
\[(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})a=(\lambda+a^{T}XX^{T}a)a \tag{10}\]
Setting \(\mu=\lambda+a^{T}XX^{T}a\) we obtain
\[(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})a=\mu a \tag{11}\]
Since \(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T}\) is a symmetric matrix, the solution to the optimization problem given by Equation (9) is provided by the eigenvector of Equation (11) with the largest eigenvalue. To solve for the second projection direction \(b\) we require
\[\begin{split}&\underset{b}{minimize}\ \ \|\tilde{C}-bb^{T}X\|_{F}^{2}\\ & subject\ to\ b^{T}b=1\ \ \ b^{T}a=0\end{split} \tag{12}\]
The Lagrangian of Equation 12 is
\[\mathcal{L}(b,\alpha,\beta)=\|\tilde{C}-bb^{T}X\|_{F}^{2}-\alpha(b^{T}b-1)- \beta(b^{T}a) \tag{13}\]
where \(\alpha\) and \(\beta\) are the Lagrangian multipliers. Differentiating the Lagrangian again and setting equal to zero we obtain the necessary condition for \(b\)
\[\begin{split}&(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})b=\gamma b\\ & where\ \ \gamma=(\alpha+b^{T}XX^{T}b)\end{split} \tag{14}\]
Here \(b\) is the eigenvector of the symmetric matrix \(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T}\) associated with the second largest eigenvalue and (from symmetry) \(b\) is orthogonal to \(a\). One can proceed in a similar fashion and obtain the remaining solutions (Horn and Johnson, 2012).
### Formulation as a Rank-\(k\) Projection
To find the best rank-\(k\) projection we can solve the optimization problem
\[\underset{A}{minimize}\ \ \|\tilde{C}-AA^{T}X\|_{F}^{2}\ \ subject\ to\ A^{T}A=I \tag{15}\]
The Lagrangian of Equation (15) is
\[\mathcal{L}(A,\Lambda)=\|\tilde{C}-AA^{T}X\|_{F}^{2}-Tr\big{(}\Lambda(A^{T}A-I)\big{)} \tag{16}\]
where \(\Lambda\) is a diagonal matrix of Lagrange multipliers1. Differentiating the Equation (16) w.r.t. \(A\) and setting the derivative to zero gives
Footnote 1: We have left the proof in Appendix 5
\[(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})A=A\Lambda \tag{17}\]
In Figure (1) we illustrate the geometric intuition behind SLCE. This figure captures how SLCE produces a subspace to reduce the data that captures label information through the centroids.
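A minimal computational sketch of the closed-form solution in Equation (17) (our own illustration, not the authors' implementation; numpy, the function name, and mean-centring the data as in Property 2 below are assumptions) is given next.

```python
# Build the centroid matrix C~, form X C~^T + C~ X^T - X X^T, and keep the
# eigenvectors with positive eigenvalues (at most M-1 of them, by Property 3 below).
import numpy as np

def slce_fit(X, y, k=None):
    """X: d x n data matrix, y: length-n class labels.
    Returns (A, mu): d x k orthonormal basis and the corresponding eigenvalues."""
    X = X - X.mean(axis=1, keepdims=True)            # mean subtraction (cf. Property 2)
    C = np.zeros_like(X)
    for label in np.unique(y):
        idx = (y == label)
        C[:, idx] = X[:, idx].mean(axis=1, keepdims=True)   # class-centroid columns
    S = X @ C.T + C @ X.T - X @ X.T                  # the symmetric matrix of Eq. (17)
    mu, V = np.linalg.eigh(S)                        # ascending eigenvalues
    mu, V = mu[::-1], V[:, ::-1]                     # reorder to descending
    if k is None:
        k = int(np.sum(mu > 1e-10))                  # keep only positive eigenvalues
    return V[:, :k], mu[:k]

# toy usage: two Gaussian blobs in 10 dimensions
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(0.0, 1.0, (10, 30)), rng.normal(3.0, 1.0, (10, 40))])
y = np.array([0] * 30 + [1] * 40)
A, mu = slce_fit(X, y)
Z = A.T @ (X - X.mean(axis=1, keepdims=True))        # k x n reduced representation
print(A.shape, mu)
```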
### Some properties of SLCE
Here we present several mathematical properties of the proposed algorithm.
**Property 1. The matrix \(\mathbf{X\tilde{C}^{T}+\tilde{C}X^{T}}\) is a symmetric and positive semi-definite (PSD).**
Proof: By construction \(\tilde{C}\) has the corresponding \(c_{j}\)'s for each \(x_{i}\) in \(X\) and both \(\tilde{C},X\in\mathbb{R}^{d\times n}\). Without loss of generality, we can order the samples in \(X\) based on the corresponding class label, i.e., all the samples of class \(C_{1}\) will appear first, followed by the samples of class \(C_{2},C_{3},...,C_{M}\), where \(M\) is the total number of classes. Let us consider the representation of
Figure 1: The geometric intuition of the SLCE algorithm using Setosa and Versicolor classes of Iris data, where we used the first two features to represent each sample. In panel (a), we used an arbitrary line to compute the cost in Equation (8). The original samples were reconstructed using the line, and then the distances from the corresponding class centroid were denoted using the black lines. On the other hand, panel (b) shows the reconstruction of all the samples using the SLCE solution and the distances from the corresponding class centroids. Notice that the sum of the distances \((d_{1},...,d_{6})\) using SLCE is less than using an arbitrary line; SLCE explicitly searches for a line that minimizes the sum of the distances.
the term \(X\tilde{C}^{T}\), i.e.,
\[X\tilde{C}^{T}=\sum_{j=1}^{M}\sum_{i\in I_{j}}x_{i}c_{j}^{T} \tag{18}\]
where \(I_{j}\) is the index set of \(j^{th}\) class. This can be rewritten as
\[X\tilde{C}^{T}=\sum_{j=1}^{M}\left(\sum_{i\in I_{j}}x_{i}\right)c_{j}^{T} \tag{19}\]
from which it follows
\[X\tilde{C}^{T}=\sum_{j=1}^{M}|C_{j}|\left(\frac{1}{|C_{j}|}\sum_{i\in I_{j}}x_ {i}\right)c_{j}^{T} \tag{20}\]
Defining \(|C_{j}|\) as the cardinality of \(j^{th}\) class we see
\[X\tilde{C}^{T}=\sum_{j=1}^{M}|C_{j}|\left(c_{j}c_{j}^{T}\right) \tag{21}\]
Notice that each \(c_{j}c_{j}^{T}\) is PSD. As \(X\tilde{C}^{T}\) is a sum of \(M\) PSD matrices, it is a symmetric PSD matrix. Hence
\[\tilde{C}X^{T}=X\tilde{C}^{T} \tag{22}\]
so it is also PSD. Therefore \(X\tilde{C}^{T}+\tilde{C}X^{T}\) is a symmetric PSD matrix.
Given the construction from above, we have the following additional property:
**Property 2. If M is the number of classes, X and \(\tilde{\bf C}\) the data and centroid matrix, respectively, then the rank \(({\bf X\tilde{C}^{T}+\tilde{C}X^{T}})\leq{\bf M-1}\).**
Proof: From Property 1 we know that rank\((X\tilde{C}^{T}+\tilde{C}X^{T})=rank(\tilde{C}X^{T})\). We also know that \(rank(\tilde{C}X^{T})=rank(\tilde{C})\). Note that mean subtraction of the data matrix \(X\) makes the class centroids linearly dependent2. Hence \(\text{rank}(X\tilde{C}^{T}+\tilde{C}X^{T})=rank(\tilde{C})\leq(M-1)\).
Footnote 2: See Appendix 5
This next property is important in that it tells us which eigenvectors to use for the data reduction.
**Property 3. The matrix \({\bf X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T}}\) has at most \({\bf M-1}\) positive eigenvalues where M is the number of classes in \(\tilde{\bf C}\).**
Proof: Let \(A:=X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T}\) and \(B:=XX^{T}\) where \(A,B\in\mathbb{R}^{d\times d}\). \(A+B=X\tilde{C}^{T}+\tilde{C}X^{T}\). From the Monotonicity theorem, which is a corollary of Weyl's theorem involving inequalities of Hermitian matrices (Horn and Johnson, 2013) we have
\[\lambda_{i}(A+B)\geq\lambda_{i}(A) \tag{23}\]
where we assume the eigenvalues are in decreasing order. Hence we conclude
\[\lambda_{i}(X\tilde{C}^{T}+\tilde{C}X^{T})\geq\lambda_{i}(X\tilde{C}^{T}+ \tilde{C}X^{T}-XX^{T}) \tag{24}\]
Since \(\lambda_{i}(X\tilde{C}^{T}+\tilde{C}X^{T})=0\) if \(i\geq M\), it follows that
\[\lambda_{i}(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})\leq 0\]
for \(i\geq M\), from which the property follows. Figure (2) demonstrates the inequality on the PANCAN data.
There is also an important relationship between the eigenvalues and the objective function.
**Property 4. Let a be an eigenvector of the matrix \(\mathbf{X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T}}\) with eigenvalue \(\mu\). Then it follows**
\[\|\mathbf{\tilde{C}-aa^{T}X}\|_{\mathbf{F}}^{2}=\mathbf{Tr}(\mathbf{\tilde{C}^ {T}\tilde{C}})-\mu\]
Proof: Given
\[(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})a=\mu a\]
it follows that
\[a^{T}(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})a=\mu \tag{25}\]
and, after taking the trace and using its properties,
\[-Tr(a^{T}X\tilde{C}^{T}a)-Tr(a^{T}\tilde{C}X^{T}a)+Tr(a^{T}XX^{T}a)=-\mu \tag{26}\]
Now adding \(Tr(\tilde{C}^{T}\tilde{C})\) to both sides of Equation 26
\[Tr(\tilde{C}^{T}\tilde{C}-\tilde{C}^{T}aa^{T}X-X^{T}aa^{T}\tilde{C}+X^{T}aa^{T }aa^{T}X)=Tr(\tilde{C}^{T}\tilde{C})-\mu \tag{27}\]
\[Tr[(\tilde{C}-aa^{T}X)^{T}(\tilde{C}-aa^{T}X)]=Tr(\tilde{C}^{T}\tilde{C})-\mu \tag{28}\]
\[\|\tilde{C}-aa^{T}X\|_{F}^{2}=Tr(\tilde{C}^{T}\tilde{C})-\mu \tag{29}\]
The above equation establishes the relationship between the cost and the eigenvalue. If \(\mu_{1}>\mu_{2}\) and the corresponding costs are \(C_{\mu_{1}},C_{\mu_{2}}\), then \(C_{\mu_{1}}<C_{\mu_{2}}\). Figure 3 demonstrates the relationship visually.
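Before moving on, a quick numerical check of this relationship (our own illustration, numpy assumed; the synthetic two-class data are an assumption of the check):

```python
# For a unit eigenvector a with eigenvalue mu of X C~^T + C~ X^T - X X^T,
# ||C~ - a a^T X||_F^2 equals Tr(C~^T C~) - mu  (Property 4).
import numpy as np

rng = np.random.default_rng(1)
X = np.hstack([rng.normal(0.0, 1.0, (6, 20)), rng.normal(2.0, 1.0, (6, 25))])
X = X - X.mean(axis=1, keepdims=True)
C = np.zeros_like(X)
C[:, :20] = X[:, :20].mean(axis=1, keepdims=True)
C[:, 20:] = X[:, 20:].mean(axis=1, keepdims=True)

mu_all, V = np.linalg.eigh(X @ C.T + C @ X.T - X @ X.T)
a, mu = V[:, -1], mu_all[-1]                         # leading eigenpair
lhs = np.linalg.norm(C - np.outer(a, a) @ X) ** 2    # Frobenius norm squared
rhs = np.trace(C.T @ C) - mu
print(lhs, rhs)                                      # the two agree up to rounding
```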
**Property 5. Let A be a rank k orthonormal matrix that minimizes the error in Equation 15. Then the total error in the approximation is**
\[\|\tilde{\bf C}-\mbox{AA}^{\bf T}{\bf X}\|_{\bf F}^{2}=\mbox{Tr}(\tilde{\bf C }^{\bf T}\tilde{\bf C})-\sum_{\bf i=1}^{\bf k}\mu_{\bf i} \tag{30}\]
Proof: the derivation is similar to that of Equation (29).
**Property 6. The subspace of best fit given by the range of A, \({\cal R}({\bf A})\), has \(k\) dimensions where \({\bf k}\leq{\bf M}-{\bf 1}\) is the number of positive eigenvalues \(\mu_{i}\).**
Proof: This fact follows directly from the error Equation (30). The error is no longer reduced when the eigenvalues become zero. From Property 3 the number of positive eigenvalues is bounded by \(M-1\). If we take additional eigenvectors to represent the data they will not decrease the error in the objective function if the eigenvalue is zero and will increase the error if the eigenvalue is negative.
Figure 3: We show how the one-dimensional reconstruction cost (Equation 8) changes for the first 25 eigenvectors along with the corresponding eigenvalues. The experiment is run on the COIL20 dataset.
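Taken together, Properties 5 and 6 suggest a direct recipe for the SLCE embedding: form \(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T}\), keep the eigenvectors with positive eigenvalues, and project the data onto them. A minimal sketch of that recipe follows; the function names and the synthetic data are ours and purely illustrative, not part of any released implementation.

```python
import numpy as np

def slce_fit(X, labels, k=None, tol=1e-8):
    """Orthonormal basis A (d x k) of the SLCE subspace, for a mean-subtracted
    data matrix X (d x n, columns are samples) and integer class labels."""
    C = np.zeros_like(X)
    for c in np.unique(labels):
        C[:, labels == c] = X[:, labels == c].mean(axis=1, keepdims=True)
    mu, V = np.linalg.eigh(X @ C.T + C @ X.T - X @ X.T)
    mu, V = mu[::-1], V[:, ::-1]              # eigenvalues in decreasing order
    if k is None:
        k = int(np.sum(mu > tol))             # Property 6: keep positive eigenvalues only
    return V[:, :k], mu[:k]

def slce_transform(A, X):
    return A.T @ X                            # k-dimensional embedding of the columns of X

# toy usage on synthetic blobs (three classes, so at most two positive eigenvalues)
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(3 * c, 1.0, size=(10, 40)) for c in range(3)])
labels = np.repeat(np.arange(3), 40)
X = X - X.mean(axis=1, keepdims=True)
A, mu = slce_fit(X, labels)
print("embedding dimension:", A.shape[1], "top eigenvalues:", np.round(mu, 2))
```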
### Connection with PCA
PCA can be derived from the reconstruction perspective as follows,
\[\begin{split}\underset{a}{\text{minimize}}\quad&\|X-aa^{T}X\|_{F}^{2}\\ \text{subject to}\quad&a^{T}a=1\end{split} \tag{31}\]
where \(a\) is the transformation vector. Comparing Equation 31 with Equation 8, we see that PCA reconstructs each sample, whereas SLCE reconstructs the centroid of a class. Since SLCE uses the class labels to calculate the centroids, it can be thought of as supervised PCA.
Figure 4 compares the solutions of PCA and SLCE on a toy data set. Notice that the first eigenvector of PCA is governed by the data variance, whereas the solution of SLCE is governed by the class centroids.
In Figure 5, we show the visualization on MNIST digits 4, 7, and 9. These three digits are not separable in two-dimensional PCA space, and we want to
Figure 4: Comparison of the PCA, LDA, and SLCE solutions on the Iris data. We used the first two features to represent each sample from the Setosa and Versicolor classes.
produce a better visualization. In panel (a), we present the 2D SLCE projection. The three test classes are clearly separated, forming a distinct blob for each category.
### Connection with Centroid-Encoder
The nonlinear mapping of centroid-encoder (CE) (Ghosh and Kirby, 2022) is defined as,
\[\underset{\theta}{\text{minimize}}\ \ \sum_{j=1}^{M}\sum_{i\in I_{j}}\|c_{j}-f(x^{i};\theta)\|_{2}^{2} \tag{32}\]
where \(f(\cdot)=h(g(\cdot))\) is the composition of a dimension-reducing _encoder_ mapping \(g\) followed by a dimension-increasing _decoder_ mapping \(h\), with \(\theta\) being the parameter set of the mapping function. Both SLCE and CE reconstruct a class centroid from the samples belonging to that class, but unlike CE, SLCE uses a linear mapping with orthogonality constraints.
Figure 5: Comparison of SLCE (a) and PCA (b) projection on MNIST digits 4,7, and 9.
## 4 Visualization and Classification Results
We present a comparative evaluation of our model on various data sets using several linear dimensionality reduction techniques. We evaluated our proposed approach, SLCE, as described in Section 3.1, and compared it with five other state-of-the-art techniques.
### Experimental Details
This section compares our proposed method with other linear dimensionality reduction techniques on eleven benchmarking data sets. Table 1 gives the details of the data sets, which were also used in the literature (Barshan et al., 2011; Ritchie et al., 2019). We ran three sets of experiments to compare our method with Fisher LDA (Duda et al., 2006), Bair's Supervised PCA (Bair et al., 2006), Barshan's HSIC Supervised PCA (Barshan et al., 2011), Ritchie's Supervised PCA (Ritchie et al., 2019, 2020), and PCA (Jolliffe, 1986). We didn't compare the SupSVD method (Li et al., 2016) because its authors didn't run any classification experiment in the reduced space. The experiments follow the standard workflow.
* **Step1:** Split each data set into training and test partitions.
* **Step2:** Train each model on the training set.
* **Step3:** Using the trained models, project the training and test samples on \(p\)-dimensional space where \(p\in\{2,3,5,10,15,20\}\).
* **Step4:** Calculate \(5-\)NN accuracy on the \(p-\)dimensional space.
* **Step5:** Repeat steps 1 to 4 twenty-five times and report average accuracy with standard deviation.
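A minimal sketch of Steps 1-4 is given below, using scikit-learn's splitting and \(5\)-NN utilities. PCA stands in here for any of the reducers compared in this section, and the bundled digits data is only a placeholder for the data sets of Table 1.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def evaluate(X, y, p=2, n_repeats=25, seed=0):
    """Average 5-NN accuracy in a p-dimensional embedding over repeated splits
    (scikit-learn convention: rows are samples)."""
    accs = []
    for r in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=seed + r)
        reducer = PCA(n_components=p).fit(X_tr)                    # stand-in for any reducer
        knn = KNeighborsClassifier(n_neighbors=5).fit(reducer.transform(X_tr), y_tr)
        accs.append(knn.score(reducer.transform(X_te), y_te))
    return float(np.mean(accs)), float(np.std(accs))

X, y = load_digits(return_X_y=True)                                # small placeholder data set
print(evaluate(X, y, p=3, n_repeats=5))
```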
The experiment-specific details are given below.
**Experiment 1:** This experiment aims to compare each model's low-dimensional (
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Dataset & \#Features & \#Classes & \#Samples & Domain \\ \hline USPS & 256 & 10 & 11000 & Image \\ \hline MNIST & 784 & 10 & 70000 & Image \\ \hline Human Activity & 561 & 6 & 5744 & Accelerometer Sensor \\ \hline Ionosphere & 34 & 2 & 354 & Radar \\ \hline Colon & 2000 & 2 & 62 & Biology \\ \hline Mice Protein & 77 & 8 & 975 & Biology \\ \hline Arcene & 10000 & 2 & 900 & Mass Spectrometric \\ \hline PANCAN & 20531 & 5 & 801 & RNA-Seq \\ \hline Olivetti & 4096 & 20 & 400 & Image \\ \hline Yale Face & 1024 & 15 & 165 & Image \\ \hline COIL20 & 1024 & 20 & 1440 & Image \\ \hline \end{tabular}
\end{table}
Table 1: Descriptions of the data sets used for bench-marking experiments.
\(\{2,3\}\)) embedding on the USPS, MNIST, Human Activity, Mice Protein, Arcene, and PanCan data sets. The comparison is made using a \(5-\)NN classifier in the reduced space. We split each data set into an 80:20 ratio of training and test partitions, except for MNIST, which has a separate test set. We repeat the process 25 times and report the average accuracy with standard deviation.
**Experiment 2:** In this experiment, we compared SLCE with Ritchie's Supervised PCA (Ritchie et al., 2020) on the Colon, Ionosphere, and Arcene data sets. Following the experimental setup in (Ritchie et al., 2020), we split each dataset into an \(80:20\) ratio of train and test. We built our models on the training set to embed the data in two-dimensional space; we then predicted the class of the test samples using a \(5-\)NN classifier. We compared our method with the published results in (Ritchie et al., 2020).
**Experiment 3:** The goal of this experiment is to compare the models by performing classification on different embedding dimensions, i.e., \(p\in\{5,10,15,20\}\). We used three data sets, Olivetti, YaleFace, and COIL20, and split them into a \(50:50\) ratio of train and test. Each model is fitted on the training partition, and the \(5-\)NN classification is calculated using different embedding dimensions. The process is repeated 25 times, and the average accuracies are plotted for comparison.
### Results
First, we discuss the results of _Experiment 1_ comparing SLCE, LDA, PCA, HSIC PCA, and Bair's SPCA as presented in Table 2. We used embedding dimensions 2 and 3. We observe that SLCE generally produces better generalization performance than other methods, both in the two and three-dimensional embedding space.
Notice that PCA performed better than LDA on the Arcene data. Because Arcene has two classes, the LDA classification occurs
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \hline \multicolumn{5}{|c|}{Classification on embedding dimension = 2} \\ \hline Dataset & SLCE & LDA & PCA & HSIC SPCA & Bair’s SPCA \\ \hline USPS & 55.64 \(\pm\) 0.95 & \(47.20\pm 0.77\) & \(39.83\pm 0.91\) & \(45.67\pm 1.16\) & \(40.88\pm 0.92\) \\ \hline MNIST & \(45.74\pm 0.00\) & \(46.03\pm 0.00\) & \(42.43\pm 0.00\) & \(44.17\pm 0.00\) & \(41.92\pm 0.00\) \\ \hline Activity & 65.20 \(\pm\) 0.87 & \(64.18\pm 1.36\) & \(51.83\pm 1.46\) & \(63.26\pm 1.26\) & \(53.05\pm 1.76\) \\ \hline Mice Protein & \(65.56\pm 2.83\) & \(51.50\pm 2.37\) & \(44.91\pm 2.82\) & \(51.54\pm 3.59\) & \(64.57\pm 9.24\) \\ \hline Arcene & \(83.61\pm 5.05\) & \(63.71\pm 5.70\) & \(67.71\pm 7.63\) & \(71.80\pm 6.25\) & \(68.98\pm 5.14\) \\ \hline PANCAN & \(88.42\pm 2.44\) & \(94.23\pm 4.78\) & \(94.63\pm 1.71\) & \(94.65\pm 1.95\) & \(94.55\pm 2.27\) \\ \hline \hline \multicolumn{5}{|c|}{Classification on embedding dimension = 3} \\ \hline USPS & **76.69 \(\pm\) 0.69** & \(69.37\pm 1.15\) & \(47.12\pm 0.78\) & \(66.99\pm 0.81\) & \(48.79\pm 1.80\) \\ \hline MNIST & **69.72 \(\pm\) 0.00** & \(67.51\pm 0.00\) & \(48.75\pm 0.00\) & \(63.59\pm 0.00\) & \(49.90\pm 0.00\) \\ \hline Activity & **77.98 \(\pm\) 1.21** & \(70.57\pm 1.02\) & \(66.96\pm 1.04\) & \(70.87\pm 1.16\) & \(66.91\pm 1.13\) \\ \hline Mice Protein & **80.56 \(\pm\) 2.94** & \(68.50\pm 3.32\) & \(64.46\pm 3.11\) & \(67.89\pm 2.70\) & \(77.09\pm 5.05\) \\ \hline Arcene & \(\textbf{84.00}\pm\textbf{6.62}\) & \(62.54\pm 8.70\) & \(71.22\pm 6.90\) & \(76.68\pm 5.82\) & \(69.07\pm 6.89\) \\ \hline PANCAN & \(98.94\pm 0.89\) & \(\textbf{99.09}\pm\textbf{0.85}\) & \(98.60\pm 0.73\) & \(94.53\pm 1.12\) & \(98.28\pm 1.01\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification accuracies (%) of 5-NN classifier on the 2D and 3D embedded data by various dimensionality reduction techniques. The best results are highlighted in bold.
in 1-dimensional space, whereas PCA uses two-dimensional space. We think the extra dimension available to PCA makes it better than LDA in this case. Observe that SLCE performed poorly on the PANCAN data in 2D space, but the performance improved significantly in 3D. By design, SLCE creates the embedding by reconstructing the class centroids in the ambient space. If centroids are close to each other in the ambient space, SLCE will put the corresponding classes close together in the low-dimensional space, deteriorating the classification. But in a higher dimension, the classes can be separated, improving the classification result. The visualization of the PANCAN data using SLCE in Figure 6 illustrates this. Notice that the samples from the black and blue categories are on top of each other in 2D space but are well separated in 3D space, which explains the jump in classification accuracy.
For SLCE, the classification of the Arcene dataset in 3D space mostly stayed the same compared to 2D. As Arcene has two classes, the first eigenvector of SLCE has a positive eigenvalue and the following two have zero eigenvalues; see Figure 7. Therefore the second and third eigenvectors don't reduce the centroid reconstruction loss much compared to the first one; hence they don't contribute much to classification. In fact, the classification accuracy using the first eigenvector alone is \(83.12\pm 6.28\), which is as good as using two or three dimensions. Not surprisingly, in most cases, the supervised methods perform better than PCA, which doesn't use labels. The standard deviation on MNIST is 0 for all the models since MNIST has a fixed training and test partition.
Table 3 gives quantitative measures of the embedding quality, comparing SLCE with the two LRPCA variants. SLCE stood out as the best-performing model in all three scenarios. Notice that the standard deviation is also better in all cases except for Ionosphere.
Now we turn our attention to the third experiment, where the goal is to plot the classification accuracies as a function of embedding dimension. Figure 8 presents the classification accuracies using different embedding dimensions on Olivetti, Yaleface, and COIL20 data. In general, the accuracy increases with the embedding dimension across the methods. Our
Figure 6: Embedding of the PANCAN data in two (left) and three (right) dimensions using SLCE. The solid and blank circles are the training and test cases, respectively.
proposed method SLCE significantly outperforms the other supervised models in all three cases. Yaleface has 15 classes, so the LDA classification uses a 14-dimensional space. As in Experiment 1, PCA performed poorly in most classification tasks, except for three cases with embedding dimension five, where PCA showed better performance than LDA. Note that SLCE didn't improve the accuracy on Yaleface from embedding dimension 15 to 20. The reason is the same as for Arcene: the top fourteen eigenvectors have positive eigenvalues, which reduce the cost, and any eigenvector after that doesn't reduce the cost; hence the classification accuracy remains the same.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Data set} & \multicolumn{3}{c|}{Supervised Methods} \\ \cline{2-4} & SLCE & LRPCA (CV) & LRPCA (MLE) \\ \hline Colon & \(\mathbf{83.08\pm 7.22}\) & \(80.80\pm 10.40\) & \(80.80\pm 12.50\) \\ \hline Ionosphere & \(\mathbf{86.03\pm 4.36}\) & \(83.90\pm 4.20\) & \(85.90\pm 2.60\) \\ \hline Arcene & \(\mathbf{83.41\pm 6.51}\) & \(80.67\pm 11.20\) & \(81.00\pm 8.40\) \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of classification results between SLCE and LRPCA on several benchmarking data sets. The mean classification accuracies are measured in two-dimensional space over 25 runs. Results of LRPCA are reported from (Ritchie et al., 2020). The best result is highlighted in bold.
## 5 Conclusion and Future Work
We proposed a supervised dimensionality reduction technique, SLCE. We formulated a constrained optimization problem and showed that it has a closed-form solution via an eigendecomposition. We proved that the eigenvalue problem has at most \(M-1\) positive eigenvalues, where \(M\) is the number of classes. We established a connection between the
Figure 8: Comparison of classification accuracies on the Olivetti (top), Yaleface (middle), and COIL20 (bottom) data using LDA, PCA, HSIC PCA, and SLCE. For each data set, classification is done on four different embedding dimensions: 5, 10, 15, and 20. As Yale Face has 15 classes, the maximum embedding dimension for LDA is 14 (\(\#classes-1\)).
eigenvalues and the cost of the model. We further demonstrated that our proposed model, SLCE, is a form of supervised PCA. Unlike other supervised PCA formulations, our model doesn't use raw class labels; instead, it uses the class centroids to impose the supervision. At the same time, the model doesn't require tuning any hyperparameters on a validation set, as needed by Bair's SPCA and Ritchie's SPCA. Unlike Centroid-Encoder, SLCE doesn't require tuning a neural network architecture and other hyperparameters, e.g., learning rate, minibatch size, etc. The closed-form solution is appealing, and SLCE can be used for much smaller datasets than those required by nonlinear data fitting problems.
Our classification and visualization experiments on various data sets established the effectiveness of the proposed method and showed that, in most cases, it produces better generalization performance than other linear state-of-the-art techniques. The computational complexity of SLCE is the same as that of PCA, with an added overhead to compute the class centroids, which is also linear in the number of classes.
The objective function of SLCE can be modified to accommodate an \(\ell_{1}\)-norm penalty term to promote feature sparsity. In this setting, the algorithm could be used as a feature selector. The model can easily be extended to the semi-supervised setting by adding the sample reconstruction term of PCA. In its current form, SLCE reconstructs class centroids from the samples of the same class. In the reduced space, two classes may overlap if their centroids are close in the ambient space. Adopting a cost that also encourages separating the different classes in the embedding space may benefit classification tasks. In the future, we will explore these ideas.
## Acknowledgments
We would like to acknowledge support for this research from the National Science Foundation under award NSF-ATD 1830676.
## Appendix A Proof of \(\Lambda\) being diagonal
The Lagrangian of the rank-\(k\) projection is
\[\mathcal{L}(A,\Lambda)=\|\tilde{C}-AA^{T}X\|_{F}^{2}-tr(\Lambda(A^{T}A-I)) \tag{33}\]
For the rank-\(k\) projection, we have \(k+k(k-1)/2\) constraints. The first \(k\) constraints require each column of the matrix \(A\) to have unit norm, i.e. \({A_{i}}^{T}A_{i}=1\). The remaining \(k(k-1)/2\) constraints require each pair to satisfy \({A_{i}}^{T}A_{j}=0,i\neq j\), i.e., columns \(i\) and \(j\) of \(A\) are orthogonal to each other. For each constraint there is a Lagrange multiplier, as shown below,
\[\begin{split}\mathcal{L}(A,\Lambda)=\|\tilde{C}-AA^{T}X\|_{F}^{ 2}-\lambda_{1}({A_{1}}^{T}A_{1}-1)-\ldots-\lambda_{k}({A_{k}}^{T}A_{k}-1)-\\ \lambda_{k+1}({A_{1}}^{T}A_{2}-0)-\lambda_{k+1}({A_{2}}^{T}A_{1} -0)-\ldots-\lambda_{k(k-1)/2}({A_{k(k-2)/2}^{T}A_{k(k-1)/2}-0})\\ -\lambda_{k(k-1)/2}({A_{k(k-1)/2}^{T}A_{k(k-2)/2}-0})\end{split} \tag{34}\]
We can incorporate all the constraints by using the trace of the matrix \(\Lambda(A^{T}A-I)\). Note that the constraints from \(k+1\) to \(k(k-1)/2\) are repeated twice because of the symmetry of the inner product, i.e. \({A_{i}}^{T}A_{j}={A_{j}}^{T}A_{i}\). Notice that the diagonal entries of \(\Lambda\) contain the first \(k\) Lagrange multipliers (\(\lambda_{1}\ldots\lambda_{k}\)) and the off-diagonal elements contain the rest, with \(\Lambda_{i,j}=\Lambda_{j,i}\). It's clear that \(\Lambda\) is a symmetric matrix, and therefore we can compute its eigendecomposition \(\Lambda=V\Phi V^{T}\), where \(V\) is an orthogonal matrix such that \(V^{T}V=VV^{T}=I\), and \(\Phi\) is a diagonal matrix that contains the eigenvalues. We can expand the trace term in Equation 33 as follows,
\[\begin{split} tr(\Lambda(A^{T}A-I))=tr(V\Phi V^{T}(A^{T}A-I))\\ tr(\Lambda(A^{T}A-I))=tr(\Phi V^{T}(A^{T}A-I)V)\\ tr(\Lambda(A^{T}A-I))=tr(\Phi(V^{T}A^{T}AV-V^{T}IV))\\ tr(\Lambda(A^{T}A-I))=tr(\Phi(V^{T}A^{T}AV-I))\\ tr(\Lambda(A^{T}A-I))=tr(\Phi(M^{T}M-I))\ \ where\ \ M:=AV\end{split} \tag{35}\]
Notice that \(A=MV^{T}\) and \(AA^{T}=MM^{T}\). Now we can rewrite the cost function in Equation 33 as follows,
\[\mathcal{L}(M,\Phi)=\|\tilde{C}-MM^{T}X\|_{F}^{2}-tr(\Phi(M^{T}M-I)) \tag{36}\]
Comparing Equation 33 with Equation 36, we see that the change of variable \(M=AV\) makes the Lagrange multiplier a diagonal matrix \(\Phi\). Finally, we show that if \(A\) is the optimal solution of Equation 33, then \(M\) is also a solution. The solution of Equation 33 comes as an eigendecomposition problem of the form,
\[(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})A=A\Lambda \tag{37}\]
\[(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})A=AV\Phi V^{T} \tag{38}\]
\[(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})AV=AV\Phi \tag{39}\]
\[(X\tilde{C}^{T}+\tilde{C}X^{T}-XX^{T})M=M\Phi \tag{40}\]
It can be seen that the change of variable gives the same eigendecomposition problem. Hence we can write the Lagrange multiplier as a diagonal matrix \(\Phi\).
## Appendix B Linear dependency of the centroid matrix after mean subtraction
\(X\in\mathbb{R}^{d\times n}\) is the data matrix with \(n\) samples, where each sample \(x_{i}\in\mathbb{R}^{d}\). \(X\) has \(M\) classes, where \(I_{j}\), \(|C_{j}|\), and \(c_{j}\) are the index set, cardinality, and mean of the \(j^{th}\) class. We have the centroid matrix \(\tilde{C}\) whose \(i^{th}\) column contains the class centroid corresponding to \(x_{i}\). Note that, like \(X\), \(\tilde{C}\in\mathbb{R}^{d\times n}\), but \(\tilde{C}\) has repeated columns. As \(X\) is mean-subtracted, we can write
\[\sum_{i=1}^{n}x_{i}=0 \tag{41}\]
\[\sum_{j\in I_{1}}x_{j}+\sum_{j\in I_{2}}x_{j}+\ldots+\sum_{j\in I_{M}}x_{j}=0 \tag{42}\]
\[|C_{1}|\sum_{j\in I_{1}}\frac{1}{|C_{1}|}x_{j}+|C_{2}|\sum_{j\in I_{2}}\frac{ 1}{|C_{2}|}x_{j}+\ldots+|C_{M}|\sum_{j\in I_{M}}\frac{1}{|C_{M}|}x_{j}=0 \tag{43}\]
\[|C_{1}|c_{1}+|C_{2}|c_{2}+\ldots+|C_{M}|c_{M}=0 \tag{44}\]
\[-\frac{|C_{1}|}{|C_{M}|}c_{1}-\frac{|C_{2}|}{|C_{M}|}c_{2}-\ldots-\frac{|C_{M-1}|}{|C_{M}|}c_{M-1}=c_{M} \tag{45}\]
Equation 45 shows that the \(M^{th}\) class centroid is a linear combination of the rest. Therefore the columns of the matrix \(\tilde{C}\) are linearly dependent and \(rank(\tilde{C})\leq M-1\).
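A quick numerical illustration of this derivation (a sketch with synthetic data; the dimensions and class sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d, M, sizes = 8, 3, [10, 20, 30]
X = np.hstack([rng.normal(k, 1.0, size=(d, s)) for k, s in enumerate(sizes)])
labels = np.repeat(np.arange(M), sizes)
X = X - X.mean(axis=1, keepdims=True)              # mean subtraction

centroids = np.stack([X[:, labels == k].mean(axis=1) for k in range(M)], axis=1)
# Equation (44): after mean subtraction the size-weighted centroids sum to zero.
print(np.allclose(centroids @ np.array(sizes, dtype=float), 0.0))

C = centroids[:, labels]                           # centroid matrix with repeated columns
print(np.linalg.matrix_rank(C) <= M - 1)           # the rank bound used in Property 2
```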
|
2307.13077
|
Ruled surfaces in 3-dimensional Riemannian manifolds
|
In this work, ruled surfaces in 3-dimensional Riemannian manifolds are
studied. We determine the expression for the extrinsic and sectional curvature
of a parametrized ruled surface, where the former one is shown to be
non-positive. We also quantify the set of ruling vector fields along a given
base curve which allows to define a relevant reference frame that we refer to
as Sannia frame. The fundamental theorem of existence and equivalence of
Sannia-ruled surfaces in terms of a system of invariants is given. The second
part of the article tackles the concept of the striction curve, which is proven
to be the set of points where the so-called Jacobi evolution function vanishes
on a ruled surface. This characterization of striction curves provides
independent proof for their existence and uniqueness in space forms and
disproves their existence or uniqueness in some other cases.
|
Marco Castrillón, María Eugenia Rosado, Alberto Soria
|
2023-07-24T18:59:34Z
|
http://arxiv.org/abs/2307.13077v4
|
# Ruled surfaces in \(3\)-dimensional Riemannian manifolds
###### Abstract
In this work, ruled surfaces in \(3\)-dimensional Riemannian manifolds are studied. We determine the expression for the extrinsic and sectional curvature of a parametrized ruled surface, where the former is shown to be non-positive. We also quantify the set of ruling vector fields along a given base curve, which allows us to define a relevant reference frame along it that we refer to as the _Sannia_ frame. The fundamental theorem of existence and equivalence of Sannia ruled surfaces in terms of a system of invariants is given. The second part of the article tackles the concept of the striction curve, which is proven to be the set of points where the so-called _Jacobi evolution function_ vanishes on a ruled surface. This provides independent proofs for the existence and uniqueness of striction curves in space forms, and disproves their existence or uniqueness in some other cases.
_Mathematics Subject Classification 2020:_ Primary: 53B25; Secondary: 53B20, 53A55.
_Keywords and phrases:_ Ruled surface, Riemannian manifold, Jacobi field, Sannia frame, differential invariant, striction curve.
Introduction
Ruled surfaces constitute a distinguished family of submanifolds in the area of Differential Geometry with a long history and a rich collection of works in the literature. Even though their original framework in the Euclidean space is still an active field of research (just to mention one, the reader may have a look at the study of non-developable ruled surfaces in [13]), ruled surfaces are also a prominent object in other Riemannian manifolds. For example, in the case of the Heisenberg group, where parts of planes, helicoids and hyperbolic paraboloids are proven to be the only minimal ruled surfaces (see [20]), or the minimal (resp. maximal) ruled surfaces in the Bianchi-Cartan-Vranceanu spaces \(\mathbb{E}(\kappa,\tau)\) (resp. in their Lorentzian counterparts), see [1]. In [5] several results for ruled surfaces in 3-dimensional Riemannian manifolds are established, where formulas for the striction curve, the distribution parameter and the first and second fundamental forms of ruled surfaces in space forms are obtained. In addition, a necessary and sufficient condition for the Riemann tensor of an extrinsically flat surface to be ruled is also derived, together with a proof for the well-known fact that surfaces in 3-dimensional space forms with vanishing Gauss curvature are necessarily ruled. However, this question is still open for more general contexts. Ruled surfaces have also been studied in Lorentzian contexts as in [6], where all Weingarten surfaces are characterized in the 3-dimensional Minkowski spacetime. Other results concerning ruled surfaces in the 3-dimensional Minkowski spacetime have been established in [4, 9, 10]. Lightlike ruled hypersurfaces have also been of great utility to address the problem of some geometric inequalities as in [14], where some cases of the so-called null Penrose inequality for the Bondi energy of a spacetime are proven.
All Euclidean ruled surfaces except cylinders admit a unique special curve called the _striction_ curve, which is made up of the so-called central points of each generator [18]. In the case of the cylinder the uniqueness property is not fulfilled, since every curve in it is a striction curve. On the other hand, striction curves can degenerate to a single point, as occurs with the cone, where the striction consists exclusively of its vertex. A central point in a geodesic generator of the surface furnishes a critical point of the distance from points on that generator to neighboring geodesics. The family of generators along the striction curve is geodesically parallel along it. Striction curves are important, among other reasons, because they contain the singular points of the ruled surface in case they exist [8]. To the best of our knowledge there are no results in the literature providing a method to determine the striction curve of ruled surfaces in generic Riemannian manifolds. In this work we present a new strategy to determine the existence of striction curves in such a context, based on a function referred to as the _Jacobi evolution function_, which vanishes at this type of curve whenever it lies on the surface. Even though the presence of striction curves is addressed in [5] for ruled surfaces in \(S^{3}(r)\) and \(\mathbb{H}^{3}(r)\), no results are found in the literature where either the existence or the uniqueness of the striction curve is violated in such contexts. In fact, in this work, the aforementioned _Jacobi evolution function_ is applied to prove that there exists at most one striction curve on ruled surfaces in the 3-dimensional
hyperbolic space \(\mathbb{H}^{3}\), with examples of surfaces with no striction.
In case a ruled surface in a 3-dimensional Riemannian manifold has a regular striction curve, it is possible to define a specific reference frame called _Sannia_, after the Italian mathematician who first proposed it (see [18], [19]), which is characterized by having the generating vector of the rulings as the first element of the basis. The evolution of the _Sannia frame_ provides two Euclidean invariants associated to the surface, referred to as _curvature_ and _torsion_, which, together with the (striction) angle enclosed, determine a complete system of Euclidean invariants for the ruled surface. A fundamental problem in Riemannian Geometry is that of the equivalence of objects in a given class, namely to provide a criterion to decide whether two given objects in this class are congruent under isometries or not. In [3] the problem for curves with values in a Riemannian manifold of arbitrary dimension is solved with respect to the Frenet curvatures. However, spaces with non-constant curvature require additional invariants to establish such a result. The classical Sannia problem of reconstructing suitable surfaces, provided a set of associated invariant functions is known, is presented in Theorem 5.3.8 in [18]. In this work we establish an analogous result of local existence and uniqueness of ruled surfaces in generic 3-dimensional Riemannian manifolds. In our main Theorem we prove that four functions of suitable regularity and the Sannia frame at a given point \(p_{0}\in M\) uniquely determine a (local) ruled surface passing through \(p_{0}\). Moreover, we do not require the base to be a striction curve. This is a great advantage since, as we shall see, certain ruled surfaces lack this sort of curve.
The paper is structured as follows. In Section 2 we introduce the concept of a _parametrized ruled surface_, which is the most practical way to present ruled surfaces for the purposes of this work. As a first result, we obtain the expression for the sectional curvature of any ruled surface in a 3-dimensional Riemannian manifold in terms of the ambient geometry and its second fundamental form. We also define the so-called _distribution parameter function_, which reduces to the classical distribution parameter when it is evaluated on the so-called striction curve of any ruled surface in the Euclidean space. At the end of the section we introduce the concept of a _Sannia ruled surface_, which we define as one admitting a ruling vector field that is linearly independent of its covariant derivative. In particular, this means that a Sannia frame can be defined along the base curve. An idea of the size of the set of vector fields satisfying such a property is put forward in Proposition 13. Deriving the evolution equations for the vectors of a Sannia frame along a given base curve gives rise to a set of two invariant functions associated to the _Sannia ruled surface_ which, together with two additional angle functions, provide the fundamental theorem of ruled surfaces (see Theorem 20 below). The existence established in the Theorem is complemented with uniqueness only in space forms. In Section 3 we focus our interest on the existence and uniqueness of striction curves. We define a new function on the ruled surface that we refer to as the _Jacobi evolution function_, which vanishes on the striction curves in case of existence. This makes such a function a useful tool for proving the presence of striction curves on ruled surfaces. In the case of constant curvature \(k\), we compute its expression explicitly in the three pos
sible cases of the sign of \(k\). In particular, for \(k<0\), the striction curve may not exist (in contrast to [5]). Furthermore, some extra examples of ruled surfaces in product manifolds are put forward to disprove the uniqueness of the striction curve in such backgrounds. In the text, Einstein's summation convention will be assumed.
## 2 Ruled surfaces in \(3\)-dimensional Riemannian manifolds
### Definitions and properties
Let \((M,g)\) be a \(3\)-dimensional Riemannian manifold and let \(\psi\colon\Sigma\to(M,g)\) be an immersed orientable surface in \((M,g)\). We recall some basic concepts of the geometry of hypersurfaces (see [5, Volume II, Chapter VII, Section 3]). We denote by \(\nabla\) and \(\nabla^{\Sigma}\) the Levi-Civita connections of \((M,g)\) and \((\Sigma,\psi^{*}g=g_{\Sigma})\) respectively. Locally, we can assume \(\Sigma\) to be embedded in \((M,g)\). Let us choose a unit normal vector \(\xi\) in a neighborhood \(U\) of a point \(p\in\Sigma\). Recall that the second fundamental form of \(\Sigma\) is defined as
\[\vec{h}\colon T_{p}\Sigma\times T_{p}\Sigma \to T_{p}^{\perp}\Sigma\] \[(U,V) \mapsto(\nabla_{U}V)_{p}^{\perp}=h(U,V)\xi,\]
where \(T_{p}^{\perp}\Sigma\) is the subspace of normal vectors on \(\Sigma\), and \(h\colon T_{p}\Sigma\times T_{p}\Sigma\to\mathbb{R}\) is the associated second fundamental form tensor of \(\Sigma\) defined as
\[h(U,V)=-g(\nabla_{U}\xi,V)\]
Given any vector fields \(X,Y,Z,T\in\mathfrak{X}(M)\), we consider the curvature tensor of \((M,g)\) as
\[R(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z, \tag{1}\]
and the Riemann tensor is \(\operatorname{Riem}(X,Y,Z,T)=-g(R(X,Y)Z,T)\). The induced curvature tensor and Riemann tensor on \(\Sigma\) will be denoted by \(R^{\Sigma}\) and \(\operatorname{Riem}^{\Sigma}\) respectively. Given any \(p\in M\) and two linearly independent vectors \(X,Y\in T_{p}M\) generating a plane \(T_{p}\Sigma\subset T_{p}M\), the sectional curvature \(K(T_{p}\Sigma)\) in \((M,g)\) is defined by
\[K(T_{p}\Sigma)=\frac{\operatorname{Riem}(X,Y,X,Y)}{g(X,X)g(Y,Y)-g(X,Y)^{2}},\]
and the sectional curvature of \(\Sigma\) will be denoted by \(K^{\Sigma}\).
Ruled surfaces are a remarkable family of hypersurfaces in \(3\)-dimensional Riemannian manifolds. They are generally defined as follows:
**Definition 1**: _We say that the immersed hypersurface \(\psi\colon\Sigma\to(M,g)\) in \((M,g)\) is ruled if there exists a foliation of \(\Sigma\) by complete curves which are geodesics in the ambient space \((M,g)\)._
However, in the following we are going to work with a notion of ruled surface, closer to the classical constructions in the Euclidean space, defined by base curves and ruling directions.
**Definition 2**: _Let \((M,g)\) be a complete \(3\)-dimensional Riemannian manifold. Let \(\alpha\colon I\to(M,g)\) be a smooth regular curve and \(Z\) a non-vanishing smooth vector field along \(\alpha\), i.e. a curve in \(TM\) such that \(Z(u)\in T_{\alpha(u)}M\), \(\forall u\in I\). The parametrized ruled surface defined by \(\alpha\) and \(Z\) is the differentiable map_
\[\mathbf{X}\colon I\times\mathbb{R} \to(M,g) \tag{2}\] \[(u,v) \mapsto\gamma_{Z(u)}(v)=\exp_{\alpha(u)}(vZ(u)),\]
_where \(\gamma_{Z(u)}\colon\mathbb{R}\to(M,g)\) is the geodesic satisfying_
\[\gamma_{Z(u)}(0) =\alpha(u),\] \[\gamma_{Z(u)}^{\prime}(0) =Z(u).\]
**Remark 3**: _Obviously, the image of the map \(\mathbf{X}\) is not necessarily an embedded surface in \((M,g)\). But if we require the rank of \(d\mathbf{X}\) to be 2, it will be an immersed surface. On the other hand, if \((M,g)\) is not complete, the definition above is still valid by restricting the domain of \(\mathbf{X}\)._
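To make Definition 2 concrete in a computation, one can evaluate \(\mathbf{X}(u,v)\) by numerically integrating the geodesic equation of the ambient metric. The following minimal sketch does so for the upper half-space model of hyperbolic \(3\)-space; the ambient manifold, base curve and ruling field are purely illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geodesic equation of hyperbolic upper half-space H^3, g = (dx^2 + dy^2 + dz^2)/z^2.
def geodesic_rhs(t, state):
    x, y, z, vx, vy, vz = state
    return [vx, vy, vz,
            2.0 * vx * vz / z,
            2.0 * vy * vz / z,
            (vz**2 - vx**2 - vy**2) / z]

def ruled_surface_point(alpha, Z, u, v):
    """X(u, v) = exp_{alpha(u)}(v Z(u)), obtained by integrating the geodesic ODE."""
    sol = solve_ivp(geodesic_rhs, (0.0, v), np.concatenate([alpha(u), Z(u)]),
                    rtol=1e-9, atol=1e-12)
    return sol.y[:3, -1]

# a horizontal circle as base curve with vertical unit rulings
alpha = lambda u: np.array([np.cos(u), np.sin(u), 1.0])
Z = lambda u: np.array([0.0, 0.0, 1.0])
print(ruled_surface_point(alpha, Z, u=0.3, v=0.5))   # this ruling is the vertical geodesic z = e^v
```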
Note that the partial derivative \(\mathbf{X}_{v}=\partial\mathbf{X}/\partial v\) is a geodesic vector field, that is
\[\nabla_{\mathbf{X}_{v}}\mathbf{X}_{v}=0.\]
In addition, \(\mathbf{X}_{v}(u,0)=Z(u)\). The other partial derivative \(\mathbf{X}_{u}=\partial\mathbf{X}/\partial u\) is a Jacobi vector field along the geodesic \(\gamma_{Z(u)}\), since it is the variational field of a geodesic variation. Hence it satisfies the Jacobi equation
\[\nabla_{\mathbf{X}_{v}}\nabla_{\mathbf{X}_{v}}\mathbf{X}_{u}+R(\mathbf{X}_{u},\mathbf{X}_{v})\mathbf{X}_{v}=0. \tag{3}\]
In the following result we derive the expression for the (intrinsic) curvature \(K_{p}^{\Sigma}\) of a ruled surface \(\Sigma\) in \((M,g)\).
**Proposition 4**: _Let \(\Sigma\) be a parametrized ruled surface in \((M,g)\) defined by \(\mathbf{X}:I\times\mathbb{R}\to(M,g)\), where \(d\mathbf{X}\) is of rank \(2\). Then, we have_
\[K_{p}^{\Sigma}=\frac{-g(\nabla_{\mathbf{X}_{v}}\nabla_{\mathbf{X}_{v}} \mathbf{X}_{u},\mathbf{X}_{u})}{||\mathbf{X}_{u}||^{2}||\mathbf{X}_{v}||^{2}- g(\mathbf{X}_{u},\mathbf{X}_{v})^{2}}-\frac{\mathrm{vol}_{g}(\mathbf{X}_{u}, \mathbf{X}_{v},\nabla_{\mathbf{X}_{u}}\mathbf{X}_{v})^{2}}{(||\mathbf{X}_{u}|| ^{2}||\mathbf{X}_{v}||^{2}-g(\mathbf{X}_{u},\mathbf{X}_{v})^{2})^{2}}. \tag{4}\]
**Proof.** Taking into account the Gauss equation (see [11, Volume II, Chapter VII, Proposition 4.1]), the sectional curvature for \(T_{p}\Sigma\) is written as follows
\[K_{p}^{\Sigma} =\frac{\mathrm{Riem}^{\Sigma}(\mathbf{X}_{u},\mathbf{X}_{v}, \mathbf{X}_{u},\mathbf{X}_{v})}{||\mathbf{X}_{u}||^{2}||\mathbf{X}_{v}||^{2}- g(\mathbf{X}_{u},\mathbf{X}_{v})^{2}}\] \[=\frac{\mathrm{Riem}(\mathbf{X}_{u},\mathbf{X}_{v},\mathbf{X}_{u},\mathbf{X}_{v})+g(\vec{h}(\mathbf{X}_{u},\mathbf{X}_{u}),\vec{h}(\mathbf{X}_ {v},\mathbf{X}_{v}))-g(\vec{h}(\mathbf{X}_{u},\mathbf{X}_{v}),\vec{h}(\mathbf{ X}_{u},\mathbf{X}_{v}))}{||\mathbf{X}_{u}||^{2}||\mathbf{X}_{v}||^{2}-g( \mathbf{X}_{u},\mathbf{X}_{v})^{2}}.\]
Since \({\bf X}_{v}\) is geodesic and \([{\bf X}_{u},{\bf X}_{v}]=0\), we have
\[K_{p}^{\Sigma}=\frac{-g(\nabla_{{\bf X}_{v}}\nabla_{{\bf X}_{v}}{\bf X}_{u},{\bf X }_{u})-h({\bf X}_{u},{\bf X}_{v})^{2}}{||{\bf X}_{u}||^{2}||{\bf X}_{v}||^{2}-g ({\bf X}_{u},{\bf X}_{v})^{2}},\]
The proof is complete by taking into account
\[|h({\bf X}_{u},{\bf X}_{v})|=|g(\xi,\nabla_{{\bf X}_{u}}{\bf X}_{v})|=\frac{|{ \rm vol}_{g}({\bf X}_{u},{\bf X}_{v},\nabla_{{\bf X}_{u}}{\bf X}_{v})|}{\sqrt{ ||{\bf X}_{u}||^{2}||{\bf X}_{v}||^{2}-g({\bf X}_{u},{\bf X}_{v})^{2}}}.\]
**Remark 5**: _Given any immersed surface \(\psi\colon\Sigma\to(M,g)\), the sectional curvatures \(K(T_{p}\Sigma)\) in \((M,g)\) and \(K_{p}^{\Sigma}\) in \((\Sigma,g_{\Sigma})\) defined by \(T_{p}\Sigma\) respectively are related as_
\[K_{p}^{\Sigma}=K(T_{p}\Sigma)+K_{ext}^{\Sigma},\]
_where \(K_{ext}^{\Sigma}\) is the extrinsic or Gauss curvature (the determinant of the second fundamental form endomorphism). By virtue of (4), it follows that_
\[K(T_{p}\Sigma) = \frac{-g(\nabla_{{\bf X}_{v}}\nabla_{{\bf X}_{v}}{\bf X}_{u},{ \bf X}_{u})}{||{\bf X}_{u}||^{2}||{\bf X}_{v}||^{2}-g({\bf X}_{u},{\bf X}_{v}) ^{2}}, \tag{5}\] \[K_{\rm ext}^{\Sigma} = -\frac{{\rm vol}_{g}({\bf X}_{u},{\bf X}_{v},\nabla_{{\bf X}_{u} }{\bf X}_{v})^{2}}{(||{\bf X}_{u}||^{2}||{\bf X}_{v}||^{2}-g({\bf X}_{u},{\bf X }_{v})^{2})^{2}}.\]
_Note that the extrinsic curvature \(K_{ext}^{\Sigma}\) of a parametrized ruled surface in a generic Riemannian background is non-positive, as (5) shows. In particular, the (intrinsic) curvature of a ruled surface is always less than or equal to the ambient sectional curvature. Also notice that the extrinsic curvature relation \(K_{\rm ext}^{\Sigma}\) in (5) is still valid for any other basis of \(T_{p}\Sigma\) since it is a tensorial expression._
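In the flat Euclidean case, where the covariant derivatives reduce to ordinary derivatives and \(\mathbf{X}(u,v)=\alpha(u)+vZ(u)\), formula (5) for the extrinsic curvature can be evaluated directly. The sketch below does so for the helicoid and recovers its well-known Gauss curvature \(-1/(1+v^{2})^{2}\); the example is only illustrative.

```python
import numpy as np

# Flat R^3: nabla_{X_u} X_v reduces to the ordinary u-derivative of Z, and
# X_u = alpha'(u) + v Z'(u), X_v = Z(u).
def extrinsic_curvature(alpha, Z, u, v, h=1e-5):
    d = lambda f, t: (f(t + h) - f(t - h)) / (2 * h)       # central difference
    Xu, Xv, dZ = d(alpha, u) + v * d(Z, u), Z(u), d(Z, u)
    vol = np.linalg.det(np.column_stack([Xu, Xv, dZ]))
    gram = (Xu @ Xu) * (Xv @ Xv) - (Xu @ Xv) ** 2
    return -vol**2 / gram**2                               # formula (5)

# helicoid: base curve the z-axis, horizontal unit rulings
alpha = lambda u: np.array([0.0, 0.0, u])
Z = lambda u: np.array([np.cos(u), np.sin(u), 0.0])
u, v = 0.7, 1.3
print(extrinsic_curvature(alpha, Z, u, v), -1.0 / (1.0 + v**2) ** 2)   # the two values agree
```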
The expression of the extrinsic curvature that we have obtained above is connected with a classical invariant in the theory of ruled surfaces, the distribution parameter \(\lambda\), which is defined on the so-called striction curve of ruled surfaces in the Euclidean space (see [8], [18], [21] for more details). In this work we devote Section 3 to studying the main properties of this curve in the current setting, in case it exists. In the Euclidean space, ruled surfaces always admit a striction curve, which can always be considered the base curve of the parametrized ruled surface. For this reason the classical distribution parameter is defined as
\[\lambda(u)=\frac{{\rm vol}_{g}(\alpha^{\prime},Z,\nabla_{\alpha^{\prime}}Z)}{ \|\nabla_{\alpha^{\prime}}Z\|^{2}},\]
where \(\alpha:I\to(\Sigma,g_{\Sigma})\) is the striction base curve and \(Z\) is the ruling vector field along \(\alpha\). This function plays an important role in many results for classical ruled surfaces in \(\mathbb{R}^{3}\). We next introduce a function on a parametrized ruled surface \({\bf X}:I\times\mathbb{R}\to(M,g)\) motivated by the classical concept of the distribution parameter:
**Definition 6** (Extended distribution parameter): _Let \(\mathbf{X}:I\times\mathbb{R}\rightarrow(M,g)\) be a parametrized ruled surface as in (2) in a Riemmanian \(3\)-manifold \((M,g)\) with \(d\mathbf{X}\) of rank \(2\). We define the extended distribution parameter function of \(\mathbf{X}:I\times\mathbb{R}\rightarrow(M,g)\) as_
\[\lambda(u,v)=\frac{\mathrm{vol}_{g}(\mathbf{X}_{u},\mathbf{X}_{v},\nabla_{ \mathbf{X}_{u}}\mathbf{X}_{v})}{\|\nabla_{\mathbf{X}_{u}}\mathbf{X}_{v}\|^{2}}. \tag{6}\]
**Remark 7**: _As we will illustrate in Section 3, for \(v=0\) the above formula reduces to the classical distribution parameter \(\lambda(u,0)=\lambda(u)\) provided \(\alpha:I\rightarrow\mathbf{X}(I\times\mathbb{R})\) is a striction base curve._
**Definition 8**: _Let \(\mathbf{X}:I\times\mathbb{R}\rightarrow(M,g)\) be a parametrized ruled surface in \((M,g)\) as in (2). The function \(\sigma\colon I\rightarrow\mathbb{R}\) satisfying_
\[\cos\sigma_{\alpha(u)}=\frac{g(\alpha^{\prime}(u),Z(u))}{\|\alpha^{\prime}(u) \|},\]
_is called base angle along the curve \(\alpha\)._
In the following result we prove that the inner product of the vectors \(\mathbf{X}_{u}\) and \(\mathbf{X}_{v}\) remains constant along the rulings of a parametrized ruled surface.
**Proposition 9**: _Let \(\mathbf{X}:I\times\mathbb{R}\rightarrow(M,g)\) be a parametrized ruled surface in \((M,g)\) as in (2), defined by a unit vector field \(Z\). Then \(g(\mathbf{X}_{u},\mathbf{X}_{v})\) is constant along its rulings. Moreover, the angle \(\sigma\) between \(\mathbf{X}_{u}\) and \(\mathbf{X}_{v}\) at any point \(q\) of the ruling containing \(p=\alpha(u)\) is given by_
\[\cos\sigma_{q}=\frac{\|\alpha^{\prime}(u)\|\cos\sigma_{p}}{\|\mathbf{X}_{u}\|_ {q}}. \tag{7}\]
**Proof.** Differentiating the function \(g(\mathbf{X}_{u},\mathbf{X}_{v})\) along the vector field \(\mathbf{X}_{v}\) gives
\[\mathbf{X}_{v}(g(\mathbf{X}_{u},\mathbf{X}_{v}))=g(\nabla_{\mathbf{X}_{v}} \mathbf{X}_{u},\mathbf{X}_{v})+g(\mathbf{X}_{u},\nabla_{\mathbf{X}_{v}} \mathbf{X}_{v})=g(\nabla_{\mathbf{X}_{u}}\mathbf{X}_{v},\mathbf{X}_{v})=0,\]
where we have taken into account that \([\mathbf{X}_{u},\mathbf{X}_{v}]=0\) and \(\mathbf{X}_{v}\) is geodesic. This means that \(g(\mathbf{X}_{u},\mathbf{X}_{v})=\|\mathbf{X}_{u}\|\cos\sigma\) is constant along the rulings of \(\mathbf{X}:I\times\mathbb{R}\rightarrow(M,g)\). This value can be obtained by evaluating it at the point \(p=\alpha(u)\). Indeed, given any \(q\) of the ruling at \(p=\alpha(u)\),
\[g(\mathbf{X}_{u},\mathbf{X}_{v})|_{q}=\|\mathbf{X}_{u}\|_{q}\cos\sigma_{q}=\| \mathbf{X}_{u}\|_{p}\cos\sigma_{p}=\|\alpha^{\prime}(u)\|\cos\sigma_{p},\]
from where relation (7) follows.
**Remark 10**: _Relation (7) shows that whenever the vectors \(\mathbf{X}_{u}(u,0)=\alpha^{\prime}(u)\) and \(Z_{p}=\mathbf{X}_{v}(u,0)\) are orthogonal at the base curve, they remain orthogonal along the ruling \(\gamma_{Z(u)}(v)=\mathbf{X}(u,v)\). Nevertheless, the coordinate basis \((\mathbf{X}_{u},\mathbf{X}_{v})\) associated to the parametrization (2) with \(d\mathbf{X}\) of rank \(2\) is not necessarily orthogonal._
As already mentioned, \({\bf X}_{u}\) is a Jacobi vector field on every parametrized ruled surface, not necessarily orthogonal to its rulings. However, it is sometimes useful to consider the orthogonal component to the surface generators, which turns out to be a Jacobi field too. In the following Proposition we compute the decomposition of \({\bf X}_{u}\) into its tangent and normal components to the rulings.
**Proposition 11**: _Let \({\bf X}:I\times\mathbb{R}\rightarrow(M,g)\) be a parametrized ruled surface in \((M,g)\) as in (2). Then the decomposition of the Jacobi field \({\bf X}_{u}\) into its tangential and normal part with respect to the ruling is_
\[{\bf X}_{u}=\|\alpha^{\prime}(u)\|(\cos\sigma_{p}){\bf X}_{v}+{\bf X}_{u}^{ \perp}, \tag{8}\]
_where \({\bf X}_{u}^{\perp}\) is a Jacobi field along the ruling \(\gamma_{Z(u)}\) and orthogonal to it, and \(\sigma_{p}\) is the base angle at \(p=\alpha(u)\)._
**Proof.** It is a well known fact that any Jacobi vector field along a geodesic curve \(\gamma(v)\) parametrized by its arc length decomposes as
\[J(v)=(a+bv)\gamma^{\prime}(v)+J^{\perp}(v),\]
where \(a=g(J(0),\gamma^{\prime}(0))\), \(b=g((\nabla_{\gamma^{\prime}}J)_{p},\gamma^{\prime}(0))\) and \(J^{\perp}\) is an orthogonal Jacobi field along \(\gamma\). In this background, the Jacobi field \({\bf X}_{u}\) evaluated at \(p\) reads \({\bf X}_{u}|_{p}=\alpha^{\prime}(u)\), so
\[a=g(\alpha^{\prime}(u),Z_{p})=\|\alpha^{\prime}(u)\|\cos\sigma_{p},\]
and
\[b=g((\nabla_{{\bf X}_{v}}{\bf X}_{u})|_{p},Z_{p})=\frac{1}{2}{\bf X}_{u}(g({ \bf X}_{v},{\bf X}_{v}))|_{p}=0,\]
since \(\|{\bf X}_{v}\|=1\). Decomposition (8) follows from such values.
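In the same flat Euclidean setting, Proposition 9 and the tangential coefficient of the decomposition (8) are easy to verify numerically: with a unit ruling, \(g(\mathbf{X}_{u},\mathbf{X}_{v})=\alpha^{\prime}\cdot Z\) does not depend on \(v\). The base curve and ruling below are arbitrary illustrative choices.

```python
import numpy as np

# Flat R^3 with a unit ruling: X(u, v) = alpha(u) + v Z(u), X_u = alpha'(u) + v Z'(u), X_v = Z(u).
u, h = 0.4, 1e-6
alpha = lambda t: np.array([np.cos(t), np.sin(t), 0.3 * t])   # arbitrary base curve
Z = lambda t: np.array([np.sin(t), 0.0, np.cos(t)])           # unit ruling field
da = (alpha(u + h) - alpha(u - h)) / (2 * h)
dZ = (Z(u + h) - Z(u - h)) / (2 * h)

for v in [0.0, 0.5, 2.0]:
    Xu, Xv = da + v * dZ, Z(u)
    # Proposition 9: this value is independent of v; it equals ||alpha'(u)|| cos(sigma_p),
    # the coefficient of the tangential part in the decomposition (8).
    print(round(float(Xu @ Xv), 6))
```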
### Sannia invariants and the Fundamental Theorem of Ruled Surfaces
Orthonormal frames along curves are often considered in Geometry and Physics to address problems related to the geometry of manifolds and submanifolds. In this work we consider frames whose first vector is determined by the geodesics defining the ruled surface. Under suitable regularity hypotheses, a possible way of constructing the rest of the vectors in the frame is by taking successive derivatives of the first one along the tangent direction to the curve (see for example [3]). However, it may occur that some of the derivatives are linearly dependent on some other vector in the frame, which would prevent such a set of vectors from becoming a basis. With the following results we first intend to give an idea of the size of the set of ruled surfaces admitting such a relevant frame, and then we prove the main result of this work, the _fundamental theorem of ruled surfaces_, in which we state that certain ruled surfaces are uniquely determined by certain invariants.
**Definition 12**: _A vector field \(Z\in\mathfrak{X}(\alpha)\) along a smooth curve \(\alpha\colon I\to M\) taking values in a manifold \(M\) endowed with a linear connection \(\nabla\) is said to be in general position at \(u_{0}\in I\) if the vector fields \(Z\) and \(\nabla_{\alpha^{\prime}}Z\) along \(\alpha\) are linearly independent at \(u_{0}\). The vector field \(Z\in\mathfrak{X}(\alpha)\) is in general position if it is in general position at every \(u\in I\)._
The following result quantifies the set of vectors in general position along a curve on a manifold endowed with a linear connection. To this purpose it will be necessary to make use of the so-called jet bundles. We refer the reader to [12, section 41] for more details on this topic.
**Proposition 13**: _Let \(M\) be a \(3\)-dimensional manifold endowed with a linear connection \(\nabla\) and let \(\alpha\colon I\to M\) be a smooth curve on \(M\). The set of vector fields \(Z\in\mathfrak{X}(\alpha)\) along \(\alpha\) that are in general position is a dense set on \(\mathfrak{X}(\alpha)\) with respect to the strong topology._
**Proof.** The sections of the bundle \(E=\alpha^{*}TM\to I\) are vector fields along \(\alpha\). Let us consider the \(1\)-jet bundle of \(E\). The morphism on \(J^{1}E\) given by
\[\varrho\colon J^{1}E \longrightarrow E\oplus E\] \[j_{u}^{1}Z \mapsto(Z(u),(\nabla_{\alpha^{\prime}}Z)(u))\]
is an isomorphism. We define the singular set
\[S=\{(Z_{1},Z_{2})\in E\oplus E:Z_{1}\wedge Z_{2}=0\},\]
which can be written as the (non-disjoint) union \(S=S_{1}\cup S_{2}\), with
\[S_{1} =\{(Z_{1},Z_{2})\in E\oplus E:Z_{2}=fZ_{1}\}\simeq E\times\mathbb{ R},\] \[S_{2} =\{(Z_{1},0)\in E\oplus E\}\simeq E.\]
Both \(S_{1}\) and \(S_{2}\) are closed submanifolds of \(E\oplus E\), of dimensions \(3+1\) (resp. codimension \(2\)) and \(3\) (resp. codimension \(3\)) respectively. The same properties apply to \(T_{1}=\varrho^{-1}(S_{1})\) and \(T_{2}=\varrho^{-1}(S_{2})\). According to Thom's transversality Theorem (_cf._[22, VII, Theoreme 4.2]), the set of curves \(Z(u)\in\Gamma(E)\) such that \(j^{1}Z\) is transversal to both \(T_{1}\) and \(T_{2}\) is open and dense in \(\Gamma(E)\) with the strong topology. For these curves \(Z(u)\), the codimension of \((j^{1}Z)^{-1}(T_{1})\) is \(2\) and the codimension of \((j^{1}Z)^{-1}(T_{2})\) is \(3\). Since these are sets in \(I\subset\mathbb{R}\), they must be empty. Therefore, for this dense set of curves \(Z(u)\), we have that \(j^{1}Z\cap S=\varnothing\), and the proof is complete.
**Definition 14**: _Let \((M,g)\) be a Riemannian \(3\)-manifold. A parametrized ruled surface \(\mathbf{X}:I\times\mathbb{R}\to(M,g)\) is said to be a Sannia ruled surface if \(Z\in\mathfrak{X}(\alpha)\) is in general position with respect to the Levi-Civita connection of \(g\)._
**Proposition 15**: _Let \((M,g)\) be an oriented \(3\)-dimensional Riemannian manifold and let \(\mathbf{X}:I\times\mathbb{R}\to(M,g)\) be a Sannia ruled surface defined by a vector field \(Z\) along a smooth curve \(\alpha\colon I\to(M,g)\). Then, there exist unique vector fields \(X_{i}\), \(1\leq i\leq 3\), defined along \(\alpha\) and smooth functions \(\kappa_{i}\colon I\to\mathbb{R},\ 0\leq i\leq 2\), with \(\kappa_{0}>0\) and \(\kappa_{1}>0\), such that_
1. \((X_{1}(u),X_{2}(u),X_{3}(u))\) _is a positively oriented orthonormal linear frame of_ \(T_{\alpha(u)}M\) _for_ \(u\in I\)_._
2. _The following formulas hold:_ (a) \(Z=\kappa_{0}X_{1}\)_,_ (b) \(\nabla_{\alpha^{\prime}}X_{1}=\kappa_{1}X_{2}\)_,_ (c) \(\nabla_{\alpha^{\prime}}X_{2}=-\kappa_{1}X_{1}+\kappa_{2}X_{3}\)_,_ (d) \(\nabla_{\alpha^{\prime}}X_{3}=-\kappa_{2}X_{2}\)_._
**Proof.** We define
\[X_{1} =(\kappa_{0})^{-1}Z, \tag{10}\] \[X_{2} =(\kappa_{1})^{-1}\nabla_{\alpha^{\prime}}X_{1}, \tag{9}\]
where \(\kappa_{0}=\|Z\|\), \(\kappa_{1}=\|\nabla_{\alpha^{\prime}}X_{1}\|\). For \(X_{3}\) we consider the unique vector field defining a positive orthonormal basis with \(X_{1}\) and \(X_{2}\). Differentiating \(g(X_{i},X_{j})\) with respect to \(u\) we get the formulas (2a)-(2d) in the statement.
**Definition 16**: _The frame \((X_{1},X_{2},X_{3})\) along \(\alpha\) determined in the above Proposition is called Sannia frame along \(\alpha\), and the functions \(\kappa_{0},\kappa_{1},\kappa_{2}\) are the Sannia invariants of the ruled surface \({\bf X}:I\times\mathbb{R}\to(M,g)\)._
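For a ruling field in general position along a curve in flat \(\mathbb{R}^{3}\), where the covariant derivative along the base curve reduces to the ordinary \(u\)-derivative, the Sannia frame and the invariants of Definition 16 can be computed directly from (9), (10) and (2a)-(2d). A minimal numerical sketch, with an arbitrarily chosen unit ruling field, is:

```python
import numpy as np

def sannia_frame(Z, u, h=1e-5):
    """Sannia frame (X1, X2, X3) and invariants (kappa_0, kappa_1, kappa_2) at u
    for a ruling field Z(u) along a curve in flat R^3."""
    d = lambda f, t: (f(t + h) - f(t - h)) / (2 * h)        # central difference
    X1 = lambda t: Z(t) / np.linalg.norm(Z(t))              # (9):  X1 = Z / kappa_0
    X2 = lambda t: d(X1, t) / np.linalg.norm(d(X1, t))      # (10): X2 = X1' / kappa_1
    k0 = np.linalg.norm(Z(u))
    k1 = np.linalg.norm(d(X1, u))
    X3 = np.cross(X1(u), X2(u))                             # positive orthonormal completion
    k2 = d(X2, u) @ X3                                      # kappa_2, read off from (2c)
    return (X1(u), X2(u), X3), (k0, k1, k2)

# a unit ruling field in general position (purely illustrative choice)
Z = lambda t: np.array([np.cos(2 * t), np.sin(2 * t), 1.0]) / np.sqrt(2.0)
frame, invariants = sannia_frame(Z, 0.5)
print(np.round(invariants, 4))    # expected: kappa_0 = 1, kappa_1 = kappa_2 = sqrt(2)
```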
Let \(\theta\) and \(\varphi\) denote the spherical angles of \(\alpha^{\prime}\) with respect to \((X_{1},X_{2},X_{3})\); that is,
\[\frac{\alpha^{\prime}}{\|\alpha^{\prime}\|}=\cos\varphi\cos\theta\,X_{1}+\sin \varphi\,X_{2}+\cos\varphi\sin\theta\,X_{3}. \tag{11}\]
If \(\varphi=0\), then \(\theta\) is the base angle (see Definition 8). We will show in Section 3 that this case corresponds to the base curve being a striction one.
**Remark 17**: _It would be more accurate to collect the angle invariants \(\theta\) and \(\varphi\) into a single smooth function \(\varsigma\colon I\to S^{2}\subset\mathbb{R}^{3}\) assigning the coordinates of \(\alpha^{\prime}\) with respect to the Sannia basis. However, in order to be close to the classical results in the Euclidean space, we will keep track of the angles \(\theta\) and \(\varphi\) instead of \(\varsigma\)._
In Definition 6 we have presented a function that extends the traditional Euclidean distribution parameter to the whole surface in generic 3-dimensional Riemannian backgrounds. With a view to recovering the distribution parameter in the classical sense, we next give the value of this function in terms of the invariants \(\kappa_{0}\), \(\kappa_{1}\), \(\kappa_{2}\), \(\theta\) and \(\varphi\) of the Sannia frame associated to a base curve which does not have to be necessarily a striction curve.
**Proposition 18**: _If \((X_{1},X_{2},X_{3})\) is the Sannia frame of a Sannia ruled surface \({\bf X}:I\times\mathbb{R}\to(M,g)\), then the value of the distribution parameter function \(\lambda\) on the base curve \(\alpha\) is_
\[\lambda(u,0)=\frac{\|\alpha^{\prime}\|\kappa_{0}^{2}\kappa_{1}\cos\varphi\sin\theta}{\big{(}\frac{d\kappa_{0}}{du}\big{)}^{2}+\kappa_{0}^{2}\kappa_{1}^{2}}.\]
**Proof.** The result follows directly when (11) and relations (2a)-(2d) in Proposition 15 are inserted into expression (6), evaluated at \(v=0\).
**Remark 19**: _Note that it is always possible to consider a unit ruling \(Z\) along an arc-length parametrized curve \(\alpha\), i.e \(\kappa_{0}=1\) and \(\|\alpha^{\prime}\|=1\). In such case the value of the distribution parameter function along \(\alpha\) reduces to_
\[\lambda(u,0)=\frac{\cos\varphi\sin\theta}{\kappa_{1}}. \tag{12}\]
We next state and prove the local existence and uniqueness theorem of ruled surfaces in general \(3\)-dimensional Riemannian manifolds:
**Theorem 20** (Fundamental theorem of ruled surfaces): _Let \((M,g)\) be a \(3\)-dimensional oriented Riemannian manifold and let \((e_{1},e_{2},e_{3})\) be a positively oriented orthonormal basis of \(T_{p_{0}}M\), \(p_{0}\in M\). Given smooth functions \(\overline{\kappa}_{i}\colon(u_{0}-\varepsilon,u_{0}+\varepsilon)\to\mathbb{R}\), \(0\leq i\leq 2\), \(\overline{\theta}\colon(u_{0}-\varepsilon,u_{0}+\varepsilon)\to\mathbb{R}\), \(\overline{\varphi}\colon(u_{0}-\varepsilon,u_{0}+\varepsilon)\to\mathbb{R}\) with \(\overline{\kappa}_{0},\overline{\kappa}_{1}>0\), there exists \(\delta<\varepsilon\), a curve \(\alpha\colon(u_{0}-\delta,u_{0}+\delta)\to(M,g)\), parametrized by its arc length, and a vector field \(Z\in\mathfrak{X}(\alpha)\) such that the Sannia ruled surface \(\mathbf{X}:I\times\mathbb{R}\to(M,g)\), \(\mathbf{X}(u,v)=\exp_{\alpha(u)}(vZ(u))\) satisfies:_
1. \(\alpha(u_{0})=p_{0}\)_,_
2. \(X_{i}(u_{0})=e_{i}\) _for_ \(1\leq i\leq 3\)_,_
3. \(\kappa_{i}=\overline{\kappa}_{i}\) _for_ \(0\leq i\leq 2\)_,_ \(\overline{\theta}=\theta\)_,_ \(\overline{\varphi}=\varphi\)_,_
_where \(\kappa_{i}\), \(0\leq i\leq 2\), \(\theta\) and \(\varphi\) are the invariants of Definition 16 and \(X_{i}(u_{0})\) for \(1\leq i\leq 3\) is the Sannia frame. In addition, given any two points \(p_{0},p_{0}^{\prime}\in M\) and two oriented orthonormal bases \((e_{i})_{i=1}^{3}\), \((e_{i}^{\prime})_{i=1}^{3}\) of \(T_{p_{0}}M\) and \(T_{p_{0}^{\prime}}M\) respectively, there always exists a local isometry around \(p_{0}\) and \(p_{0}^{\prime}\) sending one ruled surface to the other if and only if \((M,g)\) is of constant curvature._
**Proof.** We consider the vector bundle \(p_{M}\colon\oplus^{3}TM\to M\) and a normal coordinate system \((U,(x_{i})_{i=1}^{3})\) centred at \(p_{0}\) associated to the orthonormal basis \((e_{i})_{i=1}^{3}\). We define the natural coordinate system \((x^{i},y_{k}^{j})\), \(i,j,k=1,2,3\) in \(p_{M}^{-1}(U)\) such that
\[w_{j}=y_{j}^{i}(w)\left.\frac{\partial}{\partial x^{i}}\right|_{p},\qquad\forall w=(w_{1},w_{2},w_{3})\in\oplus^{3}T_{p}M,\quad p\in U.\]
Let \(X\colon I\to\oplus^{3}TM\), \(a<u_{0}<b\), be the curve given by
\[X(u)=\left(X_{1}(u),X_{2}(u),X_{3}(u)\right).\]
Without loss of generality, the ruling can be considered to be unit, i.e. \(\overline{\kappa}_{0}=1\) in Proposition 15. This condition together with the formulas (2a)-(2d) can be
expressed as follows:
\[\left\{\begin{array}{l}\frac{d\left(x^{i}\circ\alpha\right)}{du}=\cos\overline{\varphi}\cos\overline{\theta}\left(y_{1}^{i}\circ X\right)+\sin\overline{\varphi}\left(y_{2}^{i}\circ X\right)+\cos\overline{\varphi}\sin\overline{\theta}\left(y_{3}^{i}\circ X\right),\\ \frac{d\left(y_{1}^{i}\circ X\right)}{du}=\overline{\kappa}_{1}\left(y_{2}^{i}\circ X\right)-\Gamma_{jk}^{i}\frac{d\left(x^{j}\circ\alpha\right)}{du}\left(y_{1}^{k}\circ X\right),\\ \frac{d\left(y_{2}^{i}\circ X\right)}{du}=-\overline{\kappa}_{1}\left(y_{1}^{i}\circ X\right)+\overline{\kappa}_{2}\left(y_{3}^{i}\circ X\right)-\Gamma_{jk}^{i}\frac{d\left(x^{j}\circ\alpha\right)}{du}\left(y_{2}^{k}\circ X\right),\\ \frac{d\left(y_{3}^{i}\circ X\right)}{du}=-\overline{\kappa}_{2}\left(y_{2}^{i}\circ X\right)-\Gamma_{jk}^{i}\frac{d\left(x^{j}\circ\alpha\right)}{du}\left(y_{3}^{k}\circ X\right),\end{array}\right. \tag{13}\]
with \(1\leq i\leq 3\) and where \(\Gamma_{jk}^{i}\) are the components of the Levi-Civita connection \(\nabla\) of \(g\) with respect to the coordinate system \((x_{i})_{i=1}^{3}\). From the general theory of ODEs, the system (13) has unique solutions \(x^{i}\circ\alpha,y_{k}^{j}\circ X\), \(i,j,k=1,2,3\), satisfying the initial conditions \(\left(x^{i}\circ\alpha\right)\left(u_{0}\right)=x^{i}(p_{0})\), \(\left(y_{k}^{j}\circ X\right)\left(u_{0}\right)=\hat{\delta}_{k}^{j}\), \(i,j,k=1,2,3\), where \(\hat{\delta}\) is the Kronecker delta. We consider the ruled surface \(\mathbf{X}:I\times\mathbb{R}\rightarrow(M,g)\), \(\mathbf{X}(u,v)=\exp_{\alpha(u)}(vZ(u))\) with
\[\alpha(u)=(x^{1}(u),x^{2}(u),x^{3}(u)),\qquad Z(u)=y_{1}^{i}(u)\left(\frac{ \partial}{\partial x^{i}}\right)_{\alpha(u)}.\]
We first prove that
\[X_{k}(u)=y_{k}^{i}(u)\left(\frac{\partial}{\partial x^{i}}\right)_{\alpha(u)},\qquad 1\leq k\leq 3,\]
define an orthonormal basis. To this end, consider the functions
\[\phi_{ij}(u)=g_{\alpha(u)}(X_{i}(u),X_{j}(u)),\qquad 1\leq i\leq j\leq 3.\]
As a consequence of the system (13), it is straightforward to check that \(\phi_{ij}\) satisfy the following equations,
\[\left\{\begin{array}{l}\frac{d\phi_{11}}{du}=2\overline{\kappa }_{1}\phi_{12},\\ \frac{d\phi_{12}}{du}=\overline{\kappa}_{1}\phi_{22}-\overline{\kappa}_{1} \phi_{11}+\overline{\kappa}_{2}\phi_{13},\\ \frac{d\phi_{13}}{du}=\overline{\kappa}_{1}\phi_{23}-\overline{\kappa}_{2} \phi_{12},\\ \frac{d\phi_{22}}{du}=-2\overline{\kappa}_{1}\phi_{12}+2\overline{\kappa}_{2} \phi_{23},\\ \frac{d\phi_{23}}{du}=-\overline{\kappa}_{1}\phi_{13}+\overline{\kappa}_{2} \phi_{33}-\overline{\kappa}_{2}\phi_{22},\\ \frac{d\phi_{33}}{du}=-2\overline{\kappa}_{2}\phi_{23},\end{array}\right.\]
with initial conditions \(\phi_{ij}(u_{0})=\hat{\delta}_{j}^{i}\). But the constant functions \(\hat{\phi}_{ij}(u)=\hat{\delta}_{j}^{i}\) also satisfy this system of differential equations with these initial conditions.
By virtue of the uniqueness theorem for ODEs, it follows that \(\phi_{ij}(u)=\hat{\delta}_{j}^{i}\) for all \(u\). In addition, since \((X_{1}(u_{0}),X_{2}(u_{0}),X_{3}(u_{0}))\) is a positively oriented basis, by continuity so is \((X_{1}(u),X_{2}(u),X_{3}(u))\) for any \(u\). The frame determined by \(X\) verifies that \(X_{1}=Z\) and \(\nabla_{\alpha^{\prime}}X_{1}=\overline{\kappa}_{1}X_{2}\), so, since \((X_{1},X_{2},X_{3})\) is positive, it is the Sannia basis of the ruled surface \({\bf X}:I\times\mathbb{R}\to(M,g)\). Finally, again from (13), it follows that \(\overline{\kappa}_{1}\) and \(\overline{\kappa}_{2}\) are the Sannia invariants.
We now prove the uniqueness of the ruled surface up to isometry. First, if \((M,g)\) is of constant curvature, given \(p_{0},p_{0}^{\prime}\in M\) and \((e_{i})_{i=1}^{3}\), \((e_{i}^{\prime})_{i=1}^{3}\) oriented orthonormal bases at \(T_{p_{0}}M\) and \(T_{p_{0}^{\prime}}M\) respectively, we consider a local isometry \(\phi\) sending \(p_{0}\) to \(p_{0}^{\prime}\) and \((e_{i})_{i=1}^{3}\) to \((e_{i}^{\prime})_{i=1}^{3}\). Since \(\phi\) preserves the Levi-Civita connection, the image \(\phi\circ{\bf X}\) of the ruled surface defined by \(p_{0}\) and \((e_{i})_{i=1}^{3}\) is a ruled surface with the same parameters \(\overline{\kappa}_{1}\), \(\overline{\kappa}_{2}\). By the uniqueness of the system of differential equations (13), we have: \({\bf X}^{\prime}=\phi\circ{\bf X}\). Conversely, if given \(p_{0}\), \(p_{0}^{\prime}\in M\) and orthonormal bases \((e_{i})_{i=1}^{3}\), \((e_{i}^{\prime})_{i=1}^{3}\) at \(T_{p_{0}}M\) and \(T_{p_{0}^{\prime}}M\) respectively, there is a local isometry sending one to the other, the space is locally isotropic, and hence of constant curvature (_cf._[11]).
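In the flat case \(\Gamma_{jk}^{i}=0\), the constructive part of the proof can be carried out numerically: system (13) is integrated from the initial frame, and the surface is recovered as \(\mathbf{X}(u,v)=\alpha(u)+vX_{1}(u)\). The sketch below uses arbitrarily chosen invariant functions \(\overline{\kappa}_{1},\overline{\kappa}_{2},\overline{\theta},\overline{\varphi}\) (with \(\overline{\kappa}_{0}=1\)); it is only an illustration of the theorem, not part of its statement.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Prescribed invariant functions (arbitrary illustrative choices); kappa_0 = 1, Gamma = 0.
kap1 = lambda u: 1.0 + 0.2 * np.sin(u)
kap2 = lambda u: 0.5
theta = lambda u: 0.3 * u
phi = lambda u: 0.1

def rhs(u, s):
    # state: alpha (3), X1 (3), X2 (3), X3 (3); the equations are (13) with Gamma = 0
    a, X1, X2, X3 = s[:3], s[3:6], s[6:9], s[9:12]
    da = (np.cos(phi(u)) * np.cos(theta(u)) * X1
          + np.sin(phi(u)) * X2
          + np.cos(phi(u)) * np.sin(theta(u)) * X3)
    return np.concatenate([da, kap1(u) * X2, -kap1(u) * X1 + kap2(u) * X3, -kap2(u) * X2])

s0 = np.concatenate([np.zeros(3), np.eye(3).ravel()])   # alpha(u0) = p0, frame = (e1, e2, e3)
sol = solve_ivp(rhs, (0.0, 2.0), s0, dense_output=True, rtol=1e-9, atol=1e-12)

def X(u, v):
    s = sol.sol(u)
    return s[:3] + v * s[3:6]        # exp_{alpha(u)}(v X1(u)) in flat space

print(np.round(X(1.0, 0.7), 4))
```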
## 3 Central points and striction curves
Euclidean ruled surfaces contain a special curve, called the _striction_ curve, which is the locus of the points whose distance to neighbouring geodesics is extremal. In particular, striction curves are divided into _expanding-contracting_ and _contracting-expanding_ curves depending on whether this distance corresponds to a maximum or a minimum value, respectively. In general, the striction curve of a family of Euclidean curves is the locus of the points where the geodesic curvature of the corresponding orthogonal set of curves vanishes (see [15, 16] for more details). The striction curve of any Euclidean ruled surface is also characterised by the fact that the generators along it are parallel in the Levi-Civita sense. The following definition extends this property to general ambient Riemannian manifolds:
**Definition 21**: _Let \({\bf X}:I\times\mathbb{R}\to(M,g)\) be a parametrized ruled surface. A curve \(s\colon I\to{\bf X}(I\times\mathbb{R})\) is said to be a striction curve if_
\[g(s^{\prime},\nabla_{s^{\prime}}{\bf X}_{v})=0. \tag{14}\]
As already mentioned, whenever the striction curve \(s\colon I\to{\bf X}(I\times\mathbb{R})\) exists, it can be taken as the base curve of the ruled surface so that it can be reparametrized as \({\bf X}(u,v)=\exp_{s(u)}(vZ(u))\).
**Proposition 22**: _Let \({\bf X}:I\times\mathbb{R}\to(M,g)\) be a Sannia ruled surface in a \(3\)-dimensional Riemannian manifold \((M,g)\) admitting a striction curve \(s\colon I\to{\bf X}(I\times\mathbb{R})\) and chosen to be its base curve. For any \(p={\bf X}(u,0)\), the vectors \(\{X_{1},X_{3}\}\) of the associated Sannia frame constitute an orthonormal basis of \(T_{p}\left({\bf X}(I\times\mathbb{R})\right)\) and the angle \(\varphi\) defined by (11) vanishes identically. Therefore_
\[s^{\prime}(u)=\cos\sigma_{s(u)}\,X_{1}+\sin\sigma_{s(u)}\,X_{3}\quad\text{for all $u\in I$.}\]
**Proof.** The above relation holds since the tangent plane to \({\bf X}:I\times\mathbb{R}\to(M,g)\) along \(s\) is spanned by \(s^{\prime}\) and \(Z\), and the second vector \(X_{2}=\nabla_{s^{\prime}}Z/||\nabla_{s^{\prime}}Z||\) of the corresponding Sannia frame is perpendicular to both of them.
As mentioned above, in the Euclidean setting the ruling vector field of a ruled surface is parallel along the striction with respect to its induced Levi-Civita connection. This property also holds in a general Riemannian background as shown in the following result.
**Corollary 23**: _Under the hypotheses of Proposition 22, the ruling vector field \(Z\) is parallel along the striction curve with respect to the induced connection on the Sannia ruled surface._
**Proof.** The Gauss identity on the striction reads
\[\nabla_{s^{\prime}}Z=\nabla_{s^{\prime}}^{\Sigma}Z+\vec{h}(s^{\prime},Z),\]
where \(\nabla^{\Sigma}\) stands for the induced connection on the ruled surface \(\Sigma\equiv{\bf X}(I\times\mathbb{R})\). By Proposition 22, \(\nabla_{s^{\prime}}Z\) is orthogonal to \(\Sigma\), which means that \(\nabla_{s^{\prime}}^{\Sigma}Z=0\).
**Remark 24**: _In the classical statement of the fundamental theorem of ruled surfaces in the Euclidean space \((\mathbb{R}^{3},\delta)\), a striction curve can always be taken as the base curve in the parametrization, for it always exists. By virtue of the above proposition, in such a context \(\varphi=0\) and \(\theta=\sigma\), i.e. the base angle on the striction. In particular, the distribution parameter formula (12) on a general base curve reduces to_
\[\lambda(u,0)=\frac{\sin\sigma_{s(u)}}{\kappa_{1}},\]
_which is the classical expression for the Euclidean distribution parameter (on a striction curve) [18, Lemma 5.3.7]._
In the following result we obtain a condition for the striction curve in terms of the coordinate basis \(({\bf X}_{u},{\bf X}_{v})\).
**Proposition 25**: _Let \({\bf X}:I\times\mathbb{R}\to(M,g)\) be a parametrized ruled surface with \(\alpha\colon I\to{\bf X}(I\times\mathbb{R})\) an associated base curve, and with a unitary ruling vector field \(Z\in\mathfrak{X}(\alpha)\). A curve \(s\colon I\to{\bf X}(I\times\mathbb{R})\) on such a surface is a striction curve if and only if the squared norm of the vector field \({\bf X}_{u}\) along each generator \(\gamma_{Z(u)}(v)\) is critical at the corresponding point \(s(u)\), i.e._
\[{\bf X}_{v}(\|{\bf X}_{u}\|^{2})|_{p}=0,\quad\mbox{for all }p=s(u). \tag{15}\]
**Proof.** Given any curve of the form \(s:I\to{\bf X}(I\times\mathbb{R})\), \(s(u)={\bf X}(u,v(u))\), where \(v:I\to\mathbb{R}\) is smooth, we have \(s^{\prime}={\bf X}_{u}+v^{\prime}{\bf X}_{v}\). Then
\[g(s^{\prime},\nabla_{s^{\prime}}{\bf X}_{v}) = g({\bf X}_{u}+v^{\prime}{\bf X}_{v},\nabla_{{\bf X}_{u}+v^{ \prime}{\bf X}_{v}}{\bf X}_{v})=g({\bf X}_{u}+v^{\prime}{\bf X}_{v},\nabla_{{ \bf X}_{u}}{\bf X}_{v})\] \[= g({\bf X}_{u},\nabla_{{\bf X}_{u}}{\bf X}_{v})=g({\bf X}_{u}, \nabla_{{\bf X}_{v}}{\bf X}_{u})=\tfrac{1}{2}{\bf X}_{v}(\|{\bf X}_{u}\|^{2}),\]
where we have taken into account that \(g({\bf X}_{v},{\bf X}_{v})\) is constantly \(1\), and that \(\nabla_{{\bf X}_{u}}{\bf X}_{v}-\nabla_{{\bf X}_{v}}{\bf X}_{u}=[{\bf X}_{u},{\bf X}_{v}]=0\). The proof is complete by the definition of striction curve.
**Remark 26**: _Since the length of the Jacobi vector field \({\bf X}_{u}\) gives a local idea of the deviation of a congruence of geodesics, relation (15) is in accordance with the definition of striction line established in a Euclidean sense, where any point in these curves has a critical distance with respect to neighbouring geodesics, as mentioned before._
The previous result motivates the following definition.
**Definition 27**: _Let \({\bf X}:I\times\mathbb{R}\to(M,g)\) be a parametrized ruled surface in a \(3\)-dimensional Riemannian manifold \((M,g)\). The function that describes the derivative of the squared norm of the Jacobi field_
\[F\colon{\bf X}(I\times\mathbb{R})\to\mathbb{R},\quad F(p)=\tfrac{1}{2}({\bf X} _{v}\|{\bf X}_{u}\|^{2})|_{p} \tag{16}\]
_will be referred to as the Jacobi evolution function of the ruled surface. The points of the parametrized ruled surface where \(F\) vanishes are called central points._
**Remark 28**: _Therefore, striction curves lie on the vanishing set of the function \(F\). Furthermore, the function \(F\) describes the evolution of the norm of the Jacobi variational field \({\bf X}_{u}\) along each ruling. The study of the sign of \(F\) provides relevant local information on the behavior of the rulings. If \(F\) is strictly positive at a point, the norm of the Jacobi field is increasing and hence the rulings are locally spreading apart. Just the opposite happens when \(F\) is strictly negative at some point. Since \((u,v)\) is a system of coordinates associated to the parametrization (2) of the ruled surface, we will simply write \(F(u,v)\) for \((F\circ{\bf X})(u,v)\) for the sake of simplicity._
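To make Definition 27 concrete, the following symbolic sketch computes the Jacobi evolution function for a ruled surface in the flat case \((\mathbb{R}^{3},\delta)\), where \(\nabla_{{\bf X}_{v}}\) along the straight rulings reduces to \(\partial/\partial v\), and recovers the striction curve by solving \(F=0\). The base curve and ruling field below are arbitrary illustrative choices, not taken from the text.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# A sample Euclidean ruled surface X(u,v) = alpha(u) + v Z(u) (illustrative choice).
alpha = sp.Matrix([u, u**2, 0])
Z = sp.Matrix([sp.cos(u), 0, sp.sin(u)])      # unit ruling vector field

X = alpha + v * Z
Xu = X.diff(u)                                # Jacobi field along the rulings
Xv = X.diff(v)                                # unit direction of the straight rulings

# Jacobi evolution function F = (1/2) d/dv ||X_u||^2 in the flat ambient space.
F = sp.simplify(sp.Rational(1, 2) * (Xu.dot(Xu)).diff(v))
print(F)                                      # v - sin(u)

# Central points / striction curve: solve F = 0 for v.
v_striction = sp.solve(sp.Eq(F, 0), v)[0]
print(v_striction)                            # sin(u)

# Agrees with the classical Euclidean formula v(u) = -<alpha', Z'> / ||Z'||^2.
classic = sp.simplify(-alpha.diff(u).dot(Z.diff(u)) / Z.diff(u).dot(Z.diff(u)))
print(sp.simplify(v_striction - classic))     # 0
```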
We next explore the behaviour of the first and second derivatives of the function \(F\).
**Theorem 29**: _Let \({\bf X}:I\times\mathbb{R}\to(M,g)\) be a parametrized ruled surface in a Riemannian \(3\)-manifold \((M,g)\). The first and second derivatives of the Jacobi evolution function \(F\) are_
\[\frac{\partial F}{\partial v}={\rm Riem}({\bf X}_{v},{\bf X}_{u},{\bf X}_{v},{\bf X}_{u})+\|\nabla_{{\bf X}_{v}}{\bf X}_{u}\|^{2}, \tag{17}\] \[\frac{\partial^{2}F}{\partial\,v^{2}}=-g\left((\nabla_{{\bf X}_{v}}R)\left({\bf X}_{v},{\bf X}_{u},{\bf X}_{v}\right),{\bf X}_{u}\right)+4{\rm Riem}({\bf X}_{v},{\bf X}_{u},{\bf X}_{v},\nabla_{{\bf X}_{v}}{\bf X}_{u}). \tag{18}\]
**Proof.** If we differentiate \(F\) along the \({\bf X}_{v}\) direction, we obtain
\[\frac{\partial F}{\partial v}=\nabla_{{\bf X}_{v}}g(\nabla_{{\bf X}_{v}}{\bf X }_{u},{\bf X}_{u})=g(\nabla_{{\bf X}_{v}}\nabla_{{\bf X}_{v}}{\bf X}_{u},{\bf X }_{u})+g(\nabla_{{\bf X}_{v}}{\bf X}_{u},\nabla_{{\bf X}_{v}}{\bf X}_{u}).\]
As \({\bf X}_{u}\) is a Jacobi vector field along the rulings, it satisfies (3) so that
\[\frac{\partial F}{\partial v} = -g(R({\bf X}_{v},{\bf X}_{u}){\bf X}_{v},{\bf X}_{u})+\|\nabla_{{ \bf X}_{v}}{\bf X}_{u}\|^{2}\] \[= {\rm Riem}({\bf X}_{v},{\bf X}_{u},{\bf X}_{v},{\bf X}_{u})+\| \nabla_{{\bf X}_{v}}{\bf X}_{u}\|^{2},\]
and
\[\frac{\partial^{2}F}{\partial\,v^{2}} = -g\left(\nabla_{\mathbf{X}_{v}}(R(\mathbf{X}_{v},\mathbf{X}_{u})\mathbf{X}_{v}),\mathbf{X}_{u}\right)-g(R(\mathbf{X}_{v},\mathbf{X}_{u})\mathbf{X}_{v},\nabla_{\mathbf{X}_{v}}\mathbf{X}_{u})+2g(\nabla_{\mathbf{X}_{v}}\nabla_{\mathbf{X}_{v}}\mathbf{X}_{u},\nabla_{\mathbf{X}_{v}}\mathbf{X}_{u})\] \[= -g\left(\nabla_{\mathbf{X}_{v}}(R(\mathbf{X}_{v},\mathbf{X}_{u})\mathbf{X}_{v}),\mathbf{X}_{u}\right)-3g(R(\mathbf{X}_{v},\mathbf{X}_{u})\mathbf{X}_{v},\nabla_{\mathbf{X}_{v}}\mathbf{X}_{u}). \tag{19}\]
On the other hand, taking into account the symmetries of the Riemann tensor and the fact that \(\mathbf{X}_{v}\) is a geodesic vector field we obtain
\[g(\nabla_{\mathbf{X}_{v}}(R(\mathbf{X}_{v},\mathbf{X}_{u})\mathbf{X}_{v}),\mathbf{X}_{u}) = g((\nabla_{\mathbf{X}_{v}}R)(\mathbf{X}_{v},\mathbf{X}_{u},\mathbf{X}_{v}),\mathbf{X}_{u})+g(R(\mathbf{X}_{v},\mathbf{X}_{u})\mathbf{X}_{v},\nabla_{\mathbf{X}_{v}}\mathbf{X}_{u}). \tag{20}\]
Plugging (20) into (19) gives (18), which concludes the proof of the theorem.
As a consequence, in the special case where the ambient manifold has constant curvature the above derivatives of \(F\) read as follows.
**Corollary 30**: _Let \((M(k),g)\) be a \(3\)-dimensional Riemannian manifold of constant sectional curvature \(k\) and \(\mathbf{X}:I\times\mathbb{R}\to(M,g)\) be a parametrized ruled surface with \(Z\) a unitary ruling vector field. Then the first and second derivatives of the Jacobi evolution function \(F\) read_
\[\frac{\partial F}{\partial v} = -k(\|\mathbf{X}_{u}\|^{2}-\|\alpha^{\prime}(u)\|^{2}\cos^{2}\sigma_{p})+\|\nabla_{\mathbf{X}_{v}}\mathbf{X}_{u}\|^{2}, \tag{21}\] \[\frac{\partial^{2}F}{\partial\,v^{2}} = (-2k)\mathbf{X}_{v}(\|\mathbf{X}_{u}\|^{2})=-4kF(u,v), \tag{22}\]
_where \(p=\alpha(u)=\mathbf{X}(u,0)\) and \(\sigma_{p}\) is the base angle between \(\alpha^{\prime}(u)\) and \(Z\) at \(p\)._
**Proof.** By the sectional curvature relation for a space form, we have
\[\mbox{Riem}(\mathbf{X}_{v},\mathbf{X}_{u},\mathbf{X}_{v},\mathbf{X}_{u})=-k \left(\|\mathbf{X}_{u}\|^{2}-g(\mathbf{X}_{u},\mathbf{X}_{v})^{2}\right), \tag{23}\]
since \(\mathbf{X}_{v}\) is a unitary vector field. On the other hand, we know by Proposition 9 that \(g(\mathbf{X}_{u},\mathbf{X}_{v})=\|\alpha^{\prime}(u)\|\cos\sigma_{p}\) is constant along the associated ruling. Hence, (23) can be rewritten as
\[\mbox{Riem}(\mathbf{X}_{v},\mathbf{X}_{u},\mathbf{X}_{v},\mathbf{X}_{u})=-k \left(\|\mathbf{X}_{u}\|^{2}-\|\alpha^{\prime}(u)\|^{2}\cos^{2}\sigma_{p} \right), \tag{24}\]
so (17) becomes (21) when relation (24) is inserted. Let us now derive relation (22). By virtue of formula (18), the second derivative of the Jacobi evolution function \(F\) in a space form becomes
\[\frac{\partial^{2}F}{\partial\,v^{2}}=4\,\mbox{Riem}(\mathbf{X}_{v},\mathbf{ X}_{u},\mathbf{X}_{v},\nabla_{\mathbf{X}_{v}}\mathbf{X}_{u})=-4\,g(R( \mathbf{X}_{v},\mathbf{X}_{u})\mathbf{X}_{v},\nabla_{\mathbf{X}_{v}}\mathbf{X }_{u}),\]
for \(\nabla R=0\) holds. Since
\[R(\mathbf{X}_{v},\mathbf{X}_{u})\mathbf{X}_{v}=R(\mathbf{X}_{v},\mathbf{X}_{u} ^{\perp})\mathbf{X}_{v}=k\mathbf{X}_{u}^{\perp},\]
where the last equality is fulfilled as a consequence of the ambient manifold being a space form, we have
\[\frac{\partial^{2}F(u,v)}{\partial v^{2}}=(-4k)g({\bf X}_{u}^{\perp},\nabla_{{\bf X }_{v}}{\bf X}_{u}). \tag{25}\]
Using the splitting (8) given in Proposition 11 for the Jacobi field \({\bf X}_{u}\), we obtain
\[g({\bf X}_{u}^{\perp},\nabla_{{\bf X}_{v}}{\bf X}_{u}) = g({\bf X}_{u},\nabla_{{\bf X}_{v}}{\bf X}_{u})-\|\alpha^{\prime}(u)\|(\cos\sigma_{p})g({\bf X}_{v},\nabla_{{\bf X}_{v}}{\bf X}_{u})\] \[= \frac{1}{2}{\bf X}_{v}(\|{\bf X}_{u}\|^{2}), \tag{26}\]
with \(\sigma_{p}\) the base angle at \(p=\alpha(u)\), where \([{\bf X}_{u},{\bf X}_{v}]=0\) has been applied in combination with the fact that \({\bf X}_{v}\) is geodesic. Inserting (26) into (25) finally gives (22).
In the next result we integrate the differential equation (22) in the different models for positive, zero or negative \(k\).
**Theorem 31**: _Let \({\bf X}:I\times\mathbb{R}\to(M,g)\) be a parametrized ruled surface in a complete manifold \((M,g)\) of constant sectional curvature \(k\), with \(Z\) a unitary ruling vector field. Then the expression of the Jacobi evolution function \(F\) is_
\[F(u,v) = C_{1}\cos{(\sqrt{4k}v)}+C_{2}\sin{(\sqrt{4k}v)},\quad\mbox{if $k>0$}, \tag{27}\] \[F(u,v) = C_{1}+C_{2}v,\quad\mbox{if $k=0$}, \tag{28}\] \[F(u,v) = C_{1}\cosh{(\sqrt{-4k}v)}+C_{2}\sinh{(\sqrt{-4k}v)},\quad\mbox{if $k<0$}, \tag{29}\]
_where_
\[C_{1}=g_{p}(\nabla_{\alpha^{\prime}}Z,\alpha^{\prime}), \tag{30}\]
_and_
\[C_{2} = \frac{1}{\sqrt{|4k|}}\left(-k\|\alpha^{\prime}\|_{p}^{2}\sin^{2}\sigma_{p}+\|\nabla_{\alpha^{\prime}}Z\|_{p}^{2}\right),\quad\mbox{if $k>0$ or $k<0$}, \tag{31}\] \[C_{2} = \|\nabla_{\alpha^{\prime}}Z\|_{p}^{2},\quad\mbox{if $k=0$}. \tag{32}\]
**Proof.** Evaluating the Jacobi evolution function \(F(u,v)\) at \(v=0\) gives
\[F(u,0)=\mbox{$\frac{1}{2}$}{\bf X}_{v}(\|{\bf X}_{u}\|^{2})|_{p}=g_{p}(\nabla _{{\bf X}_{v}}{\bf X}_{u},{\bf X}_{u})=g_{p}(\nabla_{{\bf X}_{u}}{\bf X}_{v},{ \bf X}_{u})=g_{p}(\nabla_{\alpha^{\prime}}Z,\alpha^{\prime}),\]
which implies (30), after evaluating (27), (28) and (29), at \(v=0\). To compute \(C_{2}\), we use the Jacobi first derivative relation (21), which becomes
\[\frac{\partial F}{\partial v}(u,0)=-k\|\alpha^{\prime}(u)\|_{p}^{2}\sin^{2} \sigma_{p}+\|\nabla_{\alpha^{\prime}}Z\|_{p}^{2} \tag{33}\]
since \({\bf X}_{u}(u,0)=\alpha^{\prime}(u)\). Differentiating (27) at \(v=0\) and comparing with (33), we obtain
\[C_{2}\sqrt{4k}=-k\|\alpha^{\prime}(u)\|^{2}\sin^{2}\sigma_{p}+\|\nabla_{ \alpha^{\prime}}Z\|_{p}^{2},\]
which is none other than \(C_{2}\) in (31) for \(k>0\). The respective value of \(C_{2}\) for \(k<0\) is obtained in an analogous way by differentiating (29) at \(v=0\). Likewise, in the Euclidean context \(k=0\) we have \((\partial F/\partial v)(0)=\|\nabla_{\alpha^{\prime}}Z\|_{p}^{2}\), which gives the value of \(C_{2}\) in (32), so that the Jacobi evolution function in the flat case is \(F(u,v)=C_{1}+C_{2}v\).
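The integration carried out above can be checked independently with a computer algebra system: the initial value problem \(\partial^{2}F/\partial v^{2}=-4kF\), \(F(u,0)=C_{1}\), with \((\partial F/\partial v)(u,0)\) given by (33), can be solved directly. The short sketch below (our own verification) treats the case \(k>0\) and reproduces (27) with \(C_{2}\) as in (31).

```python
import sympy as sp

v, k = sp.symbols('v k', positive=True)
C1, D = sp.symbols('C_1 D', real=True)      # C_1 = F(u,0), D = (dF/dv)(u,0) as in (33)
F = sp.Function('F')

# Solve F'' = -4k F with F(0) = C_1 and F'(0) = D (the k > 0 case of (22)).
ivp = sp.dsolve(sp.Eq(F(v).diff(v, 2), -4 * k * F(v)), F(v),
                ics={F(0): C1, F(v).diff(v).subs(v, 0): D})
print(sp.simplify(ivp.rhs))
# C_1*cos(2*sqrt(k)*v) + D*sin(2*sqrt(k)*v)/(2*sqrt(k)):
# this is (27), since sqrt(4k) = 2*sqrt(k) and C_2 = D/sqrt(4k) as in (31).
```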
**Theorem 32**: _Let \({\bf X}:I\times\mathbb{R}\to(M,g)\) be a Sannia ruled surface in a complete manifold \((M,g)\) of constant sectional curvature \(k\), where \(\alpha\colon I\to{\bf X}(I\times\mathbb{R})\) is an associated base curve, and with a unitary ruling vector field \(Z\in\mathfrak{X}(\alpha)\). If \(s\colon I\to{\bf X}(I\times\mathbb{R})\), \(s(u)={\bf X}(u,v(u))\), is a striction curve, then_
\[v(u) = \frac{1}{\sqrt{4k}}\arctan\left(\frac{-\sqrt{4k}\,g_{p}(\nabla_{\alpha^{\prime}}Z,\alpha^{\prime})}{-k\|\alpha^{\prime}\|_{p}^{2}\sin^{2}\sigma_{p}+\|\nabla_{\alpha^{\prime}}Z\|_{p}^{2}}\right)\quad\mbox{if $k>0$}, \tag{34}\] \[v(u) = -\frac{g_{p}(\nabla_{\alpha^{\prime}}Z,\alpha^{\prime})}{\|\nabla_{\alpha^{\prime}}Z\|_{p}^{2}}\quad\mbox{if $k=0$}, \tag{35}\] \[v(u) = \frac{1}{\sqrt{-4k}}\mbox{arctanh}\left(\frac{-\sqrt{-4k}\,g_{p}(\nabla_{\alpha^{\prime}}Z,\alpha^{\prime})}{-k\|\alpha^{\prime}\|_{p}^{2}\sin^{2}\sigma_{p}+\|\nabla_{\alpha^{\prime}}Z\|_{p}^{2}}\right)\quad\mbox{if $k<0$}, \tag{36}\]
_where \(\sigma_{p}\) is the angle between the vectors \(\alpha^{\prime}\) and \(Z\) at \(p=\alpha(u)\). In addition, if \(M\) is simply connected (that is, \(M=S^{3}\) for \(k>0\), \(M=\mathbb{R}^{3}\) for \(k=0\) or \(M=\mathbb{H}^{3}\) for \(k<0\) with their standard metrics scaled by \(1/\sqrt{|k|}\) when \(k\neq 0\)), then for \(k\geq 0\), every Sannia surface has a unique striction curve. For \(k<0\), every Sannia surface has at most one striction curve._
**Proof.** As already noted in Remark 28, a necessary condition for a point \(p\) to lie on the striction curve is that the Jacobi evolution function satisfies \(F(p)=0\). Then the expressions for \(v(u)\) are directly obtained from (27), (28) and (29) respectively, taking into account (31) or (32). For \(k>0\), the expression must be understood to provide \(v(u)=\pm\pi/2\) if \(-k\|\alpha^{\prime}\|_{p}^{2}\sin^{2}\sigma_{p}+\|\nabla_{\alpha^{\prime}}Z\|_{p}^{2}=0\). In any case, the value of \(v(u)\) is periodic with period \(2\pi/\sqrt{k}\), which is exactly the period of the geodesics in \(S^{3}\), so that the striction curve is geometrically unique. Uniqueness holds trivially for \(k=0\). For \(k<0\), the injectivity of the function \(\mbox{arctanh}\) implies that there is at most one solution for \(v(u)\).
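Formulas (34)-(36) are straightforward to evaluate in practice. The following minimal sketch (function and variable names are our own) implements them and signals the possible non-existence of a central point on a ruling when \(k<0\).

```python
import numpy as np

def striction_parameter(k, g_dZ_a, norm_a, sigma, norm_dZ):
    """Value v(u) solving F(u, v) = 0 along one ruling, following (34)-(36).

    k        : constant sectional curvature of the ambient space
    g_dZ_a   : g_p(nabla_{alpha'} Z, alpha')
    norm_a   : ||alpha'||_p
    sigma    : base angle sigma_p between alpha' and Z at p = alpha(u)
    norm_dZ  : ||nabla_{alpha'} Z||_p
    Returns None when no central point exists on the ruling (possible when k < 0).
    """
    denom = -k * norm_a**2 * np.sin(sigma)**2 + norm_dZ**2
    if k > 0:
        return np.arctan(-np.sqrt(4 * k) * g_dZ_a / denom) / np.sqrt(4 * k)
    if k == 0:
        return -g_dZ_a / norm_dZ**2
    arg = -np.sqrt(-4 * k) * g_dZ_a / denom
    if abs(arg) >= 1:                       # arctanh is undefined: no solution of F = 0
        return None
    return np.arctanh(arg) / np.sqrt(-4 * k)

# Flat case: with g_p(nabla Z, alpha') = -sin(1) and ||nabla Z|| = 1, (35) gives v = sin(1).
print(striction_parameter(0.0, -np.sin(1.0), 1.0, np.pi / 2, 1.0))   # 0.8414...
# Data of Example 1 below (k = -1, unit norms, sigma = pi/2): no central point exists.
print(striction_parameter(-1.0, 1.0, 1.0, np.pi / 2, 1.0))           # None
```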
**Remark 33**: _Formula (35) is the classical formula for the striction in the Euclidean space. Formulas (34) in \(S^{3}\) and (36) in \(\mathbb{H}^{3}\) were given in [5], where the uniqueness of striction curves is also addressed. However, they obtain the results by working with the sphere or the hyperbolic space as submanifolds of \(\mathbb{R}^{4}\) with the standard Euclidean or Lorentzian metric respectively, that is, in a less intrinsic fashion. Our approach involves a differential equation that, in principle, could be analysed in arbitrary Riemannian manifolds. Note that in such a case, the equation will involve the curvature of the ambient space. Finally, the authors also claim in [5] that the striction curve always exists for \(k<0\), a fact that is wrong as the following example illustrates._
_Example 1: A ruled surface in the hyperbolic space without striction curve._ We consider in \((\mathbb{H}^{3},g=\frac{1}{z^{2}}(dx^{2}+dy^{2}+dz^{2}))\) the ruled surface with base curve \(\alpha\colon[0,2\pi)\to(\mathbb{H}^{3},g)\) determined by \(\alpha(u)=(\cos u,\sin u,1)\) and unitary ruling vector \(Z(u)=(\cos u,\sin u,0)\) defined along \(\alpha\). A straightforward computation shows that
\[\nabla_{\alpha^{\prime}}Z=-\sin u\,\partial_{x}+\cos u\,\partial_{y}.\]
The form for the Jacobi evolution function \(F\) is described by relation (29) for negative constant sectional curvature. The coefficients \(C_{1}\) and \(C_{2}\) read this time \(C_{1}=g_{p}(\alpha^{\prime},\nabla_{\alpha^{\prime}}Z)=1\) and \(C_{2}=\frac{1}{2}\left(\|\alpha^{\prime}\|_{p}^{2}\sin^{2}\sigma_{p}+\|\nabla_ {\alpha^{\prime}}Z\|_{p}^{2}\right)=1\), since \(\sigma_{p}=\pi/2\). Therefore, the Jacobi evolution function associated to such ruled surfaces is
\[F(u,v)=\cosh 2v+\sinh 2v.\]
The condition \(F=0\) must hold along a striction curve, in case one exists; this is equivalent to
\[1+\tanh 2v=0,\]
which clearly has no solution.
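The computations of this example can be reproduced symbolically from the metric alone. The sketch below (our own verification) computes the Levi-Civita connection of the upper half-space metric, recovers \(\nabla_{\alpha^{\prime}}Z\) and the coefficients \(C_{1}=C_{2}=1\), and confirms that \(F\) never vanishes.

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True)
z = sp.symbols('z', positive=True)
coords = [x, y, z]

# Upper half-space model of H^3 (curvature k = -1): g = (dx^2 + dy^2 + dz^2) / z^2.
g = sp.diag(1, 1, 1) / z**2
ginv = g.inv()

def christoffel(k_, i, j):
    """Christoffel symbol Gamma^k_{ij} of the Levi-Civita connection of g."""
    return sp.Rational(1, 2) * sum(
        ginv[k_, l] * (sp.diff(g[l, i], coords[j]) + sp.diff(g[l, j], coords[i])
                       - sp.diff(g[i, j], coords[l]))
        for l in range(3))

alpha = sp.Matrix([sp.cos(u), sp.sin(u), 1])       # base curve
Z = sp.Matrix([sp.cos(u), sp.sin(u), 0])           # unit ruling field along alpha
alpha_p = alpha.diff(u)
on_curve = {x: alpha[0], y: alpha[1], z: alpha[2]}

# Covariant derivative nabla_{alpha'} Z along the base curve.
nablaZ = sp.Matrix([
    Z[k_].diff(u) + sum(christoffel(k_, i, j).subs(on_curve) * alpha_p[i] * Z[j]
                        for i in range(3) for j in range(3))
    for k_ in range(3)]).applyfunc(sp.simplify)
print(nablaZ.T)                                    # [-sin(u), cos(u), 0]

g_c = g.subs(on_curve)                             # metric along the curve (z = 1)
C1 = sp.simplify((alpha_p.T * g_c * nablaZ)[0])    # g_p(nabla_{alpha'} Z, alpha') = 1
norm_ap2 = sp.simplify((alpha_p.T * g_c * alpha_p)[0])
norm_nZ2 = sp.simplify((nablaZ.T * g_c * nablaZ)[0])
g_aZ = sp.simplify((alpha_p.T * g_c * Z)[0])       # ||alpha'|| cos(sigma) = 0, so sigma = pi/2
sin2_sigma = sp.simplify(1 - g_aZ**2 / norm_ap2)
C2 = sp.Rational(1, 2) * (norm_ap2 * sin2_sigma + norm_nZ2)   # formula (31) with k = -1
print(C1, C2)                                      # 1 1

F = C1 * sp.cosh(2 * v) + C2 * sp.sinh(2 * v)      # relation (29) with k = -1
F_exp = sp.simplify(F.rewrite(sp.exp))
print(F_exp)                                       # exp(2*v): strictly positive
print(sp.solve(sp.Eq(F_exp, 0), v))                # []: no striction curve exists
```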
On the other hand, when the ambient manifold does not have constant curvature, the uniqueness of the striction curve cannot be guaranteed. This is shown by the following examples.
_Example 2: A ruled surface in a product manifold with several striction curves._ We consider the surface of revolution \(\psi\colon\Sigma\to(\mathbb{R}^{3},\delta)\) in \((\mathbb{R}^{3},\delta)\) generated by the rotation of the curve \(f(x)=2+\sin x\) in the \(xy\)-plane about the \(x\)-axis. Such a surface is defined by the following parametrization:
\[\mathbf{X}\colon[0,2\pi)\times\mathbb{R}\to(\mathbb{R}^{3},\delta)\] \[\mathbf{X}(u,v)=(v,(2+\sin v)\cos u,(2+\sin v)\sin u). \tag{37}\]
The expression for the induced metric \(g_{\Sigma}\) on \(\Sigma\) reads
\[g_{\Sigma}=(2+\sin v)^{2}du^{2}+(1+\cos^{2}v)dv^{2}\]
in local coordinates associated to the parametrization (37). It is a well known fact that the generating curves of a surface of revolution in \((\mathbb{R}^{3},\delta)\) are geodesics of the surface. Consider the product manifold \(\mathbb{R}\times\Sigma\) endowed with the product metric \(g=dt^{2}+g_{\Sigma}\). Since \(\Sigma\subset(\mathbb{R}\times\Sigma,g)\) is a totally geodesic slice of this Riemannian product, its second fundamental form vanishes and the generating geodesic curves of \((\Sigma,g_{\Sigma})\) are also geodesics in the ambient product manifold. Let us choose \(\alpha(u)=\mathbf{X}(u,0)\) as base curve of \((\Sigma,g_{\Sigma})\). When we consider the embedded ruled surface \(\psi\colon\Sigma\to(\mathbb{R}\times\Sigma,g)\), \(\alpha^{\prime}(u)=\mathbf{X}_{u}|_{\alpha}\) is the initial \(v=0\) value of the Jacobi vector field \(\mathbf{X}_{u}\) along the rulings, and \(Z=(1/\sqrt{1+\cos^{2}v})\mathbf{X}_{v}\) is a unit geodesic vector tangent to them. Let us compute the Jacobi evolution function \(F\) in this context. A straightforward calculation gives
\[\nabla_{\alpha^{\prime}}Z=\frac{\cos v}{(1+\cos^{2}v)(2+\sin v)}\mathbf{X}_{u}.\]
As a consequence
\[F(u,v)=g(\mathbf{X}_{u},\nabla_{\mathbf{X}_{u}}Z)=\frac{\cos v(2+\sin v)}{1+\cos^ {2}v}.\]
The striction curves are given by the solution of \(F=0\), and this happens if and only if \(v=\pi/2+k\pi\), where \(k\in\mathbb{Z}\). This means that each curve \(\alpha_{v_{k}}(u)=(u,\pi/2+k\pi)\) with \(k\in\mathbb{Z}\) is a striction curve of \(\psi\colon\Sigma\to(\mathbb{R}\times\Sigma,g)\).
_Example 3: A ruled surface in a warped product manifold with an arbitrary number of striction curves._ Given an open interval \(I\subset\mathbb{R}\) and a \(2\)-dimensional Riemannian manifold \((F,g_{F})\), consider the product \(3\)-manifold \(I\times F\) endowed with the metric \(g=dt^{2}+f^{2}(t)g_{F}\), where \(f\colon I\to\mathbb{R}\) is a smooth positive function. We will refer to the warped product manifold \((I\times F,g)\) as \(I\times_{f}F\). Consider a closed unit-speed curve \(\alpha^{F}\colon[a,b)\to(F,g_{F})\) in the fiber of \(I\times_{f}F\). Given any \(v\in I\), \(\alpha^{F}\) can be lifted in a natural way to the slice \(\{t=v\}\) as the curve \(\alpha_{v}\colon[a,b)\to I\times_{f}F\) defined by \(\alpha_{v}(u)=(v,\alpha^{F}(u))\), with \(Z=\partial_{t}\) as ruling unit vector field along \(\alpha_{v}\). As usual, the initial value of the Jacobi field \(\mathbf{X}_{u}\) on \(\alpha_{v}\) is \(\mathbf{X}_{u}|_{\alpha_{v}}=\alpha_{v}^{\prime}\). Since \(Z=\partial_{t}|_{\alpha_{v}}\) is orthogonal to the base curve, \(\mathbf{X}_{u}\) will remain orthogonal to the ruling by virtue of Proposition 9. Since \(\mathbf{X}_{u}\) is tangent to the fiber of \(I\times_{f}F\) and \(\mathbf{X}_{v}=\partial_{t}\) is orthogonal to it, it holds that
\[\nabla_{\mathbf{X}_{u}}\mathbf{X}_{v}=(\partial_{t}\log f)\mathbf{X}_{u},\]
so the corresponding Jacobi evolution function reads
\[F(u,v)=g(\mathbf{X}_{u},\nabla_{\mathbf{X}_{u}}\mathbf{X}_{v})=(\partial_{t} \log f)\|\mathbf{X}_{u}\|^{2}.\]
The solution to the equation \(F=0\) which determines the striction curves is given in this case by \(f^{\prime}(v)=0\). Hence, there will be as many striction curves as there are values at which the function \(f^{\prime}\) vanishes in \(I\). For instance, if we consider \(I=\mathbb{R}\) and \((F,g_{F})\) isometric to the two-dimensional Euclidean space \((\mathbb{R}^{2},\delta)\), for the choice \(f(t)=\sin t\), every curve of the form \(\alpha_{v_{k}}(u)=(\pi/2+k\pi,\cos u,\sin u)\) with \(k\in\mathbb{Z}\) is a striction line in the ruled surface determined by it and \(\mathbf{X}_{v}=\partial_{t}\).
|
2310.17806
|
Transporting treatment effects from difference-in-differences studies
|
Difference-in-differences (DID) is a popular approach to identify the causal
effects of treatments and policies in the presence of unmeasured confounding.
DID identifies the sample average treatment effect in the treated (SATT).
However, a goal of such research is often to inform decision-making in target
populations outside the treated sample. Transportability methods have been
developed to extend inferences from study samples to external target
populations; these methods have primarily been developed and applied in
settings where identification is based on conditional independence between the
treatment and potential outcomes, such as in a randomized trial. We present a
novel approach to identifying and estimating effects in a target population,
based on DID conducted in a study sample that differs from the target
population. We present a range of assumptions under which one may identify
causal effects in the target population and employ causal diagrams to
illustrate these assumptions. In most realistic settings, results depend
critically on the assumption that any unmeasured confounders are not effect
measure modifiers on the scale of the effect of interest (e.g., risk
difference, odds ratio). We develop several estimators of transported effects,
including g-computation, inverse odds weighting, and a doubly robust estimator
based on the efficient influence function. Simulation results support
theoretical properties of the proposed estimators. As an example, we apply our
approach to study the effects of a 2018 US federal smoke-free public housing
law on air quality in public housing across the US, using data from a DID study
conducted in New York City alone.
|
Audrey Renson, Ellicott C. Matthay, Kara E. Rudolph
|
2023-10-26T22:55:45Z
|
http://arxiv.org/abs/2310.17806v2
|
# Transporting treatment effects from difference-in-differences studies
###### Abstract
Difference-in-differences (DID) is a popular approach to identify the causal effects of treatments and policies in the presence of unmeasured confounding. DID identifies the sample average treatment effect in the treated (SATT). However, a goal of such research is often to inform decision-making in target populations outside the treated sample. Transportability methods have been developed to extend inferences from study samples to external target populations; these methods have primarily been developed and applied in settings where identification is based on conditional independence between the treatment and potential outcomes, such as in a randomized trial. This paper develops identification and estimators for effects in a target population, based on DID conducted in a study sample that differs from the target population. We present a range of assumptions under which one may identify causal effects in the target population and employ causal diagrams to illustrate these assumptions. In most realistic settings, results depend critically on the assumption that any unmeasured confounders are not effect measure modifiers on the scale of the effect of interest. We develop several estimators of transported effects, including a doubly robust estimator based on the efficient influence function. Simulation results support theoretical properties of the proposed estimators. We discuss the potential application of our approach to a study of the effects of a US federal smoke-free housing policy, where the original study was conducted in New York City alone and the goal is extend inferences to other US cities.
_Keywords:_ difference-in-differences, transportability, efficiency, causal inference
## 1 Introduction
Difference-in-differences (DID) is a popular identification strategy when studying the causal effects of large-scale social and economic policies [1, 2]. DID is appealing when: (i) randomization is not feasible, (ii) there is variation across jurisdictions and over time in terms of whether a policy was adopted, and (iii) not all variables that are confounders of the policy-outcome relationship are measured, leading to concerns about confounding bias [3]. By comparing pre- and post-policy outcomes in both the jurisdiction implementing the policy and a comparable jurisdiction without the policy, and making a so-called parallel trends assumption (i.e., that changes in average potential outcomes over time are independent of policy adoption) [1, 4], DID can identify the causal effect of the policy on the outcome, even in settings where unmeasured variables would confound either (i) a pre-post analysis or (ii) a post-policy comparison between the treated and untreated jurisdictions.
An important (often under-recognized) aspect of DID is that it identifies the average treatment effect among the treated (ATT) in the post-policy period, and not the average treatment effect (ATE) or other common parameters of interest [1]. For example, the ATT in a study of a policy raising the minimum wage is the effect of the policy on outcomes for the population living in the jurisdiction(s) that actually raised the minimum wage, and not the population living in all the jurisdictions in the study--those with and without the policy. ATT estimates resulting from a DID analysis can be informative as to whether to maintain or discontinue policies in those locations. However, a major goal of DID research is often to inform policy decisions by governments that have not yet adopted the policy of interest; in the minimum wage example, it may be of interest to inform decisions by the federal government or states with less generous minimum wage laws. Naively considering the estimated policy effects to apply to untreated jurisdictions requires the additional, strong assumption that there are no effect measure modifiers (measured or unmeasured) whose distribution varies between the treated jurisdiction(s) under study and the untreated jurisdiction(s)
to which one wishes to make inferences [5, 6]. For example, such an extrapolation would be biased if effects of the minimum wage differ by age, and age distributions differed across states.
Generalizability and transportability methods have been developed with the goal of formally extending inferences made in one population to another population in the presence of effect heterogeneity [7, 8, 9]. These methods have mainly been applied in contexts where internal validity is established based on an unconfoundedness assumption, typically achieved through a randomized controlled trial (RCT). It is well-known that real-world RCTs can deliver high internal validity, but that inferences from such studies apply only to the people participating in the RCT, which may differ from the true target population in terms of effects experienced. We define "target population" to be the population to whom inference is desired, as dictated by substantive concerns. For example, in RCTs of medical treatments, the target population may be the population that should receive treatments in practice, which may differ in important aspects from the individuals included in the trial [10, 11]. Methods exist to quantitatively extend (i.e., transport or generalize) effects estimated in RCTs to target populations other than the included study sample, possibly alleviating the well-known tradeoff between internal and external validity in such studies [12].
It is plausible that transportability methods could be used to quantitatively extend causal effects estimated from DID studies to target populations other than the treated sample, possibly alleviating the well-known tradeoff between internal and external validity in such DID studies as well. However, to our knowledge, neither identification assumptions nor estimators for transporting DID estimates have been addressed in the literature. DID presents special challenges for transportability because of the presumed existence of unmeasured confounders. Standard approaches to transportability assume that a conditional average treatment effect is constant between the sample and target population after conditioning on a measured set of covariates; if any unmeasured confounders in a DID application are also effect measure modifiers of the treatment-outcome relationship, then the existence of these unmeasured confounders creates complexities in evaluating this condition which have not, to our knowledge, been explored. Causal diagrams [13] may facilitate such an exploration, as they have been essential in understanding assumptions for identification of transported effects [6, 9, 14], but have seen limited use in DID settings [4, 15]. This disconnect may be because causal diagrams generally only capture nonparametric independence assumptions [13], whereas parallel trends is a semiparametric assumption partially restricting the functional form of the outcome distribution [16].
This paper develops a formal approach to identification and estimation of effects in a target population, based on DID conducted in a study sample that differs from the target population. This paper is framed as transportability in the sense that we assume the study sample is not a subset of the target population [8, 17], though our results can easily be extended to the case where the study sample is nested within the target population. We employ causal diagrams to understand the sampling mechanism (i.e, the model that distinguishes the study sample from the target population), and show that our results rely crucially on the assumption that unmeasured confounders are either independent of the sampling process or are not effect modifiers on the scale of the effect being estimated (in this paper, we focus on additive effects such as the ATT and ATE, but our results can be generalized to non-additive measures, such as risk ratios). Section 2 describes the observed data and preliminary assumptions, Section 3 presents key identification results linking the observed data in the sample to causal quantities in the target population, and Section 4 presents estimators (including a doubly robust estimator based on the efficient influence function) for these quantities, which are illustrated using simulation in Section 5. Section 6 concludes.
## 2 Preliminaries
Suppose we observe data on the variables \(W_{i},A_{i},Y_{i0},\) and \(Y_{i1}\) in a study sample containing \(n\) individuals or units (\(i=1,...,n\)), where \(W_{i}\) are (possibly multivariate) baseline covariates measured just before exposure, \(A_{i}\) is a binary exposure, and \(Y_{it}\) (\(t=0,1\)) are outcomes measured before (\(t=0\)) and after (\(t=1\)) exposure occurs. Hereafter, we drop the \(i\) subscript unless needed to resolve ambiguity. Suppose that the study sample is not representative of the true target population of interest, and that the latter contains \(N\) individuals (\(i=1,...,N\)). We let \(S=1\) denote membership in the study sample and \(S=0\) denote membership in the target population. We assume that outcomes are only measured in the study sample, but that treatment and covariates are measured in both the study sample and the target population. Thus, the observed data take the form \(O=\{S,A,W,Y_{0}S,Y_{1}S\}.\) Throughout, we use \(f(x|\cdot)\) to denote a conditional density if \(x\) is continuous and a conditional probability mass function if \(x\) is discrete. Calligraphic uppercase letters denote the support of a random variable.
We use \(Y_{t}(a)\) to denote a potential outcome, or the outcome that would have occurred if exposure \(A\) had been set by intervention to the value \(a\). We assume the following throughout:
**Assumption 1**.: _(No interference) \(Y_{it}(a_{i},a_{i^{\prime}})=Y_{it}(a_{i})\) for \(i\neq i^{\prime}\), with \(i,i^{\prime}\) such that \(\{S_{i},S_{i^{\prime}}\}\in\{0,1\}^{2}\)_
**Assumption 2**.: _(Treatment version irrelevance) If \(A_{i}=a\), then \(Y_{it}=Y_{it}(a)\) with \(i,i^{\prime}\) such that \(\{S_{i},S_{i^{\prime}}\}\in\{0,1\}^{2}\)_
Assumptions 1 and 2 are standard in the causal inference and transportability literature and are not specific to the DID setting. Assumption 1 requires that one unit's treatment does not impact another unit's potential outcome
in either the sample or the target. Assumption 2 requires that treatments are sufficiently well-defined that observed outcomes can stand in for potential outcomes under treatment with the observed exposures, and that versions of the treatment do not differ between the sample and target. Assumptions 1 and 2 are often referred to together as the stable unit treatment value assumption (SUTVA).
### Difference-in-differences in the study sample
Here, we give a brief review of causal identification based on DID, which we will assume is the basis of identification in the study sample. Specifically, we invoke the following assumptions, standard in the DID literature [1, 18, 19]:
**Assumption 3**.: _(No anticipation): \(Y_{0}(a)=Y_{0}\) for \(a=0,1\)_
**Assumption 4**.: _(Positivity of treatment assignment) If \(f(w|S=1)>0\) then \(f(A=0|W=w,S=1)>0\) with probability 1 for all \(w\in\mathcal{W}\)_
**Assumption 5**.: _(Parallel Trends): For \(w\in\mathcal{W}\):_
\[\mathbb{E}\{Y_{t}(0)-Y_{t-1}(0)|A=1,S=1,W=w\}=\mathbb{E}\{Y_{t}(0)-Y_{t-1}(0)| A=0,S=1,W=w\}\]
Assumption 3 states that future treatment does not impact the prior outcomes (this assumption can also be relaxed to allow anticipation up to a known time period [20]). It is well known that under Assumptions 1-5, it is possible to identify the \(W\)-conditional SATT, defined as \(\eta(w)\equiv\mathbb{E}[Y_{1}(1)-Y_{1}(0)|W=w,A=1,S=1].\) Specifically, under Assumptions 1-5 we have:
\[\eta(w) =\mathbb{E}[Y_{1}-Y_{0}|W=w,A=1,S=1]-\mathbb{E}[Y_{1}-Y_{0}|W=w,A =0,S=1] \tag{1}\] \[\equiv m_{1}(w)-m_{0}(w),\]
where we define \(m_{a}(w)=\mathbb{E}[Y_{1}-Y_{0}|W=w,A=a,S=1]\). By extension, the unconditional sample ATT (abbreviated SATT, usually the focal parameter in DID) is identified as \(\mathbb{E}[Y_{1}(1)-Y_{1}(0)|A=1,S=1]=\mathbb{E}[m_{1}(W)-m_{0}(W)|A=1,S=1].\) However, and importantly for our discussion, Assumptions 1-5 are not sufficient to identify parameters unconditional on \(A=1\), such as the sample average treatment effect (SATE), defined as \(\mathbb{E}[Y_{1}(1)-Y_{1}(0)|S=1]\). This is because parallel trends provides information about potential outcomes only among the treated group; without further assumptions there is no basis for identification of potential outcomes for the group \(A=0\). Moreover, and as is the focus of this paper, additional assumptions would be required to identify effects outside the study sample, since parallel trends and positivity of treatment assignment are conditional on \(S=1\).
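To illustrate how the identification result (1) translates into an estimator, the following simulation sketch (entirely our own construction, with arbitrary data-generating values and scikit-learn linear models) fits the outcome-difference regressions \(m_{1}\) and \(m_{0}\) in a synthetic study sample in which parallel trends holds by construction, and contrasts them over the treated units to estimate the SATT.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000

# Synthetic study sample (S = 1): U is an unmeasured confounder of A and Y.
W = rng.normal(size=n)
U = rng.normal(size=n)
A = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * W + U))))
tau = 2.0 + W                                        # conditional treatment effect (depends on W only)
Y0 = W + 2 * U + rng.normal(size=n)                  # pre-period outcome
Y1 = 1.0 + W + 2 * U + tau * A + rng.normal(size=n)  # post-period outcome; parallel trends holds
dY = Y1 - Y0

# Regression-adjusted DID: estimate m_a(w) = E[dY | W = w, A = a, S = 1], then contrast among the treated.
m1 = LinearRegression().fit(W[A == 1].reshape(-1, 1), dY[A == 1])
m0 = LinearRegression().fit(W[A == 0].reshape(-1, 1), dY[A == 0])
Wt = W[A == 1].reshape(-1, 1)
satt_hat = np.mean(m1.predict(Wt) - m0.predict(Wt))
print(satt_hat, tau[A == 1].mean())                  # both approximate the true SATT
```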
## 3 Identification of transported treatment effects
In this section we consider the task of equating a causal estimand (i.e., one specified in terms of potential outcomes) in the target population to a function of the distribution of the observed data, \(O\). Specifically, we focus on the population average treatment effect in the treated (PATT), defined as \(\mathbb{E}[Y_{1}(1)-Y_{1}(0)|A=1,S=0]\), and the population average treatment effect (PATE) defined as \(\mathbb{E}[Y_{1}(1)-Y_{1}(0)|S=0]\). We begin by introducing a motivating example, after which we introduce and discuss a set of sufficient identifying assumptions, and present identifying formulas which equal each causal estimand if the assumptions are true.
### Motivating example
As of July 30, 2018, a US Department of Housing and Urban Development (HUD) rule required all public housing authorities to implement smoke-free housing (SFH) policies banning smoking in residences. As a motivating example, consider the question, what effect did the federal SFH policy have on air quality in US public housing developments? To answer this question, we consider transporting the results from a study conducted in public housing buildings in New York City (NYC) only. Specifically, a team of investigators conducted air quality monitoring in living rooms and common areas of NYC public housing buildings, both before the federal policy went into effect (from April to July 2018), and again approximately every six months for 3 years post-policy. A DID analysis was conducted to estimate the effect of the policy on indoor air nicotine (among other measures), using as a comparison group a sample of households receiving housing assistance through a program known as Section 8, a public subsidy to supplement rental costs in private sector buildings [21]. Air quality was sampled in stairwells, hallways, and living rooms; for simplicity here we focus on stairwells. Because of concerns about systematic variation in outdoor air quality between building types, the investigators adjusted for outdoor ambient PM\({}_{2.5}\) in their DID estimates. (We note that in the
original study, building inclusion criteria were high-rise [\(>\)15 floors], large resident population [\(>\)150 units], at least 80% Black or Hispanic residents, and at least 20% younger than 18 years; for simplicity we ignore these criteria here.)
In this example, \(Y_{t}\) is a continuous variable representing log-transformed air nicotine in stairwells (where we let \(t=0\) denote April-July 2018 and \(t=1\) denote April-September 2021), \(A=1\) denotes residence in a public housing building, \(A=0\) denotes residence in a Section 8 household, \(S=1\) denotes residence in NYC, \(S=0\) denotes residence outside of NYC, and \(W\) is a continuous variable capturing outdoor ambient PM\({}_{2.5}\). Thus, if Assumptions 1-5 hold (along with correct model specification and no measurement error), the DID results in this study may be interpreted as estimates of the effect of the SFH policy on indoor air quality for NYC public housing residents only. Though the study is informative as to the effect of the policy in NYC, it is also of interest to federal policymakers to estimate the PATT, which here represents the effect of the HUD rule on air nicotine in April-September 2021 in public housing in the US outside of NYC (\(S=0\)). Moreover, it may also be of interest to assess the PATE, which here represents the effect of a hypothetical policy covering both public housing and Section 8 housing. Importantly, the estimates in this study cannot be interpreted as estimates of the PATT or PATE without additional assumptions.
### Naive approach
We begin with an approach to transportability that does not take into account the causal structure of DID (in particular, does not take into account unmeasured confounding), after which we will use causal diagrams to illustrate why this approach will usually fail. Since all identified potential outcomes in equation (1) are conditional on \(A=1\), an obvious starting point in attempting to transport effects identified through DID is to identify the PATT. Inspecting equation (1), a natural approach may be to assume that the \(W\)-conditional SATTs (conditional on each value of \(w\)) are equal between the sample and the target. If this were the case, one could identify the PATT using the following expression:
\[\mathbb{E}[Y_{1}(1)-Y_{1}(0)|A=1,S=0] =\mathbb{E}[\mathbb{E}(Y_{1}(1)-Y_{1}(0)|W,A=1,S=1)|A=1,S=0]\] \[=\mathbb{E}[m_{1}(W)-m_{0}(W)|A=1,S=0] \tag{2}\]
Specifically, in order for equation (2) to hold, the following assumptions would be sufficient:
**Assumption 6**.: _(Exchangeability of selection) \(Y_{t}(a)\perp\!\!\!\perp S|W,A=1\)_
**Assumption 7**.: _(Positivity of selection) If \(f(w|A=1,S=0)>0\) then \(f(S=1|W=w,A=1)>0\) with probability 1 for all \(w\in\mathcal{W}\)_
Assumption 6 states that, among the treated group, the distributions of potential outcomes in the sample and target are equal after conditioning on \(W\). Assumption 7 states that any covariate values that may occur in the target treated group must also be possible in the sample treated group. Assumptions 6 and 7 together imply that the \(W\)-conditional ATT is constant across settings, and hence that the PATT is identified by equation (2).
Though Assumption 6 is similar to the exchangeability of selection assumption usually invoked for transportability of the average treatment effect (ATE), it differs importantly in that it must hold conditional on \(A=1\). This is so because DID was the basis for identification in the sample, so any effects identified in the sample (whether SATT or \(W\)-conditional SATT) are conditional on \(A=1\), and the basis for transportability is therefore the constancy of the \(W\)-conditional SATTs (not SATEs) across settings. This constancy is dependent on replacing potential outcomes in the _treated_ target with those in the _treated_ sample, and for this replacement to be licensed, Assumption 6 must condition on \(A=1\).
Unfortunately, conditioning on \(A=1\) means Assumption 6 is unlikely to hold in most DID applications. To illustrate this point, Figure 1(a) displays a single world intervention graph (SWIG) depicting a common DID setting. SWIGs are similar to causal directed acyclic graphs (DAGs) in that nodes represent random variables, directed arrows represent direct effects, and conditional independencies are given by \(d\)-separation rules [22]. SWIGs extend DAGs by depicting interventions on variables as split nodes (\(A|a\) in Figure 1(a) indicates intervening to set \(A=a\)), and any variables affected by the intervention variable become potential outcomes under that intervention. Figure 1(a) represents a standard DID scenario in the sense that \(U\) represents unmeasured common causes of \(A\) and \(Y_{1}\) that would confound a cross-sectional comparison, and whose existence motivates the use of DID. In Figure 1(b), we add \(S\) with arrows into \(A\) and \(W\), depicting the assumption that distributions of these variables differ across settings. Following the convention of selection diagrams, arrows emanating from \(S\) represent "exogenous conditions that determine the values of the variables to which they point" [14]. Assumption 6 would not be expected to hold in Figure 1(b) due to the existence of the path \(S\to A\gets U\to Y_{1}(a)\), on which \(A\) is a collider. Thus, this path is opened by conditioning on \(A\) (and not closed by conditioning on \(W\)), rendering \(S\) potentially associated with \(Y_{1}(a)\) conditional on \(\{W,A=1\}\). For the same reason, \(S\) would also be associated with \(Y_{0}\) conditional on \(\{W,A=1\}\) in at least one data distribution consistent with the SWIG. Importantly, such paths will be present whenever (i) there is unmeasured confounding,
and (ii) the target and sample differ in the distribution of treatment (conditional on \(W\)). We expect (i) to always be the case (otherwise DID would be unnecessary). We also expect (ii) to be the case except in rare circumstances such as when \(A\) is experimentally assigned in both the target and the sample. Importantly, this failure of Assumption 6 occurs regardless of whether the unmeasured confounders \(U\) differ marginally in distribution between the target and sample (i.e., whether or not there is an arrow from \(S\) into \(U\) in Figure 1(b)).
### Identification via restrictions on effect heterogeneity
The analysis in the previous subsection illustrated that identification of transported effects based on exchangeability according to measured covariates (as in Assumption 6) is unlikely to be tenable in DID studies, since conditioning on \(A=1\) will typically cause unmeasured covariates \(U\) to be associated with the sampling mechanism \(S\) regardless of whether this association exists marginally. Thus, Assumption 6 would likely only be plausible if \(U\) were included in the conditioning event, but this would not aid identification since \(U\) is unmeasured. Fortunately, it is possible to identify transported effects when variables needed for exchangeability are unmeasured, so long as those variables are not also effect measure modifiers on the scale on which the causal effects are being measured, which we illustrate here.
We begin by expressing the concept that unmeasured confounders \(U\) drive our decision to use DID by stating the following Assumptions, which relate only to identification in the sample:
**Assumption 8**.: _(Latent exchangeability of treatment) \(Y_{t}(a)\perp A|W,U,S=1\) for \(t\in\{0,1\}\) and \(a\in\{0,1\}\)_
**Assumption 9**.: _(Latent positivity of treatment) If \(f(u,w|S=1)>0\) then \(f(A=a|U=u,W=w,S=1)>0\) with probability 1 for \(a\in\{0,1\}\) and \(\{u,w\}\in\{\mathcal{U},\mathcal{W}\}\)_
In a sense, Assumption 8 does not introduce any new restrictions because one can define \(U\) to be whatever variables (known or unknown) confound the cross-sectional association between \(A\) and \(Y_{t}\) and which motivate the use of DID in the first place. In contrast, Assumption 9 may be restrictive; the requirement that unmeasured confounding variables \(U\) (known or unknown) have overlapping distribution between the treated and untreated may not hold in some settings and is not necessary for identification of the SATT via DID. (As an aside, it can be shown that parallel trends will hold if (i) Assumptions 9 and 8 hold and (ii) \(U\) exerts a constant effect on \(Y_{0}\) and \(Y_{1}\) on the additive scale within levels of \(W\) among the treated [4]. However, in this paper we assume parallel trends to hold and do not consider what conditions render it plausible or not.) Next consider the following assumptions aimed at identification in the target:
**Assumption 10**.: _(Latent exchangeability of selection) \(Y_{t}(a)\perp S|W,U,A\) for \(t\in\{0,1\}\) and \(a\in\{0,1\}\)_
**Assumption 11**.: _(Latent positivity of selection) If \(f(u,w|A=a,S=0)>0\) then \(f(S=1|U=u,W=w,A=a)>0\) with probability 1 for \(a\in\{0,1\}\) and \(\{u,w\}\in\{\mathcal{U},\mathcal{W}\}\)_
Assumption 10 modifies Assumption 6 by allowing for \(U\) in the conditioning event, so that the potential outcomes are equal in distribution between the sample and the target after conditioning on \(W\), \(U\), and \(A\). Similarly, Assumption 11 requires all possible values of both \(U\) and \(W\) in the target population to also be possible in the sample. In addition to conditioning on \(U\), Assumptions 10 and 11 modify Assumptions 6 and 7 by requiring their respective conditions for both the treated and untreated, not just the treated. We can similarly assess Assumption 10 graphically: if (as is the case in Figure 1(b)) the variables \(\{W,U,A\}\) \(d\)-separate \(S\) from \(Y_{0}\) and \(S\) from \(Y_{1}(a)\), then Assumption 10 holds.
Figure 1: Single world intervention graphs (SWIGs) illustrating common scenarios relevant for (a) DID in a sample and (b) transportability of DID results to a target population. Double-headed arrows indicate unmeasured common causes between two nodes.
Because \(U\) is unmeasured, Assumptions 10 and 11 are insufficient for transportability; they render effects conditional on \(\{W,U,A\}\) constant across settings, but these effects are not themselves identifiable. However, transportability is still possible if \(U\) is not an additive effect measure modifier, which we state as follows:
**Assumption 12**.: _(U-homogeneity) \(\mathbb{E}[Y_{1}(1)-Y_{1}(0)|U,W,S,A]=\mathbb{E}[Y_{1}(1)-Y_{1}(0)|W,S,A]\)_
Note that Assumption 12 does not require that \(U\) not be a confounder, only that the treatment effect does not vary across levels of \(U\) on the additive scale. Note also the scale-dependence of Assumption 12; for example, it cannot hold for both log-transformed \(Y_{t}\) and \(Y_{t}\) on its natural scale, unless there is no effect of treatment or \(U\) is unassociated with \(Y_{t}\). The fact that Assumption 12 refers to the additive scale follows from the fact that our focus is on additive treatment effects; if effects on an alternate scale (such as risk ratios) were of interest, then Assumption 12 would need to be reformulated to express treatment effect homogeneity on that scale. If effects on the additive scale are homogeneous with respect to \(U\), then additive effects conditional on \(\{W,U,A\}\) (which are constant across settings by Assumptions 10 and 11) do not depend on \(U\), yielding identification of the PATT. This is stated in the following theorem:
**Theorem 1**.: _(Transportability for difference-in-differences) Under Assumptions 1-5 and 10-12, the PATT is identified as \(\mathbb{E}[Y_{1}(1)-Y_{1}(0)|A=1,S=0]=\mathbb{E}[m_{1}(W)-m_{0}(W)|A=1,S=0]\). Moreover, if Assumptions 8 and 9 also hold, then the identification for the PATE is given as \(\mathbb{E}[Y_{1}(1)-Y_{1}(0)|S=0]=\mathbb{E}[m_{1}(W)-m_{0}(W)|S=0]\), and for the population average treatment effect in the untreated (PATU) as \(\mathbb{E}[Y_{1}(1)-Y_{1}(0)|A=0,S=0]=\mathbb{E}[m_{1}(W)-m_{0}(W)|A=0,S=0]\)._
The proof of Theorem 1 is provided in Appendix A.1. Importantly, Theorem 1 gives identifying formulas for the PATT as well as the PATE and PATU. Notably, Assumptions 8-9 are only required for identification of the PATE and PATU, not the PATT. This is an important distinction, particularly because one of the key advantages of DID is that identification can hold without having to assume positivity for the unmeasured confounders. As an aside, the addition of Assumptions 8 and 12 to the standard identifying assumptions for DID (in our exposition, Assumptions 1-5) also renders identifiable the SATE and the sample average treatment effect in the untreated (SATU) (shown in Appendix A.2). These results are intuitive: under latent exchangeability of selection, the treatment effects in the population are weighted averages of the \(\{W,U,A\}-\)conditional treatment effects in the sample; these conditional effects do not depend on \(U\) under \(U\)-homogeneity. Moreover, because \(U\) represents all unmeasured confounders, differences between the \(W\)-conditional SATT, SATE, and SATU can only be caused by effect heterogeneity according to \(U\), which has been ruled out by Assumption 12. Therefore the \(W\)-conditional SATT, SATE, and SATU all equal one another.
Assumptions 8-12 are not the only set of assumptions that yield identification of effects in the target when \(U\) is related to the sampling mechanism, but alternative assumption sets will generally also place restrictions on unmeasured effect heterogeneity. For example, supposing that Assumptions 10-11 hold, it is possible to identify the PATT under a parallel trends assumption for both the treated and untreated counterfactual regimes (i.e., if we added to Assumption 5 an equivalent expression replacing \(Y_{t}(0)\) and \(Y_{t-1}(0)\) with \(Y_{t}(1)\) and \(Y_{t-1}(1)\)). However, this stronger parallel trends assumption also implies the \(W\)-conditional SATT, SATE, and SATU are all equal [20], implying effect homogeneity according to \(U\).
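A small simulation makes the content of Theorem 1 tangible. The sketch below (our own construction, with illustrative parameter values) generates data consistent with the selection diagram in Figure 1(b), in which \(U\) confounds treatment and is related to selection through \(A\) but is not an additive effect modifier; the identifying formula \(\mathbb{E}[m_{1}(W)-m_{0}(W)|A=1,S=0]\), computed from outcome models fit in the study sample only, then recovers the PATT.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
N = 200_000

# S affects W and A; U is unmeasured, confounds A and Y, but is not an effect modifier.
S = rng.binomial(1, 0.5, size=N)
U = rng.normal(size=N)
W = rng.normal(loc=0.5 * S, size=N)
A = rng.binomial(1, 1 / (1 + np.exp(-(W + U - S))))
tau = 1.0 + W                                         # conditional effect depends on W only (Assumption 12)
Y0 = W + 2 * U + rng.normal(size=N)
Y1 = 0.5 + W + 2 * U + tau * A + rng.normal(size=N)   # parallel trends holds within each setting
dY = Y1 - Y0

patt_true = tau[(A == 1) & (S == 0)].mean()           # knowable only because this is a simulation

# Outcome-difference models fit in the study sample only (outcomes are unobserved when S = 0).
m1 = LinearRegression().fit(W[(S == 1) & (A == 1)].reshape(-1, 1), dY[(S == 1) & (A == 1)])
m0 = LinearRegression().fit(W[(S == 1) & (A == 0)].reshape(-1, 1), dY[(S == 1) & (A == 0)])

# Identifying formula of Theorem 1: average m_1(W) - m_0(W) over the treated target units.
Wt = W[(A == 1) & (S == 0)].reshape(-1, 1)
patt_hat = np.mean(m1.predict(Wt) - m0.predict(Wt))
print(patt_hat, patt_true)                            # approximately equal
```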
### Application
Table 1 provides interpretations of each of the 12 Assumptions presented in terms of the applied question. In particular, \(U\) represents unmeasured differences between public housing and Section 8 that impact levels of air nicotine independently of the treatment, leading investigators to pursue a DID design. For example, \(U\) may represent ventilation (with \(U=1\) denoting high and \(U=0\) denoting low ventilation); we expect public housing buildings to more often have low ventilation and that ventilation impacts air nicotine, but ventilation was not measured in the study. In Figure 1(b), arrows from \(S\) into \(W\) and \(A\) depict measured environmental and societal conditions that lead to differing air quality and differing distributions of public housing vs. Section 8 residence across regions in the US. Since \(Y_{t}\) was log-transformed, Assumption 12 requires that for buildings with the same levels of outdoor PM\({}_{2.5}\) and separately for public housing and Section 8, the additive effect of a smoke-free housing policy on log-transformed air nicotine (and hence a type of multiplicative effect) is constant for buildings with high and low ventilation. Thus, Assumption 12 would be violated if high- and low-ventilation buildings had differing baseline levels of air nicotine and the effect of a smoke-free housing policy was to decrease air nicotine by a constant absolute amount (e.g., a constant reduction in parts per million).
| Assumption | Meaning in application |
| --- | --- |
| 1 (No interference) | Nicotine levels in one building are not affected by SFH policies in other buildings |
| 2 (Treatment version irrelevance) | Variation in SFH implementation/enforcement do not affect air nicotine |
| 3 (No anticipation) | Individuals did not change their behavior in anticipation of SFH |
| 4 (Positivity of treatment assignment) | There are Section 8 buildings at all levels of outdoor PM\({}_{2.5}\) seen among NYCHA buildings |
| 5 (Parallel trends) | In absence of the SFH policy, at a given level of outdoor PM\({}_{2.5}\), absolute changes in log-transformed air nicotine levels over time would have been equal between NYCHA and Section 8 buildings |
| 6 (Exchangeability of selection) | Among public housing buildings nationally, at a given level of outdoor PM\({}_{2.5}\), the distribution of potential nicotine levels is equal between NYC and the rest of the USA |
| 7 (Positivity of selection) | There are NYCHA buildings at all levels of PM\({}_{2.5}\) seen outside public housing buildings nationally |
| 8 (Latent exchangeability of treatment) | In NYC only, at a given level of outdoor PM\({}_{2.5}\) and ventilation, potential air nicotine distributions are independent of Section 8 vs. NYCHA |
| 9 (Latent positivity of treatment) | There are both NYCHA and Section 8 buildings at all levels of outdoor PM\({}_{2.5}\) and ventilation seen in the study sample |
| 10 (Latent exchangeability of selection) | Among public housing buildings nationally, at a given level of outdoor PM\({}_{2.5}\) and ventilation, the distribution of potential nicotine levels is equal between NYC and the rest of the USA |
| 11 (Latent positivity of selection) | There are NYCHA buildings at all levels of outdoor PM\({}_{2.5}\) and ventilation seen for public housing buildings nationally |
| 12 (U-homogeneity) | Building ventilation is not an additive effect measure modifier after conditioning on outdoor PM\({}_{2.5}\) and indicators of NYC vs. remaining USA and Section 8 vs. public housing |

Table 1: Interpretation of assumptions in the motivating example. Recall that in the example, \(A\) denotes residence in public housing vs. Section 8 (and hence exposure to the smoke-free housing policy vs. not); \(Y_{t}\) denotes log-transformed air nicotine in stairwells; \(W\) denotes outdoor ambient PM\({}_{2.5}\); \(S\) denotes residence in NYC vs. the remaining US; and \(U\) denotes building ventilation (and possibly other unmeasured confounders).
## 4 Estimators
In this section, we presume identification holds according to one of the sets of assumptions presented in Theorem 1, and consider the problem of estimating the statistical parameter
\[\psi(a^{*})=\mathbb{E}[\eta(W)|A=a^{*},S=0],a^{*}=0,1.\]
(See equation (1) for the definition of \(\eta(\cdot)\).) From Theorem 1, we have that under Assumptions 1-5 and 10-12, \(\psi(1)\) equals the PATT; with the addition of Assumptions 8-9, \(\psi(0)\) equals the PATU and \(E[\psi(A)|S=0]\) equals the PATE. To simplify notation in this section, let \(\Delta Y=Y_{1}-Y_{0}\) denote differenced outcomes, \(m_{a}(W)=\mathbb{E}[\Delta Y|W,S=1,A=a]\) denote the true outcome-difference model, and \(g_{a,s}(W)=f(A=a,S=s|W)\) denote the true propensity scores for treatment assignment and selection. We use \(\widehat{m}_{a}\) and \(\widehat{g}_{a,s}\) to denote estimators of those quantities, which may or may not be correctly specified. A correctly specified model is one that converges in probability to the true population moments. We also use \(P_{n}\{h(O)\}=n^{-1}\sum_{i=1}^{n}h(O_{i})\) to denote the sample average of a function \(h(\cdot)\) of the observed data.
### G-computation estimator
A g-computation estimator (also called a substitution estimator or plug-in estimator) is constructed by plugging estimators of the population quantities into the identifying formula in Theorem 1:
\[\widehat{\psi}_{gcomp}(a^{*})=P_{n}\bigg{\{}\frac{I(A=a^{*},S=0)}{P_{n}\{I(A=a^{*},S=0)\}}\{\widehat{m}_{1}(W)-\widehat{m}_{0}(W)\}\bigg{\}}\]
The estimator \(\widehat{\psi}_{gcomp}\) will be consistent and asymptotically normal if \(\widehat{m}_{a}\) is correctly parametrically specified, but not necessarily otherwise.
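To make the plug-in recipe concrete, the following is a minimal Python sketch of \(\widehat{\psi}_{gcomp}(1)\) (the PATT). The column names (`S`, `A`, `W`, `Y0`, `Y1`), the single covariate, and the linear outcome-difference models are illustrative assumptions of this sketch, not the specification used in the accompanying repository.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def gcomp_patt(df: pd.DataFrame) -> float:
    """Plug-in (g-computation) estimate of psi(1), i.e. the PATT.

    Assumed columns: S (1 = study sample, 0 = target), A (treatment),
    W (covariate), Y0 and Y1 (outcomes at the two time points).
    """
    df = df.assign(dY=df["Y1"] - df["Y0"])                 # differenced outcomes
    study = df[df["S"] == 1]                               # outcome models are fit in the study sample only
    m1 = LinearRegression().fit(study.loc[study["A"] == 1, ["W"]],
                                study.loc[study["A"] == 1, "dY"])
    m0 = LinearRegression().fit(study.loc[study["A"] == 0, ["W"]],
                                study.loc[study["A"] == 0, "dY"])
    target_treated = df[(df["S"] == 0) & (df["A"] == 1)]   # the I(A=1, S=0) units
    # Average m1(W) - m0(W) over the covariate distribution of the target-treated
    return float(np.mean(m1.predict(target_treated[["W"]])
                         - m0.predict(target_treated[["W"]])))
```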
### Inverse-odds weighted estimator
Instead, one may have more information about the functional form of the propensity scores. The following inverse-odds weighted estimator will be consistent and asymptotically normal if \(\widehat{g}_{a,s}\) are correctly parametrically specified, but not necessarily otherwise:
\[\widehat{\psi}_{iou}(a^{*})=P_{n}\bigg{\{}\bigg{[}\frac{I(A=a^{*},S=1) \widehat{g}_{a^{*},0}(W)}{P_{n}\{I(A=a^{*},S=0)\}\widehat{g}_{1,1}(W)}-\frac{ I(A=0,S=1)\widehat{g}_{a^{*},0}(W)}{P_{n}\{I(A=a^{*},S=0)\}\widehat{g}_{0,1}(W)} \bigg{]}\Delta Y\bigg{\}}\]
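The weighting step admits an equally short sketch. The version below fixes \(a^{*}=1\) (the PATT form of the displayed estimator) and estimates \(g_{a,s}(W)=f(A=a,S=s|W)\) with a single multinomial logistic regression over the four \((A,S)\) categories; that modelling choice and the column names are assumptions of this illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def iow_patt(df: pd.DataFrame) -> float:
    """Inverse-odds weighted estimate of psi(1), i.e. a* = 1."""
    dY = (df["Y1"] - df["Y0"]).to_numpy()
    joint = (2 * df["A"] + df["S"]).to_numpy()             # encode (A,S): 0=(0,0), 1=(0,1), 2=(1,0), 3=(1,1)
    fit = LogisticRegression(max_iter=1000).fit(df[["W"]], joint)
    proba = fit.predict_proba(df[["W"]])
    col = {c: i for i, c in enumerate(fit.classes_)}
    g10, g01, g11 = proba[:, col[2]], proba[:, col[1]], proba[:, col[3]]
    p10 = float(np.mean((df["A"] == 1) & (df["S"] == 0)))  # P_n{I(A=1, S=0)}
    treated = ((df["A"] == 1) & (df["S"] == 1)).to_numpy()
    control = ((df["A"] == 0) & (df["S"] == 1)).to_numpy()
    weights = treated * g10 / (p10 * g11) - control * g10 / (p10 * g01)
    return float(np.mean(weights * dY))
```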
### Doubly robust estimator
Lastly, we provide a doubly robust estimator, meaning in this case that the estimator is consistent and asymptotically normal if either \(\widehat{g}_{a,s}\) or \(\widehat{m}_{a}\) consist of correctly-specified parametric models; it need not be the case that both are correct. A doubly robust estimator for \(\psi\) is given by:
\[\widehat{\psi}_{dr}(a^{*})=P_{n}\bigg{\{}\frac{I(A=a^{*},S=1) \widehat{g}_{a^{*},0}(W)}{P_{n}I(A=a^{*},S=0)\widehat{g}_{1,1}(W)}\{\Delta Y- \widehat{m}_{1}(W)\}-\frac{I(A=0,S=1)\widehat{g}_{1,0}(W)}{P_{n}I(A=a^{*},S=0 )\widehat{g}_{0,1}(W_{i})}\{\Delta Y-\widehat{m}_{0}(W)\}\] \[\qquad\qquad\qquad\qquad+\frac{I(A=a^{*},S=0)}{P_{n}I(A=a^{*},S=0 )}\{\widehat{m}_{1}(W)-\widehat{m}_{0}(W)\}\bigg{\}}\]
In Appendix B, we show the derivation of \(\widehat{\psi}_{dr}(a^{*})\) as a "one-step" estimator based on the efficient influence function for \(\psi(a^{*})\), which implies that \(\widehat{\psi}_{dr}(a^{*})\) is asymptotically efficient. The fact that \(\widehat{\psi}_{dr}(a^{*})\) corresponds to the efficient influence function also leads to an estimator of the asymptotic variance under the assumption that \(\widehat{g}_{a,s}\) and \(\widehat{m}_{a}\) are both correctly specified, which we also provide in Appendix B. In Appendix C, the double robust property of \(\widehat{\psi}_{dr}(a^{*})\) is demonstrated, and proofs of the consistency of the g-computation and IOW estimators are provided as a by-product of the double robust property. Code to implement the proposed estimators is available at [https://github.com/audreyrenson/did_generalizability](https://github.com/audreyrenson/did_generalizability).
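Combining the two pieces gives the doubly robust estimator; again the working models and column names below are assumptions of the sketch (specialised to \(a^{*}=1\)), not the implementation in the linked repository.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_patt(df: pd.DataFrame) -> float:
    """Doubly robust (one-step) estimate of psi(1)."""
    dY = (df["Y1"] - df["Y0"]).to_numpy()
    A, S = df["A"].to_numpy(), df["S"].to_numpy()
    # Outcome-difference models m_a(W), fit among S = 1 only
    in_study = S == 1
    m1 = LinearRegression().fit(df.loc[in_study & (A == 1), ["W"]], dY[in_study & (A == 1)])
    m0 = LinearRegression().fit(df.loc[in_study & (A == 0), ["W"]], dY[in_study & (A == 0)])
    m1W, m0W = m1.predict(df[["W"]]), m0.predict(df[["W"]])
    # Joint propensity scores g_{a,s}(W) from one multinomial logistic regression
    fit = LogisticRegression(max_iter=1000).fit(df[["W"]], 2 * A + S)
    proba = fit.predict_proba(df[["W"]])
    col = {c: i for i, c in enumerate(fit.classes_)}
    g10, g01, g11 = proba[:, col[2]], proba[:, col[1]], proba[:, col[3]]
    p10 = np.mean((A == 1) & (S == 0))                     # P_n{I(A=1, S=0)}
    term1 = ((A == 1) & (S == 1)) * g10 / (p10 * g11) * (dY - m1W)   # weighted residuals, treated
    term2 = ((A == 0) & (S == 1)) * g10 / (p10 * g01) * (dY - m0W)   # weighted residuals, untreated
    term3 = ((A == 1) & (S == 0)) / p10 * (m1W - m0W)                # g-computation term in the target
    return float(np.mean(term1 - term2 + term3))
```

If the outcome models are misspecified but the propensity model is correct (or vice versa), such an estimator should remain consistent, which is the double robust property demonstrated in Appendix C.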
## 5 Simulation study
We generated \(nsims=200\) datasets of \(nobs=10,000\) each, according to the following data generating mechanism:
\[S \sim Bernoulli(0.5)\] \[U \sim Bernoulli(logit^{-1}[-1+S])\] \[W \sim Bernoulli(0.5-0.25S)\] \[A \sim Bernoulli(0.3+0.1S+0.1W+0.1U)\] \[Y_{0} \sim N(1+W+U,0.1)\] \[Y_{1} \sim N(0.5W+U+A+0.5WA,0.1)\]
To see that parallel trends holds in the simulation, note that for \(a=0,1\):
\[\mathbb{E}[Y_{1}(0)-Y_{0}(0)|A=a,S=1,W] =\mathbb{E}\{[0.5W+U+(0)+0.5W(0)]-[1+W+U]|A=a,S=1,W\}\] \[=\mathbb{E}\{0.5W-1-W|A=a,S=1,W\}\] \[=-1-0.5W\]
We applied each of the three proposed estimators for the PATT to each dataset with all models correctly specified, all models incorrectly specified, only outcomes models misspecified, and both selection and treatment models misspecified. We treated \(U\) as an unmeasured variable in all analyses. For correctly specified models, all variables except \(U\) were included with the above functional form; in misspecified outcome models we included only main terms for W and A, and in misspecified propensity models we dropped terms for S. The true PATT\(=1.28\) was calculated by generating potential outcomes for 1 million observations. Results shown in Figure 2 illustrate that IOW is biased whenever the propensity score for treatment and selection is misspecified, g-computation is biased whenever the outcome model is misspecified, and that the doubly robust estimator is approximately unbiased if either model is correct. Code to implement the simulation is available at [https://github.com/audreyrenson/did_generalizability](https://github.com/audreyrenson/did_generalizability).
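For reference, the data-generating mechanism above fits in a few lines; the seed and the reading of \(N(\mu, 0.1)\) as a mean and standard deviation are assumptions of this sketch, and it is not the code used to produce Figure 2.

```python
import numpy as np
import pandas as pd

def simulate(n_obs=10_000, seed=0):
    """One dataset drawn from the data-generating mechanism above."""
    rng = np.random.default_rng(seed)
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    S = rng.binomial(1, 0.5, n_obs)
    U = rng.binomial(1, expit(-1 + S))
    W = rng.binomial(1, 0.5 - 0.25 * S)
    A = rng.binomial(1, 0.3 + 0.1 * S + 0.1 * W + 0.1 * U)
    Y0 = rng.normal(1 + W + U, 0.1)
    Y1 = rng.normal(0.5 * W + U + A + 0.5 * W * A, 0.1)
    return pd.DataFrame({"S": S, "A": A, "W": W, "U": U, "Y0": Y0, "Y1": Y1})
```

Applying the estimator sketches from Section 4 to such datasets, with \(U\) omitted from every working model, should return values near the true PATT of 1.28 whenever the corresponding working models are correctly specified.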
## 6 Discussion
This paper introduced an approach to estimating treatment effects in a target population based on DID conducted in a study sample that differs from the target population. Under certain assumptions, some of which may be understood with the aid of causal diagrams, we can identify the PATT, PATE, and PATU. We also propose several estimators of the aforementioned effects in the target population that only require measurement of covariates and/or treatments in the target population, not necessarily outcomes. This approach may be useful when, as is the case in our motivating example involving air nicotine, measurement of outcomes in the target population (in this case, the entire U.S.) may not be feasible, and unobserved confounding is present (in this case, ventilation) but those unobserved confounders do not modify the additive treatment effect. Though our approach assumed the same set of covariates were sufficient for internal and external validity, the methods can easily be adapted to settings where the covariates needed for external validity are a subset of those needed for internal validity. Though our approach has been framed around the problem of transportability (i.e., the study sample is not a subset of the target population), our methods can easily be adapted to generalizability problems (when the study sample is nested in the target population). The approach may therefore also prove useful when a select group of jurisdictions (such as states or provinces) implement a policy, but decisions need to be made at a higher level of organization (such as national governments).

Figure 2: Results of each estimator applied to \(nsims=200\) simulated datasets of \(nobs=10,000\) each. \(bfal\) indicates all models misspecified, \(qfal\) only outcomes models misspecified, \(gfal\) both selection and treatment models misspecified, and \(true\) all models correctly specified.
It is important to note that our motivating example was greatly simplified for illustrative purposes; a full analysis to address the motivating question would likely be more complex. For example, one would need to carefully consider how the exclusion criteria may impact the plausibility of assumptions, whether other covariates would need to be measured, and whether differences in building management between public housing in NYC and other areas might violate treatment version irrelevance.
Causal diagrams have rarely been employed to understand identification in DID designs, but have been essential for elucidating the way causal structure impacts generalizability and transportability problems. By employing causal diagrams, we highlighted that the causal structure implied by unmeasured confounding that often motivates DID creates particular complexities for generalizability and transportability. Specifically, we were able to identify transported treatment effects under an assumption that the unmeasured confounders are not additive effect measure modifiers, but not necessarily otherwise.
The validity of an assumption that unmeasured confounders are not additive effect measure modifiers may be difficult to assess in practice. In our example, we possess no _a priori_ substantive information to suggest that the additive effect of a smoking ban on log-transformed air nicotine would be constant according to the building's level of ventilation (a presumed confounder). This suggests that, when transportability is of interest in DID studies, investigators should measure and adjust for as many potential confounders as possible (even if not formally needed for parallel trends) in order to reduce the number of variables for which we must make homogeneity assumptions. Future work will seek to develop bounds under violations of effect homogeneity along with methods to assess the sensitivity of conclusions to this key assumption.
|
2307.14718
|
Towards a New Interface for Music Listening: A User Experience Study on
YouTube
|
In light of the enduring success of music streaming services, it is
noteworthy that an increasing number of users are positively gravitating toward
YouTube as their preferred platform for listening to music. YouTube differs
from typical music streaming services in that they provide a diverse range of
music-related videos as well as soundtracks. However, despite the increasing
popularity of using YouTube as a platform for music consumption, there is still
a lack of comprehensive research on this phenomenon. As independent researchers
unaffiliated with YouTube, we conducted semi-structured interviews with 27
users who listen to music through YouTube more than three times a week to
investigate its usability and interface satisfaction. Our qualitative analysis
found that YouTube has five main meanings for users as a music streaming
service: 1) exploring musical diversity, 2) sharing unique playlists, 3)
providing visual satisfaction, 4) facilitating user interaction, and 5)
allowing free and easy access. We also propose wireframes of a video streaming
service for better audio-visual music listening in two stages: search and
listening. By these wireframes, we offer practical solutions to enhance user
satisfaction with YouTube for music listening. These findings have wider
implications beyond YouTube and could inform enhancements in other music
streaming services as well.
|
Ahyeon Choi, Eunsik Shin, Haesun Joung, Joongseek Lee, Kyogu Lee
|
2023-07-27T09:19:20Z
|
http://arxiv.org/abs/2307.14718v1
|
# Towards a new interface for music listening: A user experience study on YouTube
###### Abstract
In light of the enduring success of music streaming services, it is noteworthy that an increasing number of users are positively gravitating toward YouTube as their preferred platform for listening to music. YouTube differs from typical music streaming services in that they provide a diverse range of music-related videos as well as soundtracks. However, despite the increasing popularity of using YouTube as a platform for music consumption, there is still a lack of comprehensive research on this phenomenon. As independent researchers unaffiliated with YouTube, we conducted semi-structured interviews with 27 users who listen to music through YouTube more than three times a week to investigate its usability and interface satisfaction. Our qualitative analysis found that YouTube has five main meanings for users as a music streaming service: 1) exploring musical diversity, 2) sharing unique playlists, 3) providing visual satisfaction, 4) facilitating user interaction, and 5) allowing free and easy access. We also propose wireframes of a video streaming service for better audio-visual music listening in two stages: search and listening. By these wireframes, we offer practical solutions to enhance user satisfaction with YouTube for music listening. These findings have wider implications beyond YouTube and could inform enhancements in other music streaming services as well.
Ahyeon Choi Eunsik Shin Haesun Joung Joongseek Lee Kyogu Lee Department of Intelligence and Information, Seoul National University
{chah0623, eesshin, gotjs3841, joonlee8, kglee}@snu.ac.kr
## 1 Introduction
In recent years, the music streaming industry has witnessed a significant surge in popularity, with market leaders such as Spotify, Apple Music, and Amazon Music dominating the market [1]. Alongside this trend, YouTube has solidified its position as a prominent platform for diverse video content, including documentaries, daily vlogs, entertainment shows, and more. As users flocked to YouTube for various types of content, the platform naturally became a hub for music-related videos as well. Users now have easy access to a wide range of music video content on YouTube, contributing to the growing trend of consuming music through video formats [2].
Indeed, YouTube delivers a distinctive multi-sensory experience by showcasing a vast variety of music-related videos such as music videos, live performances, curated playlists with visual artworks, and cover performances, enabling users to enjoy music through a fusion of visual and auditory elements. Despite Spotify's global prominence based on subscribers, YouTube has seen an increasing number of users turning to its platform for music consumption [1, 3]. This trend is evident in regions like South Korea [4, 5] and Latin America [6], where YouTube dominates as a preferred music platform.
Given YouTube's current dominance in music consumption, there's a need for a more comprehensive investigation into this behavior and patterns. Earlier studies have explored YouTube's role as a streaming service [7], compared its usability with Spotify [3], and analyzed music consumption behavior on YouTube [8]. However, the elements contributing to YouTube's rise as a primary music platform and the actual levels of user satisfaction are still not fully understood, indicating a need for further user-focused research.
Thus, this study aims to conduct in-depth interviews with music consumers on YouTube, examining their behavior, comparing the advantages and disadvantages of using YouTube as a music consumption tool with other music streaming services, and reevaluating YouTube's standing as a tool for music consumption. Additionally, we propose a new interface design that enhances the usability of music-related searches and listening. This research was conducted independently by our team, with no financial backing or data provided by YouTube or any associated organization. With our study, we aim to contribute to the ongoing conversation on YouTube's role as a music platform and offer insights into developing an innovative interface that elevates the user's music listening experience.
## 2 Related work
In the field of music information retrieval (MIR), research on music streaming services includes studies on improving recommendation algorithms [9, 10, 11], understanding user behavior and patterns of use [12, 13, 14, 15, 16], and studying user experiences and interfaces [17, 18, 19, 20, 21]. These studies aimed to enhance overall user satisfaction and engagement with music streaming services by providing personalized recommendations, improving the user interface, and identifying the factors that influenced user behaviors and preferences.
Compared to other music streaming services, research on music consumption through YouTube has only recently gained attention due to the platform's relatively late recognition as a music consumption platform. Early studies on YouTube's music videos have revealed that music is the most consumed content category on YouTube, and researchers have classified the types of YouTube's music content while analyzing their differences [7]. Furthermore, [3] reported that YouTube is used as frequently as Spotify and is perceived as superior to Spotify in terms of its shareability and accessibility.
As YouTube's influence in music consumption grows, recent research has examined three types of online music practices according to the role YouTube plays: default, soundtrack, and complementary platforms [8]. Authors report that one of the main results is that YouTube's music videos are listened to, rather than watched. However, the significance of visual elements in music listening can differ based on the genre or content. Additionally, it is worth mentioning that the participants in the study reported only occasional use of YouTube for music, which may limit the generalizability of the findings to other contexts, such as frequent YouTube users.
Therefore, this study aims to examine the usage behavior of users who use YouTube more than three times a week in everyday situations, report on the characteristics of the subject group, and classify the content used. In addition, we draw out advantages and disadvantages through usability tests to newly consider the role of YouTube as a music-listening tool. Moreover, the study proposes interface improvement measures to fill the research gap on "how to improve the music listening environment through YouTube." Considering the diverse range of devices used to access YouTube, including mobile devices, PCs, tablets, and TVs, we primarily focus on the mobile device, taking into account its widespread usage among participants.
## 3 Methods
### Participant
We recruited 27 Seoul National University students (12 males, 15 females) aged 18 or older (mean=23.40, sd=3.13). Our recruitment focused on participants who listen to music on YouTube at least three times a week while excluding those who rely solely on YouTube Music without using YouTube. This approach allowed us to concentrate on the distinct characteristics of consuming music through videos on YouTube, which encompass both visual elements and audio. Participants were compensated with a cash payment of KRW 10,000. Ethics approval was obtained from the Institutional Review Board of SNU.
### Study Design
Informed by previous studies' methodologies and the specific needs of our research, we designed our interview in two stages: a preliminary questionnaire [22, 23], followed by a semi-structured interview [22, 23, 24] that includes a brief ice-breaking session [25]. The preliminary questionnaire collects demographic data and music consumption habits of the participants, such as their academic majors, relationship with music, frequency and duration of YouTube use for music, and specific contexts of YouTube music consumption (excluding YouTube Music). Additionally, we also sought information regarding their subscription to YouTube Premium or usage of YouTube Music.
Following an ice-breaking session, the semi-structured interview proceeded with three main segments (Table 1). First, we explored participants' regular music consumption habits, such as frequency, platform preference, and content preferences. Second, participants were asked to demonstrate the process of searching and listening to music on YouTube, which allowed for a natural exploration of the platform's advantages and disadvantages in comparison to other music streaming services. Third, participants utilized empty interface templates on iPad to design a new interface for music searching and listening, enabling them to customize the screen ratio, functions, buttons, and more. Each interview, lasting roughly 30-40 minutes, was recorded and transcribed using NAVER Clova Note, with participant consent.
### Analysis
We identified the overarching themes and trends of the participants' responses and organized the data accordingly. The data were categorized into the following topics: primary streaming service, weekly listening time, music listening type, preferred music genres or content on YouTube, situations YouTube is used for music listening, reasons for using YouTube as a music consumption tool, music search methods on YouTube, criteria for video selection, advantages and disadvantages of YouTube compared to other services, and a summary of interface proposal sessions.
We generated a list of keywords for the qualitative analysis, which includes the advantages and disadvantages of using YouTube for music listening and user interface proposals from interviewees. To validate our classifications and identify commonalities, we repeated the process of analysis and consensus-building three times among the researchers similar to the analysis process in [22, 26, 27].
| Phase | Requirements |
|---|---|
| Verbal | Asking about participants' music listening habits and preferences along with the motivation to use YouTube. |
| Usability | Comparison of YouTube and other music streaming services and feedback on the interface of YouTube for searching and listening to music. |
| UI proposal | Propose YouTube interface design for music listening freely, and explain yourself. |
Table 1: Three steps of semi-structured interview
Grounded theory [28] and content analysis [29] were also used as a guide throughout the process of keyword generation. We also referred to previous qualitative studies in the field of MIR [8, 15, 24, 26, 27] to guide our data analysis, as well as to ensure consistency in our reporting and citation practices. Finally, we thoroughly reviewed the keyword lists to extract the main findings of how YouTube is used as a music consumption tool by the participants based on the method of theme analysis [30].
To better understand the participants' interface design proposals, we compared the proposals from the participants and reviewed the summary of the interface designing sessions. From this process, we synthesized useful design implications and arrived at wireframe designs for the music search and listening screens.
## 4 Result
### Behavior and Characteristics of Music Consumption on YouTube
As the current study investigates interview data from a sample of 27 users, it is important to take into account the unique characteristics of this group. Therefore, information concerning the participants' music consumption behaviors and preferences was gathered through preliminary surveys and interviews. The results showed that participants typically used YouTube about five times per week (mean = 4.89, sd = 1.93), for a total of approximately five hours (mean = 5.35, sd = 3.76), to listen to music while engaging in various activities, such as studying, relaxing, commuting, and exercising. No one specialized in music. The majority of participants used the free version of YouTube and did not subscribe to YouTube Premium. Additionally, some participants supplemented their music listening with other platforms such as YouTube Music, Melon, Spotify, and Genie.
Participants enjoyed a diverse range of music genres on YouTube. The top five genres mentioned most frequently were OST (original soundtrack of movies or dramas, 13 times), pop (12 times), K-pop (11 times), classical music (7 times), and indie music (7 times). Other genres mentioned in order of frequency include J-pop, ballads, old-fashioned music (mid-20th-century Korean pop and ballads), jazz, rock, band music (with live instrumentation and elements of rock, pop, and indie), new age, hip-hop, EDM, and R&B. Music content can be broadly divided into three categories: 1) Official music content such as music videos, 2) Live music content such as performances, concerts, festivals, and 3) User-generated content such as playlists and cover videos. In terms of frequency of mention, the order was 3-2-1 (27 times, 25 times, 8 times) respectively.
### Advantages of using YouTube for music listening
Alongside our anticipation that YouTube serves as an audiovisual music listening tool, we found that YouTube possesses various strengths compared to other streaming services (Table 2). Musical diversity was the most frequently mentioned category, with two main points: the availability of non-official music in addition to official releases, and the diversity of playlist content compared to other streaming services.
_With streaming services, I can only listen to official releases, **but with YouTube, I can listen to not only official releases but covers and other user-generated content**._ (P11)
_Unlike other services, **YouTube's diverse playlists prevent repetitive listening by offering a wide range of songs within similar genres.**_ (P26)
Also, convenience was mentioned as an advantage, with familiarity, accessibility, no subscription fee, and user customization.
_I use it because I'm used to it. I've used Melon and YouTube Music before, **but I settled with YouTube because it was more convenient.**_ (P12)
_Since YouTube is free, there's no need to pay for other services._ (P8)
As for user interaction, most users mentioned the recommendation algorithm itself and the ability to view other users' opinions through comments.
_The recommendation algorithm is good. I often find great new songs through it._ (P17)
_It's good to be able to see other people's opinions and sympathize by reading comments._ (P4)
Lastly, the visual content of thumbnails and videos was mentioned as an advantage.
_I can use both sight and sound when listening to music with videos._ (P4)
_When I play playlists with thumbnails, like at a house-warming party, it adds to the atmosphere, and it's good for interior purposes too._ (P6)
While we initially expected the inclusion of visual elements to be a significant advantage of YouTube, the participants' usage patterns proved more diverse. Some appreciated the visual components, while others turned to YouTube strictly for audio during activities like work or sleep (P2, P5, P22, P23, P24). These observations align with prior research [8], showing the varied ways users utilize YouTube for music. Although some mentioned listening to audio with the screen off (P7, P23, P24), we excluded this aspect from our analysis as it's a feature exclusive to YouTube Premium subscribers.

| Category | Keyword | Freq | Total |
|---|---|---|---|
| Musical Diversity | official soundtrack + \(\alpha\) | 14 | 23 |
| | playlist | 9 | |
| Convenience | familiarity | 4 | 15 |
| | accessibility | 4 | |
| | subscription fee | 4 | |
| | customizing | 3 | |
| User Interaction | recommendation | 10 | 13 |
| | comments | 3 | |
| Visual Contents | thumbnail | 6 | 10 |
| | video | 4 | |
| etc. | etc. | 1 | 1 |

Table 2: Pros. keywords of usability test
### Disadvantages of using YouTube for music listening
The inconveniences and disadvantages of listening to music on YouTube were categorized into seven major themes (Table 3). The most frequently mentioned inconvenience was related to user interaction, with many complaints about the inconvenience of filtering the desired information while exploring recommended videos and comments.
_In other music streaming services, genre separation is clearly done, **but YouTube recommends based on the videos you watch, so there is a tendency to lean towards a specific genre.**_(P26)_
_When watching music videos and reading comments, it's hard to find South Korean users' reactions when most comments are in foreign languages._ (P11)
The second most frequently mentioned disadvantage was related to screen manipulation, such as fixed thumbnails, the ratio of videos, and accidental button presses.
_It would be nice if I could reduce the screen ratio. I want to watch the small screen when exercising or doing other things._ (P8)_
_There are cases where I accidentally press the Shorts button and the music stops._ (P22)
Regarding playlists, users complained about not having timestamps for individual songs, the content of playlists made by others, the process of creating playlists themselves, and the mixes provided by YouTube.
_It's inconvenient to switch to another song **if there is no timestamp in the playlist.**_ (P18)_
_Since playlists are made by others, there are few cases where all songs suit my taste, and there are mediocre songs in between._ (P18)_
_It's inconvenient to save songs one by one in my library. It feels slow every time I press the save button, and it is a hassle to press the button several times to save._ (P27)
Some users mentioned the lack of information about album or song information and lyrics, as well as the lack of log information about previously watched videos as a disadvantage.
_It's hard to find album or song information, and it's frustrating not knowing the information of the concert I am watching._ (P7)_
_When I use the autoplay function, it is hard to find which song I thought I liked._ (P17)_
Despite the existence of autoplay and volume control features on YouTube, user complaints arose from a lack of information about these functions. Some users viewed them as drawbacks, unaware of their existence or location. Specifically, enabling autoplay requires navigating to the settings, while fine-tuning volume necessitates physical device button use. This complexity may have heightened user frustration and dissatisfaction.
_I wish there was an autoplay button._ (P12)_
_I want to make minute adjustments, but even if I increase the volume level by just one, the volume suddenly becomes too loud._ (P16)_
The quality of the content is related to the audio or video quality. Some responses showed low reliability in audio quality when used for music listening.
_There are cases where the sound quality is poor in content uploaded by individual users._ (P12)
Aside from that, there were four mentions of concerns about mobile data usage due to large video data size (P2, P9, P11, P25), one mention of discomfort with provocative titles (P23), and one mention of an error when randomly playing saved videos (P14). There were nine mentions related to ads or background playback (P2, P5, P6, P8, P9, P12, P17, P19, P25), but these were excluded from the analysis since they can be resolved with a YouTube premium subscription.
### User Feedback for Interface Improvements
We analyzed users' explanations and drawings of the searching and listening screens, categorizing their demands for interface improvement into three categories: addition, modification, and deletion. These categories, along with relevant quotes, provide specific descriptions of users' interface improvement suggestions.
The first is to request the addition of new information or functions that are currently absent on YouTube, such as a new button or tab, new sorting and filtering criteria, or more information about contents and songs.
_It would be great if there was a detailed search button under the search bar, where you could search by year, album, composer, etc._ (P7)

| Category | Keyword | Freq | Total |
|---|---|---|---|
| User Interaction | comments | 12 | 21 |
| | recommendation | 9 | |
| Manipulation | button / tap | 10 | 17 |
| | display ratio | 7 | |
| Playlist | playlist contents | 4 | 10 |
| | making playlist | 4 | |
| | mix playlist | 2 | |
| Section Search | timestamp | 6 | 9 |
| | playback bar | 3 | |
| Lack of Info. | song information | 5 | 8 |
| | log information | 3 | |
| Underutilization | replay | 5 | 8 |
| | volume control | 3 | |
| Contents Quality | sound quality | 5 | 6 |
| | video quality | 1 | |
| etc. | (video) data size | 4 | 6 |
| | etc. | 2 | |

Table 3: Cons. keywords of usability test
_It would be nice if I can choose options between "all videos recommended" and "music-related recommendations" in the recommendation section._ (P9)
The second is to modify the existing functions or configuration of YouTube to increase operability and efficiency when searching and listening to music, such as changing the ratio of various spaces on the interface or changing the positions of existing buttons and information.
_It would be great **if the thumbnail (album cover) could be smaller,** and the title, artist, etc. could be displayed next to it._ (P12)
_It would be nice to adjust the ratio of the comment box and recommended videos so that you can view them together._ (P4)
Lastly, there were cases where demands were made to remove things from the existing YouTube interface that are not directly related to music searching or listening.
_We don't need the buttons for uploading videos on the bottom menu bar. It would be great if we could freely configure this menu bar._ (P15)
_If we could hide the buttons we don't use often and press the detailed button to show them, it would be neat._ (P19)
## 5 Findings and Discussion
### Role of YouTube as a music streaming service
Users desire an improved interface for YouTube to maximize its potential as a music consumption tool. We have identified five key roles that YouTube plays in music listening, and based on this, we propose design implications to enhance the user experience.
#### 5.1.1 Exploring musical diversity
YouTube offers users a wide variety of musical genres, artists, and songs to discover and explore, including rare or unreleased music not found on other streaming services. Users can also enjoy various versions of the same song through covers or live performances by different artists.
**Design Implication:** To improve search efficiency, music content should be categorized by genre, artist, and mood, and album information such as lyrics should be provided to reduce the need to search for information on other platforms.
#### 5.1.2 Sharing unique playlists
YouTube creators can create and share playlists, simplifying the search process and enabling them to select playlists based on keywords like mood or activity (e.g. warm spring day, driving playlist).
**Design Implication:** Playlists should provide song and timeline information, and allow users to switch to the next song with a button. Allowing users to customize songs within the playlist, such as adding or removing them, and saving these changes, would enhance the playlist's functionality.
#### 5.1.3 Providing visual satisfaction
By offering sensory satisfaction beyond just audio, YouTube's visual content enhances the music listening experience. Users appreciate being able to observe the musicians' expressions, gestures, and style, and sometimes even watch music videos solely for visual gratification like repetitive animations or thumbnail images paired with the music.
**Design Implication:** The screen size and ratio of the video should be customizable based on the content's characteristics and users' listening environment. For example, users would like the option to decrease the video screen size in public settings or enlarge it to focus on a particular idol member or musician's finger movements.
#### 5.1.4 Facilitating user interaction
YouTube's likes, dislikes, subscriptions, and comments features enable users to interact with the platform and foster a sense of community, resulting in a more engaging music listening experience. Additionally, the recommendation algorithm lets users explore new content and see how others react to music, which is a key motivation for users to use YouTube.
**Design Implication:** Users should be able to sort and filter recommendations and comments based on various criteria, such as timeline, keyword, lyrics or the most frequently mentioned, to expand YouTube's social function. Pinning specific comments that users like or refer to frequently could also reduce search time.
#### 5.1.5 Allowing free and easy access
YouTube's accessibility, cost-effectiveness, and cross-device compatibility make it a convenient option for users to listen to music in various situations. This versatility has led some users to cease subscribing to other streaming services. Primarily, the appeal lies in the free access to a diverse library of music videos, live performances, covers, and user-generated content, resonating with users disinclined to pay for music subscriptions.
**Design Implication:** While device-specific interfaces are important, consistent usability is crucial to prevent user confusion or inconvenience.
It is crucial to acknowledge that while some of the proposed features (e.g., album and artist filters, lyrics, smaller screen mode) have already been implemented in the YouTube Music App, users still rely on YouTube to access a diverse range of music videos that are not available on the YouTube Music App. Therefore, our design implications hold the potential to differentiate YouTube from YouTube Music by catering to the experience of video streaming alongside music consumption.
### UI for Audio-Visual Music Streaming Platform
Taking into account the role of YouTube as a music streaming service, the needs of its users, and the interface designs of typical music streaming services, we have developed an ideal wireframe for an audio-visual music listening platform. It consists of two stages: (a) searching and (b) listening screens (Figure 1).
**(a) Searching** To display diverse music content tailored to users' interests, we added 1) advanced search functionality to the top keyword search bar, allowing users to filter by era, genre, artist, and other details. Additionally, we added 2) a button to easily add multiple videos to a user's playlist, and 3) reduced the thumbnail size to show more videos on one screen. Next to the thumbnail, we included 4) information about the songs in the video, and if the video is a playlist, we added 5) a timeline and information about the included songs. Finally, we made 6) the bottom menu buttons customizable, allowing users to remove buttons when they feel unnecessary and create their own menu.
**(b) Listening** While maintaining the current structure of the interface, we adjusted the layout and added new features to enhance the music listening experience. 1) We added a toggle button that allows users to switch between video watching and music listening. Users can use their fingers to 2) zoom in or out of the video to adjust its size. Previously, users had to click the video to access playback and skip buttons, but we located 3) the playback bar and related functions at the bottom of the video. We also made the 4) repeat button more visible. We added 5) a toggle button to expand or shorten album information or lyrics, and made 6) comments expandable in a similar manner, with a function for users to pin comments they want to keep visible. We added 7) a filtered recommendation feature to suggest reduced-size videos based on specific user-selected filters. This allows for easier exploration of related content through horizontal scrolling.
The findings of this research hold potential for application across a variety of streaming services. Features such as advanced search functions, customizable menus, and enhanced playlist capabilities can improve user engagement and satisfaction. Effective presentation of music-related information enriches the listening experience, while additional functionalities such as video zoom or comment pinning foster a personalized user experience. These findings can significantly benefit YouTube, as well as aid other music streaming platforms like Spotify, Apple Music, and Amazon Music, and video streaming services including music videos, like Bilibili and Vimeo, in optimizing their interfaces according to their unique characteristics and users' needs.
## 6 Conclusion
This study explored the music listening behaviors of YouTube users and analyzed the advantages and disadvantages of YouTube as a music streaming service. We proposed new interface wireframes to improve usability and re-examined YouTube's role as a tool for music listening. Undoubtedly, there are constraints in actualizing the proposed interface fully on YouTube. Nevertheless, some suggestions on improving visual satisfaction, comment exploration, and a toggle button to switch the interface between video-watching and music-listening modes could be considered in designing the overall interface of video streaming platforms.
Our study has limitations owing to its small sample of Korean users, and it is essential to consider several important factors. Firstly, our interface design primarily focused on mobile environments, which may limit its direct applicability to other devices like PCs and TVs. Secondly, the relatively narrow age range and educational levels of our participants may affect the generalizability of our findings. Thirdly, the absence of comparative studies on similar video platforms and services hinders our understanding of YouTube's performance as a video streaming service. However, these limitations present opportunities for future research to explore and address the diverse needs of users across different devices, demographics, and video services. Overall, our study provides valuable insights and paves the way for further advancements in user-centered design for music streaming services.
Figure 1: A wireframe of YouTube UI for music listening
|
2306.14431
|
Moments of Parton Distributions Functions from Lattice QCD at the
Physical Point
|
We present a Lattice QCD calculation of the second Mellin moments of the
nucleon axial, vector and tensor parton distribution functions (PDFs). The
calculation is performed at the physical pion mass with two different lattice
spacings, and includes both zero and non-zero nucleon momenta. In our
preliminary analysis, we identify operators that greatly reduce excited-state
contamination.
|
Marcel Rodekamp, Michael Engelhardt, Jeremy R. Green, Stefan Krieg, Stefan Meinel, John W. Negele, Andrew Pochinsky, Sergey Syritsyn
|
2023-06-26T05:59:49Z
|
http://arxiv.org/abs/2306.14431v2
|
# Moments of Parton Distributions Functions from Lattice QCD at the Physical Point
###### Abstract
We present a Lattice QCD calculation of the second Mellin moments of the nucleon axial, vector and tensor parton distribution functions (PDFs). The calculation is performed at the physical pion mass with two different lattice spacings, and includes both zero and non-zero nucleon momenta. In our preliminary analysis, we identify operators that greatly reduce excited-state contamination.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023
## I Introduction
Parton distribution functions (PDFs) have proved to be a valuable tool in describing the structure of hadrons and making predictions for high-energy processes at hadron colliders. First-principles calculations of PDFs are very difficult due to their non-perturbative nature. Lattice QCD provides a way of calculating (non-perturbative) observables by introducing a four-dimensional Euclidean hypercubic lattice to discretise the space-time, serving as a regulator. The path integral is then calculated with a Monte Carlo algorithm.
In the past years the Lattice QCD community has made tremendous progress in calculating PDFs by directly assessing their Bjorken-\(x\) dependence from the leading-twist contribution to bilocal matrix elements at high momentum. In this work, we concentrate on the second Mellin moment \(\langle x\rangle\)[1, 2, 3] via matrix elements of local twist-two operators, which does not require large momenta to suppress higher-twist contributions and thus simplifies the numerical estimation. We aim to understand the excited-state contamination and identify a set of matrix elements that have particularly low contributions from excited states. This requires the study of matrix elements at finite but small momenta as some have contributions only at non-zero momentum. The study of forward matrix elements
of local operators at non-zero momentum is uncommon but has been done in references [4; 5; 6].
This contribution is organized as follows. In section II we explain our analysis chain and discuss in detail which operators are considered. In section III we show our preliminary results of the different steps of the analysis and discuss their significance in terms of excited-state contamination. Last, in IV we summarize our findings.
## II Method
Moments of PDFs can be obtained by calculating forward matrix elements of local leading twist operators [7; 8; 9; 10]
\[\mathcal{O}^{X}\equiv\mathcal{O}^{X}_{\{\alpha,\mu\}}=\overline{q}\Gamma^{X} _{\{\alpha}\overset{\leftrightarrow}{D}_{\mu\}}q, \tag{1}\]
where \(X=V,A,T\) indicates the vector, axial or tensor channel leading to unpolarized, polarized or transversity PDFs respectively. We symmetrize the indices and take the traceless part, denoted by \(\{\alpha,\mu\}\), and restrict ourselves to the isovector channel, \(\mathcal{O}^{X}(q=u)-\mathcal{O}^{X}(q=d)\), to avoid calculating disconnected diagrams. The left-right acting covariant derivative \(\overset{\leftrightarrow}{D}\) is constructed on the Euclidean lattice by finite differences of neighbouring points connected by an appropriate gauge link \(U_{\mu}(\mathbf{x})\). One can show that the forward matrix element is proportional to the desired moment \(\langle x\rangle\)[1; 10; 11]
\[\mathcal{M} \equiv\,\langle N(p)|\mathcal{O}^{X}_{\{\alpha,\mu\}}|N(p)\rangle \tag{2}\] \[=\langle x\rangle\,\overline{u}_{N(p)}\Gamma^{X}_{\{\alpha}\text{ i}p_{\mu\}}u_{N(p)},\]
where \(p\) is the nucleon's 4-momentum.
In the continuum, the operators (1) are classified according to irreducible representations of the Lorentz group, which in Euclidean space is replaced by the orthogonal group [9]. On the lattice, the orthogonal group is further reduced to the hypercubic group \(H(4)\). This explicit breaking causes some operators to mix with lower-dimensional ones; however, for a one-derivative operator as used here this does not happen. Still, the Euclidean irreducible representations to which our operators belong split into multiple hypercubic irreps; we use the typical notation where \(\tau_{a}^{(b)}\) denotes the \(a\)th \(b\)-dimensional irrep. Each of the latter has a different renormalization factor, so we construct operators with definite hypercubic irreducible representation to keep the renormalization diagonal [9]. In practice this means for each \(\tau_{a}^{(b)}\) we have to calculate the renormalization factor \(Z_{\tau_{a}^{(b)}}\) to multiply matrix elements of an operator that transforms irreducibly under it; consequently, we denote \(Z_{\mathcal{O}^{X}}\equiv Z_{\tau_{a}^{(b)}}\).
The matrix element of (2) can be obtained from the lattice by considering ratios of three-point and two-point correlation functions [10; 11]. The two-point correlation function \(\text{C}_{\text{2pt}}\left(\tau\right)\) measures the correlation of a nucleon source and a nucleon sink separated by a time \(\tau\), while the three-point correlation function \(\text{C}_{\text{3pt}}^{\mathcal{O}^{X}}\left(T,\,\tau\right)\) separates the source and sink nucleons by a time \(T\) and inserts an operator of interest, here \(\mathcal{O}^{X}\), at time \(\tau\). For a graphical representation consider figure 1. The matrix element is then obtained in the limit
\[\mathcal{M} =\lim_{T-\tau,\tau\rightarrow\infty}R(T,\tau) \tag{3}\] \[\equiv\lim_{T-\tau,\tau\rightarrow\infty}\frac{\text{C}_{\text{ 3pt}}^{\mathcal{O}^{X}}\left(T,\,\tau\right)}{\text{C}_{\text{2pt}}\left(T \right)}. \tag{4}\]
Doing a spectral analysis of this ratio reveals the matrix element of the ground state
\[R(T,\tau)=\mathcal{M}+\text{Excited States}. \tag{5}\]
Expanding further, including the first excited state, shows the dominant excited-state contamination
\[\mathcal{M}\frac{1+c_{1}e^{-\frac{T}{2}\Delta E}\cosh\left[\left(T/2-\tau \right)\Delta E\right]+c_{2}e^{-T\Delta E}}{1+c_{3}e^{-T\Delta E}}, \tag{6}\]
where we use \(\Delta E=E_{1}-E_{0}\). Naturally, one would consider large \(T,\tau\) approaching the limit of (4). The statistical noise increases with \(T\), implying increased numerical costs for this approach. The constants \(c_{i}\) depend on the operator \(\mathcal{O}^{X}\); thus, if they appear to be small or obey some symmetry, the excited-state contamination of the matrix element is further reduced.
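Equation (6) translates directly into a fit function for the planned two-state analysis; in the following minimal Python sketch the function name, argument conventions and units are illustrative assumptions, and the function can be handed to any standard least-squares fitter.

```python
import numpy as np

def ratio_two_state(T, tau, M, dE, c1, c2, c3):
    """Two-state model of the ratio R(T, tau), eq. (6): the ground-state
    matrix element M dressed by the leading excited-state contributions
    with energy gap dE = E1 - E0 (T, tau and 1/dE in the same units)."""
    numerator = (1.0 + c1 * np.exp(-T * dE / 2.0) * np.cosh((T / 2.0 - tau) * dE)
                 + c2 * np.exp(-T * dE))
    denominator = 1.0 + c3 * np.exp(-T * dE)
    return M * numerator / denominator
```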
Considering the sum of ratios
\[S(T,\tau_{\text{skip}})=a\sum_{\tau=\tau_{\text{skip}}}^{T-\tau _{\text{skip}}}R(T,\tau) \tag{7}\] \[=\mathcal{M}\left(T-\tau_{\text{skip}}\right)+c+\text{Excited States},\]
excited-state contamination is exponentially suppressed with \(T\) compared to \(\nicefrac{{T}}{{2}}\) for the ratios themselves [12; 13]. Increasing \(\tau_{\text{skip}}\) reduces excited-state contamination, here typically a value of \(\nicefrac{{\tau_{\text{skip}}}}{{a}}=1,2,3\) is enough. In order to extract the matrix element from the ratio sums, up to excited states, we can either fit the slope according to (7) or use finite differences
\[\mathcal{M}=\frac{S(T+\delta,\tau_{\text{skip}})-S(T,\tau_{\text{skip}})}{ \delta}. \tag{8}\]
Due to the available data we use a combination of \(\nicefrac{{\delta}}{{a}}\in\{1,2,3\}\) depending on whether a neighbour \(T+\delta\) is available.
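The bookkeeping of eqs. (7) and (8) is equally compact; in this sketch the dictionary holding the measured ratios, the lattice-unit convention for \(T\), \(\tau\) and \(\delta=n_{\delta}\,a\), and the function names are assumptions, not the actual analysis code.

```python
def summed_ratio(R, T, tau_skip, a=1.0):
    """S(T, tau_skip) of eq. (7): a * sum of R(T, tau) over
    tau = tau_skip, ..., T - tau_skip (T and tau in lattice units).

    R is assumed to be a dict mapping (T, tau) to the measured ratio,
    and a is the lattice spacing."""
    return a * sum(R[(T, tau)] for tau in range(tau_skip, T - tau_skip + 1))

def matrix_element_fd(R, T, n_delta, tau_skip, a=1.0):
    """Finite-difference slope of eq. (8), with delta = n_delta * a."""
    dS = summed_ratio(R, T + n_delta, tau_skip, a) - summed_ratio(R, T, tau_skip, a)
    return dS / (n_delta * a)
```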
Having the basic quantities of interest, we can summarize the analysis as follows. First, we estimate the ratios \(R(T,\tau)\) and ratio sums \(S(T,\tau_{\text{skip}})\). Currently, we extract matrix elements \(\mathcal{M}\) in two ways: first, by fitting the slope of \(S(T,\tau_{\text{skip}})\) at fixed \(\tau_{\text{skip}}\), limiting \(T\geq T^{\prime}\) for various minimal source-sink separations \(T^{\prime}\); second, by extracting the slope via finite differences at a source-sink separation \(T=T^{\prime}\). A matrix element extracted with either of those is denoted by \(\mathcal{M}|_{T^{\prime},\mathfrak{m}}\), where \(\mathfrak{m}\) denotes one of the two above methods. For both methods, as we increase \(T^{\prime}\) excited states are expected to decay. Dividing by the kinematic factor results in a \(T^{\prime}\)-dependent moment for a given operator \(\mathcal{O}^{X}\) and momentum \(p\) using the matrix element extraction method \(\mathfrak{m}\)
\[\mathfrak{X}_{\mathcal{O}^{X},p,\mathfrak{m}}(T^{\prime})=\frac{\mathcal{M}|_{T^{\prime},\mathfrak{m}}}{\overline{u}_{N(p)}\Gamma^{X}_{\{\alpha}\text{i}p_{\mu\}}u_{N(p)}}. \tag{9}\]
To simplify the following equations, we define a compound index \(j=\left(\mathcal{O}^{X},p,\mathfrak{m}\right)\) that runs over all operators and momenta with nonzero kinematic factors as well as the different methods to obtain the matrix element. Determining the renormalization factors in RI-(S)MOM and matching them to \(\overline{\text{MS}}\)(2 GeV) allows us to express the renormalized moment \(\mathfrak{X}_{j}^{\text{ren}}(T^{\prime})=Z_{\mathcal{O}^{X}}\cdot\mathfrak{X}_{j}(T^{\prime})\). With these we define the central value as the weighted average of the different results
\[\langle x\rangle^{\text{ren}}=\sum_{j,T^{\prime}\geq T_{\text{ plat}}^{j}}\mathfrak{W}_{j}(T^{\prime})\mathfrak{X}_{j}^{\text{ren}}(T^{\prime}). \tag{10}\]
Here \(T_{\text{plat}}^{j}\) denotes the smallest source-sink separation such that \(\mathfrak{X}_{j}(T^{\prime})\) agree for all \(T^{\prime}\geq T_{\text{plat}}^{j}\). The weights \(\mathfrak{W}_{j}(T^{\prime})\propto\nicefrac{{1}}{{\sigma_{j}^{2}(T^{\prime})}}\) are normalised such that they sum to 1. The variances used are estimated via bootstrap over \(\mathfrak{X}_{j}(T^{\prime})\), and the errors of the renormalization constants are propagated. Lastly, we estimate a systematic error by taking the weighted standard deviation over the different results
\[\sigma_{syst}^{2}=\sum_{j,T^{\prime}\geq T_{\text{plat}}^{j}} \mathfrak{W}_{j}(T^{\prime})\left[\mathfrak{X}_{j}^{\text{ren}}(T^{\prime})- \langle x\rangle^{\text{ren}}\right]^{2}. \tag{11}\]
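The combination of eqs. (10) and (11) is a plain weighted average; in the sketch below the retained results are flattened into a single index, and the array-based layout and names are assumptions of this illustration.

```python
import numpy as np

def combined_moment(x, sigma):
    """Weighted average (eq. 10) and weighted standard deviation (eq. 11).

    x[j] and sigma[j] collect every retained combination of operator,
    momentum, extraction method and T' >= T_plat under one flat index j.
    """
    x = np.asarray(x, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    w = w / w.sum()                                        # weights normalised to sum to 1
    central = float(np.sum(w * x))                         # central value, eq. (10)
    syst = float(np.sqrt(np.sum(w * (x - central) ** 2)))  # systematic error, eq. (11)
    return central, syst
```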
## III Results
We use a tree-level Symanzik-improved gauge action with 2+1 flavour tree-level improved Wilson Clover fermions coupling via 2-level HEX-smearing. The details can be found in [14; 15; 16] and relevant parameters are summarized in table 1. Two ensembles, coarse and fine, have been generated at the physical pion mass corresponding to lattice spacings of \(0.1163(4)\,\text{fm}\) and \(0.0926(6)\,\text{fm}\) respectively. On each ensemble two-point and three-point correlation functions are calculated with source-sink separations ranging from \(\approx 0.3\,\text{fm}\) to \(1.4\,\text{fm}\) and \(\approx 0.9\,\text{fm}\) to \(\approx 1.5\,\text{fm}\). For each ensemble two momenta are chosen; \(\vec{p}=(p_{x},0,0)\) with \(p_{x}=0,-2[\nicefrac{{2\pi}}{{L}}]\) and \(p_{x}=0,-1[\nicefrac{{2\pi}}{{L}}]\) respectively.
Figure 1: Graphical representation of \(\text{C}^{\mathcal{O}^{X}}_{\text{3pt}}\) (\(T\), \(\tau\)), a source nucleon inserted at time \(t=0\) and a sink nucleon removed at time \(t=T\). A local leading twist operator (1) is inserted on a given time slice \(\tau\). The nucleons on the lattice are represented by interpolating operators while \(\mathcal{O}^{X}\) is determined by finite differences connected with gauge links.

Figures 2 to 4 show the different steps of the analysis. For a given channel \(X\), figures 2a and 2b each show results using one possible operator \(\mathcal{O}^{X}_{\{\alpha,\mu\}}\). Here we multiply with the kinematic factor, \(\overline{R}(T,\tau)=\nicefrac{{1}}{{\overline{u}_{N(p)}\Gamma^{X}_{\{\alpha}\text{i}p_{\mu\}}u_{N(p)}}}\cdot R(T,\tau)\), such that a plateau corresponds to the bare moment. These plots omit the largest source-sink separation due to its enormous statistical uncertainty. Two different operators \(\mathcal{O}^{X}_{\{\alpha,\mu\}}\) are used for each channel \(X\) going from figure 2a to 2b. For the axial channel (\(X=A\)) both operators transform under \(\tau_{4}^{(6)}\). The lower excited-state contamination for some operators can be deduced directly from these figures, as the ratios in 2a obey the cosh behaviour of (6) while those in 2b are perfectly flat within uncertainty. Notably, those operators have a contribution only at finite momentum (\(p_{x}\neq 0\) here) which increases the statistical noise.
A similar rescaling has been done for the ratio sums, \(\overline{S}(T,\tau_{\text{skip}})=\nicefrac{{1}}{{\overline{u}_{N(p)}\Gamma^{X}_{\{\alpha}\text{i}p_{\mu\}}u_{N(p)}}}\cdot S(T,\tau_{\text{skip}})\), in figure 3. The excited-state contamination is indicated by the slight curvature, though it is much more obscured compared to the ratios. The slopes of these lines are used in the current analysis shown in figure 4. In future work we want to include a 2-state analysis as in (6), improving on the central value (10) as well as the systematic error estimation (11).
In Figure 4 the gray points correspond to the different renormalized moments \(\mathfrak{X}^{\text{ren}}_{j}(T^{\prime})\) from finite differences plotted against \(T^{\prime}\) but slightly displaced to increase readability. The blue points represent the preliminary result, computed using (10). The inner errorbars represent the statistical - bootstrap - uncertainty while the outer ones add the estimate of systematic errors, \(\sqrt{\sigma^{2}_{stat}+\sigma^{2}_{syst}}\). The upper and lower row collect results from the coarse and fine ensemble respectively. Encouragingly, the central values agree within the uncertainties.
## IV Summary
We calculate the second Mellin moment \(\langle x\rangle\) of the axial, vector and tensor PDFs from lattice QCD with two lattice spacings at the physical pion mass. The study includes nucleon matrix elements at zero and finite momentum, boosted in the \(x\)-direction. We identified a set of operators that contribute only at finite momentum and have particularly low excited-state contamination. For the future, we are working on a direct 2-state analysis of the ratios to improve the quantitative analysis of the excited-state contamination.
###### Acknowledgements.
We thank the Budapest-Marseille-Wuppertal Collaboration for making their configurations available to us and Nesreen Hasan for calculating the correlation functions analysed here during the course of a different project. Calculations for this project were done using the Qlua software suite [17], and some of them made use of the QOPQDP adaptive multigrid solver [18; 19]. We gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer JURECA [20] at Jülich Supercomputing Centre (JSC); computing time granted by the John von Neumann Institute for Computing (NIC) on the supercomputers JUQUEEN [21], JURECA, and JUWELS [22] at JSC; and computing time granted by the HLRS Steering Committee on Hazel Hen at the High Performance Computing Centre Stuttgart (HLRS). M.R. was supported under the RWTH Exploratory Research Space (ERS) grant PF-JARA-SDS005 and MKW NRW
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Ensemble & Size & \(\beta\) & \(a\)[fm] & \(m_{\pi}\)[MeV] & \(m_{\pi}L\) & \(\nicefrac{{T}}{{a}}\) & \(p_{x}\)[\(\nicefrac{{2\pi}}{{L}}\)] & \(N_{\text{cfg}}\) \\ \hline Coarse & \(48^{4}\) & \(3.31\) & \(0.1163(4)\) & \(136(2)\) & \(3.9\) & \(3,4,5,6,7,8,10,12\) & \(0,-2\) & \(212\) \\ Fine & \(64^{4}\) & \(3.5\) & \(0.0926(6)\) & \(133(1)\) & \(4.0\) & \(10,13,16\) & \(0,-1\) & \(427\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of the used ensembles. The ensembles are at the physical pion mass, \(m_{\pi}\approx m_{\pi}^{phys}\). A larger and a smaller lattice spacing, labelled as "Coarse" and "Fine" respectively, are available. The ensembles are generated with a tree-level Symanzik-improved gauge action with 2+1 flavour tree-level improved Wilson Clover fermions coupled via 2-level HEX-smearing [14; 15; 16]. Furthermore, the available source-sink separations (\(T\)) and momenta (\(p_{x}\)) which are used in the calculation of the ratios, equation (4), are displayed.
under the funding code NW21-024-A. M.E., J.N., and A.P. are supported by the U.S. DOE Office of Science, Office of Nuclear Physics, through grants DE-FG02-96ER40965, DE-SC-0011090 and DE-SC0023116, respectively. S.M. is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0009913. S.S. is supported by the National Science Foundation under CAREER Award PHY-1847893.
Figure 3: Ratio sums \(\overline{S}(T,\tau_{\rm skip})\) on the coarse lattice. The chosen operators \(\mathcal{O}^{X}\) are the same as in 2a. Each \(\overline{S}(T,\tau_{\rm skip})\) is plotted at fixed \(\tau_{\rm skip}\), indicated by colour, over different source-sink separations. As in 2a, different momenta are displayed with hollow and filled markers.
Figure 2: Ratios, cf. eq. (4), for the coarse ensemble. Different source-sink separations \(T\) are shown in different colours and the two momenta, in 2a, are distinguished by hollow and filled markers. Different sets of operators were chosen for the two subfigures.
Figure 4: Results for the renormalized moments computed from the ratio sums (7). Moments are computed from finite differences at fixed \(T^{\prime}\) and \(\tau_{\text{skip}}=1\). The grey points represent \(\mathfrak{X}_{j}^{\text{ren}}(T^{\prime})\). The moments are plotted against \(T^{\prime}\) and slightly displaced for clarity. The red points represent the preliminary result obtained using (10). The inner and outer errorbars indicate statistical and total uncertainty respectively.
|
2304.01123
|
Homogenization in perforated domains at the critical scale
|
We describe the asymptotic behaviour of the minimal heterogeneous
$d$-capacity of a small set, which we assume to be a ball for simplicity, in a
fixed bounded open set $\Omega\subseteq \mathbb{R}^d$, with $d\geq2$. Two
parameters are involved: $\varepsilon$, the radius of the ball, and $\delta$,
the length scale of the heterogeneity of the medium. We prove that this
capacity behaves as $C|\log \varepsilon|^{d-1}$, where $C=C(\lambda)$ is an
explicit constant depending on the parameter
$\lambda:=\lim_{\varepsilon\to0}|\log \delta|/|\log\varepsilon|$.
Applying this result, we determine the $\Gamma$-limit of oscillating integral
functionals subjected to Dirichlet boundary conditions on periodically
perforated domains. In this instance, our first result is used to study the
behaviour of the functionals near the perforations which are exactly balls of
radius $\varepsilon$. We prove that, as in the homogeneous case, these lead to
an additional term that involves $C(\lambda)$.
|
Giuseppe Cosma Brusca
|
2023-04-03T16:39:39Z
|
http://arxiv.org/abs/2304.01123v1
|
# Homogenization in perforated domains at the critical scale
###### Abstract
We describe the asymptotic behaviour of the minimal heterogeneous \(d\)-capacity of a small set, which we assume to be a ball for simplicity, in a fixed bounded open set \(\Omega\subseteq\mathbb{R}^{d}\), with \(d\geq 2\). Two parameters are involved: \(\varepsilon\), the radius of the ball, and \(\delta\), the length scale of the heterogeneity of the medium. We prove that this capacity behaves as \(C|\log\varepsilon|^{d-1}\), where \(C=C(\lambda)\) is an explicit constant depending on the parameter \(\lambda:=\lim_{\varepsilon\to 0}|\log\delta|/|\log\varepsilon|\).
Applying this result, we determine the \(\Gamma\)-limit of oscillating integral functionals subjected to Dirichlet boundary conditions on periodically perforated domains. In this instance, our first result is used to study the behaviour of the functionals near the perforations which are exactly balls of radius \(\varepsilon\). We prove that, as in the homogeneous case, these lead to an additional term that involves \(C(\lambda)\).
**Keywords:** capacity, homogenization, \(\Gamma\)-convergence, perforated domains.
**AMS Class:** 49J45, 35B27, 31A15.
+
Footnote †: Preprint SISSA 02/2023/MATE
## 1 Introduction
A prototypical variational problem in Sobolev spaces involving scaling-invariant functionals concerns the \(d\)-capacity of a set \(E\) contained in a fixed bounded open set \(\Omega\subseteq\mathbb{R}^{d}\) with \(d\geq 2\). If we assume \(E\) to have diameter of size \(\varepsilon\ll 1\), an explicit computation proves that the asymptotic behaviour of such capacity equals \(|\log\varepsilon|^{1-d}\), up to a dimensional factor.
In this paper, we introduce a dependence on \(x\), which in the model describes the heterogeneity of a medium, and we analyse the asymptotic behaviour of minima
\[m_{\varepsilon,\delta}:=\min\Bigl{\{}\int_{\Omega}f\left(\frac{x}{\delta}, \nabla u(x)\right)\,dx:u\in W^{1,d}_{0}(\Omega),u=1\text{ on }B(z, \varepsilon),z\in\Omega\Bigr{\}}, \tag{1}\]
where \(\delta=\delta(\varepsilon)\) is positive and vanishing as \(\varepsilon\to 0\), and \(f(x,\xi)\) is a function with suitable assumptions of periodicity and homogeneity.
We assume \(f:\mathbb{R}^{d}\times\mathbb{R}^{d}\to[0,+\infty)\) to be a Borel function with the following properties:
(P1) (periodicity) \(f(\cdot,\xi)\) is 1-periodic for every \(\xi\in\mathbb{R}^{d}\), i.e., denoting by \(e_{k}\) an element of the canonical basis, \(f(x+e_{k},\xi)=f(x,\xi)\) for every \(x\) and \(\xi\) in \(\mathbb{R}^{d}\), and \(k=1,...,d\) ;
(P2) (positive \(d\)-homogeneity) \(f(x,t\,\xi)=t^{d}f(x,\xi)\) for every \(t>0\), for every \(x\) and \(\xi\) in \(\mathbb{R}^{d}\);
(P3) (standard growth conditions of order \(d\)) there exist \(\alpha,\beta\) such that \(0<\alpha<\beta\) and \(\alpha|\xi|^{d}\leq f(x,\xi)\leq\beta|\xi|^{d}\) for every \(x\) and \(\xi\) in \(\mathbb{R}^{d}\).
In light of (P1) and (P2), the minima defined in (1) stand for the minimal _heterogeneous capacity_ of a small set of size \(\varepsilon\) (which we may assume, without loss of generality, to be a ball), while \(\delta\) is the period of the heterogeneity modelled by the oscillating terms.
Assumption (P3) is technical as it is needed to apply the Homogenization Theorem.
We remark that, by a relaxation argument, we may assume \(f(x,\xi)\) to be convex in the second variable, so that the associated energy functional is \(W^{1,d}(\Omega)\)-weakly lower semicontinuous and the terms defined by (1) are actual minima.
The first result we achieve is the asymptotic estimate for the minima in (1). To this end, we work along subsequences (not relabeled) for which the following limit exists:
\[\lambda:=\lim_{\varepsilon\to 0}\frac{|\log\delta|}{|\log\varepsilon|}\wedge 1 \in[0,1].\]
We introduce a function describing the asymptotic concentration of the heterogeneous capacity at a point \(z\in\mathbb{R}^{d}\); it is given by
\[\begin{split}\Phi(z):=\lim_{R\to+\infty}(\log R)^{d-1}\min\Bigl{\{} \int_{B(0,R)\setminus B(0,1)}f(z,\nabla u(x))\,dx:&\,u\in W^{1, d}_{0}(B(0,R)),\\ &\,u=1\text{ on }B(0,1)\Bigr{\}},\end{split} \tag{2}\]
then we define a constant portraying the effect of homogenization, which is
\[\begin{split} C_{\text{hom}}:=\lim_{R\to+\infty}(\log R)^{d-1} \min\Bigl{\{}\int_{B(0,R)\setminus B(0,1)}f_{\text{hom}}(\nabla u(x))\,dx:& \,u\in W^{1,d}_{0}(B(0,R)),\\ &\,u=1\text{ on }B(0,1)\Bigr{\}},\end{split} \tag{3}\]
where \(f_{\text{hom}}\) is the positively \(d\)-homogeneous function
\[f_{\text{hom}}(\xi)=\min\Bigl{\{}\int_{(0,1)^{d}}f(y,\xi+\nabla\varphi(y))\, dy:\varphi\in W^{1,d}_{loc}(\mathbb{R}^{d}),\varphi\text{ 1-periodic}\Bigr{\}} \tag{4}\]
determined by the Homogenization Theorem.
We prove that if there exists a point \(x_{0}\in\mathbb{R}^{d}\) at which \(\Phi\) concentrates in an optimal way, then it holds that
\[\lim_{\varepsilon\to 0}|\log\varepsilon|^{d-1}m_{\varepsilon,\delta}=\Phi(x_{0})C_{ \mathrm{hom}}\Big{[}\lambda\Phi(x_{0})^{\frac{1}{d-1}}+(1-\lambda)C_{\mathrm{ hom}}^{\frac{1}{d-1}}\Big{]}^{1-d}. \tag{5}\]
As an example, we refer to the quadratic case, already treated in [5]. If \(d=2\) and \(f(x,\xi)=a(x)|\xi|^{2}\), where \(a(x)\) is a \(1\)-periodic continuous function bounded from below by a constant \(\alpha\), then we can pick \(x_{0}\) so that \(\Phi(x_{0})=2\pi\alpha\), and denoting the homogenized matrix by \(A_{\mathrm{hom}}\), we obtain \(C_{\mathrm{hom}}=2\pi\sqrt{\det A_{\mathrm{hom}}}\). We eventually find
\[\lim_{\varepsilon\to 0}|\log\varepsilon|m_{\varepsilon,\delta}=2\pi\frac{ \alpha\sqrt{\det A_{\mathrm{hom}}}}{\lambda\alpha+(1-\lambda)\sqrt{\det A_{ \mathrm{hom}}}}\,.\]
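As a quick numerical illustration of this formula, consider the following sketch with made-up data: we take a two-phase layered coefficient \(a\in\{1,4\}\) with equal volume fractions, for which the homogenized matrix of the corresponding quadratic energy is diagonal with entries the harmonic and the arithmetic mean of \(a\) (a standard fact for laminates, used here only to produce plausible numbers). One can then check that the limit equals \(2\pi\alpha\) at \(\lambda=0\) and \(2\pi\sqrt{\det A_{\mathrm{hom}}}\) at \(\lambda=1\), interpolating between the two in between:

```python
import math

def limit_coefficient(lam, Phi, C_hom, d=2):
    """Value of the limit in (5): Phi*C_hom*[lam*Phi^(1/(d-1)) + (1-lam)*C_hom^(1/(d-1))]^(1-d)."""
    e = 1.0 / (d - 1)
    return Phi * C_hom * (lam * Phi ** e + (1.0 - lam) * C_hom ** e) ** (1 - d)

# illustrative two-phase layered coefficient a in {1, 4}, volume fraction 1/2 each
alpha = 1.0                                        # lower bound (and minimum) of a
harm = 2.0 / (1.0 / 1.0 + 1.0 / 4.0)               # harmonic mean of a
arit = (1.0 + 4.0) / 2.0                           # arithmetic mean of a
Phi = 2.0 * math.pi * alpha                        # Phi(x_0) = 2*pi*alpha
C_hom = 2.0 * math.pi * math.sqrt(harm * arit)     # 2*pi*sqrt(det A_hom) for the laminate

for lam in (0.0, 0.5, 1.0):
    print(lam, limit_coefficient(lam, Phi, C_hom))
# lam = 0 returns Phi = 2*pi*alpha, lam = 1 returns C_hom = 2*pi*sqrt(det A_hom)
```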
A fundamental tool in the proof of this result is a method elaborated by De Giorgi, which allows one to impose boundary conditions on converging sequences. In this work, it is presented and proved in a version (Lemma 2.2) suited to our purposes, similar to that in [2].
The second result concerns homogenization on perforated domains. The argument follows the work of Laura Sigalotti in [15]; that is, a nonlinear version at the critical exponent of the homogenization on perforated domains with Dirichlet boundary conditions originally studied, e.g., by Marchenko and Khruslov in [14] and by Cioranescu and Murat in [8]. Works about the asymptotic behaviour of Dirichlet problems in varying domains are, e.g., [3, 7, 10], or also [13] for the numerical perspective.
Denoting by \(B\) the open unit ball, and by \(d(\varepsilon)\) the period of the perforations, we define a periodically perforated domain as
\[\Omega_{\varepsilon}:=\Omega\setminus\bigcup_{i\in\mathbb{Z}^{d}}id( \varepsilon)+\varepsilon B\]
and we consider functionals \(F_{\varepsilon}:L^{d}(\Omega)\to[0,+\infty]\) given by
\[F_{\varepsilon}(u):=\begin{cases}\int_{\Omega}f\left(\frac{x}{\delta},\nabla u (x)\right)\,dx&\text{ if }u\in W^{1,d}(\Omega)\text{ and }u=0\text{ on }\Omega\setminus\Omega_{\varepsilon}\\ +\infty&\text{ otherwise.}\end{cases}\]
The above-mentioned works [8] and [14] analyse the homogeneous case \(f(x,\xi)=|\xi|^{p}\) for \(p>1\) and provide a critical choice for the period, which for \(p=d\) is exactly \(d(\varepsilon)=|\log\varepsilon|^{\frac{1-d}{d}}\). Moreover, the \(\Gamma\)-limit with respect to the strong convergence in \(L^{d}(\Omega)\) is proved to be
\[\int_{\Omega}|\nabla u(x)|^{d}\,dx+\kappa_{d}\int_{\Omega}|u(x)|^{d}\,dx\]
for every \(u\in W^{1,d}(\Omega)\), with \(\kappa_{d}\) a dimensional constant, showing that internal boundary conditions disappear with the arising of a so-called _strange term_.
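The criticality of this choice of \(d(\varepsilon)\) can be read off from a simple counting heuristic, recorded here only as a sanity check: a unit volume contains about \(d(\varepsilon)^{-d}=|\log\varepsilon|^{d-1}\) perforations, each carrying a \(d\)-capacity of order \(|\log\varepsilon|^{1-d}\), so that the total capacitary contribution per unit volume is
\[d(\varepsilon)^{-d}\cdot|\log\varepsilon|^{1-d}=|\log\varepsilon|^{d-1}\,|\log\varepsilon|^{1-d}=O(1),\]
which is precisely the scale at which the strange term survives in the limit.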
We prove an analogous statement: for simplicity, we assume that \(d(\varepsilon)\) is an integer multiple of \(\delta(\varepsilon)\), so that the periodicity of the perforation is 'compatible' with that of the energy. Since an oscillating term is introduced in our study, we expect our result to be affected by the different rates at which \(\varepsilon\) and \(\delta\) vanish; as this information is encoded by the parameter \(\lambda\), we show that for every \(u\in W^{1,d}(\Omega)\) it holds that
\[\Gamma\text{-}\lim_{\varepsilon}F_{\varepsilon}(u)=\int_{\Omega}f_{\text{hom} }(\nabla u(x))\,dx+C(\lambda)\int_{\Omega}|u(x)|^{d}\,dx,\]
where \(C(\lambda)\) is the constant
\[C(\lambda):=\Phi(0)C_{\text{hom}}\Big{[}\lambda\Phi(0)^{\frac{1}{d-1}}+(1- \lambda)C_{\text{hom}}^{\frac{1}{d-1}}\Big{]}^{1-d},\]
and the term \(\Phi(0)\) is due to the asymptotic analysis of the problems
\[\min\Bigl{\{}\int_{B}f\left(\frac{x}{\delta},\nabla u(x)\right)\,dx:u\in W^{1, d}_{0}(B),u=1\text{ on }B(0,\varepsilon)\Bigr{\}},\]
where the centres of the perforations have been fixed at \(0\) for every \(\varepsilon\).
### Preliminaries
In this section and the following ones, let \(d\geq 2\), \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded open set and \(\lambda:=\lim_{\varepsilon\to 0}|\log\delta|/|\log\varepsilon|\).
We start by justifying the definitions given in (2) and (3) through the following lemma, which takes advantage of a scaling invariance argument.
**Lemma 1.1**.: _Let \(g:\mathbb{R}^{d}\to\mathbb{R}\) be a Borel function which is positively homogeneous of degree \(d\) and assume there exist positive constants \(C_{1}<C_{2}\) such that \(C_{1}|\xi|^{d}\leq g(\xi)\leq C_{2}|\xi|^{d}\) for every \(\xi\in\mathbb{R}^{d}\). Define_
\[m_{R}:=\min\Bigl{\{}\int_{B(0,R)\setminus B(0,1)}g(\nabla u(x))\,dx:u\in W^{1, d}_{0}(B(0,R)),u=1\text{ on }B(0,1)\Bigr{\}},\]
_then the limit \(\lim_{R\to+\infty}(\log R)^{d-1}\,m_{R}\) exists and is finite._
Proof.: Fix \(S>R\) and put \(T:=\lfloor\log S/\log R\rfloor\) so that the annuli \(B(0,R^{k})\setminus B(0,R^{k-1})\) are contained in \(B(0,S)\setminus B(0,1)\) for every \(k=1,...,T\).
Let \(u\) be a solution of the problem
\[\min\Bigl{\{}\int_{B(0,R)\setminus B(0,1)}g(\nabla u(x))\,dx:u\in W^{1,d}_{0} (B(0,R)),u=1\text{ on }B(0,1)\Bigr{\}},\]
and for \(k=1,...,T\), define functions \(u^{k}\in W^{1,d}(B(0,R^{k})\setminus\overline{B}(0,R^{k-1}))\) as
\[u^{k}(x):=\frac{1}{T}u\left(\frac{x}{R^{k-1}}\right)+\frac{T-k}{T};\]
then put \(u_{S}\in W^{1,d}_{0}(B(0,S))\) as
\[u_{S}(x):=\begin{cases}1&\text{ if }x\in B(0,1)\\ u^{k}(x)&\text{ if }x\in B(0,R^{k})\setminus B(0,R^{k-1}),\,k=1,...,T\\ 0&\text{ if }x\in B(0,S)\setminus B(0,R^{T}).\end{cases}\]
We have
\[(\log S)^{d-1}m_{S} \leq (\log S)^{d-1}\int_{B(0,S)\setminus B(0,1)}g(\nabla u_{S}(x))\,dx\] \[= (\log S)^{d-1}\sum_{k=1}^{T}\int_{B(0,R^{k})\setminus B(0,R^{k-1} )}g(\nabla u^{k}(x))\,dx\] \[= (\log S)^{d-1}\sum_{k=1}^{T}\frac{1}{T^{d}}\int_{B(0,R)\setminus B (0,1)}g(\nabla u(x))\,dx\] \[= (\log S)^{d-1}\frac{1}{T^{d-1}}m_{R}\] \[\leq (\log S)^{d-1}\Big{(}\frac{\log R}{\log S-\log R}\Big{)}^{d-1}m_ {R}.\]
If we pass to the \(\limsup\) as \(S\to+\infty\), and then we pass to the \(\liminf\) as \(R\to+\infty\), we obtain
\[\limsup_{S\to+\infty}(\log S)^{d-1}m_{S}\leq\liminf_{R\to+\infty}(\log R)^{d- 1}m_{R}.\]
In order to check that the limit is finite, consider the function
\[u(x):=1-\frac{\log|x|}{\log R},\qquad x\in B(0,R)\setminus\overline{B}(0,1),\]
and note that the estimate
\[(\log R)^{d-1}m_{R}\leq(\log R)^{d-1}\int_{B(0,R)\setminus B(0,1)}g(\nabla u(x))\,dx=(\log R)^{d-1}\int_{B(0,R)\setminus B(0,1)}g\bigg{(}-\frac{x}{|x|^{2}\log R}\bigg{)}\,dx\] \[=(\log R)^{-1}\int_{B(0,R)\setminus B(0,1)}g\bigg{(}-\frac{x}{|x|^{2}}\bigg{)}\,dx\leq(\log R)^{-1}C_{2}\int_{B(0,R)\setminus B(0,1)}\frac{1}{|x|^{d}}\,dx=C_{2}\sigma_{d-1}\]
holds, completing the proof.
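As a numerical sanity check of this scaling (a sketch, not part of the proof): in the model case \(g(\xi)=|\xi|^{d}\) the logarithmic profile above is in fact the minimizer, so that \((\log R)^{d-1}m_{R}=\sigma_{d-1}\) for every \(R>1\). The following snippet evaluates the rescaled energy of that profile by radial quadrature and compares it with the surface measure \(\sigma_{d-1}\) of the unit sphere; the two numbers agree up to quadrature error.

```python
import math

def rescaled_log_profile_energy(d, R, n=200000):
    """Rescaled energy (log R)^(d-1) * energy of u(x) = 1 - log|x|/log R on the
    annulus 1 < |x| < R, where |grad u| = 1/(|x| log R); radial midpoint rule."""
    logR = math.log(R)
    sigma = 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)  # area of the unit sphere S^{d-1}
    h = (R - 1.0) / n
    total = 0.0
    for i in range(n):
        r = 1.0 + (i + 0.5) * h
        total += sigma * r ** (d - 1) * (1.0 / (r * logR)) ** d * h
    return logR ** (d - 1) * total

for d in (2, 3):
    sigma = 2.0 * math.pi ** (d / 2.0) / math.gamma(d / 2.0)
    print(d, rescaled_log_profile_energy(d, R=1e3), sigma)  # both columns match to a few digits
```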
We state the Homogenization Theorem (see [4, 6, 9, 12]) in a slightly modified version which also takes translations into account, in the following sense.
**Theorem 1.2**.: _Let \(A\) be a bounded open subset of \(\mathbb{R}^{d}\) with Lipschitz boundary and \((\tau_{\eta})_{\eta>0}\subseteq\mathbb{R}^{d}\). Then_
\[\Gamma\text{-}\lim_{\eta\to 0}\int_{A}f\left(\frac{x}{\eta}+\tau_{\eta}, \nabla u(x)\right)\,dx=\int_{A}f_{\hom}(\nabla u(x))\,dx\,,\]
_for every \(u\in W^{1,d}(A)\), where the \(\Gamma\)-limit is meant with respect to the strong convergence in \(L^{d}(A)\) and \(f_{\hom}\) is the function given by (4)._
_In particular, for every \(\phi\in W^{1,d}(A)\) we have_
\[\lim_{\eta\to 0}\inf\Bigl{\{}\int_{A}f\left(\frac{x}{\eta}+\tau_{ \eta},\nabla u(x)\right)\,dx: u\in\phi+W^{1,d}_{0}(A)\Bigr{\}}\\ =\min\Bigl{\{}\int_{A}f_{\hom}(\nabla u(x))\,dx:u\in\phi+W^{1,d}_ {0}(A)\Bigr{\}}\,.\]
At this point, Lemma 1.1 and assumptions (P2), (P3) ensure that the function
\[\Phi(z):=\lim_{R\to+\infty}(\log R)^{d-1}\min\Bigl{\{}\int_{B(0,R )\setminus B(0,1)}f(z,\nabla u(y))\,dy: u\in W^{1,d}_{0}(B(0,R)),\] \[u=1\text{ on }B(0,1)\Bigr{\}},\]
is well defined, while, in order to properly introduce the constant
\[C_{\hom}:=\lim_{R\to+\infty}(\log R)^{d-1}\min\Bigl{\{}\int_{B(0, R)\setminus B(0,1)}f_{\hom}(\nabla u(x))\,dx: u\in W^{1,d}_{0}(B(0,R)),\] \[u=1\text{ on }B(0,1)\Bigr{\}},\]
we also need to rely on Theorem 1.2: this, combined with the fact that the growth conditions on \(f(x,\xi)\) posed in (P3) are inherited by the function \(f_{\hom}(\xi)\), ensures that the above lemma applies.
## 2 Asymptotic analysis of minima
We first aim at estimating the asymptotic behaviour of the minima with fixed centres modulo a translation. More precisely, let \(z\) be a point in \(\Omega\) and, for every \(\varepsilon>0\) sufficiently small, let \((z_{\varepsilon})_{\varepsilon}\) be a family of points in \(\Omega\) of the form \(z_{\varepsilon}=\delta z+\delta i_{\varepsilon}\), where \((i_{\varepsilon})_{\varepsilon}\subseteq\mathbb{Z}^{d}\). Also assume that such a family of points is well contained in \(\Omega\), i.e., that \(\inf_{\varepsilon}\operatorname{dist}(z_{\varepsilon},\partial\Omega)>0\); we put
\[\mu_{\varepsilon,\delta}=\min\Bigl{\{}\int_{\Omega}f\left(\frac{x}{\delta}, \nabla u(x)\right)\,dx:u\in W^{1,d}_{0}(\Omega),u=1\text{ on }B(z_{\varepsilon}, \varepsilon)\Bigr{\}}. \tag{6}\]
The asymptotic behaviour of these minima is the main subject of this section; we prove the following.
**Proposition 2.1**.: _Let \(z\in\Omega\) be a fixed point, and let \((z_{\varepsilon})_{\varepsilon}\) be a family of points equal to \(z\) modulo \(\delta\) as above. Assume that for every \(\nu>0\), there exists \(r_{\nu}>0\) such that for every \(x\in B(z,r_{\nu})\) it holds_
\[|f(z,\xi)-f(x,\xi)|\leq\nu|\xi|^{d}\,\text{ for every }\xi\in\mathbb{R}^{d}. \tag{7}\]
_Then_
\[\lim_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}=\Phi(z)C_{ \mathrm{hom}}\Big{[}\lambda\Phi(z)^{\frac{1}{d-1}}+(1-\lambda)C_{\mathrm{hom}} ^{\frac{1}{d-1}}\Big{]}^{1-d}.\]
The proof is divided into two parts: the bound from below and the construction of an optimal sequence. In the first, the main tool we use is the following lemma, which allows us to modify a function so that it attains constant values (in the sense of the trace) on the boundary of a thin annulus, while still controlling the value of the associated energy.
**Lemma 2.2**.: _Let \(f:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\) be a Borel function satisfying the standard growth conditions property (P3). Let \(z\in\mathbb{R}^{d}\), \(R>0\) and define_
\[F(u,A):=\,\int_{A}f(x,\nabla u(x))\,dx\]
_for every \(u\in W^{1,d}(B(z,R))\) and \(A\subseteq B(z,R)\) Borel subset._
_Let \(\eta>0\), put \(S:=\max\,\{s\in\mathbb{N}:\eta 2^{s}\leq R\}\) and assume \(S\geq 3\). Take a natural number \(N\) such that \(2\leq N<S\) and a positive real number \(r\) such that \(r\leq\eta 2^{S-N}\). Then there exists a function \(v\) with the following properties:_
(i)_\(v\in W^{1,d}(B(z,R)\setminus\overline{B}(z,r))\),_
(ii) _there exists \(\,j\in\{1,...,N-1\}\) such that_
\[v=u\text{ on }\bigl{(}B(z,\eta 2^{S-j-1})\setminus\overline{B}(z,r)\bigr{)}\cup\bigl{(}B(z,R)\setminus\overline{B}(z,\eta 2^{S-j+1})\bigr{)},\]
(iii) _for the same \(j,\text{ the function }v\) is constant on \(\partial B(z,\eta 2^{S-j})\),_
(iv) _There exists a positive constant \(C\) depending on \(\alpha,\beta\) and the dimension \(d\) such that_
\[F(v,B(z,R)\setminus B(z,r))\leq\Bigr{(}1+\frac{C}{N-1}\Bigr{)}F(u,B(z,R) \setminus B(z,r)).\]
Proof.: Assume \(z=0\); if not, center the construction around \(z\) and repeat the argument.
For \(k=1,...,N-1\), we define annuli \(A_{k}:=B(0,\eta 2^{S-N+k+1})\setminus B(0,\eta 2^{S-N+k-1})\) and radial cutoff functions
\[\phi_{k}(\rho):=\begin{cases}0&\text{if }\rho\in[0,\eta 2^{S-N+k-1}]\\ \frac{\rho-\eta 2^{S-N+k-1}}{\eta 2^{S-N+k-1}}&\text{if }\rho\in(\eta 2^{S-N+k-1}, \eta 2^{S-N+k}]\\ \frac{\eta 2^{S-N+k+1}-\rho}{\eta 2^{S-N+k}}&\text{if }\rho\in(\eta 2^{S-N+k}, \eta 2^{S-N+k+1}]\\ 0&\text{if }\rho\in(\eta 2^{S-N+k+1},R],\end{cases}\]
then we put \(\psi_{k}:=1-\phi_{k}\) and define \(v_{k}:=\psi_{k}u+(1-\psi_{k})u_{A_{k}}\), where we denote by \(u_{A_{k}}\) the integral average of \(u\) on \(A_{k}\).
At each fixed \(k\), taking into account that \(|\psi_{k}|\leq 1\) and
\[|\nabla\psi_{k}|^{d}=|\nabla\phi_{k}|^{d}\leq\Big{(}\frac{1}{\eta 2^{S-N+k-1}} \Big{)}^{d},\]
we exploit (P3) to have
\[\int_{A_{k}}f(x,\nabla v_{k}(x))\,dx \leq\beta\int_{A_{k}}|\nabla v_{k}(x)|^{d}\,dx\] \[=\beta\int_{A_{k}}|\psi_{k}\nabla u(x)+(u-u_{A_{k}})\nabla\psi_{ k}(x)|^{d}\,dx\] \[\leq\beta 2^{d-1}\Big{[}\int_{A_{k}}|\nabla u|^{d}\,dx+\Big{(} \frac{1}{\eta 2^{S-N+k-1}}\Big{)}^{d}\int_{A_{k}}|u(x)-u_{A_{k}}|^{d}\,dx\Big{]}. \tag{8}\]
Consider now the following well-known scaling property of the Poincaré–Wirtinger inequality: given \(A\) open, bounded, connected, with Lipschitz boundary, and \(\lambda>0\), it holds that
\[\frac{1}{\lambda^{d}}\int_{\lambda A}|u-u_{\lambda A}|^{d}\,dx\leq P(A)\int_{ \lambda A}|\nabla u|^{d}\,dx,\]
where \(u_{\lambda A}\) is the integral average of \(u\) on \(\lambda A\) and \(P(A)\) is the Poincaré–Wirtinger constant related to \(A\).
We apply this result with \(A=B(0,4)\setminus\overline{B}(0,1)\) and \(\lambda=\eta 2^{S-N+k-1}\), obtaining
\[\Big{(}\frac{1}{\eta 2^{S-N+k-1}}\Big{)}^{d}\int_{A_{k}}|u(x)-u_{A_{k}}|^{d}\, dx\leq P^{d}\int_{A_{k}}|\nabla u|^{d}\,dx,\]
where \(P:=P(A)\) is a constant which does not depend on \(k\).
As a consequence (8) turns into
\[\int_{A_{k}}f(x,\nabla v_{k}(x))\,dx \leq\beta 2^{d-1}\big{(}1+P^{d}\big{)}\int_{A_{k}}|\nabla u|^{d}\,dx\] \[\leq\frac{\beta}{\alpha}2^{d-1}\big{(}1+P^{d}\big{)}\int_{A_{k}} f(x,\nabla u(x))\,dx,\]
and summing over \(k\), we deduce
\[\sum_{k=1}^{N-1}\int_{A_{k}}f(x,\nabla v_{k}(x))\,dx\leq C\int_{B(0,R)\setminus B (0,r)}f(x,\nabla u(x))\,dx,\]
where we put \(C:=\beta 2^{d-1}\big{(}1+P^{d}\big{)}/\alpha\). It follows that there exists \(j\in\{1,...,N-1\}\) such that
\[\int_{A_{j}}f(x,\nabla v_{j}(x))\,dx\leq\frac{C}{N-1}\int_{B(0,R)\setminus B(0,r) }f(x,\nabla u(x))\,dx,\]
and then it holds
\[\int_{B(0,R)\setminus B(0,r)}f(x,\nabla v_{j}(x))\,dx =\int_{(B(0,R)\setminus B(0,r))\setminus A_{j}}f(x,\nabla u(x)) \,dx+\int_{A_{j}}f(x,\nabla v_{j}(x))\,dx\] \[\leq\left(1+\frac{C}{N-1}\right)\int_{B(0,R)\setminus B(0,r)}f(x, \nabla u(x))\,dx,\]
which concludes the proof.
The estimate in (iv) is more precise as \(N\to\infty\), i.e., as \(\eta\to 0\). Our strategy will consist in partitioning the open set \(\Omega\) by means of many annuli having small inner and outer radii, say of order \(\varepsilon^{\lambda}\sim\delta\), and in modifying there a function \(u\in W^{1,d}_{0}(\Omega)\) so as to achieve constant Dirichlet boundary conditions, as a consequence of (iii). The error introduced by the modification will be negligible in light of (iv).
### Lower bound
In what follows, we systematically identify a function \(u\in W^{1,d}_{0}(\Omega)\) with the extension obtained by setting \(u=0\) on \(\mathbb{R}^{d}\setminus\Omega\), which belongs to \(W^{1,d}(\mathbb{R}^{d})\).
For simplicity of notation, given \(A\) a Borel subset of \(\mathbb{R}^{d}\) and \(u\in W^{1,d}(\mathbb{R}^{d})\), we put
\[F_{\varepsilon}(u,A):=\int_{A}f\left(\frac{x}{\delta},\nabla u(x)\right)\,dx\]
and denote by \(R_{\Omega}\) the maximum among the diameter of \(\Omega\) and \(1\).
We consider separately the cases \(\lambda=0,\,\lambda\in(0,1)\) and \(\lambda=1\); we obtain for each instance the same kind of estimate and then we conclude by the same argument.
If \(\lambda=0\), fix a parameter \(\lambda_{2}\in(\lambda,1)\) so that
\[\frac{\varepsilon^{\lambda_{2}}}{\delta}\to 0\text{ as }\varepsilon\to 0.\]
For every \(u\in W^{1,d}_{0}(\Omega)\) such that \(u=1\) on \(B(z_{\varepsilon},\varepsilon)\), the inclusion \(\Omega\subseteq B(z_{\varepsilon},R_{\Omega})\) leads to the equality
\[F_{\varepsilon}(u,\Omega)=F_{\varepsilon}(u,B(z_{\varepsilon},R_{\Omega})),\]
then we apply Lemma 2.2 to the function \(u\in W^{1,d}_{0}(B(z_{\varepsilon},R_{\Omega}))\), with
\[f(x,\xi)=f\left(\frac{x}{\delta},\xi\right),\,\eta=\varepsilon,\,R= \varepsilon^{\lambda_{2}}\,,N\in\mathbb{N}\cap\left(1,\left\lfloor\frac{(1- \lambda_{2})|\log\varepsilon|}{\log 2}\right\rfloor=S\right)\ \text{ and }r=\varepsilon.\]
We get a function \(v\in W^{1,d}_{0}(B(z_{\varepsilon},R_{\Omega}))\) such that \(v=1\) on \(B(z_{\varepsilon},\varepsilon)\), \(v=c\) on \(\partial B(z_{\varepsilon},\varepsilon 2^{S-j})\) for some constant \(c\) and some index \(j\in\{1,...,N-1\}\), and \(v=u\) on \(B(z_{\varepsilon},R_{\Omega})\setminus B(z_{\varepsilon},\varepsilon^{\lambda_ {2}})\); hence, it holds
\[\begin{split}\left(1+\frac{C}{N-1}\right)F_{\varepsilon}(u,\Omega) &=\left(1+\frac{C}{N-1}\right)F_{\varepsilon}(u,B(z_{\varepsilon },R_{\Omega}))\geq F_{\varepsilon}(v,B(z_{\varepsilon},R_{\Omega}))\\ &=F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon 2^{S-j}))+F_{ \varepsilon}(v,B(z_{\varepsilon},R_{\Omega})\setminus B(z_{\varepsilon}, \varepsilon 2^{S-j})).\end{split} \tag{9}\]
Now we set
\[w^{1}:=\begin{cases}v&\text{on }B(z_{\varepsilon},\varepsilon 2^{S-j})\\ c&\text{on }B(z_{\varepsilon},\varepsilon^{\lambda_{2}})\setminus B(z_{ \varepsilon},\varepsilon 2^{S-j})\end{cases}\qquad w^{2}:=\begin{cases}c&\text{on }B(z_{ \varepsilon},\varepsilon 2^{S-j})\setminus\overline{B}(z_{\varepsilon}, \varepsilon 2^{S-N})\\ v&\text{on }B(z_{\varepsilon},R_{\Omega})\setminus B(z_{\varepsilon}, \varepsilon 2^{S-j}),\end{cases}\]
and we note that
\[F_{\varepsilon}(w^{1},B(z_{\varepsilon},\varepsilon^{\lambda_{2}}))=F_{ \varepsilon}(v,B(z_{\varepsilon},\varepsilon 2^{S-j}))\]
and
\[F_{\varepsilon}(w^{2},B(z_{\varepsilon},R_{\Omega})\setminus B(z_{\varepsilon },\varepsilon 2^{S-N}))=F_{\varepsilon}(v,B(z_{\varepsilon},R_{\Omega})\setminus B(z_{ \varepsilon},\varepsilon 2^{S-j})),\]
thus
\[F_{\varepsilon}(v,B(z_{\varepsilon},R_{\Omega}))=F_{\varepsilon}(w^{1},B(z_{ \varepsilon},\varepsilon^{\lambda_{2}}))+F_{\varepsilon}(w^{2},B(z_{ \varepsilon},R_{\Omega})\setminus B(z_{\varepsilon},\varepsilon 2^{S-N})).\]
At this point we take advantage of the fact that both the functions \(w^{1}\) and \(w^{2}\) attain constant values on the components of the boundary of their domain.
We rewrite inequality (9) as
\[\left(1+\frac{C}{N-1}\right)F_{\varepsilon}(u,\Omega)\geq\]
\[\geq\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{2}})):v \in W^{1,d}(B(z_{\varepsilon},\varepsilon^{\lambda_{2}})),v=1\text{ on }B(z_{ \varepsilon},\varepsilon),v=c\text{ on }\partial B(z_{\varepsilon}, \varepsilon^{\lambda_{2}})\}\\ +\min\{F_{\varepsilon}(v,B(z_{\varepsilon},R_{\Omega})\setminus \overline{B}(z_{\varepsilon},\varepsilon 2^{S-N})):v\in W^{1,d}(B(z_{\varepsilon},R_{\Omega}) \setminus\overline{B}(z_{\varepsilon},\varepsilon^{\lambda_{2}}2^{S-N})), \\ v=c\text{ on }\partial B(z_{\varepsilon},\varepsilon 2^{S-N}),v=0 \text{ on }\partial B(z_{\varepsilon},R_{\Omega})\},\]
and taking into account the transformations
\[v(x)\mapsto\frac{v(x)-c}{1-c}\,,\qquad\qquad v(x)\mapsto\frac{v(x)}{c}\,,\]
and the property of homogeneity (P2), we have that the last expression equals
\[\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_ {2}})):v\in W^{1,d}_{0}(B(z_{\varepsilon},\varepsilon^{\lambda_{2}})),v=1 \text{ on }B(z_{\varepsilon},\varepsilon)\}|1-c|^{d} \tag{10}\] \[+\min\{F_{\varepsilon}(v,B(z_{\varepsilon},R_{\Omega})\setminus \overline{B}(z_{\varepsilon},\varepsilon 2^{S-N})):v\in W^{1,d}(B(z_{\varepsilon},R_{\Omega}) \setminus\overline{B}(z_{\varepsilon},\varepsilon^{\lambda_{2}}2^{S-N})),\] \[v=1\text{ on }B(z_{\varepsilon},\varepsilon 2^{S-N}),v=0 \text{ on }\partial B(z_{\varepsilon},R_{\Omega})\}|c|^{d}. \tag{11}\]
To treat the minimum in (10), we apply the transformation \(v(x)\mapsto v(z_{\varepsilon}+x\varepsilon)\), getting
\[\min\Bigl{\{}\int_{B(0,\varepsilon^{\lambda_{2}-1})}f\Bigl{(}\frac{x}{\delta} \varepsilon+\frac{z_{\varepsilon}}{\delta},\nabla v(x)\Bigr{)}:v\in W^{1,d}_{0 }(B(0,\varepsilon^{\lambda_{2}-1})),v=1\mbox{ on }B(0,1)\Bigr{\}}|1-c|^{d}\,.\]
As \(z_{\varepsilon}=\delta z+\delta i_{\varepsilon}\), we exploit the periodicity assumption (P1) to get
\[f\left(\frac{x}{\delta}\varepsilon+\frac{z_{\varepsilon}}{\delta},\xi\right) =f\left(\frac{x}{\delta}\varepsilon+z,\xi\right)\mbox{ for every }\xi\in \mathbb{R}^{d};\]
also note that if \(x\in B(0,\varepsilon^{\lambda_{2}-1})\), then \(\frac{\varepsilon}{\delta}|x|<\frac{\varepsilon^{\lambda_{2}}}{\delta}\to 0\) as \(\varepsilon\to 0\). Hence, for every \(\nu>0\), given \(r_{\nu}\) as in (7), it holds that \(\frac{\varepsilon}{\delta}\,x+z\in B(z,r_{\nu})\) for every \(x\in B(0,\varepsilon^{\lambda_{2}-1})\), so that for every \(\varepsilon\) sufficiently small we have
\[f\Bigl{(}\frac{x}{\delta}\varepsilon+\frac{z_{\varepsilon}}{\delta},\xi\Bigr{)} \geq f(z,\xi)-\nu|\xi|^{d}\mbox{ for every }\xi\in\mathbb{R}^{d}.\]
Combining these observations with the growth condition (from below) in (P3), we get
\[\int_{B(0,\varepsilon^{\lambda_{2}-1})}f\Bigl{(}\frac{x}{\delta}\varepsilon+ \frac{z_{\varepsilon}}{\delta},\nabla v(x)\Bigr{)}\,dx\geq\Bigl{(}1-\frac{ \nu}{\alpha}\Bigr{)}\int_{B(0,\varepsilon^{\lambda_{2}-1})}f(z,\nabla v(x))\,dx\]
for every \(v\in W^{1,d}_{0}(B(0,\varepsilon^{\lambda_{2}-1}))\) such that \(v=1\) on \(B(0,1)\). By the application of Lemma 1.1, which is possible due to the fact that \(\lambda_{2}<1\), we obtain
\[\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{2}})):v\in W^ {1,d}_{0}(B(z_{\varepsilon},\varepsilon^{\lambda_{2}})),v=1\mbox{ on }B(z_{ \varepsilon},\varepsilon)\}|1-c|^{d}\]
\[\geq\frac{\Phi(z)+o_{\varepsilon}(1)}{(1-\lambda_{2})^{d-1}|\log\varepsilon|^ {d-1}}|1-c|^{d}, \tag{12}\]
where we get rid of the term in \(\nu\), since \(\nu>0\) is arbitrary and \(1-\nu/\alpha\) may thus be taken arbitrarily close to \(1\).
In order to deal with the minimum in (11), we apply once more property (P3), and in particular the inequality \(f(x,\xi)\geq\alpha|\xi|^{d}\). We get a lower bound in terms of the \(d\)-capacity of the inclusion \(B(z_{\varepsilon},\varepsilon 2^{S-N})\subseteq B(z_{\varepsilon},R_{\Omega})\), which can be computed explicitly; more precisely, we have
\[\min\{F_{\varepsilon}(v,B(z_{\varepsilon},R_{\Omega})\setminus \overline{B}(z_{\varepsilon},\varepsilon 2^{S-N})):v\in W^{1,d}(B(z_{\varepsilon},R_{ \Omega})\setminus\overline{B}(z_{\varepsilon},\varepsilon^{\lambda_{2}}2^{S-N })),\] \[v=1\mbox{ on }B(z_{\varepsilon},\varepsilon 2^{S-N}),v=0\mbox{ on }\partial B(z_{ \varepsilon},R_{\Omega})\}|c|^{d}\]
\[\geq \alpha\mbox{Cap}_{d}(B(z_{\varepsilon},\varepsilon 2^{S-N}),B(z_{ \varepsilon},R_{\Omega}))|c|^{d} \tag{13}\] \[= \frac{\alpha\sigma_{d-1}}{[\log R_{\Omega}+|\log\varepsilon|-(S-N )\log 2]^{d-1}}|c|^{d}\] \[\geq \frac{\alpha\sigma_{d-1}}{[\log R_{\Omega}+\lambda_{2}|\log \varepsilon|+(N+2)\log 2]^{d-1}}|c|^{d},\]
where the last inequality follows recalling that \(S=\lfloor\frac{(1-\lambda_{2})|\log\varepsilon|}{\log 2}\rfloor\).
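For the reader's convenience, the explicit value used in (13) is the classical \(d\)-capacity of concentric balls: for \(0<r<R\) the minimizer is the logarithmic profile \(u(x)=\log\frac{R}{|x-z_{\varepsilon}|}\big/\log\frac{R}{r}\), so that
\[\mbox{Cap}_{d}(B(z_{\varepsilon},r),B(z_{\varepsilon},R))=\frac{\sigma_{d-1}}{\bigl{(}\log\frac{R}{r}\bigr{)}^{d-1}},\]
and with \(r=\varepsilon 2^{S-N}\), \(R=R_{\Omega}\) the logarithm in the denominator equals \(\log R_{\Omega}+|\log\varepsilon|-(S-N)\log 2\).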
Gathering (12) and (13), and multiplying by \(|\log\varepsilon|^{d-1}\), we get
\[\begin{split}\left(1+\frac{C}{N-1}\right)|\log\varepsilon|^{d-1}F_{ \varepsilon}(u,\Omega)&\geq\frac{\Phi(z)+o_{\varepsilon}(1)}{(1- \lambda_{2})^{d-1}}|1-c|^{d}\\ &+\frac{\alpha\sigma_{d-1}|\log\varepsilon|^{d-1}}{[\log R_{ \Omega}+\lambda_{2}|\log\varepsilon|+(N+2)\log 2]^{d-1}}|c|^{d}\,.\end{split} \tag{14}\]
We recall that, by construction, the constant boundary value \(c\) actually depends on \(\varepsilon\), being the mean value of the function \(u\) in an annulus whose radii are \(\varepsilon\)-dependent. In order to pass to the lower limit as \(\varepsilon\to 0\), we make precise the fact that we can assume \(c(\varepsilon)\to c\in\mathbb{R}\).
An easy way to see this is to observe that we may assume that \(u\) takes values in \([0,1]\), as follows trivially from the estimate
\[F_{\varepsilon}(u,\Omega)\geq F_{\varepsilon}((u\lor 0)\wedge 1,\Omega)\text{ for every }u\in W_{0}^{1,d}(\Omega),\]
so that \(c(\varepsilon)\in[0,1]\) as well. Then we find a sequence \(\varepsilon_{k}\to 0\), and correspondingly \(c_{k}:=c(\varepsilon_{k})\), such that
\[\liminf_{k\to+\infty}\frac{\Phi(z)+o_{k}(1)}{(1-\lambda_{2})^{d-1}}(1-c_{k})^ {d}+\frac{\alpha\sigma_{d-1}|\log\varepsilon_{k}|^{d-1}}{[\log R_{\Omega}+ \lambda_{2}|\log\varepsilon_{k}|+(N+2)\log 2]^{d-1}}(c_{k})^{d}\]
is achieved as a limit; as \((c_{k})_{k}\subseteq[0,1]\), we extract a further subsequence \(c_{k_{h}}\to c\in[0,1]\), to get
\[\left(1+\frac{C}{N-1}\right)\liminf_{\varepsilon\to 0}|\log\varepsilon|^{d-1}F_{ \varepsilon}(u,\Omega)\geq\frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}|1-c|^{d}+ \frac{\alpha\sigma_{d-1}}{\lambda_{2}^{d-1}}|c|^{d}\,.\]
Finally, we pass to the limit as \(N\to+\infty\) and recall that \(u\) was arbitrary among the admissible functions for the minimization; we conclude that
\[\liminf_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}\geq \frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}|1-c|^{d}+\frac{\alpha\sigma_{d-1}}{ \lambda_{2}^{d-1}}|c|^{d}\,, \tag{15}\]
for every \(\lambda_{2}\in(0,1)\).
If \(\lambda\in(0,1)\), we introduce a further parameter \(\lambda_{1}\in(0,\lambda)\) so that
\[\frac{\delta}{\varepsilon^{\lambda_{1}}}\to 0\text{ as }\varepsilon\to 0.\]
Our construction relies on the definition of several concentric annuli. To this end, let
\[T:=\max\left\{t\in\mathbb{N}:\varepsilon^{\lambda_{1}}2^{t}\leq R_{\Omega} \right\}=\lfloor\frac{\lambda_{1}|\log\varepsilon|+\log R_{\Omega}}{\log 2}\rfloor\]
and assume in particular that \(T\) is larger than \(4\), which holds provided \(\varepsilon\) is small enough. Then pick a natural number \(M\in(2,T)\) and define annuli centered in \(z_{\varepsilon}\) having radii \(\varepsilon^{\lambda_{1}}2^{kM}\), with \(k=0,1,...,\lfloor\frac{T}{M}\rfloor+1\).
We have \(\Omega\subseteq B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(\lfloor T/M \rfloor+1)M})\); hence, for every \(u\in W^{1,d}_{0}(\Omega)\) such that \(u=1\) on \(B(z_{\varepsilon},\varepsilon)\), it holds that
\[F_{\varepsilon}(u,\Omega) =F_{\varepsilon}(u,B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{ (\lfloor T/M\rfloor+1)M}))\] \[=F_{\varepsilon}(u,B(z_{\varepsilon},\varepsilon^{\lambda_{2}})) +F_{\varepsilon}(u,B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\setminus B(z_ {\varepsilon},\varepsilon^{\lambda_{2}}))\] \[+\sum_{k=1}^{\lfloor T/M\rfloor+1}F_{\varepsilon}(u,B(z_{ \varepsilon},\varepsilon^{\lambda_{1}}2^{kM})\setminus B(z_{\varepsilon}, \varepsilon^{\lambda_{1}}2^{(k-1)M})).\]
In the last equality, we carefully separated three summands in order to treat each of them in accordance with the different exponential scales described by the parameters \(\lambda_{1},\lambda_{2}\).
Apply Lemma 2.2 to the first summand with
\[f(x,\xi)=f\left(\frac{x}{\delta},\xi\right),\,\eta=\varepsilon,\,R= \varepsilon^{\lambda_{2}},\,N\in\mathbb{N}\cap\left(1,\left\lfloor\frac{(1- \lambda_{2})|\log\varepsilon|}{\log 2}\right\rfloor\right)\text{ and }r=\varepsilon.\]
Apply Lemma 2.2 to the second summand with
\[f(x,\xi)=f\left(\frac{x}{\delta},\xi\right),\,\eta=\varepsilon^{\lambda_{2}}, \,R=\varepsilon^{\lambda_{1}},\,N\in\mathbb{N}\cap\left(1,\left\lfloor\frac{( \lambda_{2}-\lambda_{1})|\log\varepsilon|}{\log 2}\right\rfloor\right)\text{ and }r= \varepsilon^{\lambda_{2}}.\]
Apply Lemma 2.2 to the terms of the third summand for \(k=1,...,\lfloor T/M\rfloor\) with
\[f(x,\xi)=f\left(\frac{x}{\delta},\xi\right),\,\eta=\varepsilon^{\lambda_{1}},\,R=\varepsilon^{\lambda_{1}}2^{kM},\,N\in\mathbb{N}\cap(1,kM)\text{ and }r=\varepsilon^{\lambda_{1}}2^{(k-1)M}.\]
Set for simplicity of notation
\[S^{\prime}:=\left\lfloor\frac{(1-\lambda_{2})|\log\varepsilon|}{\log 2} \right\rfloor\qquad\text{and}\qquad S^{\prime\prime}:=\left\lfloor\frac{( \lambda_{2}-\lambda_{1})|\log\varepsilon|}{\log 2}\right\rfloor.\]
Since \(S^{\prime},S^{\prime\prime}\) and \(M\) will get arbitrarily large, we may assume we fix the same \(N\) in each of the above applications of the lemma.
We obtain functions \(v^{-1}\in W^{1,d}(B(z_{\varepsilon},\varepsilon^{\lambda_{2}}))\), \(v^{0}\in W^{1,d}(B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\setminus \overline{B}(z_{\varepsilon},\varepsilon^{\lambda_{2}}))\) and \(v^{k}\in W^{1,d}(B(z,\varepsilon^{\lambda_{1}}2^{kM})\setminus\overline{B}(z, \varepsilon^{\lambda_{1}}2^{(k-1)M}))\), \(k=1,...,\lfloor T/M\rfloor\) with the properties stated in Lemma 2.2. We put
\[v:=\begin{cases}v^{-1}&\text{ on }B(z_{\varepsilon},\varepsilon^{\lambda_{2}}) \\ v^{0}&\text{ on }B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\setminus B(z_{ \varepsilon},\varepsilon^{\lambda_{2}})\\ v^{k}&\text{ on }B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{kM})\setminus B(z_{ \varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M}),\,k=1,...,\lfloor T/M\rfloor \\ u&\text{ otherwise,}\end{cases}\]
and note that \(v\in W^{1,d}_{0}(B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(\lfloor T/M \rfloor+1)M}))\) since the modifications provided by the lemma occur far from the boundary of each annulus; moreover it holds
\[\bigg{(}1+\frac{C}{N-1}\bigg{)}F_{\varepsilon}(u,B(z_{\varepsilon},\varepsilon^{ \lambda_{1}}2^{(\lfloor T/M\rfloor+1)M}))\geq F_{\varepsilon}(v,B(z_{ \varepsilon},\varepsilon^{\lambda_{1}}2^{(\lfloor T/M\rfloor+1)M})).\]
In order to highlight that \(v\) is constant, with value \(c_{k}\), on spheres centered in \(z_{\varepsilon}\) with radii of the form \(\varepsilon 2^{S^{\prime}-j_{-1}}\), \(\varepsilon^{\lambda_{2}}2^{S^{\prime\prime}-j_{0}}\) and \(\varepsilon^{\lambda_{1}}2^{kM-j_{k}}\), where \(j_{k}\in\{1,...,N-1\}\) for \(k=-1,0,1,...,\lfloor T/M\rfloor\), we write
\[F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^ {(\lfloor T/M\rfloor+1)M})) =F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-j_{-1}}))\] \[+F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{2}}2^ {S^{\prime\prime}-j_{0}})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-j_{-1}}))\] \[+F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^ {M-j_{1}})\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{2}}2^{S^{\prime \prime}-j_{0}}))\] \[+\sum_{k=2}^{\lfloor T/M\rfloor}F_{\varepsilon}(v,B(z_{\varepsilon },\varepsilon^{\lambda_{1}}2^{kM-j_{k}})\setminus B(z_{\varepsilon},\varepsilon ^{\lambda_{1}}2^{(k-1)M-j_{k-1}}))\] \[+F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^ {(\lfloor T/M\rfloor+1)M})\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2 ^{\lfloor T/M\rfloor M-j_{\lfloor T/M\rfloor}})). \tag{16}\]
Then we define functions \(w^{k},\;k=-1,0,1,...\lfloor T/M\rfloor+1\) as follows: \(w^{-1}\in W^{1,d}(B(z_{\varepsilon},\varepsilon^{\lambda_{2}}))\) is defined as
\[w^{-1}:=\begin{cases}v&\text{ on }B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-j_{-1 }})\\ c_{-1}&\text{ otherwise,}\end{cases}\]
so that
\[F_{\varepsilon}(w^{-1},B(z_{\varepsilon},\varepsilon^{\lambda_{2}}))=F_{ \varepsilon}(v,B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-j_{-1}})).\]
Similarly, set
\[w^{0}:=\begin{cases}c_{-1}&\text{ on }B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-j_{-1}})\setminus\overline{B}(z_{\varepsilon},\varepsilon 2^{S^{\prime}-N})\\ v&\text{ on }B(z_{\varepsilon},\varepsilon^{\lambda_{2}}2^{S^{\prime\prime}-j_{0}})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-j_{-1}})\\ c_{0}&\text{ on }B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{2}}2^{S^{\prime\prime}-j_{0}}),\end{cases}\]
so that
\[F_{\varepsilon}(w^{0},B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-N}))=F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{2}}2^{S^{\prime\prime}-j_{0}})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-j_{-1}}))\]
and
\[w^{1}:=\begin{cases}c_{0}&\text{ on }B(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-j_{0}})\setminus\overline{B}(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-N})\\ v&\text{ on }B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M-j_{1}})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-j_{0}})\\ c_{1}&\text{ on }B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M})\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M-j_{1}}),\end{cases}\]
so that
\[F_{\varepsilon}(w^{1},B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-N}))=F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M-j_{1}})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-j_{0}})).\]
For \(k=2,...,\lfloor T/M\rfloor+1\), we define annuli
\[A^{N}_{M,k}:=B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{kM})\setminus \overline{B}(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M-N}).\]
For \(k=2,...,\lfloor T/M\rfloor\), we define functions \(w^{k}\in W^{1,d}(A^{N}_{M,k})\) as
\[w^{k}:=\begin{cases}c_{k-1}&\text{on }B(z_{\varepsilon},\varepsilon^{ \lambda_{1}}2^{(k-1)M-j_{k-1}})\setminus\overline{B}(z_{\varepsilon}, \varepsilon^{\lambda_{1}}2^{(k-1)M-N})\\ v&\text{on }B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{kM-j_{k}})\setminus B(z, \varepsilon^{\lambda_{1}}2^{(k-1)M-j_{k-1}})\\ c_{k}&\text{on }B(z,\varepsilon^{\lambda_{1}}2^{kM})\setminus B(z,\varepsilon^{ \lambda_{1}}2^{kM-j_{k}}),\end{cases}\]
and for \(k=\lfloor T/M\rfloor+1\),
\[w^{\lfloor T/M\rfloor+1}:=\begin{cases}c_{\lfloor T/M\rfloor}&\text{on }B(z_{ \varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M-j_{k-1}})\setminus\overline{B }(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M-N})\\ v&\text{otherwise},\end{cases}\]
so that
\[F_{\varepsilon}(w^{k},A^{N}_{M,k})=F_{\varepsilon}(v,B(z,\varepsilon^{\lambda _{1}}2^{kM-j_{k}})\setminus B(z,\varepsilon^{\lambda_{1}}2^{(k-1)M-j_{k-1}}))\]
for all \(k=2,...,\lfloor T/M\rfloor+1\).
If we set \(A^{N}_{M,-1}:=B(z_{\varepsilon},\varepsilon^{\lambda_{2}})\), \(A^{N}_{M,0}:=B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\setminus\overline{B }(z_{\varepsilon},\varepsilon 2^{S^{\prime}-N})\) and \(A^{N}_{M,1}:=B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M})\setminus \overline{B}(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-N})\), then we can rewrite (16) simply as
\[F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(\lfloor T/M \rfloor+1)M}))=\sum_{k=-1}^{\lfloor T/M\rfloor+1}F_{\varepsilon}(w^{k},A^{N}_{ M,k}).\]
Once more, we take advantage of the fact that the functions \(w^{-1},...,w^{\lfloor T/M\rfloor+1}\) attain constant values on the boundary components of their annuli of definition. Also, exploiting (P2) and suitable affine transformations (as in the case \(\lambda=0\)), we get
\[\bigg{(}1+\frac{C}{N-1}\bigg{)}F_{\varepsilon}(u,\Omega)\geq\]
\[\geq\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{2}})):v \in W^{1,d}_{0}(B(z_{\varepsilon},\varepsilon^{\lambda_{2}})),v=1\text{ on }B(z_{\varepsilon},\varepsilon)\}|1-c_{-1}|^{d} \tag{17}\] \[+\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{ 1}})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-N})):v\in W^{1,d}(B(z_{ \varepsilon},\varepsilon^{\lambda_{1}})\setminus\overline{B}(z_{\varepsilon}, \varepsilon 2^{S^{\prime}-N})),\] \[v=1\text{ on }\partial B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-N}),v =0\text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\}|c_{-1}-c_{0}|^{d}\] (18) \[+\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{ 1}}2^{M})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-N})):v\in W^{1,d}(B(z_{ \varepsilon},\varepsilon^{\lambda_{1}}2^{M})\setminus\overline{B}(z_{ \varepsilon},\varepsilon 2^{S^{\prime\prime}-N})),\] \[v=1\text{ on }\partial B(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-N}),v =0\text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M})\}|c_{0}-c_{1}|^{d}\] (19) \[+\sum_{k=2}^{\lfloor T/M\rfloor+1}\min\{F_{\varepsilon}(v,A^{N}_{ M,k}):v\in W^{1,d}(A^{N}_{M,k}),\,v=1\text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M-N}),\] \[v=0\text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2 ^{kM})\}|c_{k-1}-c_{k}|^{d}\,, \tag{20}\]
where we put \(c_{\lfloor\frac{T}{M}\rfloor+1}:=0\).
Since \(\lambda_{2}>\lambda\), the minimum in (17) is estimated as for (12) in the case \(\lambda=0\), thus it is greater than or equal to
\[\frac{\Phi(z)+o_{\varepsilon}(1)}{(1-\lambda_{2})^{d-1}|\log \varepsilon|^{d-1}}|1-c_{-1}|^{d}. \tag{21}\]
The bounds for (18) and (19) follow again from the growth condition from below in (P3); in particular, recalling how we defined \(S^{\prime}\) and \(S^{\prime\prime}\), we have
\[\alpha\text{Cap}_{d}(B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-N}),B(z_ {\varepsilon},\varepsilon^{\lambda_{1}})) \geq\frac{\alpha\sigma_{d-1}}{[(1-\lambda_{1})|\log\varepsilon|-( S^{\prime}-N)\log 2]^{d-1}} \tag{22}\] \[\geq\frac{\alpha\sigma_{d-1}}{[(\lambda_{2}-\lambda_{1})|\log \varepsilon|+(N+1)\log 2]^{d-1}}\]
while
\[\alpha\text{Cap}_{d}(B(z_{\varepsilon},\varepsilon 2^{S^{\prime \prime}-N}),B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M})) \geq\frac{\alpha\sigma_{d-1}}{[M\log 2+(1-\lambda_{1})|\log \varepsilon|-(S^{\prime\prime}-N)\log 2]^{d-1}}\] \[\geq\frac{\alpha\sigma_{d-1}}{[M\log 2+(1-\lambda_{2})|\log \varepsilon|+(N+1)\log 2]^{d-1}}\,. \tag{23}\]
Concerning the summands in (20), we proceed fixing \(k=2,...,\lfloor T/M\rfloor+1\) and applying \(v(x)\mapsto v(z_{\varepsilon}+x\varepsilon^{\lambda_{1}}2^{(k-1)M-N})\), so that each term equals
\[\min\Bigl{\{}\int_{B(0,2^{M+N})\setminus B(0,1)} f\Bigl{(}\frac{x}{\delta}\varepsilon^{\lambda_{1}}2^{(k-1)M-N}+\frac{z_{ \varepsilon}}{\delta},\nabla v(x)\Bigr{)}\,dx:\] \[v\in W^{1,d}_{0}(B(0,2^{M+N})),v=1\text{ on }B(0,1)\Bigr{\}}|c_{k-1}-c_{k}|^{d}\,.\]
By \(\lambda_{1}<\lambda\) it follows that
\[\frac{\delta}{\varepsilon^{\lambda_{1}}2^{(k-1)M-N}}\to 0\text{ as } \varepsilon\to 0;\]
hence, we can apply Theorem 1.2 with
\[A=B(0,2^{M+N})\setminus\overline{B}(0,1)\,,\qquad\eta=\frac{\delta}{\varepsilon^{ \lambda_{1}}2^{(k-1)M-N}}\,,\qquad\tau_{\eta}=\frac{z_{\varepsilon}}{\delta}\]
and \(\phi\) any function in \(W^{1,d}(B(0,2^{M+N})\setminus\overline{B}(0,1))\) such that \(\phi=1\) on \(\partial B(0,1)\) and \(\phi=0\) on \(\partial B(0,2^{M+N})\). We get that each of the above minima equals
\[\biggl{[}\min\Bigl{\{}\int_{B(0,2^{M+N})\setminus B(0,1)}f_{ \mathrm{hom}}(\nabla v(x))\,dx:\,v\in W^{1,d}(B(0,2^{M+N})),\\ v=1\text{ on }B(0,1),v=0\text{ on }\partial B(0,2^{M+N})\Bigr{\}}+o_{ \varepsilon}(1)\biggr{]}|c_{k-1}-c_{k}|^{d}, \tag{24}\]
where \(f_{\mathrm{hom}}\) is the \(d\)-homogeneous function given by (4), which does not depend on \(k\).
Recalling the definition of the constant \(C_{\mathrm{hom}}\) given in (3), (24) turns into
\[\biggl{[}\frac{C_{\mathrm{hom}}+o_{M}(1)}{((M+N)\log 2)^{d-1}}+o_{\varepsilon }(1)\biggr{]}|c_{k-1}-c_{k}|^{d}.\]
We sum over \(k\) and use the convexity of \(x\mapsto|x|^{d}\), the fact that \(\sum_{k=2}^{\lfloor T/M\rfloor+1}(c_{k-1}-c_{k})=c_{1}\) and that \(T\leq\frac{\lambda_{1}|\log\varepsilon|+\log R_{\Omega}}{\log 2}\); we obtain
\[\sum_{k=2}^{\lfloor T/M\rfloor+1}|c_{k-1}-c_{k}|^{d}\geq\frac{(M\log 2)^{d-1}}{( \lambda_{1}|\log\varepsilon|+\log R_{\Omega}+M\log 2)^{d-1}}|c_{1}|^{d},\]
and in turn
\[\sum_{k=2}^{\lfloor T/M\rfloor+1} \biggl{[}\frac{C_{\mathrm{hom}}+o_{M}(1)}{((M+N)\log 2)^{d-1}}+o_{ \varepsilon}(1)\biggr{]}|c_{k-1}-c_{k}|^{d}\] \[\geq\biggl{[}\frac{C_{\mathrm{hom}}+o_{M}(1)}{((M+N)\log 2)^{d-1} }+o_{\varepsilon}(1)\biggr{]}\frac{(M\log 2)^{d-1}}{(\lambda_{1}|\log \varepsilon|+\log R_{\Omega}+M\log 2)^{d-1}}|c_{1}|^{d}\,. \tag{25}\]
Gathering (21), (22), (23) and (25), and multiplying by \(|\log\varepsilon|^{d-1}\), we get
\[\biggl{(}1+\frac{C}{N-1}\biggr{)} |\log\varepsilon|^{d-1}F_{\varepsilon}(u,\Omega)\geq\frac{\Phi(z) +o_{\varepsilon}(1)}{(1-\lambda_{2})^{d-1}}|1-c_{-1}|^{d}\] \[\qquad+\frac{\alpha\sigma_{d-1}|\log\varepsilon|^{d-1}}{[(\lambda _{2}-\lambda_{1})|\log\varepsilon|+(N+1)\log 2]^{d-1}}|c_{-1}-c_{0}|^{d}\] \[\qquad+\frac{\alpha\sigma_{d-1}|\log\varepsilon|^{d-1}}{[M\log 2 +(1-\lambda_{2})|\log\varepsilon|+(N+1)\log 2]^{d-1}}|c_{0}-c_{1}|^{d}\] \[+\biggl{[}\frac{C_{hom}+o_{M}(1)}{((M+N)\log 2)^{d-1}}+o_{ \varepsilon}(1)\biggr{]}\frac{|\log\varepsilon|^{d-1}(M\log 2)^{d-1}}{(\lambda_{1}| \log\varepsilon|+\log R_{\Omega}+M\log 2)^{d-1}}|c_{1}|^{d}\,.\]
Arguing as before, we stress that \(c_{-1},c_{0}\) and \(c_{1}\) depend on \(\varepsilon\) and can be picked inside the interval \([0,1]\). This leads us to assume that they all converge to some finite limits, say \(c_{-1},c_{0},c_{1}\), respectively. Moreover, such limits have to coincide; if not, we would get a contradiction letting \(\lambda_{1},\lambda_{2}\to\lambda\) or \(\lambda_{2}\to 1\).
Eventually, the following estimate holds true:
\[\left(1+\frac{C}{N-1}\right)\liminf_{\varepsilon\to 0}|\log \varepsilon|^{d-1}F_{\varepsilon}(u,\Omega) \geq\frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}|1-c|^{d}\] \[+\left[\frac{C_{\text{hom}}+o_{M}(1)}{((M+N)\log 2)^{d-1}} \right]\frac{(M\log 2)^{d-1}}{\lambda_{1}^{d-1}}|c|^{d}\]
and letting \(M\to+\infty,N\to+\infty\), by the arbitrariness of \(u\) we achieve
\[\liminf_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}\geq \frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}|1-c|^{d}+\frac{C_{\text{hom}}}{\lambda_{ 1}^{d-1}}|c|^{d}\,. \tag{26}\]
If \(\lambda=1\), keeping the notation introduced throughout the proof, define annuli centered in \(z_{\varepsilon}\) having radii \(\varepsilon^{\lambda_{1}}2^{kM}\), with \(k=1,...,\lfloor\frac{T}{M}\rfloor+1\).
For every function \(u\in W^{1,d}_{0}(\Omega)\) with \(u=1\) on \(B(z_{\varepsilon},\varepsilon)\), we have
\[F_{\varepsilon}(u,\Omega) =F_{\varepsilon}(u,B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{ (\lfloor T/M\rfloor+1)M})) \tag{27}\] \[=F_{\varepsilon}(u,B(z_{\varepsilon},\varepsilon^{\lambda_{1}}))\] \[+\sum_{k=1}^{\lfloor T/M\rfloor+1}F_{\varepsilon}(u,B(z_{ \varepsilon},\varepsilon^{\lambda_{1}}2^{kM})\setminus B(z_{\varepsilon}, \varepsilon^{\lambda_{1}}2^{(k-1)M})).\]
Apply Lemma 2.2 to the terms of the second summand for \(k=1,...,\lfloor T/M\rfloor\) with
\[f(x,\xi)=f\left(\frac{x}{\delta},\xi\right),\,\eta=\varepsilon^{\lambda_{1}},\,R=\varepsilon^{\lambda_{1}}2^{kM},\,N\in\mathbb{N}\cap(1,kM)\text{ and }r=\varepsilon^{\lambda_{1}}2^{(k-1)M}.\]
Arguing as in the previous instances, with \(\lambda\in[0,1)\), we get
\[\left(1+\frac{C}{N-1}\right)F_{\varepsilon}(u,\Omega)\geq\]
\[\geq\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{1}})):v \in W^{1,d}(B(z_{\varepsilon},\varepsilon^{\lambda_{1}})),v=1\text{ on }B(z_{ \varepsilon},\varepsilon),\]
\[v=0\text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}}) \}|1-c_{0}|^{d} \tag{28}\] \[+\sum_{k=1}^{\lfloor T/M\rfloor+1}\min\{F_{\varepsilon}(v,A_{M,k} ^{N}):v\in W^{1,d}(A_{M,k}^{N}),v=1\text{ on }\partial B(z_{\varepsilon}, \varepsilon^{\lambda_{1}}2^{(k-1)M-N}),\]
\[v=0\text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{kM}) \}|c_{k-1}-c_{k}|^{d}\,, \tag{29}\]
where \(c_{\lfloor\frac{T}{M}\rfloor+1}:=0\).
Making use of (P3), (28) is bounded from below by
\[\alpha\text{Cap}_{d}(B(z_{\varepsilon},\varepsilon),B(z_{\varepsilon}, \varepsilon^{\lambda_{1}}))|1-c_{0}|^{d}=\frac{\alpha\sigma_{d-1}}{[(1- \lambda_{1})|\log\varepsilon|]^{d-1}}|1-c_{0}|^{d}\,,\]
while (29) can be estimated as in (25) since \(\delta/\varepsilon^{\lambda_{1}}\to 0\).
At the end, we get the inequality
\[\bigg{(}1+\frac{C}{N-1}\bigg{)}|\log\varepsilon|^{d-1}F_{\varepsilon }(u,\Omega)\geq\frac{\alpha\sigma_{d-1}}{(1-\lambda_{1})^{d-1}}|1-c_{0}|^{d}\\ +\bigg{[}\frac{C_{\text{hom}}+o_{M}(1)}{((M+N)\log 2)^{d-1}}+o_{ \varepsilon}(1)\bigg{]}\frac{|\log\varepsilon|^{d-1}(M\log 2)^{d-1}}{(\lambda_{1} |\log\varepsilon|+\log R_{\Omega}+M\log 2)^{d-1}}|c_{0}|^{d}\,.\]
Recall that we may assume that \(c_{0}=c_{0}(\varepsilon)\) converges to a finite value \(c\); hence we let \(\varepsilon\to 0\), \(M\to+\infty\) and \(N\to+\infty\), and then take advantage of the arbitrariness of \(u\), to obtain
\[\liminf_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}\geq \frac{\alpha\sigma_{d-1}}{(1-\lambda_{1})^{d-1}}|1-c|^{d}+\frac{C_{\text{hom} }}{\lambda_{1}^{d-1}}|c|^{d}\,. \tag{30}\]
Once we gather (15), (26), (30), we have
\[\liminf_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta} \geq\begin{cases}\frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}|1-c|^{d}+\frac{\alpha \sigma_{d-1}}{\lambda_{2}^{d-1}}|c|^{d}&\text{ if }\lambda=0,\\ \frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}|1-c|^{d}+\frac{C_{\text{hom}}}{\lambda_ {1}^{d-1}}|c|^{d}&\text{ if }\lambda\in(0,1),\\ \frac{\alpha\sigma_{d-1}}{(1-\lambda_{1})^{d-1}}|1-c|^{d}+\frac{C_{\text{hom} }}{\lambda_{1}^{d-1}}|c|^{d}&\text{ if }\lambda=1\end{cases}\]
for every \(\lambda_{1}\in(0,\lambda)\) and \(\lambda_{2}\in(\lambda,1)\).
These expressions can be estimated by the same argument concerning the minimization of the function \(a|1-x|^{d}+b|x|^{d}\) with \(a,b>0\). Indeed, the minimum is attained at
\[x=\bigg{[}\Big{(}\frac{b}{a}\Big{)}^{\frac{1}{d-1}}+1\bigg{]}^{-1}\]
with minimum value
\[b\bigg{[}\Big{(}\frac{b}{a}\Big{)}^{\frac{1}{d-1}}+1\bigg{]}^{1-d}.\]
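For completeness, both values follow from the first-order condition: differentiating \(h(x):=a|1-x|^{d}+b|x|^{d}\) on \([0,1]\) gives
\[h^{\prime}(x)=-d\,a(1-x)^{d-1}+d\,b\,x^{d-1}=0\quad\Longleftrightarrow\quad\Big{(}\frac{x}{1-x}\Big{)}^{d-1}=\frac{a}{b}\quad\Longleftrightarrow\quad x=\bigg{[}\Big{(}\frac{b}{a}\Big{)}^{\frac{1}{d-1}}+1\bigg{]}^{-1},\]
and substituting this value of \(x\) back into \(h\) yields the minimum value stated above.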
In (15), we set
\[a=\frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}\qquad\text{and}\qquad b=\frac{\alpha \sigma_{d-1}}{\lambda_{2}^{d-1}}\,,\]
to achieve
\[\liminf_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{ \varepsilon,\delta} \geq\frac{\alpha\sigma_{d-1}}{\lambda_{2}^{d-1}}\left[\left(\frac{ \alpha\sigma_{d-1}/\lambda_{2}^{d-1}}{\Phi(z)/(1-\lambda_{2})^{d-1}}\right)^{ \frac{1}{d-1}}+1\right]^{1-d}\] \[=\Phi(z)\alpha\sigma_{d-1}\left[(1-\lambda_{2})(\alpha\sigma_{d-1 })^{\frac{1}{d-1}}+\lambda_{2}\Phi(z)^{\frac{1}{d-1}}\right]^{1-d}.\]
We conclude passing to the limit as \(\lambda_{2}\to 0\).
In (26), put
\[a=\frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}\qquad\text{and}\qquad b=\frac{C_{\text {hom}}}{\lambda_{1}^{d-1}}\,, \tag{31}\]
and let \(\lambda_{1},\lambda_{2}\to\lambda\) getting
\[\liminf_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{ \varepsilon,\delta} \geq\frac{C_{\text{hom}}}{\lambda^{d-1}}\left[\left(\frac{C_{ \text{hom}}/\lambda^{d-1}}{\Phi(z)/(1-\lambda)^{d-1}}\right)^{\frac{1}{d-1}}+1 \right]^{1-d}\] \[=\Phi(z)C_{\text{hom}}\Big{[}(1-\lambda)C_{\text{hom}}^{\frac{1} {d-1}}+\lambda\Phi(z)^{\frac{1}{d-1}}\Big{]}^{1-d}.\]
Finally, in (30) let
\[a=\frac{\alpha\sigma_{d-1}}{(1-\lambda_{1})^{d-1}}\qquad\text{and}\qquad b= \frac{C_{\text{hom}}}{\lambda_{1}^{d-1}}\,,\]
to have
\[\liminf_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta} \geq\frac{C_{\text{hom}}}{\lambda_{1}^{d-1}}\left[\left(\frac{C_{\text{hom}}/\lambda_{1}^{d-1}}{\alpha\sigma_{d-1}/(1-\lambda_{1})^{d-1}}\right)^{\frac{1}{d-1}}+1\right]^{1-d}\] \[=\alpha\sigma_{d-1}C_{\text{hom}}\Big{[}(1-\lambda_{1})C_{\text{hom}}^{\frac{1}{d-1}}+\lambda_{1}(\alpha\sigma_{d-1})^{\frac{1}{d-1}}\Big{]}^{1-d}.\]
Then, conclude letting \(\lambda_{1}\to 1\).
### Construction of optimal sequences
To finish the proof, we define minimizing sequences providing the bound from above, using suitable capacitary profiles.
If \(\lambda=0\), take \(\lambda_{2}\in(\lambda,1)\) and let \(v_{\varepsilon}^{0}\) be a solution of the minimum problem
\[\min\Bigl{\{}\int_{B(0,\varepsilon^{\lambda_{2}-1})}f(z,\nabla u(x))\,dx:u\in W _{0}^{1,d}(B(0,\varepsilon^{\lambda_{2}-1})),u=1\text{ on }B(0,1)\Bigr{\}}\,.\]
For \(\varepsilon\ll 1\), the function \(u^{0}_{\varepsilon}(x):=v^{0}_{\varepsilon}\left(\frac{x-z_{\varepsilon}}{ \varepsilon}\right)\) belongs to \(W^{1,d}_{0}(\Omega)\) and it is admissible for the minimum problem defining (6), thus \(\mu_{\varepsilon,\delta}\leq F_{\varepsilon}(u^{0}_{\varepsilon},\Omega)\) and by change of variables and homogeneity of \(f(x,\cdot)\), it holds
\[F_{\varepsilon}(u^{0}_{\varepsilon},\Omega) =F_{\varepsilon}(u^{0}_{\varepsilon},B(z_{\varepsilon},\varepsilon ^{\lambda_{2}}))=\int_{B(z_{\varepsilon},\varepsilon^{\lambda_{2}})}f\Big{(} \frac{x}{\delta},\nabla u^{0}_{\varepsilon}(x)\Big{)}\,dx\] \[=\int_{B(0,\varepsilon^{\lambda_{2}-1})}f\Big{(}\frac{x}{\delta} \varepsilon+\frac{z_{\varepsilon}}{\delta},\nabla v^{0}_{\varepsilon}(x) \Big{)}\,dx=\int_{B(0,\varepsilon^{\lambda_{2}-1})}f\Big{(}\frac{x}{\delta} \varepsilon+z,\nabla v^{0}_{\varepsilon}(x)\Big{)}\,dx.\]
Note that \(\varepsilon^{\lambda_{2}}/\delta\to 0\), hence, for every \(x\in B(0,\varepsilon^{\lambda_{2}-1})\), it holds \(|x|\frac{\varepsilon}{\delta}\to 0\) as \(\varepsilon\to 0\). In light of this, given any \(\nu>0\), by (7) we deduce that
\[f(x,\xi)\leq f(z,\xi)+\nu|\xi|^{d}\text{ for every }\xi\in\mathbb{R}^{d}\text{ and for every }x\in B(0,\varepsilon^{\lambda_{2}-1}),\]
as \(\varepsilon\) is sufficiently small. As a consequence
\[\int_{B(0,\varepsilon^{\lambda_{2}-1})}f\Big{(}\frac{x}{\delta}\varepsilon+z,\nabla v^{0}_{\varepsilon}(x)\Big{)}\,dx\leq\int_{B(0,\varepsilon^{\lambda_{2 }-1})}f(z,\nabla v^{0}_{\varepsilon}(x))\,dx+\nu\int_{B(0,\varepsilon^{\lambda _{2}-1})}|\nabla v^{0}_{\varepsilon}(x)|^{d}\,dx\]
which, by the growth condition, is bounded above by
\[\Big{(}1+\frac{\nu}{\alpha}\Big{)}\int_{B(0,\varepsilon^{\lambda_{2}-1})}f(z, \nabla v^{0}_{\varepsilon}(x))\,dx\,.\]
In light of the fact that \(\varepsilon^{\lambda_{2}-1}\to\infty\) as \(\varepsilon\to 0\), we apply Lemma 1.1 to deduce
\[F_{\varepsilon}(u^{0}_{\varepsilon},\Omega)=F_{\varepsilon}(u^{0}_{\varepsilon },B(z_{\varepsilon},\varepsilon^{\lambda_{2}}))\leq\Big{(}1+\frac{\nu}{ \alpha}\Big{)}\,\frac{\Phi(z)+o_{\varepsilon}(1)}{(1-\lambda_{2})^{d-1}|\log \varepsilon|^{d-1}}\,. \tag{32}\]
Thus, we conclude by the arbitrariness of \(\nu>0\) and \(\lambda_{2}\in(0,1)\), that
\[\limsup_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}\leq \inf_{\lambda_{2}\in(0,1)}\frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}=\Phi(z).\]
If \(\lambda\in(0,1)\), introduce a further parameter \(\lambda_{1}\in(0,\lambda)\), put
\[T:=\max\{t\in\mathbb{N}:\varepsilon^{\lambda_{1}}2^{t}\leq\operatorname{dist}( z_{\varepsilon},\partial\Omega)\}=\left\lfloor\frac{\lambda_{1}|\log \varepsilon|+\log\operatorname{dist}(z_{\varepsilon},\partial\Omega)}{\log 2}\right\rfloor\]
and take \(M\in\mathbb{N}\cap(0,T)\). Since the family of points \(\{z_{\varepsilon},\,\varepsilon>0\}\) is contained in a ball, say \(B\), whose closure lies inside \(\Omega\), we have that \(\operatorname{dist}(z_{\varepsilon},\partial\Omega)\geq\operatorname{dist}(\partial B,\partial\Omega)>0\) so that \(T\) is well defined and can be assumed to be greater than \(2\) for every \(\varepsilon\).
Let \(v_{\eta}\) be a solution of the minimum problem
\[m_{\eta}:=\min\Bigl{\{}\int_{B(0,2^{M})}f\Bigl{(}\frac{x}{\eta}+\tau_{\eta},\nabla u (x)\Bigr{)}\,dx:u\in W^{1,d}_{0}(B(0,2^{M})),u=1\text{ on }B(0,1)\Bigr{\}}\]
and set
\[m_{0}:=\min\Bigl{\{}\int_{B(0,2^{M})}f_{\text{hom}}(\nabla u(x))\,dx:u\in W^{1, d}_{0}(B(0,2^{M})),u=1\text{ on }B(0,1)\Bigr{\}}.\]
By Theorem 1.2, there exists an increasing, non-negative function \(\omega\) such that
\[|m_{\eta}-m_{0}|\leq\omega(\eta)\ \text{ and }\ \omega(\eta)\to 0\text{ as }\eta\to 0;\]
thus, for \(k=1,...,\lfloor T/M\rfloor\), define \(u^{k}_{\varepsilon}\in W^{1,d}(B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^ {kM})\setminus\overline{B}(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M }))\) as
\[u^{k}_{\varepsilon}(x):=\frac{c}{\lfloor T/M\rfloor}v_{\eta}\left(\frac{x-z_{ \varepsilon}}{\varepsilon^{\lambda_{1}}2^{(k-1)M}}\right)+\frac{\lfloor T/M \rfloor-k}{\lfloor T/M\rfloor}c\]
for some constant \(c\) to be properly selected.
If we set \(\eta=\frac{\delta}{\varepsilon^{\lambda_{1}}2^{(k-1)M}},\tau_{\eta}=\frac{z_{ \varepsilon}}{\delta}\) and we apply a change of variables and the homogeneity of \(f(x,\cdot)\), it holds
\[\begin{split} F_{\varepsilon}(u^{k}_{\varepsilon},B(z_{\varepsilon},&\varepsilon^{\lambda_{1}}2^{kM})\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M}))\\ &=\int_{B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{kM})\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M})}f\left(\frac{x}{\delta},\nabla u^{k}_{\varepsilon}(x)\right)\,dx\\ &=\left|\frac{c}{\lfloor T/M\rfloor}\right|^{d}\int_{B(0,2^{M})}f\left(\frac{x}{\delta}\varepsilon^{\lambda_{1}}2^{(k-1)M}+\frac{z_{\varepsilon}}{\delta},\nabla v_{\eta}(x)\right)\,dx\\ &=\left|\frac{c}{\lfloor T/M\rfloor}\right|^{d}m_{\eta}\\ &\leq\left|\frac{c}{\lfloor T/M\rfloor}\right|^{d}(m_{0}+\omega(\eta))\\ &\leq\left|\frac{c}{\lfloor T/M\rfloor}\right|^{d}\left(m_{0}+\omega\left(\frac{\delta}{\varepsilon^{\lambda_{1}}}\right)\right).\end{split} \tag{33}\]
Then, considering the same \(u^{0}_{\varepsilon}\) introduced in the case \(\lambda=0\), set
\[u_{\varepsilon}(x):=\begin{cases}(1-c)u^{0}_{\varepsilon}(x)+c&\text{ if }x\in B(z_{\varepsilon},\varepsilon^{\lambda_{2}})\\ c&\text{ if }x\in B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\setminus B(z_{ \varepsilon},\varepsilon^{\lambda_{2}})\\ u^{k}_{\varepsilon}(x)&\text{ if }x\in B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{kM}) \setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M})\,,k=1,..., \lfloor T/M\rfloor\\ 0&\text{ if }x\in\Omega\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{ \lfloor T/M\rfloor M}).\end{cases}\]
Since the boundary conditions match, \(u_{\varepsilon}\in W^{1,d}_{0}(\Omega)\) and \(u_{\varepsilon}=1\) on \(B(z_{\varepsilon},\varepsilon)\); therefore, it is an admissible function for the minimum problem.
We estimate \(F_{\varepsilon}(u_{\varepsilon},B(z_{\varepsilon},\varepsilon^{\lambda_{1}}))\) and \(F_{\varepsilon}(u_{\varepsilon},\Omega\setminus B(z_{\varepsilon},\varepsilon^ {\lambda_{1}}))\) separately, neglecting those regions on which \(u_{\varepsilon}\) is constant.
By the same computation which led to (32),
\[F_{\varepsilon}(u_{\varepsilon},B(z_{\varepsilon},\varepsilon^{\lambda_{1}})) =F_{\varepsilon}(u_{\varepsilon}^{0},B(z_{\varepsilon},\varepsilon^{ \lambda_{2}}))|1-c|^{d}\leq\frac{\Phi(z)+o_{\varepsilon}(1)}{(1-\lambda_{2})^ {d-1}|\log\varepsilon|^{d-1}}|1-c|^{d}\,, \tag{34}\]
where the factor \(\big(1+\frac{\nu}{\alpha}\big)\) has been dropped, since \(\nu>0\) can be chosen arbitrarily small.
We focus on \(F_{\varepsilon}(u_{\varepsilon},\Omega\setminus B(z_{\varepsilon},\varepsilon^ {\lambda_{1}}))\). By (33), it holds
\[F_{\varepsilon}(u_{\varepsilon},\Omega\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}))=\sum_{k=1}^{\lfloor T/M\rfloor}F_{\varepsilon}( u_{\varepsilon}^{k},B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{kM}) \setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M}))\\ \leq\left|\frac{c}{\lfloor T/M\rfloor}\right|^{d}\sum_{k=1}^{ \lfloor T/M\rfloor}\left(m_{0}+\omega\left(\frac{\delta}{\varepsilon^{\lambda_ {1}}}\right)\right)=\frac{|c|^{d}}{\lfloor T/M\rfloor^{d-1}}\left(m_{0}+ \omega\left(\frac{\delta}{\varepsilon^{\lambda_{1}}}\right)\right), \tag{35}\]
but \(f_{\mathrm{hom}}(0)=0\), while by the definition of the constant \(C_{\mathrm{hom}}\), we have
\[m_{0}=\frac{C_{\mathrm{hom}}+o_{M}(1)}{(M\log 2)^{d-1}}\,.\]
Thus, we substitute in (35) obtaining
\[F_{\varepsilon}(u_{\varepsilon},\Omega\setminus B(z_{\varepsilon},\varepsilon^ {\lambda_{1}}))\leq\frac{|c|^{d}}{\lfloor T/M\rfloor^{d-1}}\left(\frac{C_{ \mathrm{hom}}+o_{M}(1)}{(M\log 2)^{d-1}}+\omega\left(\frac{\delta}{ \varepsilon^{\lambda_{1}}}\right)\right),\]
and, as \(T\geq\frac{\lambda_{1}|\log\varepsilon|+\log\varrho-\log 2}{\log 2}\) with \(\varrho:=\operatorname{dist}(\partial B,\partial\Omega)\), it holds
\[F_{\varepsilon}(u_{\varepsilon},\Omega\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}))\leq\frac{C_{\mathrm{hom}}+o_{M}(1)+(M\log 2)^{d-1}\omega(\delta/\varepsilon^{\lambda_{1}})}{(\lambda_{1}|\log\varepsilon|+\log\varrho-\log 2)^{d-1}}|c|^{d}. \tag{36}\]
We gather estimates (34), (36) to conclude
\[|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta} \leq\frac{\Phi(z)+o_{\varepsilon}(1)}{(1-\lambda_{2})^{d-1}}|1-c|^{d}\] \[+\frac{|\log\varepsilon|^{d-1}[C_{\mathrm{hom}}+o_{M}(1)+(M\log 2)^{d-1}\omega(\delta/\varepsilon^{\lambda_{1}})]}{(\lambda_{1}|\log\varepsilon|+\log\varrho-\log 2)^{d-1}}|c|^{d}\,.\]
Since \(\frac{\delta}{\varepsilon^{\lambda_{1}}}\to 0\), let \(\varepsilon\to 0\) and then \(M\to+\infty\) to deduce
\[\limsup_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}\leq \frac{\Phi(z)}{(1-\lambda_{2})^{d-1}}|1-c|^{d}+\frac{C_{\mathrm{hom}}}{\lambda _{1}^{d-1}}|c|^{d}\,;\]
then, let \(\lambda_{1},\lambda_{2}\to\lambda\) so that
\[\limsup_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}\leq\frac{\Phi(z)}{(1-\lambda)^{d-1}}|1-c|^{d}+\frac{C_{\mathrm{hom}}}{\lambda^{d-1}}|c|^{d}.\]
Finally, put \(c:=\Big{[}\Big{(}\frac{b}{a}\Big{)}^{\frac{1}{d-1}}+1\Big{]}^{-1}\), with \(a=\Phi(z)/(1-\lambda)^{d-1}\), \(b=C_{\mathrm{hom}}/\lambda^{d-1}\). As we are exactly in the case discussed in (31) with \(\lambda=\lambda_{1}=\lambda_{2}\), the same computation holds, leading to
\[\limsup_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}\leq \Phi(z)C_{\mathrm{hom}}\Big{[}(1-\lambda)C_{\mathrm{hom}}^{\frac{1}{d-1}}+ \lambda\Phi(z)^{\frac{1}{d-1}}\Big{]}^{1-d}.\]
If \(\lambda=1\) we just set \(c=1\) and
\[u_{\varepsilon}(x):=\begin{cases}1&\text{ if }x\in B(z_{\varepsilon}, \varepsilon^{\lambda_{1}})\\ u_{\varepsilon}^{k}(x)&\text{ if }x\in B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{ kM})\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{(k-1)M}),k=1,...,\lfloor T/M\rfloor\\ 0&\text{ if }x\in\Omega\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{ \lfloor T/M\rfloor M}).\end{cases}\]
Now \(u_{\varepsilon}\) is an admissible function for the original problem, so the conclusion follows by (36); in particular
\[F_{\varepsilon}(u_{\varepsilon},\Omega)=F_{\varepsilon}(u_{\varepsilon},\Omega\setminus B(z_{\varepsilon},\varepsilon^{\lambda_{1}}))\leq\frac{C_{\mathrm{hom}}+o_{M}(1)+(M\log 2)^{d-1}\omega(\delta/\varepsilon^{\lambda_{1}})}{(\lambda_{1}|\log\varepsilon|+\log\varrho-\log 2)^{d-1}}\,;\]
hence
\[\limsup_{\varepsilon\to 0}|\log\varepsilon|^{d-1}\mu_{\varepsilon,\delta}\leq \inf_{\lambda_{1}\in(0,1)}C_{\mathrm{hom}}/\lambda_{1}^{d-1}=C_{\mathrm{hom}}.\]
### Proof of the main result about convergence of minima
As a consequence of the previous section, we prove the main result on the asymptotic behaviour of minima defined in (1) by
\[m_{\varepsilon,\delta}:=\min\Bigl{\{}\int_{\Omega}f\left(\frac{x}{\delta}, \nabla u(x)\right)\,dx:u\in W^{1,d}_{0}(\Omega),u=1\text{ on }B(z, \varepsilon),z\in\Omega\Bigr{\}},\]
where also the centre of the small inclusion (a ball) is an argument of the minimization.
**Theorem 2.3**.: _Assume there exists a point \(x_{0}\in\Omega\) such that the following hold:_
(i)_\(f(x,\xi)\geq f(x_{0},\xi)\) for every \(x\in\mathbb{R}^{d}\) and for every \(\xi\in\mathbb{R}^{d}\);_
(ii) _for every \(\nu>0\), there exists \(r_{\nu}>0\) such that for every \(x\in B(x_{0},r_{\nu})\) and for every \(\xi\in\mathbb{R}^{d}\) we have \(f(x,\xi)\leq f(x_{0},\xi)+\nu|\xi|^{d}\). Then_
\[\lim_{\varepsilon\to 0}|\log\varepsilon|^{d-1}m_{\varepsilon,\delta}=\Phi(x_{0})C_{ \mathrm{hom}}\Big{[}\lambda\Phi(x_{0})^{\frac{1}{d-1}}+(1-\lambda)C_{\mathrm{ hom}}^{\frac{1}{d-1}}\Big{]}^{1-d}.\]
Proof.: Since we use the same argument presented in the proof of Proposition 2.1, we focus on highlighting the main differences, keeping the same notations.
_Bound from below_. In the case \(\lambda=0\), we introduce \(\lambda_{2}>0\), then we apply Lemma 2.2 to get the inequality
\[\left(1+\frac{C}{N-1}\right)F_{\varepsilon}(u,\Omega)\geq\]
\[\geq \min\{F_{\varepsilon}(v,B(z,\varepsilon^{\lambda_{2}})):v\in W_{0}^{1,d}(B(z,\varepsilon^{\lambda_{2}})),v=1\text{ on }B(z,\varepsilon)\}|1-c|^{d}\] \[+ \min\{F_{\varepsilon}(v,B(z,R_{\Omega})\setminus\overline{B}(z,\varepsilon 2^{S-N})):v\in W^{1,d}(B(z,R_{\Omega})\setminus\overline{B}(z,\varepsilon 2^{S-N})),\] \[v=1\text{ on }B(z,\varepsilon 2^{S-N}),v=0\text{ on }\partial B(z,R_{\Omega})\}|c|^{d}.\]
Note that the second summand is estimated exactly as in (11), while, for the first summand, we cannot exploit the periodicity property (P1) of the energy, since the minimization also involves the centre of the inclusion. To deal with this term, we consider a minimizer \(u\) and simply apply (i) to get
\[\min\{F_{\varepsilon}(v,B(z,\varepsilon^{\lambda_{2}})):v\in W_{0 }^{1,d}(B(z,\varepsilon^{\lambda_{2}})),v=1\text{ on }B(z,\varepsilon)\}|1-c|^{d}\] \[\geq\int_{B(z,\varepsilon^{\lambda_{2}})}f(x_{0},\nabla u(x))\,dx \,|1-c|^{d}=\frac{\Phi(x_{0})+o_{\varepsilon}(1)}{(1-\lambda_{2})^{d-1}|\log \varepsilon|^{d-1}}|1-c|^{d}. \tag{37}\]
This is the same estimate we obtained in (12), with the point \(x_{0}\) in place of the fixed centre \(z\). Analogously to Proposition 2.1, we conclude that \(|\log\varepsilon|^{d-1}m_{\varepsilon,\delta}\to\Phi(x_{0})\).
If \(\lambda\in(0,1)\), we further introduce \(\lambda_{1}\in(0,\lambda)\) and we achieve the inequality
\[\bigg{(}1+\frac{C}{N-1}\bigg{)}F_{\varepsilon}(u,\Omega)\geq\]
\[\geq\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_{2}})):v \in W_{0}^{1,d}(B(z_{\varepsilon},\varepsilon^{\lambda_{2}})),v=1\text{ on }B(z_{\varepsilon},\varepsilon)\}|1-c_{-1}|^{d} \tag{38}\] \[+\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_ {1}})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-N})):v\in W^{1,d}(B(z_{ \varepsilon},\varepsilon^{\lambda_{1}})\setminus\overline{B}(z_{\varepsilon}, \varepsilon 2^{S^{\prime}-N})),\] \[v=1\text{ on }\partial B(z_{\varepsilon},\varepsilon 2^{S^{\prime}-N}),v=0 \text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}})\}|c_{-1}-c_{0}|^{d}\] (39) \[+\min\{F_{\varepsilon}(v,B(z_{\varepsilon},\varepsilon^{\lambda_ {1}}2^{M})\setminus B(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-N})):v\in W^{1,d}(B(z_{ \varepsilon},\varepsilon^{\lambda_{1}}2^{M})\setminus\overline{B}(z_{ \varepsilon},\varepsilon 2^{S^{\prime\prime}-N})),\] \[v=1\text{ on }\partial B(z_{\varepsilon},\varepsilon 2^{S^{\prime\prime}-N}),v=0 \text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{M})\}|c_{0}-c_{1}|^{d}\] (40) \[+\sum_{k=2}^{\lfloor T/M\rfloor+1}\min\{F_{\varepsilon}(v,A_{M,k}^ {N}):v\in W^{1,d}(A_{M,k}^{N}),\,v=1\text{ on }\partial B(z_{\varepsilon}, \varepsilon^{\lambda_{1}}2^{(k-1)M-N}),\] \[v=0\text{ on }\partial B(z_{\varepsilon},\varepsilon^{\lambda_{1}}2^{ kM})\}|c_{k-1}-c_{k}|^{d}\,, \tag{41}\]
where we put \(c_{\lfloor\frac{T}{M}\rfloor+1}:=0\).
The estimates for the terms (39), (40), (41) are achieved precisely as in (18), (19), (20) respectively, while (38) is estimated by exploiting (i) as in (37). Once more, the outcome is the same as in Proposition 2.1, with \(x_{0}\) in place of \(z\).
The case \(\lambda=1\) is analogous and can be proved starting from the estimate in (27); this is to be expected since, at this scale, the only effect in the minimization is due to homogenization (and not to the point at which the inclusion is concentrated).
_Bound from above_. Take \(z_{\varepsilon}=\delta x_{0}\) modulo the \(\delta\)-cube, in such a way that this family of points is contained in a ball \(B\subset\subset\Omega\). Condition (ii) allows us to apply the bound from above given by Proposition 2.1; we then conclude by observing that \(m_{\varepsilon,\delta}\leq\mu_{\varepsilon,\delta}\).
We remark that assumption (i) may be weakened. Note indeed that the key estimate we need to carry out our proof, and more specifically the bound from below, is
\[\min\{F_{\varepsilon}(v,B(z,\varepsilon^{\lambda_{2}})):v\in W^{ 1,d}_{0}(B(z,\varepsilon^{\lambda_{2}})),v=1\text{ on }B(z,\varepsilon)\}\\ \geq\int_{B(z,\varepsilon^{\lambda_{2}})}f(x_{0},\nabla u(x))\,dx,\]
where \(u\) is a minimizer for fixed \(\lambda_{2}\in(\lambda,1)\).
A plausible sufficient condition might seem to be that \(\Phi\) attains its minimum at the point \(x_{0}\). Yet, note that this requirement is inadequate if \(\Phi\) is not continuous at a minimum point. For instance, consider the function defined on \((0,1)^{d}\) as
\[f(x,\xi):=\begin{cases}\frac{1}{2}|\xi|^{d}&\text{ if }x=x_{0}:=\left(\frac{1}{2 },...,\frac{1}{2}\right)\\ |\xi|^{d}&\text{ otherwise}\end{cases}\]
and then extended by periodicity. Since modifying \(f\) at a single point does not affect the energy, problem (1) reduces to the homogeneous one, and therefore \(|\log\varepsilon|^{d-1}m_{\varepsilon,\delta}\to\sigma_{d-1}\) as \(\varepsilon\to 0\). This contradicts formula (5): indeed, \(f_{\text{hom}}(\xi)=|\xi|^{d}\), so that \(C_{\text{hom}}=\sigma_{d-1}\), while \(\Phi(x_{0})\) equals \(\sigma_{d-1}/2\); plugging these into (5) and assuming \(\lambda=0\), we would get \(|\log\varepsilon|^{d-1}m_{\varepsilon,\delta}\to\Phi(x_{0})=\sigma_{d-1}/2\).
## 3 Application to perforated domains
In this final section we maintain the setting and notation introduced in the previous ones. We will make use of Proposition 2.1 to compute the \(\Gamma\)-limit of a family of functionals defined with boundary conditions related to varying domains.
We fix a positive sequence \((\varepsilon_{k})_{k\in\mathbb{N}}\) converging to \(0\), we consider the sequence of critical periods \(d_{k}:=|\log\varepsilon_{k}|^{\frac{1-d}{d}}\), and for every \(i\in\mathbb{Z}^{d}\), we put \(x_{k}^{i}:=id_{k}\).
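The exponent defining \(d_{k}\) is the critical one for this problem. Heuristically (a back-of-the-envelope count, not needed in the proofs), each perforation of radius \(\varepsilon_{k}\) carries a \(d\)-capacitary cost of order \(|\log\varepsilon_{k}|^{1-d}\), while \(\Omega\) contains roughly \(|\Omega|d_{k}^{-d}\) perforations, so that
\[\#\{\text{perforations in }\Omega\}\cdot|\log\varepsilon_{k}|^{1-d}\approx\frac{|\Omega|}{d_{k}^{d}}\,|\log\varepsilon_{k}|^{1-d}=|\Omega|\,,\]
since \(d_{k}^{d}=|\log\varepsilon_{k}|^{1-d}\); the total contribution of the perforations thus remains of order one, which is consistent with the zero-order term appearing in the \(\Gamma\)-limit below.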
Then, we introduce a further scale governing the periodic structure of the energy, say \(\delta=\delta(\varepsilon)\), and we define \(\delta_{k}:=\delta(\varepsilon_{k})\) for every \(k\in\mathbb{N}\), obtaining a positive sequence vanishing as \(k\to+\infty\). In accordance with the previous sections, we will always assume that the following limit exists:
\[\lambda:=\lim_{k\to+\infty}\frac{|\log\delta_{k}|}{|\log\varepsilon_{k}|}\wedge 1. \tag{42}\]
Assuming that \(\Omega\) is a bounded open subset of \(\mathbb{R}^{d}\) such that \(|\partial\Omega|=0\), we define a periodically perforated domain as
\[\Omega_{k}:=\Omega\setminus\bigcup_{i\in\mathbb{Z}^{d}}B(x_{k}^{i},\varepsilon _{k}),\]
and we consider functionals \(F_{k}:L^{d}(\Omega)\to[0,+\infty]\) given by
\[F_{k}(u):=\begin{cases}\int_{\Omega}f\left(\dfrac{x}{\delta_{k}}, \nabla u(x)\right)\,dx&\text{ if }u\in W^{1,d}(\Omega)\text{ and }u=0\text{ on }\Omega\setminus\Omega_{k}\\ +\infty&\text{ otherwise.}\end{cases}\]
To prove our result, we assume that the perforations are related to the periodic structure of the heterogeneous medium, in particular we suppose that
\[\text{for every }k\text{ there exists a positive natural number }m_{k}\text{ such that }d_{k}=m_{k}\delta_{k} \tag{43}\]
and that
\[\dfrac{\delta_{k}}{d_{k}}\to 0\text{ as }k\to+\infty. \tag{44}\]
Condition (43) leads to the identity
\[f\left(\dfrac{x_{k}^{i}}{\delta_{k}}+y,\xi\right)=f\left(\dfrac{id_{k}}{ \delta_{k}}+y,\xi\right)=f(y,\xi)\text{ for every }i\in\mathbb{Z}^{d},\,y\in \mathbb{R}^{d},\xi\in\mathbb{R}^{d}. \tag{45}\]
If (43) is not fulfilled, then \(f\left(\dfrac{x_{k}^{i}}{\delta_{k}}+y,\xi\right)=f(y_{k}+y,\xi)\) for some \(y_{k}\in[0,1]^{d}\), and the result depends on the properties of \((y_{k})_{k}\) modulo \(\delta\), see [1] for the occurrence of a similar phenomenon.
In order to apply Proposition 2.1, we add suitable regularity assumptions on \(f\) at the point \(0\). Our statement reads as follows.
**Theorem 3.1**.: _Assume that for every \(\nu>0\), there exists \(r_{\nu}>0\) such that for every \(x\in B(0,r_{\nu})\) it holds_
\[|f(0,\xi)-f(x,\xi)|\leq\nu|\xi|^{d}\text{ for every }\xi\in\mathbb{R}^{d}. \tag{46}\]
_Then_
\[\Gamma\text{-}\lim_{k}F_{k}(u)=F(u):=\int_{\Omega}f_{\mathrm{hom}}(\nabla u(x ))\,dx+C(\lambda)\int_{\Omega}|u(x)|^{d}\,dx,\]
_for every \(u\in W^{1,d}(\Omega)\), where the \(\Gamma\)-limit is meant with respect to the strong convergence in \(L^{d}(\Omega)\) and \(C(\lambda)\) is given by_
\[C(\lambda):=\Phi(0)C_{\mathrm{hom}}\Big{[}\lambda\Phi(0)^{\frac{1}{d-1}}+(1- \lambda)C_{\mathrm{hom}}^{\frac{1}{d-1}}\Big{]}^{1-d},\]
_with \(\Phi,C_{\mathrm{hom}},f_{\mathrm{hom}}\) and \(\lambda\) defined as in (2), (3), (4) and (42) respectively._
In other words, we prove that, in the \(\Gamma\)-limit, the internal boundary conditions imposed on the perforations disappear, being replaced by the additional term \(C(\lambda)\int_{\Omega}|u|^{d}dx\).
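As a consistency check, a direct substitution in the definition of \(C(\lambda)\) shows that the two extreme regimes reduce to the constants already encountered above:
\[C(0)=\Phi(0)\,C_{\mathrm{hom}}\big[C_{\mathrm{hom}}^{\frac{1}{d-1}}\big]^{1-d}=\Phi(0)\,,\qquad C(1)=\Phi(0)\,C_{\mathrm{hom}}\big[\Phi(0)^{\frac{1}{d-1}}\big]^{1-d}=C_{\mathrm{hom}}\,,\]
so that for \(\lambda=0\) only the frozen energy density at the perforations contributes, while for \(\lambda=1\) only the homogenized energy does.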
### The main construction and some auxiliary results
In our proof we will make extensive use of Lemma 2.2, but its application is more delicate in this instance. To fit our arguments, it needs some refinement: we perform the modifications on annuli which are not only homothetic, but also such that their inner and outer radii are proportional to the period \(d_{k}\).
We introduce \(Z_{k}:=\{i\in\mathbb{Z}^{d}:\mathrm{dist}(x_{k}^{i},\partial\Omega)>d_{k}\}\), namely the set of the centres of those perforations which are uniformly far from the boundary.
Let \(M\in\mathbb{N}\), \(\alpha>0\) be such that \(\alpha 2^{M+1}<1/2\). Given a sequence \((u_{k})_{k}\) in \(W^{1,d}(\Omega)\), fix \(k\), and around each point \(x_{k}^{i}\) with \(i\in Z_{k}\) apply Lemma 2.2 to the function \(u_{k}\) with
\[f(x,\xi)=f\left(\frac{x}{\delta},\xi\right)\,,\eta=\alpha d_{k}\,,R=\alpha 2^{M +1}d_{k}\,,N=M\text{ and }r=\alpha d_{k}. \tag{47}\]
We obtain a function \(v_{k}\) having constant values \(u_{k}^{i}\) on the boundary of each ball centered at \(x_{k}^{i}\) with radius \(\alpha 2^{j_{i}}d_{k}\) for some \(j_{i}\in\{1,...,M\}\) and \(i\in Z_{k}\). Also recall that this function comes with the estimate
\[\int_{\Omega}f\left(\frac{x}{\delta_{k}},\nabla v_{k}(x)\right)\,dx\leq\left( 1+\frac{C}{M-1}\right)\int_{\Omega}f\left(\frac{x}{\delta_{k}},\nabla u_{k}(x) \right)\,dx\,.\]
We take advantage of the following result which is a simplified version of the discretization argument proved by Sigalotti (see [15, Proposition 3.3]).
**Proposition 3.2**.: _Let \((u_{k})_{k}\) be a sequence in \(W^{1,d}(\Omega)\cap L^{\infty}(\Omega)\) strongly converging to \(u\) in \(L^{d}(\Omega)\) and such that \((\nabla u_{k})_{k}\subseteq L^{d}(\Omega)\) is bounded. For every \(i\in Z_{k}\), let \(u_{k}^{i}\) be the mean values described above and put_
\[Q_{k}^{i}:=x_{k}^{i}+\left(-\frac{d_{k}}{2},\frac{d_{k}}{2}\right)^{d}.\]
_Then_
\[\lim_{k\to\infty}\int_{\Omega}\biggl{|}\sum_{i\in Z_{k}}|u_{k}^{i}|^{d}\chi_{ Q_{k}^{i}}(x)-|u(x)|^{d}\biggr{|}\,dx=0.\]
Another useful tool will be the following convergence result, which is an application of the Riemann-Lebesgue lemma.
**Lemma 3.3**.: _The sequence_
\[\chi_{k}(x):=\chi_{\Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k}^{i},d_{k}/2)}(x) \,,\qquad k\in\mathbb{N}\]
_weakly* converges to a positive constant in \(L^{\infty}(\Omega)\)._
### Liminf inequality
We prove that for every \(u\in W^{1,d}(\Omega)\) and for every sequence \((u_{k})_{k}\) in \(L^{d}(\Omega)\) such that \(u_{k}\to u\) in \(L^{d}(\Omega)\), it holds \(\liminf_{k}F_{k}(u_{k})\geq F(u)\).
The first step of the proof consists in applying the modification lemma as in (47). To simplify the notation in this section, we simply write \(\rho_{k}^{i}\), in place of \(\alpha 2^{j_{i}}d_{k}\), for the radii at which the modified function \(v_{k}\) attains the constant values \(u_{k}^{i}\).
Without loss of generality we may assume \((u_{k})_{k}\subseteq W^{1,d}(\Omega)\) and \(\sup_{k}F_{k}(u_{k})<+\infty\). Note that the last condition, combined with the equi-coerciveness of the functionals \((F_{k})_{k}\), implies that \(\sup_{k}\|\nabla u_{k}\|_{L^{d}(\Omega)}<\infty\), hence \(u_{k}\rightharpoonup u\) in \(W^{1,d}(\Omega)\).
As a first step, also assume that \((u_{k})_{k}\) is bounded in \(L^{\infty}(\Omega)\). We aim to estimate
\[F_{k}(v_{k})=\int_{\Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k}^{i},\rho_{k}^{ i})}f\left(\frac{x}{\delta_{k}},\nabla v_{k}(x)\right)\,dx+\sum_{i\in Z_{k}} \int_{B(x_{k}^{i},\rho_{k}^{i})}f\left(\frac{x}{\delta_{k}},\nabla v_{k}(x) \right)\,dx\,. \tag{48}\]
We perform another modification putting
\[w_{k}:=\begin{cases}v_{k}&\text{ on }\Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k }^{i},\rho_{k}^{i}),\\ u_{k}^{i}&\text{ on }B(x_{k}^{i},\rho_{k}^{i}),\,i\in Z_{k}.\end{cases}\]
It trivially holds
\[\int_{\Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k}^{i},\rho_{k}^{i})}f\left( \frac{x}{\delta_{k}},\nabla v_{k}(x)\right)\,dx=\int_{\Omega}f\left(\frac{x}{ \delta_{k}},\nabla w_{k}(x)\right)\,dx\,.\]
Note that, according to the proof of Lemma 2.2, \(\|v_{k}\|_{L^{\infty}(\Omega)}\leq\|u_{k}\|_{L^{\infty}(\Omega)}\), hence \(\|w_{k}\|_{L^{\infty}(\Omega)}\leq\|u_{k}\|_{L^{\infty}(\Omega)}\) so that \((w_{k})_{k}\) is bounded in \(L^{\infty}(\Omega)\) and then also bounded in \(L^{d}(\Omega)\). Moreover, as \(\left(1+\frac{C}{M-1}\right)F_{k}(u_{k})\geq F_{k}(v_{k})\geq F_{k}(w_{k})\), we deduce that \((w_{k})_{k}\) is bounded in \(W^{1,d}(\Omega)\); thus, we may extract a subsequence \((w_{k_{j}})_{j}\) weakly converging to a certain \(w\) in \(W^{1,d}(\Omega)\).
As \(w_{k}-u_{k}\in W^{1,d}_{0}(\Omega)\) for every \(k\) and since \(u_{k}\rightharpoonup u\) in \(W^{1,d}(\Omega)\) and \(u_{k}\to u\) in \(L^{d}(\Omega)\), it holds by Rellich's Theorem that \((w_{k_{j}})_{j}\) actually converges strongly to \(w\) in \(L^{d}(\Omega)\).
We claim that such \(w\) does not depend on the subsequence and that it coincides with \(u\). To prove this, note that for every \(k\)
\[w_{k}\chi_{\Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k}^{i},d_{k}/2)}=u_{k}\chi_{ \Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k}^{i},d_{k}/2)}\]
and also that by Lemma 3.3 and the previous observations, the following hold
\[\begin{cases}\chi_{\Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k}^{i},d_{k}/2)} \stackrel{{*}}{{\rightharpoonup}}c&\text{ in }L^{\infty}(\Omega),\\ u_{k}\to u&\text{ in }L^{d}(\Omega),\\ w_{k_{j}}\to w&\text{ in }L^{d}(\Omega).\end{cases}\]
These facts imply
\[\begin{cases}\chi_{\Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k}^{i},d_{k}/2)} u_{k}\rightharpoonup cu&\text{ in }L^{d}(\Omega),\\ \chi_{\Omega\setminus\bigcup_{i\in Z_{k_{j}}}B(x_{k_{j}}^{i},d_{k_{j}}/2)} w_{k_{j}}\rightharpoonup cw&\text{ in }L^{d}(\Omega),\end{cases}\]
hence, since \(c>0\), it follows that \(u=w\) in \(L^{d}(\Omega)\) for every subsequence, proving that \(w_{k}\to u\) in \(L^{d}(\Omega)\). By the Homogenization Theorem and the liminf inequality, we deduce
\[\liminf_{k}\int_{\Omega}f\left(\frac{x}{\delta_{k}},\nabla w_{k}(x)\right)\, dx\geq\int_{\Omega}f_{\hom}(\nabla u(x))\,dx\,. \tag{49}\]
To estimate the second contribution in (48), fix \(i\in Z_{k}\) and let \(\varphi_{k}^{i}\) be a function solving
\[\min\Bigl{\{}\int_{B(x_{k}^{i},\rho_{k}^{i})}f\left(\frac{x}{\delta_{k}}, \nabla u(x)\right)\,dx:u\in u_{k}^{i}+W_{0}^{1,d}(B(x_{k}^{i},\rho_{k}^{i})), u=0\text{ on }B(x_{k}^{i},\varepsilon_{k})\Bigr{\}}.\]
Up to extending the function \(\varphi_{k}^{i}\) to the constant \(u_{k}^{i}\) on \(B(x_{k}^{i},d_{k}/2)\setminus B(x_{k}^{i},\rho_{k}^{i})\), we have
\[\int_{B(x_{k}^{i},\rho_{k}^{i})}f\left(\frac{x}{\delta_{k}},\nabla v_{k}(x) \right)\,dx\geq\int_{B(x_{k}^{i},\rho_{k}^{i})}f\left(\frac{x}{\delta_{k}}, \nabla\varphi_{k}^{i}(x)\right)dx\]
\[\geq\min\Bigl{\{}\int_{B\left(x_{k}^{i},\frac{d_{k}}{2}\right)}f\left(\frac{x}{\delta_{k}},\nabla u\right)dx:u\in u_{k}^{i}+W_{0}^{1,d}(B(x_{k}^{i},d_{k}/2)),u=0\text{ on }B(x_{k}^{i},\varepsilon_{k})\Bigr{\}}\]
\[=\min\Bigl{\{}\int_{B\left(0,\frac{1}{2}\right)}f\left(\frac{d_{k}x}{\delta_{k}},\nabla u\right)dx:u\in 1+W_{0}^{1,d}(B(0,1/2)),u=0\text{ on }B(0,\varepsilon_{k}/d_{k})\Bigr{\}}|u_{k}^{i}|^{d},\]
where the last equality follows by the change of variables \(x\mapsto x_{k}^{i}+d_{k}x\), the identity (45) and (P2).
Now put
\[\delta^{\prime}_{k}:=\frac{\delta_{k}}{d_{k}}\,,\qquad\varepsilon^{\prime}_{k}:= \frac{\varepsilon_{k}}{d_{k}}\,,\qquad\lambda^{\prime}:=\lim_{k}\frac{|\log \delta^{\prime}_{k}|}{|\log\varepsilon^{\prime}_{k}|},\]
and rewrite the previous inequality as
\[\int_{B(x^{i}_{k},\rho^{i}_{k})}f\left(\frac{x}{\delta_{k}},\nabla v_{k}(x)\right)\,dx\\ \geq\min\Bigl{\{}\int_{B\left(0,\frac{1}{2}\right)}f\left(\frac{x}{\delta^{\prime}_{k}},\nabla u\right)dx:u\in 1+W^{1,d}_{0}(B(0,1/2)),u=0\text{ on }B(0,\varepsilon^{\prime}_{k})\Bigr{\}}|u^{i}_{k}|^{d}. \tag{50}\]
Note that \(\varepsilon^{\prime}_{k}=\varepsilon_{k}|\log\varepsilon_{k}|^{1-1/d}\to 0\), while \(\delta^{\prime}_{k}=\delta_{k}|\log\varepsilon_{k}|^{1-1/d}\to 0\) as \(k\to\infty\) by assumption (44); also observe that
\[\lambda^{\prime}=\lim_{k}\frac{|\log\delta_{k}+\log|\log\varepsilon_{k}|^{1-1 /d}|}{|\log\varepsilon_{k}+\log|\log\varepsilon_{k}|^{1-1/d}|}=\lim_{k}\frac{| \log\delta_{k}|}{|\log\varepsilon_{k}|}=\lambda.\]
In light of assumption (46), we are in a position to apply Proposition 2.1 (up to the transformation \(u\mapsto 1-u\)) to (50) with \(\Omega=B(0,1/2)\) and \(z_{\varepsilon}=0\) for every \(\varepsilon\). We get
\[\min\Bigl{\{}\int_{B(0,1/2)}f\left(\frac{x}{\delta^{\prime}_{k}},\nabla u(x) \right)\,dx:u\in 1+W^{1,d}_{0}(B(0,1/2)),u=0\text{ on }B(0,\varepsilon^{\prime}_{k}) \Bigr{\}}=\\ =\frac{C(\lambda)+o_{k}(1)}{|\log\varepsilon^{\prime}_{k}|^{d-1}}= \frac{C(\lambda)+o_{k}(1)}{|\log\varepsilon_{k}|^{d-1}}\,,\]
and by Proposition 3.2, it follows
\[\begin{split}\liminf_{k}\sum_{i\in Z_{k}}\int_{B(x^{i}_{k},d_{k} /2)}f\left(\frac{x}{\delta_{k}},\nabla v_{k}(x)\right)\,dx&\geq \liminf_{k}\frac{C(\lambda)}{|\log\varepsilon_{k}|^{d-1}}\sum_{i\in Z_{k}}|u^{ i}_{k}|^{d}+o_{k}(1)\\ &=C(\lambda)\int_{\Omega}|u(x)|^{d}\,dx\,.\end{split} \tag{51}\]
Finally, by (49) and (51), we deduce
\[\left(1+\frac{C}{M-1}\right)\liminf_{k}F_{k}(u_{k})\geq\liminf_{k}F_{k}(v_{k}) \geq\int_{\Omega}f_{\hom}(\nabla u(x))\,dx+C(\lambda)\int_{\Omega}|u(x)|^{d}\,dx.\]
Recall that \(\alpha\) and \(M\) have been chosen so that \(\alpha 2^{M+1}<1/2\) and, since the reasoning leading to the above estimate holds true for every \(\alpha>0\), we may let \(M\to+\infty\), obtaining the liminf inequality.
We conclude by removing the boundedness assumption \((u_{k})_{k}\subseteq L^{\infty}(\Omega)\) via a truncation argument: assume \(u_{k}\to u\) in \(L^{d}(\Omega)\) and put \(\overline{u}_{k}^{M}:=((-M)\lor u_{k})\wedge M\) for fixed \(M\in\mathbb{N}\); by dominated convergence, \(\overline{u}_{k}^{M}\to u\) in \(L^{d}(\Omega)\) as \(k,M\to+\infty\); moreover, since \(f(\cdot,0)=0\), it holds
\[\int_{\Omega}f\left(\frac{x}{\delta_{k}},\nabla u_{k}\right)\,dx\geq\int_{ \Omega}f\left(\frac{x}{\delta_{k}},\nabla\overline{u}_{k}^{M}\right)\,dx\]
for every \(k,M\in\mathbb{N}\); thus we immediately conclude by the previous case.
Denoting by \(F^{\prime}:=\Gamma\text{-}\liminf_{k}F_{k}\), what we have proved so far is that \(F(u)\leq F^{\prime}(u)\) for every \(u\in W^{1,d}(\Omega)\).
### Limsup inequality
The goal of this section is to define a recovery sequence converging in \(L^{d}(\Omega)\) to a fixed function \(u\in W^{1,d}(\Omega)\). First we assume that \(u\in L^{\infty}(\Omega)\).
Start with a recovery sequence \(u_{k}\to u\) in \(L^{d}(\Omega)\) for the functionals
\[F_{k}^{0}(u):=\begin{cases}\int_{\Omega}f\left(\frac{x}{\delta_{k} },\nabla u(x)\right)\,dx&\text{ if }u\in W^{1,d}(\Omega),\\ +\infty&\text{ if }u\in L^{d}(\Omega)\setminus W^{1,d}(\Omega)\end{cases}\]
which are known to \(\Gamma\)-converge to
\[F^{0}(u):=\int_{\Omega}f_{\hom}(\nabla u(x))\,dx\]
for every \(u\in W^{1,d}(\Omega)\) as stated in the Homogenization Theorem. By the equi-coerciveness of the functionals \((F_{k}^{0})_{k}\), we deduce \(u_{k}\rightharpoonup u\) in \(W^{1,d}(\Omega)\).
It is a known fact that, up to extracting a subsequence, we can also assume that \((|\nabla u_{k}|^{d})_{k}\) is an equi-integrable family (see [11] and [6, Remark C.6]).
We claim that we can also make our recovery sequence bounded in \(L^{\infty}(\Omega)\). Let \(T:=\|u\|_{L^{\infty}(\Omega)}\) and define \(u_{k}^{\prime}:=(-(T+1)\lor u_{k})\wedge(T+1)\). We get a bounded sequence in \(L^{\infty}(\Omega)\) which converges to \(u\) in \(L^{d}(\Omega)\), with the further property that \((|\nabla u_{k}^{\prime}|^{d})_{k}\) is still equi-integrable, since it is obtained by truncation.
Note that
\[\left|\int_{\Omega}f_{\hom}(\nabla u_{k}(x))\,dx-\int_{\Omega}f_{\hom}(\nabla u_{k}^{\prime}(x))\,dx\right|\leq\int_{\{|u_{k}|>T+1\}}|f_{\hom}(\nabla u_{k}(x))|\,dx\\ \leq\beta\int_{\{|u_{k}|>T+1\}}|\nabla u_{k}(x)|^{d}\,dx\leq\beta\int_{\{|u_{k}-u|>1\}}|\nabla u_{k}(x)|^{d}\,dx\,;\]
but since \(u_{k}\to u\) in measure and \((|\nabla u_{k}|^{d})_{k}\) is equi-integrable, the last term tends to \(0\) and the claim is proved.
For every \(k\), define modifications \(v_{k}\) by transformations around every point \(x_{k}^{i}\) with \(i\in Z_{k}\) as we did in (47). We recall the construction for clarity: fix \(M\in\mathbb{N}\) and let \(\alpha>0\) be such that \(\alpha 2^{M+1}<1/2\), then apply Lemma 2.2 with
\[f(x,\xi)=f\left(\frac{x}{\delta},\xi\right)\,,\eta=\alpha d_{k}\,,R=\alpha 2^{M+ 1}d_{k}\,,N=M\text{ and }r=\alpha d_{k}.\]
We have that
\[\int_{\Omega}f\left(\frac{x}{\delta_{k}},\nabla v_{k}(x)\right)\,dx\leq\left(1 +\frac{C}{M-1}\right)\int_{\Omega}f\left(\frac{x}{\delta_{k}},\nabla u_{k}(x) \right)\,dx\,,\]
and the function \(v_{k}\) attains the constant value \(u_{k}^{i}\) on \(\partial B(x_{k}^{i},\rho_{k}^{i})\), where \(\rho_{k}^{i}\) is of the form \(\alpha 2^{j_{i}}d_{k}\) for some \(j_{i}\in\{1,...,M\}\).
Since \(\varepsilon_{k}/d_{k}\to 0\) as \(k\to+\infty\), we can also assume \(\varepsilon_{k}<\alpha d_{k}\) for every \(k\); hence, we define
\[w_{k}:=\begin{cases}v_{k}&\text{ on }\Omega\setminus\bigcup_{i\in Z_{k}}B(x_{k} ^{i},\rho_{k}^{i})\\ u_{k}^{i}&\text{ on }B(x_{k}^{i},\rho_{k}^{i})\setminus B(x_{k}^{i},\alpha d_{k}),i \in Z_{k}\\ \varphi_{k}^{i}&\text{ on }B(x_{k}^{i},\alpha d_{k}),\,i\in Z_{k},\end{cases}\]
where \(\varphi_{k}^{i}\) solves the minimum problem
\[\min\Bigl{\{}\int_{B(x_{k}^{i},\alpha d_{k})}f\left(\frac{x}{ \delta_{k}},\nabla u(x)\right)\,dx:u\in u_{k}^{i}+W_{0}^{1,d}(B(x_{k}^{i}, \alpha d_{k})),u=0\text{ on }B(x_{k}^{i},\varepsilon_{k})\Bigr{\}}\\ =\min\Bigl{\{}\int_{B(0,\alpha)}f\left(\frac{x}{\delta_{k}^{\prime }},\nabla u(x)\right)\,dx:u\in 1+W_{0}^{1,d}(B(0,\alpha)),u=0\text{ on }B(0, \varepsilon_{k}^{\prime})\Bigr{\}}|u_{k}^{i}|^{d}\\ =\frac{C(\lambda)+o_{k}(1)}{|\log\varepsilon_{k}|^{d-1}}|u_{k}^{ i}|^{d}\,.\]
Let \(A_{k}:=\bigcup_{i\in Z_{k}}B(x_{k}^{i},\rho_{k}^{i})\). We will treat with different arguments the contributions due to \(\Omega\setminus A_{k}\) and \(A_{k}\).
We estimate the contribution on \(A_{k}\) using Proposition 3.2,
\[\limsup_{k}\int_{A_{k}}f\left(\frac{x}{\delta_{k}},\nabla w_{k}(x )\right)\,dx=\limsup_{k}\sum_{i\in Z_{k}}\int_{B(x_{k}^{i},\alpha d_{k})}f \left(\frac{x}{\delta_{k}},\nabla\varphi_{k}^{i}(x)\right)\,dx\\ =\limsup_{k}\sum_{i\in Z_{k}}|u_{k}^{i}|^{d}\frac{C(\lambda)+o_{k} (1)}{|\log\varepsilon_{k}|^{d-1}}=C(\lambda)\int_{\Omega}|u(x)|^{d}\,dx\,. \tag{52}\]
To estimate the contribution on \(\Omega\setminus A_{k}\), we put
\[Z_{k}^{\prime}:=\{i\in\mathbb{Z}^{d}:B(x_{k}^{i},\varepsilon_{k})\cap\Omega \neq\emptyset,i\notin Z_{k}\}\qquad,\qquad r_{k}:=\alpha 2^{M+1}d_{k}\]
and define
\[A^{\prime}_{k}:=\bigcup_{i\in Z^{\prime}_{k}}B(x^{i}_{k},r_{k})\]
in order to study separately the behaviours on \(A^{\prime}_{k}\) and \(\Omega\setminus(A_{k}\cup A^{\prime}_{k})\).
Take into account the contribution of \(A^{\prime}_{k}\). We set
\[Q^{i}_{k}:=x^{i}_{k}+\left(-\frac{d_{k}}{2},\frac{d_{k}}{2}\right)^{d},\]
and we see preliminarily that
\[|\Omega\cap A^{\prime}_{k}|\leq\sum_{i\in Z^{\prime}_{k}}(r_{k})^{d}\sim\#Z^{ \prime}_{k}(d_{k})^{d}=\bigg{|}\bigcup_{i\in Z^{\prime}_{k}}Q^{i}_{k}\bigg{|} \rightarrow|\partial\Omega|=0 \tag{53}\]
by assumption.
For every \(i\in Z^{\prime}_{k}\), let \(\psi^{i}_{k}\) be the solution to the homogeneous capacitary problem
\[\min\Bigl{\{}\int_{B(x^{i}_{k},r_{k})}|\nabla u(x)|^{d}\,dx:u\in 1+W^{1,d}_{0} (B(x^{i}_{k},r_{k})),u=0\text{ on }B(x^{i}_{k},\varepsilon_{k})\Bigr{\}}\]
whose minimum value is known to be \(\sigma_{d-1}|\log r_{k}-\log\varepsilon_{k}|^{1-d}\).
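For the reader's convenience, we recall the computation behind this value: the minimizer is the radial logarithmic profile (extended by \(0\) on \(B(x^{i}_{k},\varepsilon_{k})\)), and integrating in polar coordinates gives
\[\psi^{i}_{k}(x)=\frac{\log(|x-x^{i}_{k}|/\varepsilon_{k})}{\log(r_{k}/\varepsilon_{k})}\,,\qquad|\nabla\psi^{i}_{k}(x)|=\frac{1}{|x-x^{i}_{k}|\,\log(r_{k}/\varepsilon_{k})}\,,\]
\[\int_{B(x^{i}_{k},r_{k})\setminus B(x^{i}_{k},\varepsilon_{k})}|\nabla\psi^{i}_{k}(x)|^{d}\,dx=\frac{\sigma_{d-1}}{(\log(r_{k}/\varepsilon_{k}))^{d}}\int_{\varepsilon_{k}}^{r_{k}}t^{d-1}t^{-d}\,dt=\sigma_{d-1}|\log r_{k}-\log\varepsilon_{k}|^{1-d}\,.\]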
Up to extending \(\psi^{i}_{k}\) with value \(1\) on \(\mathbb{R}^{d}\setminus B(x^{i}_{k},r_{k})\), we set as recovery sequence
\[w^{\prime}_{k}:=w_{k}\prod_{i\in Z^{\prime}_{k}}\psi^{i}_{k}\qquad\text{ on }\Omega.\]
Such \(w^{\prime}_{k}\) is a modification of \(w_{k}\) performed on \(A^{\prime}_{k}\), which is disjoint from \(A_{k}\) by virtue of the choice of the radii \(r_{k}\) and \(\rho^{i}_{k}\), for every \(k\) and \(i\in Z_{k}\). This means that the estimate in (52) is still valid if we replace \(w_{k}\) with \(w^{\prime}_{k}\).
We prove that
\[\limsup_{k}\int_{\Omega\cap A^{\prime}_{k}}f\left(\frac{x}{\delta_{k}},\nabla w ^{\prime}_{k}(x)\right)\,dx=0. \tag{54}\]
For every \(i\in Z^{\prime}_{k}\), we have
\[\int_{\Omega\cap B(x^{i}_{k},r_{k})}f\left(\frac{x}{\delta_{k}}, \nabla w^{\prime}_{k}(x)\right)\leq\beta\int_{\Omega\cap B(x^{i}_{k},r_{k})}| \nabla w^{\prime}_{k}(x)|^{d}\,dx\\ \leq 2^{d-1}\beta\left[(1+\|u\|_{L^{\infty}(\Omega)})^{d}\int_{B(x^ {i}_{k},r_{k})}|\nabla\psi^{i}_{k}(x)|^{d}\,dx+\int_{\Omega\cap B(x^{i}_{k},r_ {k})}|\nabla w_{k}(x)|^{d}\,dx\right]\\ \leq C\left[|\log r_{k}-\log\varepsilon_{k}|^{1-d}+\int_{\Omega \cap B(x^{i}_{k},r_{k})}|\nabla w_{k}(x)|^{d}\,dx\right]\]
for a positive constant \(C\) which depends only on \(\|u\|_{L^{\infty}(\Omega)},\beta\) and the dimension \(d\).
Note that, since \(i\in Z^{\prime}_{k}\), by definition of \(w_{k}\) we have
\[\int_{\Omega\cap B(x^{i}_{k},r_{k})}|\nabla w_{k}(x)|^{d}\,dx=\int_{\Omega\cap B (x^{i}_{k},r_{k})}|\nabla v_{k}(x)|^{d}\,dx\,,\]
and by the property (ii) of Lemma 2.2, i.e., modifications on the starting function occur very close to the prescribed radius, it also holds
\[\int_{\Omega\cap B(x^{i}_{k},r_{k})}|\nabla v_{k}(x)|^{d}\,dx=\int_{\Omega\cap B (x^{i}_{k},r_{k})}|\nabla u_{k}(x)|^{d}\,dx\,.\]
Exploiting the equi-integrability of \((|\nabla u_{k}|^{d})_{k}\), by (53) we infer that
\[\limsup_{k}\sum_{i\in Z^{\prime}_{k}}\int_{\Omega\cap B(x^{i}_{k},r_{k})}| \nabla w_{k}(x)|^{d}\,dx=0.\]
At this point
\[\limsup_{k}\int_{\Omega\cap A^{\prime}_{k}}f\left(\frac{x}{\delta_{k}}, \nabla w^{\prime}_{k}(x)\right)\,dx\leq C\limsup_{k}\sum_{i\in Z^{\prime}_{k} }|\log r_{k}-\log\varepsilon_{k}|^{1-d},\]
but since \(\varepsilon_{k}\ll d_{k}\), we have \(|\log r_{k}-\log\varepsilon_{k}|\sim|\log\varepsilon_{k}|\) and, recalling that \((d_{k})^{d}=|\log\varepsilon_{k}|^{1-d}\), we conclude that
\[\limsup_{k}\sum_{i\in Z^{\prime}_{k}}|\log\varepsilon_{k}|^{1-d}=\limsup_{k}\#Z^{\prime}_{k}(d_{k})^{d}=0\]
again by (53).
Finally, we deal with the contribution on \(\Omega\setminus(A_{k}\cup A^{\prime}_{k})\). It holds
\[\limsup_{k}\int_{\Omega\setminus(A_{k}\cup A^{\prime}_{k})}f \left(\frac{x}{\delta_{k}},\nabla w^{\prime}_{k}(x)\right)\,dx=\limsup_{k} \int_{\Omega\setminus(A_{k}\cup A^{\prime}_{k})}f\left(\frac{x}{\delta_{k}}, \nabla v_{k}(x)\right)\,dx\\ \leq\limsup_{k}\int_{\Omega}f\left(\frac{x}{\delta_{k}},\nabla v_ {k}(x)\right)\,dx\leq\left(1+\frac{C}{M-1}\right)\limsup_{k}\int_{\Omega}f \left(\frac{x}{\delta_{k}},\nabla u_{k}(x)\right)\,dx\\ \leq\left(1+\frac{C}{M-1}\right)\int_{\Omega}f_{\rm hom}(\nabla u (x))\,dx\,, \tag{55}\]
where the last inequality is due to the fact that \((u_{k})_{k}\) was originally picked as a recovery sequence to \(u\) for the functionals \((F^{0}_{k})_{k}\).
Gathering (52), (54) and (55), we get
\[\limsup_{k}\int_{\Omega}f\left(\frac{x}{\delta_{k}},\nabla w^{\prime}_{k} \right)dx\leq\left(1+\frac{C}{M-1}\right)\int_{\Omega}f_{\rm hom}(\nabla u)dx +C(\lambda)\int_{\Omega}|u|^{d}dx\,.\]
Since we can repeat the argument for every \(\alpha>0\), we are free to set \(M\) arbitrarily large, thus, the approximate limsup inequality is proved.
We still have to check that \(w^{\prime}_{k}\to u\) in \(L^{d}(\Omega)\), i.e., it is actually an (approximate) recovery sequence.
Note that \(\lim_{k}|\{w^{\prime}_{k}\neq w_{k}\}|=0\) and \(\sup_{k}\|w^{\prime}_{k}-w_{k}\|_{L^{\infty}(\Omega)}\leq\|u_{k}\|_{L^{\infty}( \Omega)}\leq 1+\|u\|_{L^{\infty}(\Omega)}\) imply that \(w^{\prime}_{k}-w_{k}\to 0\) in \(L^{d}(\Omega)\), hence, it suffices to prove that \(w_{k}\to u\) in \(L^{d}(\Omega)\).
Since \(\lim_{k}|\{w_{k}\neq v_{k}\}|=0\) and \(\sup_{k}\|w_{k}-v_{k}\|_{L^{\infty}(\Omega)}\leq\|u_{k}\|_{L^{\infty}(\Omega)} \leq 1+\|u\|_{L^{\infty}(\Omega)}\), it holds that \(w_{k}-v_{k}\to 0\) in \(L^{d}(\Omega)\), moreover \(v_{k}\to u\) in \(L^{d}(\Omega)\) by the same argument we used in the proof of the liminf inequality based on Lemma 3.3; hence, \(w_{k}\to u\) in \(L^{d}(\Omega)\).
To conclude, we remove the assumption \(u\in L^{\infty}(\Omega)\). Recall that the \(\Gamma\)-limsup of \((F_{k})_{k}\) is defined as
\[F^{\prime\prime}(u):=\inf\{\limsup_{k}F_{k}(u_{k}):u_{k}\to u\text{ in }L^{d}(\Omega)\}\]
for every \(u\in W^{1,d}(\Omega)\). \(F^{\prime\prime}\) is sequentially lower semicontinuous with respect to the strong convergence in \(L^{d}(\Omega)\) and by what we have already shown, it coincides with \(F\) on \(W^{1,d}(\Omega)\cap L^{\infty}(\Omega)\).
Hence, given a sequence \((u_{k})_{k}\subseteq W^{1,d}(\Omega)\cap L^{\infty}(\Omega)\) converging to \(u\) in \(W^{1,d}(\Omega)\), it holds
\[F^{\prime\prime}(u)\leq\liminf_{k}F^{\prime\prime}(u_{k})=\liminf_{k}F(u_{k}) =F(u)\]
by the continuity of \(F\) with respect to the strong convergence in \(W^{1,d}(\Omega)\), and this concludes the proof of the \(\Gamma\)-convergence.
|
2310.03873
|
Neuromorphic Robust Framework for Concurrent Estimation and Control in
Dynamical Systems using Spiking Neural Networks
|
Concurrent estimation and control of robotic systems remains an ongoing
challenge, where controllers rely on data extracted from states/parameters
riddled with uncertainties and noises. Framework suitability hinges on task
complexity and computational constraints, demanding a balance between
computational efficiency and mission-critical accuracy. This study leverages
recent advancements in neuromorphic computing, particularly spiking neural
networks (SNNs), for estimation and control applications. Our presented
framework employs a recurrent network of leaky integrate-and-fire (LIF)
neurons, mimicking a linear quadratic regulator (LQR) through a robust
filtering strategy, a modified sliding innovation filter (MSIF). Benefiting
from both the robustness of MSIF and the computational efficiency of SNN, our
framework customizes SNN weight matrices to match the desired system model
without requiring training. Additionally, the network employs a biologically
plausible firing rule similar to predictive coding. In the presence of
uncertainties, we compare the SNN-LQR-MSIF with non-spiking LQR-MSIF and the
optimal linear quadratic Gaussian (LQG) strategy. Evaluation across a workbench
linear problem and a satellite rendezvous maneuver, implementing the
Clohessy-Wiltshire (CW) model in space robotics, demonstrates that the
SNN-LQR-MSIF achieves acceptable performance in computational efficiency,
robustness, and accuracy. This positions it as a promising solution for
addressing concurrent estimation and control challenges in dynamic systems.
|
Reza Ahmadvand, Sarah Safura Sharif, Yaser Mike Banad
|
2023-10-05T20:05:47Z
|
http://arxiv.org/abs/2310.03873v1
|
Neuromorphic Robust Framework for Concurrent Estimation and Control in Dynamical Systems using Spiking Neural Networks
###### Abstract
Concurrent estimation and control of robotic systems remains an ongoing challenge, where controllers rely on data extracted from states/parameters riddled with uncertainties and noises. Framework suitability hinges on task complexity and computational constraints, demanding a balance between computational efficiency and mission-critical accuracy. This study leverages recent advancements in neuromorphic computing, particularly spiking neural networks (SNNs), for estimation and control applications. Our presented framework employs a recurrent network of leaky integrate-and-fire (LIF) neurons, mimicking a linear quadratic regulator (LQR) through a robust filtering strategy--modified sliding innovation filter (MSIF). Benefiting from both the robustness of MSIF and the computational efficiency of SNN, our framework customizes SNN weight matrices to match the desired system model without requiring training. Additionally, the network employs a biologically plausible firing rule similar to predictive coding. In the presence of uncertainties, we compare the SNN-LQR-MSIF with non-spiking LQR-MSIF and the optimal linear quadratic Gaussian (LQG) strategy. Evaluation across a workbench linear problem and a satellite rendezvous maneuver, implementing the Clohessy-Wiltshire (CW) model in space robotics, demonstrates that the SNN-LQR-MSIF achieves acceptable performance in computational efficiency, robustness, and accuracy. This positions it as a promising solution for addressing concurrent estimation and control challenges in dynamic systems.
Neuromorphic computing, Spiking neural network, Modified sliding innovation filter, Linear quadratic Gaussian, Satellite rendezvous maneuver.
## I Introduction
As the design and implementation of robotic manipulators/systems undertaking diverse real-world tasks grow more ambitious, the importance of computational efficiency, reliability, and accuracy escalates. Currently, all the implemented controllers rely heavily on the provision of accurate information about the system states/parameters obtained through various types of sensors, a task that often proves elusive due to the multifaceted uncertainties inherent to robotic systems. These uncertainties encompass environmental instability, unmodeled dynamics, and sensor noises, all of which can lead to data degradation, ultimately impacting controller performance. Furthermore, in some scenarios, obtaining complete measurements of all the states and parameters that describe the dynamics remains an impractical endeavor. Consequently, the ability to perform estimation simultaneously with control operations is paramount for ensuring the safe and accurate manipulation of robotic systems [1, 2].
In light of the constraints imposed by computing resources and energy consumption, the development of concurrent estimation and control frameworks that excel in computational efficiency, robustness, and accuracy becomes an imperative endeavor. The linear quadratic Gaussian (LQG) which is a popular and optimal framework for simultaneous estimation and control of linear dynamical systems, has found widespread adoption across various domains such as robotic manipulators [3], robot control [4], robot path planning [5], and satellite control [6]. However, the LQG framework is not without its limitations. The LQG framework is a linear quadratic regulator (LQR) that works based on the state feedback provided by the Kalman filter (KF) [7]. When confronted with uncertain dynamic models, its performance diminishes, and in the presence of external disturbances, it is not robust enough [8]. In such circumstances, the KF employed in conjunction with LQR control falls short of providing accurate information about system states/parameters. Consequently, the demonstrated limitations of the LQG underscore the pressing need for the development of a framework grounded in robust estimation principles.
In this study, we introduce a novel framework, LQR-MSIF, which combines the LQR controller with a recently introduced robust filtering strategy known as modified sliding innovation filter (MSIF) [9, 10]. The LQR-MSIF leverages the robustness of the MSIF filter in processing signals obtained from measurement systems. The MSIF represents an evolution of the sliding innovation filter (SIF), which belongs to the family of variable structure filters (VSF) [11], and also it can be considered as a new generation of smooth variable structure filter (SVSF) [12]. Importantly, unlike the KF family, which prioritizes frameworks founded on minimal estimation error, the VSF family of algorithms has been developed based on guaranteed stability in the presence of bounded modeling uncertainties and external disturbances [13].
Additionally, considering the recent advancements in neuromorphic computing tools, including spiking neural networks (SNN), and their applications in robotics control and estimation [10, 14], as well as the spike coding theories [15],
we present a pioneering approach. In this study, to introduce a framework that comprehensively addresses the aforementioned limitations, we translate the LQR-MSIF into a neuromorphic SNN-based framework, in which the firing rule is derived from the network's prediction error with respect to the estimated state vector, a manifestation of predictive coding [15]. This theory posits that the brain perpetually constructs and enhances a 'mental model' of its surrounding environment, serving the critical function of anticipating sensory input signals, which are subsequently compared with the actual sensory inputs received. As the concept of representation learning gains increasing prominence, predictive coding theory has found vibrant application and exploration within the realms of biologically inspired neural networks, such as SNN. The adoption of SNNs mitigates the computational efficiency challenges associated with this problem [16]. Owing to their minimal computational burden and inherent scalability, SNNs offer significant advantages over traditional non-spiking computing methods [17].
SNNs represent the third generation of neural networks, taking inspiration from the human brain, where neurons communicate using electrical pulses called spikes. SNNs leverage neural circuits composed of neurons and synapses, communicating via encoded data through spikes in an asynchronous fashion [17, 18, 19, 20, 21]. This asynchronous spiking fashion, characterized by event-driven processing [10], stands in contrast to traditional Artificial Neural Networks (ANNs), which operate synchronously or, in other words, are time-driven. Studies [22] demonstrate that, for equivalent tasks, SNNs are 6 to 8 times more energy efficient than ANNs with an acceptable trade-off in accuracy [23]. Moreover, the inherent scalability of SNNs enhances their reliability, particularly under the condition of neuron silencing, where neuron loss is compensated for by an increase in the spiking rate of remaining neurons [18].
Thus, to harness the advantage of SNNs for simultaneous robust estimation and control, here we integrate the methods proposed in prior studies [10] and [14] to develop the previously mentioned SNN-LQR-MSIF framework, anticipating substantial advantages. Subsequently, we assess the performance of the proposed SNN-LQR-MSIF framework through a series of evaluations. Initially, we apply it to a linear workbench problem, followed by its application to the intricate task of satellite rendezvous in circular orbit, a critical maneuver in space robotic applications such as on-orbit servicing and refueling [24]. We then compare the SNN-LQR-MSIF with its non-spiking counterpart, LQR-MSIF, and the standard LQG under various sources of uncertainty, including modeling uncertainty, measurement outliers, and neuron silencing. For the proposed framework, our findings revealed acceptable performance in terms of accuracy and robustness, while it outperforms the traditional frameworks in terms of computational efficiency.
This paper is organized as follows. Section 2 provides an overview of related works and contributions. Then, the preliminaries, underlying theories, and the proposed framework for addressing the problem of concurrent robust estimation and control in linear dynamical systems are presented in Section 3. Next, Section 4 provides numerical simulations and discussions of the results, while Section 5 serves as the conclusion of the paper.
## II Related Works and Contributions
In this section, an overview of recent related works, and our contributions have been presented separately.
### _Related works_
This section offers a concise overview of recent works related to the problem of concurrent estimation and control. In [14], Yamazaki _et al_, proposed an SNN-based framework for concurrent estimation and control, employing a combination of the Luenberger observer and LQR controller. They applied their method to scenarios involving a spring-mass-damper (SMD) system and a Cartpole system, evaluating its performance in terms of accuracy and similarity to its non-spiking counterpart. They also explored the robustness of their network in handling neuron silencing. While their results were promising, their framework had limitations, notably the need to design both controller and observer gains for each problem. Additionally, since they used the Luenberger observer, their framework inherited the observer limitations related to modeling uncertainties and external disturbances, which were not thoroughly assessed for robustness.
To address these limitations, a novel SNN-based KF was proposed in [10] for optimal estimation of linear dynamical systems. In addition to performing the optimal estimation, this approach eliminated the need for observer gain design, simplifying the process. To enhance robustness against modeling uncertainties and external disturbances, a robust SNN-based estimation framework based on MSIF was introduced. Comparative assessments involving traditional KF and MSIF demonstrated acceptable performance for the SNN-based frameworks in terms of similarity to non-spiking strategies, robustness, and accuracy. However, the previous study did not investigate concurrent estimation and control scenarios, which is the primary focus of this research. Additionally, none of the aforementioned methods utilized biologically inspired firing rules for their network.
### _Contributions_
The contributions of our research are as follows:
* **Development of SNN-LQR-MSIF:** We introduce a robust SNN-based framework for concurrent estimation and control of linear dynamical systems, named SNN-LQR-MSIF. This framework leverages previously proposed methods in [10] and [14].
* **Biologically Plausible Firing Rule:** In order to have control over the spike distribution in the network and prevent excessive spiking of any part of the network or any single neuron, we implement a biologically plausible firing rule based on the concept of predictive coding [15], enhancing the biological relevance of our network.
* **Robustness and Accuracy Assessment:** We comprehensively investigate the performance of our method in scenarios subjected to modeling uncertainties, measurement outliers, and neuron silencing, evaluating robustness and accuracy compared to its non-spiking counterpart LQR-MSIF and the traditional LQG. We also
analyze spiking patterns to demonstrate computational efficiency.
* **Application to Satellite Rendezvous:** We apply the SNN-LQR-MSIF to a real-world scenario involving concurrent estimation and control of satellite rendezvous, a novel application for this type of neuromorphic framework. We compare its performance with that of LQR-MSIF and LQG.
## III Theory
In this section, we provide essential preliminaries, followed by an outline of the study's outcomes. The linear dynamical system and measurement package considered in this study are defined by the following equations:
\[\dot{\mathbf{x}} =A\mathbf{x}+B\mathbf{u}+\mathbf{w} \tag{1}\] \[\mathbf{z} =\mathbf{Cx}+\mathbf{d} \tag{2}\]
Here, \(\mathbf{x}\in R^{n_{\mathbf{x}}}\) refers to the state vector, \(\mathbf{u}\in R^{n_{\mathbf{u}}}\) is the input vector, and \(\mathbf{z}\in R^{n_{\mathbf{z}}}\) is the measurement vector. \(A\in R^{n_{\mathbf{x}}\times n_{\mathbf{x}}}\) and \(B\in R^{n_{\mathbf{x}}\times n_{\mathbf{u}}}\) denote the dynamic transition and input matrices, respectively, while \(\mathbf{C}\in R^{n_{\mathbf{z}}\times n_{\mathbf{x}}}\) is the measurement matrix. \(\mathbf{w}\) and \(\mathbf{d}\) represent zero-mean Gaussian white noise with covariance matrices \(Q\) and \(R\), respectively.
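Purely as an illustration (not part of the original formulation), the following is a minimal Euler discretization of Eqs. (1)-(2) with Gaussian noise; the matrices \(A\), \(B\), \(C\), the covariances \(Q\), \(R\) and the input \(\mathbf{u}\) are placeholder values:

```python
import numpy as np

# Illustrative Euler simulation of Eqs. (1)-(2); all numerical values are placeholders.
rng = np.random.default_rng(1)
dt, steps = 1e-2, 500
A = np.array([[0.0, 1.0], [-1.0, -0.2]])   # dynamic transition matrix
B = np.array([[0.0], [1.0]])               # input matrix
C = np.array([[1.0, 0.0]])                 # measurement matrix (n_z x n_x)
Q = 1e-3 * np.eye(2)                       # process-noise covariance
R = 1e-2 * np.eye(1)                       # measurement-noise covariance

x = np.zeros(2)
for k in range(steps):
    u = np.array([np.sin(0.1 * k * dt)])              # an arbitrary control input
    w = rng.multivariate_normal(np.zeros(2), Q)       # process noise, Eq. (1)
    d = rng.multivariate_normal(np.zeros(1), R)       # measurement noise, Eq. (2)
    x = x + dt * (A @ x + B @ u) + np.sqrt(dt) * w    # discretized state dynamics
    z = C @ x + d                                     # noisy measurement
```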
Figure 1 depicts the traditional block diagram of a concurrent estimation and control loop in conventional dynamical systems. This diagram reveals that both the estimator and controller employ sequential algorithms, resembling the logic of traditional von Neumann computer architectures.
### _Spiking neural networks (SNN)_
In this section, we present a brief overview of implementing an SNN, including its firing rule. To design a network composed of recurrent leaky integrate-and-fire (LIF) neurons capable of approximating the temporal variation of a parameter like \(\mathbf{x}\) as expressed in Eq. (1), we need to implement the following equation [14]:
\[\dot{\mathbf{\sigma}}=\ -\lambda\mathbf{\sigma}+D^{T}(\dot{\mathbf{x}}+\lambda\mathbf{x})-D^{T}D \mathbf{s} \tag{3}\]
Here, \(\mathbf{\sigma}\in R^{N}\) refers to the neuron membrane potential vector, \(\lambda\) is a decay or leak term considered on the membrane potential of the neurons, \(D\in R^{n_{\mathbf{x}}\times N}\) is the random fixed decoding matrix containing the neurons' output kernel, and \(\mathbf{s}\in R^{N}\) is the emitted spike population of the neurons in each time step. Further, according to spike coding network theories [14, 15], the introduced network of LIF neurons can reproduce the temporal variation of \(\mathbf{x}\) under two assumptions. First, we should be able to estimate \(\mathbf{x}\) from neural activity using the following rule:
\[\mathbf{\hat{x}}=D\mathbf{r} \tag{4}\]
Here, \(\mathbf{r}\in R^{N}\) represents the filtered spike trains, which have slower dynamics compared to \(\mathbf{s}\in R^{N}\). The dynamics of the filtered spike trains are provided by:
\[\dot{\mathbf{r}}=-\lambda\mathbf{r}+\mathbf{s} \tag{5}\]
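As a concrete illustration of Eqs. (3)-(5), the sketch below advances the membrane potentials by one Euler step, emits spikes with a simple per-neuron threshold, and decodes the state estimate via Eq. (4). The threshold expression used here is our assumption, anticipating the firing rule discussed below; all names are ours.

```python
import numpy as np

def spike_coding_step(sigma, r, x, x_dot, D, lam, mu, nu, dt):
    """One step of the spike-coding network of Eqs. (3)-(5) with a greedy threshold rule."""
    T = 0.5 * (np.sum(D**2, axis=0) + mu * lam**2 + nu)    # per-neuron thresholds (assumed form)
    s = (sigma > T).astype(float)                          # spikes emitted this step
    sigma = sigma + dt * (-lam * sigma + D.T @ (x_dot + lam * x)) - D.T @ D @ s   # Eq. (3)
    r = r + dt * (-lam * r) + s                            # filtered spike trains, Eq. (5)
    x_hat = D @ r                                          # linear decoder, Eq. (4)
    return sigma, r, s, x_hat
```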
The second assumption is that the network minimizes the cumulative error between the true value of \(\mathbf{x}\) and the estimate \(\mathbf{\hat{x}}\) by optimizing the spike times, rather than by changing the output kernel values \(D\). In other words, the network minimizes the cumulative error between the state and its estimate while limiting computational cost by controlling spike occurrence. To achieve this, it minimizes the following cost function [15]:
\[E(t)=\|\mathbf{x}(t)-\mathbf{\hat{x}}(t)\|_{2}^{2}+\mu\|\mathbf{r}(t)\|_{2}^{2}+\nu\|\mathbf{r}(t)\|_{1} \tag{6}\]

Here, \(\mu\) and \(\nu\) are regularization parameters that penalize uneven and excessive firing, respectively. Greedy minimization of this cost over spike times yields the firing rule: neuron \(k\) emits a spike whenever its membrane potential exceeds its threshold,

\[\sigma_{k}>T_{k}=\frac{\|D_{k}\|_{2}^{2}+\mu\lambda^{2}+\nu}{2} \tag{7}\]

where \(D_{k}\) denotes the \(k\)-th column of \(D\).

### _SNN-MSIF_

The SNN-MSIF estimator combines the computational efficiency of
SNNs with the robustness of the MSIF. The equations governing SNN-MSIF are as follows [10]:
\[\dot{\mathbf{\sigma}}=\ -\lambda\mathbf{\sigma}+F\mathbf{u}(t)+\varOmega_{s}\mathbf{r}+\varOmega_{f}\mathbf{s}+\varOmega_{k}\mathbf{r}+F_{k}\mathbf{z}+\mathbf{\eta} \tag{8}\]
where:
\[F=D^{T}B \tag{9}\] \[\varOmega_{s}=D^{T}(A+\lambda I)D \tag{10}\] \[\varOmega_{f}=-(D^{T}D+\mu\lambda^{2}I) \tag{11}\]
Here, \(\lambda\) represents the leak rate for the membrane potential, and \(F\) encodes the control input into a set of spikes that is readable by the network. \(\varOmega_{s}\) and \(\varOmega_{f}\) are the synaptic weights for slow and fast connections, respectively. While slow connections typically govern the implementation of the desired system dynamics, in this context they are chiefly responsible for executing the linear dynamics of the MSIF estimator. Conversely, fast connections play a pivotal role in achieving an even distribution of the spikes across the network. Consequently, the primary contributors to the _a-priori_ prediction phase of the estimation process are the second through fourth terms in Eq. (8). In contrast, the subsequent two terms, which are governed by \(\varOmega_{k}\) and \(F_{k}\), adapt dynamically during the estimation process and handle the measurement-update or _a-posteriori_ phase of the estimation. Here, \(\varOmega_{k}\) imparts the dynamics of the update component, while \(F_{k}\) furnishes the SNN with an encoded measurement vector. To update these weight matrices, the following expressions need to be used:
\[\varOmega_{k} =\ -D^{T}(C^{+}sat(diag(P^{xx})/\delta))CD \tag{12}\] \[F_{k} =D^{T}(C^{+}sat(diag(P^{xx})/\delta)) \tag{13}\]
Here, \(P^{xx}\) represents the innovation covariance matrix, and \(\delta\) is the sliding boundary layer, a tuning parameter. To update \(P^{xx}\), the following equations are used:
\[P^{xx}=CPC^{T}+R \tag{14}\] \[\dot{P}=AP+PA^{T}+Q-PC^{T}R^{-1}CP \tag{15}\]
The final term \(\mathbf{\eta}\), accounts for zero-mean Gaussian noise, simulating the stochastic nature of the neural activity in biological neural circuits. The weight matrices are analytically designed to capture MSIF dynamics, allowing the estimation of a fully observable linear dynamical system with partially noisy state measurements via a network of recurrent LIF neurons.
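A compact sketch of how the estimator weights of Eqs. (9)-(15) could be assembled is given below. We interpret \(sat(\cdot)\) as elementwise saturation at \(\pm 1\), use a Moore-Penrose pseudo-inverse for \(C^{+}\), and propagate the Riccati equation with a simple Euler step; these interpretations, and all function names, are our assumptions.

```python
import numpy as np

def snn_msif_weights(A, B, C, D, lam, mu, P, Q, R, delta, dt):
    """Synaptic weights of the SNN-MSIF estimator (Eqs. 9-15)."""
    n_x, N = D.shape
    F       = D.T @ B                                            # input encoder, Eq. (9)
    Omega_s = D.T @ (A + lam * np.eye(n_x)) @ D                  # slow connections, Eq. (10)
    Omega_f = -(D.T @ D + mu * lam**2 * np.eye(N))               # fast connections, Eq. (11)
    # Riccati propagation and innovation covariance, Eqs. (15) and (14)
    P   = P + dt * (A @ P + P @ A.T + Q - P @ C.T @ np.linalg.inv(R) @ C @ P)
    Pxx = C @ P @ C.T + R
    gain = np.linalg.pinv(C) @ np.diag(np.clip(np.diag(Pxx) / delta, -1.0, 1.0))
    Omega_k = -D.T @ gain @ C @ D                                # update dynamics, Eq. (12)
    F_k     = D.T @ gain                                         # encoded measurement gain, Eq. (13)
    return F, Omega_s, Omega_f, Omega_k, F_k, P
```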
Utilizing the framework presented in this section for estimation, in combination with conventional control methods, results in the system depicted in Fig. 2. The figure illustrates how the conventional non-spiking estimator in Fig. 1 has been replaced by an SNN designed to function as an estimator. Instead of employing sequential estimation algorithms, this SNN-based approach capitalizes on the advantages of SNNs, including computational efficiency, highly parallel computing, and scalability. However, as shown in Fig. 2, estimation and control tasks are still conducted sequentially.
### _SNN-based concurrent estimation and control_
This section extends SNN-MSIF to a network capable of concurrently performing state estimation and control of linear dynamical systems. As introduced in [10], for the derivation of the SNN-MSIF, which implements the linear dynamics of an estimator, the SNN should be able to mimic the following dynamics:
\[\dot{\hat{\mathbf{x}}}=A\mathbf{\hat{x}}+B\mathbf{u}+K_{KF}(\mathbf{z}-\mathbf{\hat{z}}) \tag{16}\]
To go further and add control to the above dynamics, the state-feedback law \(\mathbf{u}=-K_{c}(\mathbf{\hat{x}}-\mathbf{x}^{D})\) is taken as the control input, so the network should emulate the following linear system of equations:
\[\dot{\hat{\mathbf{x}}}=A\mathbf{\hat{x}}-BK_{c}(\mathbf{\hat{x}}\ -\mathbf{x}^{D})+K_{KF}(\mathbf{z}-\mathbf{\hat{z}}) \tag{17}\]
where \(\mathbf{x}^{D}\) denotes the desired state. To extend the previously introduced network, the control rule \(\mathbf{u}\) is substituted into Eq. (8), resulting in the following network equation:
\[\dot{\mathbf{\sigma}}=\ -\lambda\mathbf{\sigma}-FK_{c}(\mathbf{\hat{x}}\ -\mathbf{x}^{D})+\varOmega_{s}\mathbf{r}+\varOmega_{f}\mathbf{s}+\varOmega_{k}\mathbf{r}+F_{k}\mathbf{z}+\mathbf{\eta} \tag{18}\]

Writing \(\mathbf{\hat{x}}=D\mathbf{r}\) and encoding the desired state through a second decoding matrix \(\bar{D}\), so that \(\mathbf{x}^{D}\approx\bar{D}\mathbf{r}\), the control term \(-FK_{c}(\mathbf{\hat{x}}-\mathbf{x}^{D})\) can be absorbed into the recurrent connectivity as \(\varOmega_{c}\mathbf{r}+\bar{\varOmega}\mathbf{r}\),
where:
\[\Omega_{c} =-D^{T}BK_{c}D \tag{22}\] \[\bar{\Omega} =D^{T}BK_{c}\bar{D} \tag{23}\] \[\bar{\Omega}_{f} =-(\bar{D}^{T}\bar{D}+\mu\lambda^{2}I) \tag{24}\]
Here, \(\Omega_{c}\) represents the slow connections implementing the control input for the desired system, while \(\bar{\Omega}\) and \(\bar{\Omega}_{f}\) represent the slow and fast synaptic weights of the connections that, in parallel with the other connections, implement the dynamics of the desired state for the controller. Eq. (18) thus represents the membrane potential dynamics of a recurrent SNN of LIF neurons capable of concurrently performing state estimation and control of linear dynamical systems. While the controller gain \(K_{c}\) must be designed for the considered system, this framework operates without requiring any learning by the network. Furthermore, although we implemented optimal LQR control in this study, the controller gain can be independently designed using any arbitrary approach. Finally, to extract the control input vector for the external plant from the spike populations, the following equation is employed:
\[\mathbf{u}=D_{u}\mathbf{r} \tag{25}\]
where:
\[D_{u}=-K_{c}(D-\bar{D}) \tag{26}\]
The above matrix can be used for decoding the control input from the neural activity inside the network. In summary, the proposed framework concurrently estimates the state vector \(\mathbf{x}\) from a noisy partial measurement vector \(\mathbf{z}\) and provides control input for the considered system. Fig. 3 illustrates the block diagram of the framework presented in this section.
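The control-related weights and the readout of Eqs. (22)-(26) translate directly into code; the sketch below simply transcribes those definitions (function and variable names are ours).

```python
import numpy as np

def snn_control_weights(B, D, D_bar, K_c, lam, mu):
    """Control-related synaptic weights and readout of the concurrent network (Eqs. 22-26)."""
    N = D.shape[1]
    Omega_c     = -D.T @ B @ K_c @ D                              # feedback term, Eq. (22)
    Omega_bar   =  D.T @ B @ K_c @ D_bar                          # desired-state term, Eq. (23)
    Omega_bar_f = -(D_bar.T @ D_bar + mu * lam**2 * np.eye(N))    # fast connections, Eq. (24)
    D_u         = -K_c @ (D - D_bar)                              # control readout, Eq. (26)
    return Omega_c, Omega_bar, Omega_bar_f, D_u

# The plant input is then decoded from the filtered spike trains r as u = D_u @ r (Eq. 25).
```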
Fig. 3 demonstrates that for this framework, both the blocks of estimator and controller from Fig. 1 and Fig. 2 have been replaced by a single SNN. This represents an extension of the framework, leveraging the advantages of SNNs. Furthermore, the computations required for state estimation and control input have been parallelized. Consequently, implementing this framework can significantly reduce computational costs, allowing more complex tasks to be performed even with limited computing resources. Additionally, owing to the scalability of SNNs, if a part of the implemented network becomes damaged or loses some neurons, the process continues by increasing the spiking rate of the remaining neurons, as demonstrated in the next section.
## IV Numerical Simulations
In this section, we first apply the proposed framework to a linear workbench problem and conduct various performance evaluations in terms of robustness, accuracy, and computational efficiency, in comparison with the well-established methods LQG and LQR-MSIF. Subsequently, we extend the analysis of the SNN-LQR-MSIF to a practical scenario involving the concurrent estimation and control of satellite rendezvous maneuvers.
### _Case study 1: Linear workbench problem_
Here, we initiate our investigation by applying the introduced framework to the following linear dynamical system:
\[\dot{\mathbf{x}}=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}\mathbf{x}+\begin{bmatrix}0\\ 1\end{bmatrix}\mathbf{u}+\mathbf{w} \tag{27}\] \[\mathbf{z}=\begin{bmatrix}1&0\end{bmatrix}\mathbf{x}+\mathbf{v} \tag{28}\]
where:
\[\mathbf{u}=-K_{c}\mathbf{x} \tag{29}\]
Simulations have been performed over a 10-second period with a time step of 0.01, employing the numerical values provided in TABLE I.
Initially, we evaluated the applicability of the proposed framework in comparison with its non-spiking counterparts, LQG, and LQR-MSIF, by simulating a deterministic system without uncertainties. Next, we assessed the performance and effectiveness of the proposed framework by introducing various sources of uncertainties and disturbances. In line with real-world scenarios, where exact decoding matrices are typically unknown, we defined the decoding matrices \(D\) and \(\bar{D}\) using random samples from zero-mean Gaussian distributions with covariances of 0.25 and 1/300, respectively.
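The sketch below reproduces this setup: a double-integrator plant as we read Eq. (27), the LQR gain obtained from the TABLE I weights (which recovers \(K_{c}\approx[1,\,1.7321]\)), and decoding matrices drawn from the stated zero-mean Gaussians. The random seed and any quantity not listed in TABLE I are our assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])         # plant dynamics, Eq. (27)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                     # position-only measurement, Eq. (28)

Qc, Rc = np.eye(2), np.eye(1)                  # LQR weights from TABLE I
S = solve_continuous_are(A, B, Qc, Rc)
K_c = np.linalg.inv(Rc) @ B.T @ S              # -> approximately [1, 1.7321]

N = 250                                        # number of neurons
rng = np.random.default_rng(0)
D     = rng.normal(0.0, np.sqrt(0.25),      size=(2, N))   # decoder for the state estimate
D_bar = rng.normal(0.0, np.sqrt(1.0 / 300), size=(2, N))   # decoder for the desired state
```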
Fig. 4 displays time histories of the controlled states and estimation errors within \(\pm 3\sigma\) bounds obtained from SNN-LQR-MSIF in comparison with LQG and LQR-MSIF. Fig. 4(a) illustrates that the state \(x_{1}\) converges to zero after \(t=5\)s, showcasing similar performance between the proposed
\begin{table}
\begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline \(x_{0}\) & [10,1] \\ \(\hat{x}_{0}\) & [10,1] \\ \(K_{c}\) & [1, 1.7321] \\ \(Q_{c}\) & \(I\) \\ \(R_{c}\) & \(I\) \\ \(Q\) & \(I/1000\) \\ \(R\) & \(I/100\) \\ \(N\) & \(250\) \\ \(\lambda\) & \(0.01\) \\ \(\mu\) & \(0.005\) \\ \(\nu\) & \(0.005\) \\ \(\delta_{MSIF}\) & \(0.005\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: LINEAR SYSTEM SIMULATION PARAMETERS
Fig. 3: Block diagram of SNN-based concurrent estimation and control loop.
framework and its non-spiking counterparts, LQG and LQR-MSIF. Fig. 4(b) indicates that the state \(x_{2}\) converges to zero around \(t=6\)s, again showing consistent performance between the proposed framework and non-spiking methods. Fig. 4(c) demonstrates that all considered strategies remain stable, with errors staying within the prescribed bounds. Notably, the error obtained from KF deviates further from zero before converging around \(t=3\)s, while the errors from SNN-MSIF and MSIF exhibit faster convergence with smaller deviations. Fig. 4(d) confirms the stability of all estimation methods, with SNN-LQR-MSIF showing nearly identical performance to non-spiking KF and MSIF.
Further, to gain more intuitive insight into the tuning parameters of the firing rule, namely \(\mu\) and \(\nu\), and their impact on control accuracy, we conducted a sensitivity analysis. As depicted in Fig. 5, which uses a colored map to show the variation of the normalized average error, this analysis reveals that parameter tuning directly affects control accuracy, and that for a specific system a proper parameter set can be identified by trial and error. The preferred parameter set used throughout our simulations is \(\mu=0.005\) and \(\nu=0.005\), marked with a white circle in the figure. The percentage of spikes emitted by the neurons, relative to all possible spikes, is also shown in the figure as a number for each set of \(\mu\) and \(\nu\). It can be observed that decreasing \(\nu\) leads to a higher percentage of emitted spikes for each \(\mu\). This highlights a trade-off between accuracy and computational efficiency that can be an important factor when tuning the network firing rule, and confirms the earlier remark that \(\nu\) controls the number of spikes.
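A sensitivity sweep of this kind can be scripted as below; `run_closed_loop` is a placeholder for one complete simulation of the closed loop that returns the average control error and the fraction of emitted spikes, and is not part of the original work.

```python
import numpy as np

def sweep_mu_nu(run_closed_loop, mus, nus):
    """Grid search over the firing-rule penalties (mu, nu)."""
    err = np.zeros((len(mus), len(nus)))
    spk = np.zeros_like(err)
    for i, mu in enumerate(mus):
        for j, nu in enumerate(nus):
            err[i, j], spk[i, j] = run_closed_loop(mu=mu, nu=nu)
    return err / err.max(), spk        # normalized average error and spike fraction

# Example grid: mus = nus = np.logspace(-4, -1, 7)
```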
Furthermore, we evaluated the robustness of SNN-LQR-MSIF against modeling uncertainties by introducing a 20% error in the dynamic transition matrix, \(\tilde{A}=0.8A\). Simulation results in the presence of modeling uncertainty were compared with LQG and LQR-MSIF, as presented in Fig. 6. Fig. 6(a) shows that in the presence of uncertainty, the SNN-based framework for the state \(x_{1}\) deviates from non-spiking LQG and LQR-MSIF. However, SNN-LQR-MSIF exhibits superior performance, converging toward zero at approximately \(t=4\)s and converging completely by \(t=6\)s. In contrast, the non-spiking frameworks yield matching results that converge to zero at \(t=7\)s. Fig. 6(b) demonstrates that the state \(x_{2}\) exhibits a similar deviation from the non-spiking methods, with a slightly greater overshoot and error until \(t=4\)s. However, after \(t=4\)s, SNN-LQR-MSIF displays faster convergence, a minor overshoot, and eventual convergence to zero after \(t=8\)s. In summary, these findings indicate that the proposed SNN-based framework exhibits commendable robustness in handling modeling uncertainties or external disturbances compared to non-spiking methods. Fig. 6(c) illustrates the estimation results for the state \(x_{1}\), showcasing performance of SNN-LQR-MSIF comparable to that of LQR-MSIF. Initially, both methods exhibit an error trend that diverges over time, exceeding the bound around \(t=1.5\)s but returning within the bound by \(t=4\)s. Eventually, both methods achieve stable estimation, converging to zero around \(t=6\)s and \(t=8\)s for SNN-MSIF and MSIF, respectively. Meanwhile, the error from KF deviates entirely, returns to the bound only at around \(t=8\)s, and finally converges to zero at \(t=10\)s. Notably, at \(t=6\)s, KF exhibits an error that is approximately 20 times greater than that of the proposed SNN-LQR-MSIF, which is already near zero. In Fig. 6(d), the results for the state \(x_{2}\) show nearly identical performance between SNN-MSIF and MSIF, both maintaining stability in their estimations throughout the considered period. Conversely, the error from KF deviates similarly to what occurred with the state \(x_{1}\): it exceeds the bound and rises continually until almost \(t=2.5\)s, where it reaches its maximum, roughly 102 times greater than the error obtained for MSIF
Figure 4: Controlled states and estimation errors within \(\pm 3\sigma\) bounds (a) controlled state \(x_{1}\), (b) controlled state \(x_{2}\), (c) estimation error of \(x_{1}\), (d) estimation error of \(x_{2}\)
Figure 5: Colored map analysis of normalized average error obtained from various sets of \(\mu\) and \(\nu\). Additionally, compared to all possible spikes for each set of \(\mu\) and \(\nu\) the number of emitted spikes in percent is presented.
and SNN-MSIF, both of which remain approximately zero. Hence, it is evident that SNN-MSIF outperforms MSIF through faster convergence to zero in the presence of uncertainty, and it outperforms KF in terms of estimation stability.
An important challenge in robust navigation and control systems is handling measurement outliers, which can arise from sensor faults or external disturbances in the working environment. Therefore, to assess the framework's robustness in such scenarios, unmodeled measurement outliers were introduced into the system at \(t=3\)s, \(t=5\)s, and \(t=6\)s. To simulate the presence of measurement outliers, the measurement noise was multiplied by a factor of 500 at these time points. Fig. 7 presents a comparison of results for the controlled states and estimation errors within \(\pm 3\sigma\) bounds obtained from the various frameworks in the presence of measurement outliers. Fig. 7(a) displays the time history of the state \(x_{1}\). It demonstrates that the presence of measurement outliers causes slight deviations in the results obtained from the SNN-based framework between \(t=3\)s and \(t=7\)s. However, the framework successfully regulates the error, ultimately converging to the results obtained from the non-spiking methods. Fig. 7(b) shows the same behavior for the state \(x_{2}\). Results from the SNN-based framework show minor deviations compared to the non-spiking methods between \(t=3\)s and \(t=7\)s, indicating that, although more sensitive to measurement outliers, the SNN-based method continues to control the states effectively. Fig. 7(c) presents the estimation errors for the state \(x_{1}\), which exhibit significant deviations at the points of outlier injection. However, for all considered filters, these deviations are followed by rapid convergence to zero, confirming the filters' stability. Moreover, the error from SNN-MSIF is considerably smaller, especially compared to KF, which exceeds the bound at all injection points. Fig. 7(d) examines the error for the state \(x_{2}\): KF experiences abrupt deviations and its error exceeds the bound at the points of outlier injection, whereas SNN-MSIF and MSIF remain stable throughout the simulation. Thus, SNN-MSIF exhibits superior robustness in such situations.
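One simple way to reproduce this outlier scenario is to scale the measurement-noise sample at the stated instants, as sketched below; the function name and the interpretation of "multiplied by a factor of 500" as scaling the noise sample are our assumptions.

```python
import numpy as np

def measure_with_outliers(C, x, R, t, outlier_times=(3.0, 5.0, 6.0),
                          factor=500.0, dt=0.01, rng=None):
    """Noisy measurement z = C x + v, with v scaled by `factor` at the outlier instants."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.multivariate_normal(np.zeros(C.shape[0]), R)
    if any(abs(t - t_o) < dt / 2 for t_o in outlier_times):
        v = factor * v                       # unmodeled outlier injection
    return C @ x + v
```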
Fig. 8 illustrates the spiking pattern of the network achieved by the SNN-LQR-MSIF approach when confronted with measurement outliers. In Fig. 8(a), we present the spiking pattern recorded in the presence of measurement outliers. It is evident that just before the points of outlier injection (at time steps 300, 500, and 600), most neurons are in standby mode, emitting few spikes. However, after the introduction of the outliers, a substantial portion of the neurons (around 40%) become activated to handle the injected disturbances, which are rejected within just 2-3 time steps. The neural activity then decreases, demonstrating that the network overcomes external disturbances or unmodeled dynamics by temporarily increasing neural activity, i.e., computational cost, without failing in the assigned task. Moreover, Fig. 8(b) shows the temporal variation of the percentage of active neurons, emphasizing the sudden change in the population of active neurons at the designated time steps. The population rises to nearly 40% to overcome the negative impact of the injected outliers on the system.
Fig. 8: Spiking pattern and temporal variation of active neurons population obtained from SNN-LQR-MSIF, (a) spiking pattern, (b) temporal variation of active neurons
Fig. 6: Controlled states and estimation errors within \(\pm 3\sigma\) bounds for uncertain model \(\tilde{A}=0.8A\), (a) controlled state \(x_{1}\), (b) controlled state \(x_{2}\), (c) estimation error of \(x_{1}\), (d) estimation error of \(x_{2}\)
Fig. 7: Controlled states and estimation errors within \(\pm 3\sigma\) bounds for measurement outlier (a) controlled state \(x_{1}\), (b) controlled state \(x_{2}\), (c) estimation error of \(x_{1}\), (d) estimation error of \(x_{2}\)
Finally, to assess the proposed framework's performance in situations where some neurons may become silent, several simulations were conducted with the number of neurons varying from \(N=50\) to \(N=400\) in steps of 50. Fig. 9 presents the average overall network error in the controlled states after \(t=6\)s (by which point the errors have almost converged to zero) versus the number of neurons. In region 1, the error diverges sharply (the solid line showing the error variation becomes almost vertical at the edge of region 1), and this error drops abruptly at \(N=100\). This corresponds to the minimum number of neurons that the proposed framework requires to function effectively: below this threshold, the active neurons cannot provide sufficient neural activity to perform the necessary computations. Increasing the number of neurons within region 2 results in a gentle reduction in error, with the minimum error observed at the optimal population of \(N=250\). In contrast, region 3 shows that a further increase in the number of neurons degrades accuracy due to unstable spiking patterns with excessive neural activity.
Overall, the proposed framework exhibits remarkable robustness in handling measurement outliers and effectively adapts to situations with varying numbers of neurons, provided a minimum neuron threshold is maintained. These findings support the framework's suitability for robust navigation and control systems in real-world scenarios. Further studies on spiking patterns are provided in [10].
### _Case study 2: Satellite rendezvous maneuver_
This section first presents the mathematical model of the satellite rendezvous maneuver, then the design of the LQR controller, and finally the simulation results. The rendezvous problem involves maneuvering two distinct satellites, the chaser and the target. As depicted in Fig. 10, the chaser satellite approaches the target in orbit.
To derive the equations of relative motion, we consider the following equation in the Earth-centered inertial frame (ECI) [25].
\[\mathbf{s}=\mathbf{r}_{c}-\mathbf{r}_{t} \tag{30}\]
Here, \(\mathbf{r}_{c}\) and \(\mathbf{r}_{t}\) represent the position vectors of the chaser and target, respectively. The relative acceleration is described by the following expression:
\[\ddot{\mathbf{s}}=\ddot{\mathbf{r}}_{c}-\ddot{\mathbf{r}}_{t} \tag{31}\]
Meanwhile, considering the circular orbit, the gravitational force in ECI is expressed as:
\[f_{g}(\mathbf{r})=-\mu_{earth}\frac{m}{r^{3}}\mathbf{r} \tag{32}\]
Here, \(\mu_{earth}\) signifies the Earth's gravitational parameter, \(m\) denotes the spacecraft mass, and \(\mathbf{r}\) and \(r\) represent the spacecraft position vector and its magnitude, respectively. Importantly, the absolute motion of both the chaser and target in the ECI frame can be separately formulated as follows:
\[f_{g}(\mathbf{r}_{t})=\ddot{\mathbf{r}}_{t}=-\frac{\mu_{earth}}{r_{t}^{3}}\mathbf{r}_{t} \tag{33}\] \[f_{g}(\mathbf{r}_{c})=\ddot{\mathbf{r}}_{c}=-\frac{\mu_{earth}}{r_{c}^{3}}\mathbf{r}_{c} \tag{34}\]
The above equations represent normalized forms of Eq. (32), divided by the spacecraft mass. To formulate suitable equations for controller design, it is advantageous to represent relative motion in the target frame, a non-inertial reference frame rotating with the angular velocity, \(\mathbf{\omega}\).
\[\begin{split}\frac{{d^{*}}^{2}\mathbf{s}^{*}}{dt^{2}}& +\mathbf{\omega}\times(\mathbf{\omega}\times\mathbf{s})+2\mathbf{\omega}\times \frac{{d^{*}}\mathbf{s}^{*}}{dt}\\ &+\frac{d\mathbf{\omega}}{dt}\times\mathbf{s}^{*}+\frac{\mu_{earth}}{r^ {3}}M\mathbf{s}^{*}=\mathbf{f}\end{split} \tag{35}\]
Here, \(\mathbf{s}\) denotes relative distance, \(M\), and \(\mathbf{f}\) refer to Earth's mass and external forces, respectively, and the asterisk (*) denotes parameters in the target frame. The linearized form of Eq. (35) in the target frame, known as the Clohessy-Wiltshire
Fig. 10: Schematic of rendezvous maneuver
Fig. 9: Averaged network error versus number of neurons (because of the huge divergence of error in region 1, the solid line became almost vertical at the edge of region 1)
(CW) equations, is expressed as [19]:
\[\ddot{x}-2n\dot{z}=f_{x} \tag{36}\] \[\ddot{y}+n^{2}\dot{y}=f_{y} \tag{37}\] \[\ddot{z}+2n\dot{x}-2n^{2}\dot{z}=f_{z} \tag{38}\]
where:
\[n=\sqrt{\frac{\mu_{earth}}{R_{o}^{3}}} \tag{39}\]
Here, \(R_{o}\) represents the orbital radius of the target spacecraft, and \(n\) is the mean motion. To design the LQR controller, we begin by defining the state and input vectors as \(\mathbf{x}=[x,y,z,\dot{x},\dot{y},\dot{z}]^{T}\), and \(\mathbf{u}=[f_{x},f_{y},f_{z}]\), respectively. Subsequently, we derive the state space form of CW equations, expressed as:
\[\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u} \tag{40}\]
where:
\[A=\begin{bmatrix}0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 0&0&0&0&0&2n\\ 0&0&0&0&-n^{2}&0\\ 0&0&0&-2n&0&2n^{2}\end{bmatrix};\quad B=\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\\ 1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix} \tag{41}\]
In general, for the controllable pair of \((A,B)\), the control law for the LQR controller is given by [26]:
\[\mathbf{u}=\ -K_{LQR}\mathbf{\hat{x}} \tag{42}\]
Here, the hat symbol \(\hat{(\cdot)}\) denotes an estimated quantity. The controller gain \(K_{LQR}\) is designed to minimize the following cost function:
\[J_{c}=\ \int_{0}^{\infty}(\mathbf{x}^{T}Q_{c}\mathbf{x}\ +\mathbf{u}^{T}R_{c}\mathbf{u})dt \tag{43}\]
The weight matrices \(Q_{c}\) and \(R_{c}\) are determined through trial and error, with the conditions \(Q_{c}\geq 0\) and \(R_{c}>0\) satisfied. The controller gain \(K_{LQR}\) is calculated using the following equation:
\[K_{LQR}=R_{c}^{-1}B^{T}S \tag{44}\]
where \(S\) is the unique positive semidefinite solution of the algebraic Riccati equation:
\[A^{T}S+SA-SBR_{c}^{-1}B^{T}S+Q_{c}\ =0 \tag{45}\]
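Putting Eqs. (39)-(45) together, the sketch below builds the CW state-space model exactly as printed in Eq. (41) and computes the LQR gain through the algebraic Riccati equation; the orbital radius in the usage example is an assumed low-Earth-orbit value, not a figure taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

MU_EARTH = 3.986004418e14                      # Earth's gravitational parameter, m^3/s^2

def cw_lqr_gain(R_orbit, Qc, Rc):
    """CW state-space model (Eqs. 40-41) and LQR gain (Eqs. 44-45)."""
    n = np.sqrt(MU_EARTH / R_orbit**3)          # mean motion, Eq. (39)
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)                       # kinematic part
    A[3, 5] = 2 * n
    A[4, 4] = -n**2
    A[5, 3], A[5, 5] = -2 * n, 2 * n**2
    B = np.vstack([np.zeros((3, 3)), np.eye(3)])
    S = solve_continuous_are(A, B, Qc, Rc)      # algebraic Riccati equation, Eq. (45)
    K_lqr = np.linalg.inv(Rc) @ B.T @ S         # Eq. (44)
    return A, B, K_lqr

# Example (assumed ~400 km altitude orbit):
# A, B, K = cw_lqr_gain(R_orbit=6.771e6, Qc=1e-6 * np.eye(6), Rc=np.eye(3))
```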
It is important to note that due to the linearity and time-invariance of the considered system (LTI), the gain matrix \(K_{LQR}\) is computed offline and does not require updating during the maneuver. Moreover, based on the separation principle of linear systems theory, the obtained gain can be incorporated into our presented network without imposing any condition on the estimator. The simulations in this section are conducted using the numerical values provided in TABLE 2, with a time duration of 360 seconds and a time step of 0.1. Additionally, the decoding matrices \(D\) and \(\overline{D}\) are defined using random samples from zero-mean Gaussian distributions with covariances of 1/50, and 1/2500, respectively.
Fig. 11 presents a comparison between SNN-LQR-MSIF and non-spiking LQG and LQR-MSIF in the context of the rendezvous maneuver problem. Each element of the system's state vector is individually compared. The results demonstrate that all considered frameworks successfully control the states, with errors smoothly converging to zero. Moreover, it is evident that the proposed SNN-based framework exhibits similar performance in controlling the states, aligning with the results obtained from the optimal non-spiking framework LQG. Notably, for states z, and \(v_{z}\), some discrepancies are observed. For state z, the SNN-LQR-MSIF exhibits a slightly greater overshoot compared to non-spiking LQG and LQR-MSIF, but ultimately successfully controls the state error to zero. Furthermore, for state \(v_{z}\) the result from SNN-LQR-MSIF exhibits minor deviation from non-spiking frameworks between \(t=100\)s and \(t=200\)s. To provide quantitative insight into this comparison, averaged errors obtained from different methods after \(t=300\)s are presented in TABLE 3.
\begin{table}
\begin{tabular}{l l} \hline Parameter & Value \\ \hline \(\mathbf{r_{o}}\) (\(m\)) & \([70,30,-5]^{T}\) \\ \(\mathbf{v_{0}}\) (\(m/s\)) & \([-1.7,-0.9,0.25]^{T}\) \\ \(\mathbf{x}_{0}\) & \([\mathbf{r_{o}},\mathbf{v_{0}}]^{T}\) \\ \(\mathbf{\hat{x}}_{0}\) & \(\mathbf{x}_{0}\) \\ \(Q_{c}\) & \((1e-6)I_{6}\) \\ \(R_{c}\) & \(I_{3}\) \\ \(Q\) & \((1e-12)I_{6}\) \\ \(R\) & \((1e-2)I_{2}\) \\ \(N\) & \(350\) \\ \(\lambda\) & \(0.001\) \\ \(\mu\) & \(1\) \\ \(\nu\) & \(0.0001\) \\ \(\delta_{MSIF}\) & \(0.005\) \\ \hline \end{tabular}
\end{table} TABLE II: PARAMETERS FOR SATELLITE RENDEZVOUS
Fig. 11: Controlled states for satellite rendezvous obtained from various frameworks in normal condition.
The results reveal that non-spiking methods deliver consistent accuracy, and the SNN-based method demonstrates acceptable accuracy. In summary, compared to traditional non-spiking frameworks like LQG and LQR-MSIF, the achieved results for controlled states affirm the acceptable performance of SNN-LQR-MSIF for the problem of satellite rendezvous, a critical maneuver in space robotic applications.
To assess the computational efficiency of the SNN-based framework relative to conventional artificial neural networks (ANNs), we examine the spiking pattern generated by the designed SNN, as showcased in Fig. 12(a). This vividly illustrates the network's efficient execution of its task. Upon closer examination, as depicted in Fig. 12(b), during the initial 2000 time steps (before \(t=100\)s), when the state-vector errors are sizable, the network exhibits heightened neural activity, with approximately 20% of neurons being active. Subsequently, the population of active neurons gently declines and remains relatively constant, with minor fluctuations hovering around 5% for the remainder of the simulation. In essence, the network accomplishes its task while utilizing a mere 2.4% of possible spikes over the entire simulation duration, in stark contrast to traditional ANNs that consume 100% of potential spikes. This underscores the computational efficiency of SNN-LQR-MSIF in simultaneously handling estimation and control for satellite rendezvous. Moving on to assess the robustness of the SNN-LQR-MSIF against modeling uncertainties, we introduce a 10% error into the dynamic transition matrix, \(\tilde{A}=0.9A\), used within the framework. Fig. 13 shows the results for the controlled states using the aforementioned strategies. The figure underscores that SNN-LQR-MSIF exhibits higher sensitivity to modeling uncertainties compared to the non-spiking strategies. However, it also shows that SNN-LQR-MSIF effectively controls the system, with all the errors gracefully converging to zero. Furthermore, TABLE 4 presents the averaged errors of the controlled states after \(t=300\)s, reaffirming the findings depicted in Fig. 13. To further evaluate the robustness of SNN-LQR-MSIF against external disturbances, such as instability in the working environment, we introduce measurement outliers. This scenario is configured so that unmodeled measurement outliers are injected into the system at \(t=100\)s, \(t=150\)s, and \(t=200\)s. Notably, to introduce the outliers at these time steps, the measurement noise is scaled by a factor of 200. Fig. 14 illustrates the results for the various frameworks in this scenario. As with the modeling uncertainties, it reveals that SNN-LQR-MSIF is more sensitive to measurement outliers than the non-spiking strategies. However, it effectively maintains control, with all errors converging to zero. The corresponding averaged errors of the controlled states after \(t=300\)s are presented in TABLE 5, reinforcing the insights gleaned from Fig. 14.
Fig. 15 provides insight into the spiking pattern of SNN-LQR-MSIF in the presence of measurement outliers. In Fig. 15(a), the network reacts to disturbances by increasing the number of active neurons, rapidly rejecting disturbances in just 2-3 time steps. Fig. 15(b) quantifies this by depicting the
\begin{table}
\begin{tabular}{l c c c} \hline \hline State & LQG & LQR-MSIF & SNN-LQR-MSIF \\ \hline \(x(m)\) & 0.0223 & 0.0222 & 0.3059 \\ \(y(m)\) & 0.0058 & 0.0057 & 0.4001 \\ \(z(m)\) & 0.0049 & 0.0049 & 0.0082 \\ \(v_{x}(m/s)\) & 0.0012 & 0.0012 & 0.0030 \\ \(v_{y}(m/s)\) & 0.0005 & 0.0005 & 0.0001 \\ \(v_{z}(m/s)\) & 0.0005 & 0.0005 & 0.0035 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: AVERAGED ERROR FOR DIFFERENT METHODS – UNCERTAIN MODEL
Fig. 12: Spiking pattern and temporal variation of active neurons population obtained from SNN-LQR-MSIF for satellite rendezvous maneuver, (a) spiking pattern, (b) temporal variation of active neurons.
Fig. 13: Controlled states for satellite rendezvous maneuver obtained from various frameworks for uncertain model.
\begin{table}
\begin{tabular}{l c c c} \hline \hline State & KF & LQR-MSIF & SNN-LQR-MSIF \\ \hline \(x(m)\) & 0.0223 & 0.0222 & 0.3924 \\ \(y(m)\) & 0.0057 & 0.0057 & 0.3626 \\ \(z(m)\) & 0.0048 & 0.0048 & 0.0936 \\ \(v_{x}(m/s)\) & 0.0012 & 0.0012 & 0.0018 \\ \(v_{y}(m/s)\) & 0.0005 & 0.0005 & 0.0002 \\ \(v_{z}(m/s)\) & 0.0005 & 0.0005 & 0.0030 \\ \hline \hline \end{tabular}
\end{table} TABLE III: AVERAGED ERROR FOR DIFFERENT METHODS
variation in the population of active neurons in percentage terms. The figure highlights a significant increase in the proportion of active neurons, rising from approximately 10% to nearly 50%.
Finally, the results obtained in this section affirm that the framework proposed in this study demonstrates computational efficiency for such problems. Compared to traditional computing strategies like LQR-MSIF and LQG, it exhibits good and comparable performance in terms of robustness and accuracy.
## V Conclusion
In this study, we delved into the crucial challenge of concurrent estimation and control within dynamical systems, underscoring its paramount importance. As the complexity and safety considerations associated with mission-critical tasks continue to intensify, the demand for computationally efficient and dependable strategies has become increasingly imperative. Moreover, in a real-world application landscape rife with uncertainties such as environmental instability, external disturbances, and unmodeled dynamics, the call for robust solutions capable of navigating these challenges is resounding. To answer this call, we introduced a novel approach grounded in biologically plausible principles. Our framework harnessed the potential of a recurrent spiking neural network (SNN), composed of leaky integrate-and-fire neurons, resembling a linear quadratic regulator (LQR) enriched by the insights of a modified sliding innovation filter (MSIF). This innovative amalgamation endowed the SNN-LQR-MSIF with the robustness inherited from the MSIF, while concurrently infusing it with the computational efficiency and scalability inherent in SNNs. Importantly, the elimination of the need for extensive training, owing to spike coding theories, empowered the design of SNN weight matrices grounded in the dynamic model of the target system.
In the face of a diverse array of uncertainties, including modeling imprecision, unmodeled measurement outliers, and occasional neuron silencing, we conducted a thorough comparative analysis. The SNN-LQR-MSIF underwent meticulous evaluation, alongside its non-spiking counterpart, the LQR-MSIF, and the well-established optimal approach, the linear quadratic Gaussian (LQG). This evaluation spanned both a linear benchmark problem and the satellite rendezvous maneuver, a mission-critical task within the realm of space robotics. The results of our investigation underscored the SNN-LQR-MSIF's commendable performance. It demonstrated competitive advantages in terms of computational efficiency, reliability, and accuracy, positioning it as a promising solution for addressing concurrent estimation and control challenges. Looking forward, we envisage the development of learning-based concurrent robust estimation and control frameworks, leveraging the capabilities of SNNs and predictive coding. These endeavors represent exciting prospects for future research in this domain, further enhancing the state-of-the-art in dynamical system control and estimation.
## VI Conflict of interest statement
The authors declare that they do not possess any conflicts of interest pertinent to this research. This study was executed with the utmost objectivity and impartiality, and the results articulated herein stem from a meticulous and unbiased scrutiny and comprehension of the data. The authors maintain that they harbor no financial or personal affiliations with individuals or entities that could conceivably introduce bias into the findings or exert influence over the conclusions drawn from this study.
Fig. 14: Controlled states for satellite rendezvous maneuver obtained from various frameworks subjected to measurement outlier.
Fig. 15: Spiking pattern and temporal variation of active neurons population obtained from SNN-LQR-MSIF for satellite rendezvous maneuver subjected to measurement outlier, (a) spiking pattern, (b) temporal variation of active neurons.
|
2305.19445
|
A Computational Account Of Self-Supervised Visual Learning From
Egocentric Object Play
|
Research in child development has shown that embodied experience handling
physical objects contributes to many cognitive abilities, including visual
learning. One characteristic of such experience is that the learner sees the
same object from several different viewpoints. In this paper, we study how
learning signals that equate different viewpoints -- e.g., assigning similar
representations to different views of a single object -- can support robust
visual learning. We use the Toybox dataset, which contains egocentric videos of
humans manipulating different objects, and conduct experiments using a computer
vision framework for self-supervised contrastive learning. We find that
representations learned by equating different physical viewpoints of an object
benefit downstream image classification accuracy. Further experiments show that
this performance improvement is robust to variations in the gaps between
viewpoints, and that the benefits transfer to several different image
classification tasks.
|
Deepayan Sanyal, Joel Michelson, Yuan Yang, James Ainooson, Maithilee Kunda
|
2023-05-30T22:42:03Z
|
http://arxiv.org/abs/2305.19445v1
|
# A Computational Account Of Self-Supervised Visual Learning From Egocentric Object Play
###### Abstract
Research in child development has shown that embodied experience handling physical objects contributes to many cognitive abilities, including visual learning. One characteristic of such experience is that the learner sees the same object from several different viewpoints. In this paper, we study how learning signals that equate different viewpoints--e.g., assigning similar representations to different views of a single object--can support robust visual learning. We use the Toybox dataset, which contains egocentric videos of humans manipulating different objects, and conduct experiments using a computer vision framework for self-supervised contrastive learning. We find that representations learned by equating different physical viewpoints of an object benefit downstream image classification accuracy. Further experiments show that this performance improvement is robust to variations in the gaps between viewpoints, and that the benefits transfer to several different image classification tasks.
**Keywords:** infant learning; embodied vision; machine learning.
## Introduction
In interacting with the real-world, an individual's experience is highly connected from one instant to the next. If someone is holding a spoon at one moment, it is likely that they will still be holding the same spoon in the next, possibly at a slightly different distance and hand/head/spoon pose. This physical continuity serves to generate a multitude of different views of the held object. Furthermore, the physical act of holding the object informs the learner that the sequence of differing views is tied to the same object, i.e. a form of object permanence. Even if the observer does not know that an object is a spoon, they understand that the object is the same across multiple moments in time. In this paper, we study whether this embodied experience of seeing different views of an object, and knowing that the views correspond to the same object, can provide a useful form of self-supervisory signal to enable visual learning in computational models.
There is a rich body of research studying the links between motor development in infants and their perceptual and cognitive abilities. Bushnell and Boudreau (1993) proposed that the progressive development of different motor abilities in infants leads to different schedules for various kinds of perceptual inputs; these, in turn, cause a temporal difference in the appearance of various cognitive abilities. Further studies have elaborated on the links between different kinds of perceptual inputs in development and the appearance of various cognitive skills (Needham, 2000; Libertus and Needham, 2010; Schwarzer et al., 2013; Baumgartner and Oakes, 2011). Looking specifically at the ability to hold and manipulate objects, there is evidence that being able to perform hand-held object manipulations benefits several different cognitive abilities, such as learning nouns (Slone et al., 2019), visual understanding (Ruff, 1982; Soska et al., 2010) and understanding causality of actions (Rakison and Krogh, 2012). Recent research aiming to characterize infant visual experience using head-mounted cameras has found that first-person visual experience of manipulating objects during self-play constitutes a significant portion of the infants' visual diets (Herzberg et al., 2022). In addition, there is considerable consistency in the distributions of object viewing experience across different cultures (Casey et al., 2022).
While previous studies have observed the importance of the embodied experiences generated by infants, the specific learning mechanisms that link these inputs (and their characteristics and distributions) to learning outcomes are not well understood. In this paper, we consider the visual experience that is generated during embodied manipulation of objects and propose a possible mechanism by which this experience helps develop good visual representations which support category learning. To do this, we use the SimCLR framework (Chen et al., 2020) which learns effective representations by maximizing the representational similarity between two differently-augmented versions of one image. This framework relies on instance-level similarity to learn representations, and a similar framework has recently been proposed to explain the representational goal of the visual system (Konkle and Alvarez, 2022). We hypothesize
Figure 1: Visual experience during embodied object manipulation. Each row shows frames from an egocentric video of one object being manually rotated. Equating different physical views provides a strong learning signal.
that embodied, multi-view visual experience provides stronger signals for this kind of learning. In this paper, we focus on the multi-view aspect of natural visual experience and show that access to different physical views of the same object leads to the emergence of strong category structure.
Our work is also linked to research showing that temporal contiguity of visual experience can play a crucial role in learning invariant representations (Sprekeler, Michaelis, & Wiskott, 2007; Li & DiCarlo, 2010; Wood & Wood, 2018). Further, the development of such invariant object representations is not affected by reward (Li & DiCarlo, 2012), suggesting an unsupervised mechanism which regulates this kind of learning. For our part, we only consider the different views of an object that are generated during embodied manipulation of the object and show that equating these views presents a strong signal for category learning. Our contributions in this paper are:
* We demonstrate that representations learned by maximizing similarity between different physical views of the same object support strong performance on a subsequent classification task.
* We show that the representations are fairly robust to variations in the magnitude of difference between the paired object views utilized for learning.
* We demonstrate that these learned representations also successfully transfer to a diverse set of downstream classification tasks.
## Related Work
There has been recent interest in using machine learning (ML) models to explain and understand different facets of human visual abilities as they relate to human visual experience. Bambach et al. (2018) used convolutional neural networks (CNNs) to investigate the differences in the visual experiences of infants and adults and showed that an infant's visual experience contains a more diverse range of views of objects, which lends itself to better object recognition performance. Stojanov et al. (2019) addressed the problem of learning object representations from incremental experience with individual objects and showed that repeated experiences with objects help ML models avoid problems related to catastrophic forgetting.
A recent work (Orhan et al., 2020) considered the problem of learning representations from infant headcamera recordings without explicit image labels. They used data from the SAYCam dataset (Sullivan et al., 2021), and showed that a learning signal based on temporal continuity enables learning representations that support image classification on the SAYCam and the Toybox datasets. While this work has similarities to our work, we focus on the visual experience that is generated during embodied object manipulation.
Other works have used CNNs to reason about the relationship between visual abilities in humans and limitations in visual experience; (Vogelsang et al., 2018) showed that CNNs can help explain deficits in configural face processing in children born with congenital cataracts. Jang and Tong (2021) showed that while CNNs can be used to recreate differences between object and face processing, they do not yet account for robustness of adult vision to image blur.
Another relevant body of research is that of learning representations from visual data without explicit labels in the field of computer vision. Initial approaches for these methods used various pretext tasks such as image colorization (Zhang et al., 2016), predicting relative patches in images (Doersch et al., 2015), solving jigsaw puzzles (Noroozi & Favaro, 2016) and predicting rotations (Gidaris et al., 2018) to generate self-supervision. However, a recent body of work (Grill et al., 2020; Misra & Maaten, 2020; Chen et al., 2020) based on contrastive learning (Hadsell et al., 2006) has significantly outperformed those earlier approaches. Self-supervised approaches have also been applied to the problem of learning visual representations from videos (X. Wang & Gupta, 2015; J. Wang et al., 2020; Qian et al., 2021; Tschannen et al., 2020).
## Our Approach
### Dataset
Previous research has established differences between the distributional properties of infant visual experience and traditionally popular datasets used in the computer vision literature (Smith & Slone, 2017). Therefore, we used the Toybox (X. Wang et al., 2018) dataset, which was designed to contain more human-like continuous videos of egocentric handheld object manipulations. The dataset consists of 12 categories from 3 super-categories: household items (ball, cup, mug, spoon), animals (cat, duck, giraffe, horse) and vehicles (airplane, car, helicopter, truck). These 12 categories are among the most common early-learned nouns for children in the US (Fenson et al., 2007). For vehicle and animal categories, the objects in the dataset are either realistic, scaled-down models or toy objects. Fig 2 shows one object per category from the Toybox dataset.
The dataset consists of short videos, each of which shows
Figure 2: Examples of all 12 classes in the Toybox dataset: car, truck, helicopter, plane, ball, spoon, cup, mug, giraffe, horse, duck, cat. This figure shows full images; in our experiments, we used images cropped to their bounding boxes.
one object being manipulated in one of several ways using an egocentric head-mounted wearable camera. The manipulations present in the dataset include systematic transformations, such as rotation and translation as well as random manipulations labeled as "hodgepodge" videos. Since our learning signal uses different viewpoints for each object, we use the 6 rotation videos (one around each axis in one direction) and the hodgepodge video. This gives us a total of 2520 videos for the 360 objects. Each video is about 20 seconds in length, and rotation videos contain two full revolutions around the specified axis and direction.
There are several interesting aspects of the Toybox dataset. First, since the objects are being manipulated by hand, the objects are often partially occluded by the subjects' hands. Second, there are several views for each object, including a lot of non-canonical views. Third, unlike traditional ImageNet-style datasets which contain many thousands or millions of objects (with one image each), Toybox has images from a relatively small set of physical instances (30 objects per category) with a large number of images from each. Thus, it can be challenging for a learner to acquire category-general representations that are less sensitive to the idiosyncrasies of individual objects in the training data. However, these specific aspects of the dataset enable our experiments, since these properties also characterize the visual experience of infants.
Bounding box annotations at 1 fps are available for the rotation and the hodgepodge videos in the Toybox dataset. In order to maintain the original aspect ratios of the objects in the images, we extended the bounding boxes along their shorter dimension to match the size of the larger dimension. Cropping each image to this extended bounding box helps maximize the information content in the images while also preventing distortion of the images.
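A minimal sketch of this cropping step is given below, assuming PIL-style images and boxes given as (left, top, right, bottom); the function name and the clamping to image borders are our choices, not code from the dataset.

```python
def square_crop(image, box):
    """Extend a bounding box along its shorter side to a square, then crop the image."""
    left, top, right, bottom = box
    w, h = right - left, bottom - top
    if w < h:                                   # widen to match the height
        pad = (h - w) / 2
        left, right = left - pad, right + pad
    else:                                       # heighten to match the width
        pad = (w - h) / 2
        top, bottom = top - pad, bottom + pad
    left, top = max(0, left), max(0, top)       # clamp to the image borders
    right, bottom = min(image.width, right), min(image.height, bottom)
    return image.crop((int(left), int(top), int(right), int(bottom)))
```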
## Method
SimCLR frameworkWe use the paradigm of contrastive learning in our experiments, and particularly the SimCLR approach (Chen et al., 2020). The experiments progress in two steps:
1. _Self-supervised representation learning._ First, a CNN backbone (Lecun et al., 1998) is trained from scratch to learn image representations. During this phase of training, a base network \(f_{\theta}\) is attached to a smaller projection network \(g_{\phi}\), and this combined network is trained using a self-supervised objective function.
2. _Representation evaluation using supervised learning._ In the second phase of training, called the linear evaluation phase, we throw away the projection network \(g_{\phi}\), the backbone \(f_{\theta}\) is frozen and we attach a linear classifier \(fc\) on top of the backbone network. This linear classifier is then trained to perform image classification. We now describe the learning signal used for training the network. During training, each minibatch \(M\) contains \(N\) pairs of images \(\{x_{2i},x_{2i+1}\}_{i=1}^{N}\). Each pair \((x_{2i},x_{2i+1})\) forms a positive pair and all other image pairs \((x_{i},x_{k})\) within \(M\) constitute the negative pairs. Each image is passed through the backbone and the projection network to obtain \(z_{i}=g\circ f(x_{i})\). The loss for one pair of positive images \((x_{i},x_{j})\) is given by \[l(i,j)=-\log\frac{exp(sim(z_{i},z_{j})/\tau)}{\sum_{k=1}^{2N}\mathbbm{1}_{|k \neq i}exp(sim(z_{i},z_{k})/\tau)}\] where \(sim(u,v)\) represents the dot product \(u\cdot v\), \(\tau\) is the temperature parameter which modulates how sharp the similarity function is and \(\mathbbm{1}\) represents the indicator variable, which evaluates to 1 when \(k\neq i\) and to 0 otherwise. The above loss function is called the NT-XEnt loss. For the entire minibatch, the loss function for all positive pairs is aggregated as:
Figure 3: An overview of the learning framework with four images from a batch. Each of the four augmented images are run through the network and we obtain the feature vector \(z_{i}\) associated with each image. The contrastive learning signal then works by moving the positive image pairs closer together while pushing the negative image pairs further apart. The pairs of images linked by the green arcs represent the positive pairs, while the image-pairs linked by the red arcs represent the negative pairs.
\[\mathcal{L}=\frac{1}{2N}\sum_{k=1}^{N}[l(2k,2k+1)+l(2k+1,2k)]\]
By minimizing the above loss function, the learning signal encourages the network to learn representations so that the positive image pairs are closer in the representation space, while the negative image pairs are further away. The effectiveness of the learning signal depends on the positive image pairs that are used. In the original paper (Chen et al., 2020), \(x_{i}\) and \(x_{j}\) are sourced from the same image with different amounts of stochastic image augmentation applied on them, thus telling the network to put differently augmented versions of the same image closer in the feature space compared to different images.
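For reference, a compact PyTorch sketch of the NT-XEnt loss above is shown here; it assumes the batch is ordered as consecutive positive pairs and uses cosine similarity of the projected features (i.e., the dot product of L2-normalized vectors), which is an implementation choice on our part.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z, tau=0.5):
    """NT-XEnt loss for 2N projections ordered as pairs (z[0], z[1]), (z[2], z[3]), ..."""
    z = F.normalize(z, dim=1)                          # cosine similarity via normalized features
    sim = z @ z.T / tau                                # pairwise similarities
    sim.fill_diagonal_(float('-inf'))                  # exclude k == i from the denominator
    targets = torch.arange(z.size(0), device=z.device)
    targets = targets + 1 - 2 * (targets % 2)          # index of each sample's positive partner
    return F.cross_entropy(sim, targets)               # averages l(i, j) over all 2N anchors
```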
Modifications and Details1 In our experiments, we investigate the extent to which having access to different physical views of the same object contributes to good representations through self-supervision. Thus, in addition to applying stochastic augmentations on the images, we vary the viewpoints from which the positive image pair are chosen. Thus, by equating these two different views, the underlying network learns to bring the representations of these views closer. Fig 3 provides an overview of our learning framework.
Footnote 1: The code for these experiments can be found at: [https://github.com/aivaslab/toybox_simclr](https://github.com/aivaslab/toybox_simclr)
We use 27 objects from each Toybox class as the training set. During both phases of training, images from these 324 objects are used to train the network. Classification accuracies are reported on images from the remaining 3 objects. During the linear evaluation phase, in keeping with prior work, we use a randomly sampled 10% of the images to train the network. During both phases of training, we apply the following set of augmentations to all training images: color jitter, random grayscale, random crop, and random horizontal flip. No augmentations are applied to the images while calculating the accuracies. For our backbone \(f_{\theta}\), we use a ResNet-18 [1] and the projection head \(g_{\phi}\) is a 2-layer neural network.
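The sketch below assembles such a model and augmentation pipeline in PyTorch; the crop size, jitter strengths, application probabilities, and the 128-dimensional projection output are our assumptions, since only the augmentation types and the architecture family are stated.

```python
import torch.nn as nn
from torchvision import models, transforms

def build_simclr_model(proj_dim=128):
    """ResNet-18 backbone f and a 2-layer projection head g (randomly initialized)."""
    backbone = models.resnet18()                      # no pretrained weights
    num_feats = backbone.fc.in_features               # 512 for ResNet-18
    backbone.fc = nn.Identity()                       # expose the backbone features
    projector = nn.Sequential(nn.Linear(num_feats, num_feats), nn.ReLU(),
                              nn.Linear(num_feats, proj_dim))
    return backbone, projector

# Stochastic augmentations applied to every training image.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])
```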
## Experiment 1
As stated above, we vary the viewpoints that comprise each positive pair during training. In doing so, we are signalling that the different views are from the same object. To systematically study how this signal contributes to the learned representations, we use 5 different settings in our experiments:
1. SimCLR + Self: The positive pair is sourced from the same image frame with different image augmentations applied. This is the default setting for the SimCLR framework.
2. SimCLR + Transform: The Toybox dataset consists of 7 videos for each object. In this setting, the positive image pair are sourced from any one of those videos. Specifically, for every image in the dataset, we randomly sample another image from the same video to form the positive pair.
3. SimCLR + Object: The positive pairs, in this setting, come from any videos of the same object.
4. Supervised: For baseline comparison, we train a network in a supervised setting on the training images from Toybox.
5. SimCLR + Class: As a second baseline, we use SimCLR with positive pairs formed by two images from any two objects of the same class. This setting uses the same information about category membership as the Supervised setting but modified to the SimCLR framework.
Fig 4 shows example image pairings used as positive pairs in these different settings. We observe that the difficulty of the self-supervised task increases from the Self setting to the Class setting as the visual dissimilarity between the positive image pair comes from a larger range.
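A simple way to implement the SimCLR pairing schemes is sketched below; the dataset indexing structures and the attribute names on the anchor frame are hypothetical and only illustrate the sampling logic.

```python
import random

def sample_positive(anchor, frames_by_video, videos_by_object, objects_by_class, setting):
    """Pick the second image of a positive pair for a given anchor frame."""
    if setting == "self":
        return anchor                                         # same frame, different augmentation
    if setting == "transform":
        return random.choice(frames_by_video[anchor.video])   # same video of the same object
    if setting == "object":
        video = random.choice(videos_by_object[anchor.obj])   # any video of the same object
        return random.choice(frames_by_video[video])
    if setting == "class":
        obj = random.choice(objects_by_class[anchor.cls])     # any object of the same class
        video = random.choice(videos_by_object[obj])
        return random.choice(frames_by_video[video])
    raise ValueError(f"unknown setting: {setting}")
```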
**Results** Table 1 shows the results of our experiments in the different settings. We find that the default SimCLR setting achieves modest performance on the Toybox dataset. However, in both the Transform and the Object settings, the final accuracy approaches that of the supervised model. These accuracies show that the representations learned by equating
\begin{table}
\begin{tabular}{l l} \hline \hline Experimental Setting & Top-1 Accuracy \\ \hline SimCLR + Self & 46.54 (0.84) \\ SimCLR + Transform & **73.83 (0.47)** \\ SimCLR + Object & 71.92 (1.13) \\ SimCLR + Class & 69.13 (0.46) \\ \hline Supervised & 73.57 (1.42) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance under different training settings. (Random guessing would yield 1/12, i.e., roughly 8.3% accuracy.) The best performance is shown by the learner in Transform setting and is comparable to the supervised learner. Accuracy drops off in the Object and Class settings. It is notable that the Transform setting exceeds the performance of the Self setting, which is the default for how SimCLR works. We report the mean and std over two runs with different random seeds.
Figure 4: Image pairings used in different experiment settings. In all cases, the anchor image is paired with one other image. In the Self setting, the same image is reused. In the Transform setting, another image of the same object from the same video is selected as the pair. In the Object setting, the image pair can be any image from any of the videos of the object. In the Class setting, the only restriction is that the image pair need to belong to the same class. After the image pair is selected, stochastic image augmentation is applied to both to generate augmented images for learning.
different views of the same object support good classification performance. What we find exciting in the results is that the Transform setting performs so well, despite learning from a weaker supervisory signal compared to the supervised model and the SimCLR models in the Object and Class settings. This seems to suggest that access to some form of viewpoint variation during training is extremely beneficial for the learned representations. We explore this more in Experiment 2.
The model trained in the Class setting did not perform as well as in the Transform or the Object settings. This is likely because of the negative pairs: while we control which images form the positive pairs, the negative pairs are decided automatically during training. Because of this, several of the negative pairs are images from the same category. While this drawback is present in the other settings as well, the network seems to handle it better there. The robustness of the learning signal in the Transform and Object settings derives from the fact that, in these cases, the chances of getting a negative pair that is more closely related than a positive pair are lower. Hence, the _false negative_ pairs do not affect performance as much in these settings.
## Experiment 2
In the previous experiment, we saw that the Transform model performs better than the Object model despite a weaker learning signal from the positive pairs. In the current experiment, we study how the visual dissimilarity between the images forming the positive pair affects the learned representations. We do this by carefully controlling the gap between the video frames which form the positive pairs. We focus on the _SimCLR+Transform_ configuration in these experiments. We vary the gap between the frames in two settings: 1) Fixed: We fix the gap between the frames, i.e. the two frames forming the positive pair have to be exactly 2 or 4 seconds apart in the same Toybox video. 2) Range: We fix the maximum gap between the two frames, i.e. if we set the gap to 2s, the two frames can be 1s or 2s apart. In both settings, we increase the gap in steps of 2s from 0s to 10s and train the networks as described in the previous section. By varying the gap between frames, we can see how the distance in viewpoints for the positive pairs affects learning performance. It should be noted that a gap of 0 in both settings corresponds to the _SimCLR+Self_ model.
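The sketch below shows one way the Fixed and Range samplers could be implemented; the function signature and the fallback behaviour at video boundaries are illustrative assumptions rather than our exact implementation.

```python
import random

def sample_pair_timestamp(anchor_t, video_len, gap_s, mode, fps=1):
    """Return the frame index of the positive-pair frame for an anchor at
    frame index `anchor_t`, given a gap expressed in seconds.
    mode='fixed': frames are exactly `gap_s` seconds apart;
    mode='range': frames are at most `gap_s` seconds apart.
    A gap of 0 reduces to the SimCLR+Self setting."""
    max_offset = int(gap_s * fps)
    offset = max_offset if mode == "fixed" else random.randint(0, max_offset)
    # Choose a direction that keeps the partner frame inside the video.
    candidates = [anchor_t + d for d in (offset, -offset)
                  if 0 <= anchor_t + d < video_len]
    return random.choice(candidates) if candidates else anchor_t
```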
**Results** Table 2 shows our results for these experiments. We see that the _Range_ setting performs comparably with the _Fixed_ setting, even though its learning signal has more variation. This seems to indicate that enough variability arises from the visual data itself to support strong learning. Further, we see that the performance in both settings remains in the same range even as the gap between the frames of the positive pair decreases.
To reduce the gap further, we used a version of the Toybox dataset sampled at 3fps. Since the bounding box annotations are done at 1fps, we use linear interpolation to obtain the annotations for the intermediate frames. Further, for this set of experiments, we used only the rotation videos. This allows us to avoid the randomness of the hodgepodge video and study the effect of viewpoint variation in a more structured and regular manner. We increase the gap parameter from 0s to 3.33s in steps of 0.67s. The other settings remain the same as in the 1fps experiments. Table 3 shows our results in this setting. The first thing we note is that, because the total amount of training data increases close to 3-fold, the accuracy increases in both the _Self_ and _Transform_ settings. This is consistent with previous results in the machine learning literature showing that more data is generally beneficial. Secondly, we also note that the performance in both settings remains competitive even when the gap between frames is reduced to 0.67s, demonstrating that the learning signal remains robust at this scale. With the Toybox videos, this gap corresponds to an average angular distance of 12\({}^{\circ}\) between viewpoints. These results suggest that during object manipulation, it is possible to leverage even small variations in viewing angles to learn good visual representations.
## Experiment 3
In the previous experiments, we have seen that representations learned using self-supervision are beneficial for category learning on the Toybox dataset. In the final set of ex
\begin{table}
\begin{tabular}{l c c} \hline \hline Gap b/w frames & \multicolumn{2}{c}{Setting} \\ (seconds) & Fixed & Range \\ \hline
0 & 46.54 (0.84) & 46.54 (0.84) \\
2 & 71.90 (0.84) & 72.70 (0.89) \\
4 & 71.64 (0.43) & 72.72 (0.51) \\
6 & 70.18 (2.09) & **74.63 (1.04)** \\
8 & 72.02 (0.52) & 71.77 (0.62) \\
10 & **73.00 (0.42)** & 71.61 (0.32) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of model performance using the _SimCLR + Transform_ model in the Fixed and Range settings as the gap between frames is varied from 0 to 10 seconds. We report the mean and std over 2 runs.
\begin{table}
\begin{tabular}{l c c} \hline \hline Gap b/w frames & \multicolumn{2}{c}{Setting} \\ (seconds) & Fixed & Range \\ \hline
0 & 48.94 (0.23) & 48.94 (0.23) \\
0.67 & 74.05 (0.49) & 72.04 (0.63) \\
1.33 & 71.27 (0.44) & 72.16 (0.36) \\
2.00 & 70.73 (0.39) & 70.64 (0.47) \\
2.67 & 72.64 (0.42) & **75.93 (0.38)** \\
3.33 & **75.69 (0.52)** & 73.87 (0.51) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of model performance using the _SimCLR + Transform_ model in the Fixed and Range settings as the gap between frames is varied from 0 to 3.33 seconds. This table uses images from the Toybox dataset extracted at 3fps. We report mean and std over 2 runs.
periments, we examine how these representations generalize to other kinds of classification tasks. Do the benefits we see by equating different physical views of objects in classifying Toybox images transfer to other datasets as well? To accommodate a variety of classification tasks, we use several downstream tasks to measure transfer performance. The phenomenon of machine learning methods developing a bias towards their training dataset is well-documented Torralba and Efros (2011). Our aim in this set of experiments is to show that the benefit from using the learning signal is not limited to the Toybox dataset, but extends to other datasets as well. We will refrain from providing a detailed description of the datasets, but will point out some aspects of each that we find relevant for this paper.
In the computer vision community, use of large-scale datasets is mainstream. These datasets function as good test data to evaluate the generality of models. To include these kinds of datasets, we choose the CIFAR-10 and CIFAR-100 datasets Krizhevsky et al. (2009). While the CIFAR-10 dataset has some classes overlapping with the Toybox dataset, the CIFAR-100 dataset has classes of natural scenes and a much larger variety of classes than the Toybox dataset. While these internet-based datasets have a large number of instances for each class, there is usually only one image of each instance and it has been shown that the images in the dataset have a skewed distribution over viewpoints due to cameraman bias. To include more datasets where evaluation is done over multiple viewpoints, we include an object classification task on the CORe50 Lomonaco and Maltoni (2017) dataset and an instance classification task on the ALOI Geusebroek et al. (2005) dataset. Finally, we examine if the representations learned from the Toybox dataset are transferable to real-world instances of the same categories. For this, we have curated the IN-12 dataset using images from the popular ImageNet Deng et al. (2009) and MS-COCO Lin et al. (2014) datasets for the Toybox classes. Specifically, we identify classes in the ImageNet dataset which overlap with the Toybox classes and randomly sample from each of these candidate classes to select 1700 images for each Toybox category. From these 1700 images, we use 1600 images per class for training and 100 images per class for testing the network.
## Results
Table 4 shows our results for the transfer learning experiments. We see that the _Transform_ model performs better than the _Self_ model and is competitive with the _Object_ model on all the transfer tasks. The improvement in performance is strong for the datasets with multiple viewpoints (CORe50 and ALOI), showing that learning from multi-view egocentric experience of object manipulation benefits downstream performance for other multi-view datasets as well. The relative jump in performance is highest for CIFAR-100, demonstrating the general strength of the learned representations even for classification tasks where the image classes are vastly different. Looking at how the representations learned from the Toybox images transfer to real-world images from the same categories (IN-12 dataset), we find that similar trends hold in this case as well. It is interesting that even in these transfer conditions, the _Class_ models generally perform worse than the _Transform_ models, though they perform slightly better on the CORe50 dataset.
## Conclusion and Discussion
We have considered the problem of learning from the visual experience of embodied object manipulation and proposed a mechanism for learning good representations that support image classification. We do this by utilizing a learning signal which minimizes the representational distance between different physical views of the same object. Through our experiments, we showed that this signal enables learning good representations which support categorization. We further showed that this signal is robust to the magnitude of the difference between the viewpoints which generate the learning signal. Finally, we demonstrated the generality of learning with this signal by showing that the learned model can transfer non-trivially to a diverse set of classification tasks.
Our work leads to several important questions that will be addressed in future work: 1) While our work shows the effectiveness of the learning signal for downstream classification tasks, research has shown that similar algorithms can lead to relevant information being lost in the model Xiao et al. (2021). In order to understand the development of robust human vision that can perform diverse visual tasks, further research looking at the interaction between learning signals and the efficacy of the learned representations at different tasks needs to be done. 2) Our approach requires the use of strong image augmentations. This is likely due to the fact that CNNs can learn to use color histograms as a shortcut Chen et al. (2020) during the self-supervised training and this problem is especially acute in the case of exemplar-based datasets like the Toybox dataset. Further research needs to be done to understand how the human visual system avoids such issues.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & \multicolumn{5}{c}{Dataset} \\ & Cifar-10 & Cifar-100 & CORe50 & ALOI & IN-12 \\ \hline SimCLR + Self & 60.99 (0.76) & 26.67 (0.66) & 28.91 (0.55) & 79.91 (0.16) & 49.49 (0.25) \\ SimCLR + Transform & 63.86 (0.11) & 34.44 (0.13) & 38.96 (0.62) & 95.07 (0.12) & 60.37 (1.79) \\ SimCLR + Object & 63.11 (0.42) & 34.22 (0.11) & 36.35 (0.51) & 95.47 (0.07) & 60.16 (0.58) \\ SimCLR + Class & 60.35 (1.51) & 32.01 (0.22) & 39.75 (0.05) & 90.62 (0.09) & 60.83 (1.33) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of the models trained with different learning signals on various transfer experiments
## Acknowledgements
We would like to thank the anonymous reviewers for their helpful and constructive comments.
|
2308.07687
|
DiffGuard: Semantic Mismatch-Guided Out-of-Distribution Detection using
Pre-trained Diffusion Models
|
Given a classifier, the inherent property of semantic Out-of-Distribution
(OOD) samples is that their contents differ from all legal classes in terms of
semantics, namely semantic mismatch. There is a recent work that directly
applies it to OOD detection, which employs a conditional Generative Adversarial
Network (cGAN) to enlarge semantic mismatch in the image space. While achieving
remarkable OOD detection performance on small datasets, it is not applicable to
ImageNet-scale datasets due to the difficulty in training cGANs with both input
images and labels as conditions. As diffusion models are much easier to train
and amenable to various conditions compared to cGANs, in this work, we propose
to directly use pre-trained diffusion models for semantic mismatch-guided OOD
detection, named DiffGuard. Specifically, given an OOD input image and the
predicted label from the classifier, we try to enlarge the semantic difference
between the reconstructed OOD image under these conditions and the original
input image. We also present several test-time techniques to further strengthen
such differences. Experimental results show that DiffGuard is effective on both
Cifar-10 and hard cases of the large-scale ImageNet, and it can be easily
combined with existing OOD detection techniques to achieve state-of-the-art OOD
detection results.
|
Ruiyuan Gao, Chenchen Zhao, Lanqing Hong, Qiang Xu
|
2023-08-15T10:37:04Z
|
http://arxiv.org/abs/2308.07687v2
|
# DiffGuard: Semantic Mismatch-Guided Out-of-Distribution Detection
###### Abstract
Given a classifier, the inherent property of semantic Out-of-Distribution (OOD) samples is that their contents differ from all legal classes in terms of semantics, namely _semantic mismatch_. There is a recent work that directly applies it to OOD detection, which employs a conditional Generative Adversarial Network (cGAN) to enlarge semantic mismatch in the image space. While achieving remarkable OOD detection performance on small datasets, it is not applicable to _ImageNet_-scale datasets due to the difficulty in training cGANs with both input images and labels as conditions.
As diffusion models are much easier to train and amenable to various conditions compared to cGANs, in this work, we propose to directly use pre-trained diffusion models for semantic mismatch-guided OOD detection, named DiffGuard. Specifically, given an OOD input image and the predicted label from the classifier, we try to enlarge the semantic difference between the reconstructed OOD image under these conditions and the original input image. We also present several test-time techniques to further strengthen such differences. Experimental results show that _DiffGuard_ is effective on both _Cifar_-_10_ and hard cases of the large-scale _ImageNet_, and it can be easily combined with existing OOD detection techniques to achieve state-of-the-art OOD detection results.
Footnote †: Code: [https://github.com/cure-lab/DiffGuard](https://github.com/cure-lab/DiffGuard)
## 1 Introduction
The effectiveness of deep learning models is largely contingent on the independent and identically distributed (i.i.d.) data assumption, _i.e_., test sets follow the same distribution as training samples [22]. However, in real-world scenarios, this assumption often does not hold true [6]. Consequently, the task of out-of-distribution (OOD) detection is essential for practical applications, so that OOD samples can be rejected or taken special care of without harming the system's performance [12].
For image classifiers, a primary objective of OOD detection is to identify samples having semantic shifts, whose contents differ from all legal classes in the training dataset [51]. To differentiate such OOD samples and in-distribution (InD) ones, some existing solutions utilize information from the classifier itself, such as internal features [44], logits [11], or both [46]. While simple, these solutions inevitably face a trade-off between the InD classification accuracy and the over-confidence of the trained classifier for OOD detection [32], especially on hard OOD inputs. Some other methods propose using an auxiliary module for OOD detection based on either reconstruction quality [3] or data density [34]. The auxiliary module does not affect the training process of the classifier, but these methods tend to have a low OOD detection capability.
To the best of our knowledge, MoodCat [52] is the only attempt that directly models the semantic mismatch of OOD samples for detection. Specifically, it employs a conditional Generative Adversarial Network (cGAN) to synthesize an image conditioned on the classifier's output label together with the input image. For InD samples with correct labels, the synthesis procedure tries to reconstruct the original input; while for OOD samples with semantically different labels, ideally the synthesis result is dramatically different from the input image, thereby enabling OOD detection. While inspiring, due to the difficulty in cGAN training with potentially conflicting conditions, MoodCat is not applicable to _ImageNet_-scale datasets.
Recently, diffusion models have surpassed GANs in terms of both training stability and generation quality. Moreover, they are amenable to various conditions during generation, including both label conditions [42, 16] and image-wise conditions through DDIM inversion [40]. With the above benefits, we propose a new semantic mismatch-guided OOD detection framework based on diffusion models, called _DiffGuard_. Similar to [52], _DiffGuard_ takes both the input image and the classifier's output label as conditions for image synthesis and detects OODs by measuring the similarity between the input image and its conditional synthesis result.
However, it is non-trivial to apply diffusion models for semantic mismatch identification. A critical problem with label guidance in diffusion models is the lack of consideration for the classifier-under-protection. This issue arises in both types of guidance in diffusion models, namely classifier guidance1[42] and classifier-free guidance [16]. If the guidance cannot match the semantics of the classifier's output, the synthesis result may fail to highlight the semantic mismatch of OODs. To address this problem, we propose several techniques that effectively utilize information from the classifier-under-protection. Additionally, we propose several test-time enhancement techniques to balance the guidance between the input image and the label condition during generation, without even fine-tuning the diffusion model.
Footnote 1: Classifier guidance relies on a noisy classifier rather than the classifier-under-protection. See Sec. 3.2.1 for more details.
We evaluate the effectiveness of the proposed framework on the standard benchmark, OpenOOD [50]. Given Cifar-10 or ImageNet as the InD dataset, DiffGuard outperforms or is on par with existing OOD detection solutions, and it can be easily combined with them to achieve state-of-the-art (SOTA) performance. We summarize the contributions of this paper as follows:
* We propose a diffusion-based framework for detecting OODs, which directly models the semantic mismatch of OOD samples, and it is applicable to ImageNet-scale datasets;
* We propose several test-time techniques to improve the effectiveness of conditioning in OOD detection. Our framework can work with any pre-trained diffusion models without the need for fine-tuning, and can provide plug-and-play OOD detection capability for any classifier;
* Experimental results show that our framework achieves SOTA performance on Cifar-10 and demonstrates strong differentiation ability on hard OOD samples of ImageNet.
The rest of the paper is organized as follows. Section 2 introduces related OOD detection methods and diffusion models. Section 3 presents our framework and details the proposed solution. Experimental results are presented in Section 4. We also provide discussion on limitations and future works in Section 5. Finally, we conclude this paper in Section 6.
## 2 Related Work
This section begins by surveying existing OOD detection methods. Especially, we demonstrate diffusion models for OOD detection, and talk about the differences between our method and other reconstruction-based ones.
**OOD Detection Methods.** In general, OOD detection methods can be categorized as classification-based or generation-based.
Classification-based methods utilize the output from the classifier-under-protection to differentiate between OODs and InDs. For methods that do not modify the classifier, ODIN [25], ViM [46], MLS [11], and KNN [44] are typical ones. They extract and utilize information in the feature space (e.g., KNN), the logits space (e.g., MLS, ODIN), or both (e.g., ViM). Other methods modify the classifier by proposing new losses [17] or data augmentation techniques [43, 30], or using self-supervised training [37, 45].
Generation-based methods typically have a wider range of applications than classification-based ones because they place no restriction on classifiers. Most generation-based methods focus on either reconstruction quality based on inputs [3, 36] or likelihood/data-density estimated from the generative model [1, 39]. Their basic assumption is that generative models trained with InD data may fail to make high-quality reconstructions [3] or project OODs to low-density areas of the latent space [34]. However, this assumption may not hold true [31, 20]. In contrast, conditional synthesis does not rely on such an assumption: it differentiates OODs by constructing semantic mismatch (e.g., [52] uses a cGAN). Since semantic mismatch is the most significant property of OODs, this kind of method outperforms reconstruction-based ones.
Our method leverages conditional image synthesis, which shares the same benefits as [52]. However, our method improves on the cGAN approach in terms of model training: DiffGuard is compatible with any normally trained diffusion model, which eliminates the need for an additional training process.
**Diffusion Models.** Following a forward transformation from the image distribution to the Gaussian noise distribution, diffusion models [15] are generative models trained to learn the reverse denoising process. The process can be either a Markov [15] or a non-Markov process [40]. The recently proposed Latent Diffusion Model (LDM) [35] is a special kind. LDM conducts the diffusion process in a latent space to make the model more efficient.
**Diffusion Models for OOD Detection.** Previous work primarily utilizes the reconstruction ability of diffusion models for detecting OOD and novelty instances, by measuring the discrepancy between the input image and its reconstructed counterpart. For example, [30] trains a binary classifier with training data generated from the diffusion models to differentiate OODs. [27] applies noise augmentations to input images and then compares the differences between the denoised images and the inputs for OOD detection. Similarly, [8] also uses diffusion models in a reconstruction-based manner, establishing a range of noise levels for the addition and removal of noise.
Although reconstruction is one of the functions of diffusion models, a more significant advantage of diffusion models is their flexibility to handle different conditions. Our paper employs diffusion models in detecting OODs with semantic mismatch. By utilizing both input images and semantic labels as conditions for generation, diffusion models highlight the semantic mismatch on OODs, thus facilitating the differentiation of OODs from InDs.
## 3 Method
In this section, we first demonstrate some preliminaries about diffusion models. Then, we present our DiffGuard, which uses diffusion models for OOD detection.
### Preliminaries
Our method is based on three significant techniques: classifier-guidance [42], classifier-free guidance [16], and DDIM inversion [40]. The first two pertain to label conditioning methods in diffusion, whereas the last one is associated with image conditioning. We provide a concise overview of these techniques.
**Conditional Diffusion Models.** As a member of generative models, diffusion models generate images (\(\mathbf{x}_{0}\)) through a multi-step denoising (reverse) process starting from Gaussian noise (\(\mathbf{x}_{T}\)). This process was first formulated as a Markov process by Ho [15] with the following forward (diffusion) process:
\[q(\mathbf{x}_{1:T}|\mathbf{x}_{0}):=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}) \tag{1}\]
where
\[q(\mathbf{x}_{t}|\mathbf{x}_{t-1}):=\mathcal{N}\left(\sqrt{\frac{\alpha_{t}}{\alpha_{ t-1}}}\mathbf{x}_{t-1},(1-\frac{\alpha_{t}}{\alpha_{t-1}})\mathbf{I}\right) \tag{2}\]
and the decreasing sequence \(\alpha_{1:T}\in(0,1]^{T}\) is the transition coefficient. After refactoring the process to be non-Markov, Song [40] proposed a skip-step sampling strategy to speed up the generation, as in Eq. (3), where \(t\in[1,...,T]\), \(\epsilon_{t}\sim\mathcal{N}(\mathbf{0},I)\) is the standard Gaussian noise independent of \(\mathbf{x}_{t}\), and \(\epsilon_{\theta}^{(t)}\) is the estimated noise by the model \(\theta\) at timestep \(t\). The sampling process can be performed on any sub-sequence \(t\in\tau\subset[1,...,T]\).
\[\begin{split}\mathbf{x}_{t-1}&=\sqrt{\alpha_{t-1}} \Big{(}\frac{\mathbf{x}_{t}-\sqrt{1-\alpha_{t}}\epsilon_{\theta}^{(t)}(\mathbf{x}_{t} )}{\sqrt{\alpha_{t}}}\Big{)}\\ &+\sqrt{1-\alpha_{t-1}-\sigma_{t}^{2}}\cdot\epsilon_{\theta}^{(t )}(\mathbf{x}_{t})+\sigma_{t}\epsilon_{t}\end{split} \tag{3}\]
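For concreteness, one DDIM update of Eq. (3) can be written as the following sketch (illustrative Python; `alpha_t` and `alpha_prev` follow the cumulative-product convention used above, and `sigma_t = 0` recovers the deterministic sampler).

```python
import torch

def ddim_step(x_t, eps_pred, alpha_t, alpha_prev, sigma_t=0.0):
    """One denoising step of Eq. (3): map x_t to x_{t-1} given the
    model's noise prediction eps_pred at timestep t."""
    x0_hat = (x_t - (1 - alpha_t) ** 0.5 * eps_pred) / alpha_t ** 0.5   # predicted x_0
    dir_xt = (1 - alpha_prev - sigma_t ** 2) ** 0.5 * eps_pred          # direction pointing to x_{t-1}
    noise = sigma_t * torch.randn_like(x_t) if sigma_t > 0 else 0.0
    return alpha_prev ** 0.5 * x0_hat + dir_xt + noise
```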
Under this formulation, there are two ways to apply the label semantic condition \(\mathbf{y}\) to the generation process: classifier guidance and classifier-free guidance. For classifier guidance [42, 4], the condition-guided noise prediction \(\hat{\mathbf{c}}(\mathbf{x}_{t})\) is given by (we omit \(\theta\) and \(t\) in \(\epsilon(\cdot)\)):
\[\hat{\mathbf{c}}(\mathbf{x}_{t}):=\epsilon(\mathbf{x}_{t})+s\sqrt{1-\alpha_{t}}\cdot \nabla_{\mathbf{x}_{t}}\log p_{\phi}(\mathbf{y}|\mathbf{x}_{t})\text{,} \tag{4}\]
where \(\log p_{\phi}\) is given by a classifier trained on noisy data \(\mathbf{x}_{t}\), and \(s\) adjusts the guidance scale (_i.e._, the strength of the guidance). For classifier-free guidance [16, 33], a conditional diffusion model \(\bar{\epsilon}_{\theta}^{(t)}(\mathbf{x}_{t},\mathbf{y})\) is trained. The training objective is the same as for vanilla diffusion models, but \(\epsilon\) changes to \(\tilde{\epsilon}\) during inference as follows (we omit \(\theta\) and \(t\)):
\[\tilde{\epsilon}(\mathbf{x}_{t},\mathbf{y}):=\bar{\epsilon}(\mathbf{x}_{t},\emptyset)+ \omega[\bar{\epsilon}(\mathbf{x}_{t},\mathbf{y})-\bar{\epsilon}(\mathbf{x}_{t},\emptyset)] \text{,} \tag{5}\]
where \(\omega\) is to adjust the guidance scale. Both classifier guidance and classifier-free guidance are qualified for conditional generation.
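Both guidance rules amount to small modifications of the predicted noise; a sketch following Eqs. (4) and (5) as written above:

```python
def classifier_guided_eps(eps, grad_log_p, alpha_t, s):
    """Classifier guidance, Eq. (4): shift the noise prediction by the
    scaled gradient of the classifier's log-probability of the label."""
    return eps + s * (1 - alpha_t) ** 0.5 * grad_log_p

def classifier_free_eps(eps_cond, eps_uncond, omega):
    """Classifier-free guidance, Eq. (5): extrapolate from the
    unconditional prediction toward the label-conditioned one."""
    return eps_uncond + omega * (eps_cond - eps_uncond)
```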
**The Inversion Problem of Diffusion Models.** For generative models, applying the input image as a condition for synthesis can be done by solving the inversion problem [48]. By applying score matching [41] to the formulated SDE, the diffusion process can be converted into an
Figure 1: An overview of the DiffGuard framework with diffusion models. We first use DDIM inversion to get the latent embedding (\(\mathbf{x}_{T}\)) of the input (\(\mathbf{x}_{0}\) left). Then, we apply conditional image synthesis towards the label predicted by the classifier-under-protection. Finally, we differentiate OODs based on the similarity between the input and the synthesis. Both classifier guidance and classifier-free guidance can be applied to this framework.
Ordinary Differential Equation (ODE) [42], which provides a deterministic transformation between an image and its latent. This is also applied to the inference process of DDIM (where \(\sigma=0\) in Eq. (3)). Thus, the diffusion process from an image (\(\mathbf{x}_{0}\)) to its latent (\(\mathbf{x}_{T}\)) is given by:
\[\begin{split}\mathbf{x}_{t+1}&=\sqrt{\alpha_{t+1}} \Big{(}\frac{\mathbf{x}_{t}-\sqrt{1-\alpha_{t}}\epsilon(\mathbf{x}_{t})}{\sqrt{\alpha_ {t}}}\Big{)}\\ &+\sqrt{1-\alpha_{t+1}}\epsilon(\mathbf{x}_{t})\text{, where }t\in[0,...,T-1]. \end{split} \tag{6}\]
Such a latent can be used to reconstruct the input through the denoising process.
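A single inversion step of Eq. (6) mirrors the sampling step above; the following sketch is illustrative.

```python
def ddim_inversion_step(x_t, eps_pred, alpha_t, alpha_next):
    """One deterministic inversion step of Eq. (6): map x_t to x_{t+1},
    carrying the image toward its latent x_T."""
    x0_hat = (x_t - (1 - alpha_t) ** 0.5 * eps_pred) / alpha_t ** 0.5
    return alpha_next ** 0.5 * x0_hat + (1 - alpha_next) ** 0.5 * eps_pred
```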
### Diffusion Models for OOD Detection
We show an overview of the proposed framework in Fig. 1. Given a classifier-under-protection, we utilize its prediction of the input and synthesize a new image conditioned on both the predicted label and the input. Intuitively, if the predicted label does not match the input (i.e., OOD), dissimilarity will be evident between the synthesis and the input, and vice versa. Then, we can assess whether an input image is OOD by evaluating the similarity between the input and its corresponding synthesis.
For the two conditions, the label condition tends to change the content to reflect its semantics while the input image condition tends to keep the synthesis as original. Therefore, the main challenge of our method is to apply and balance the two conditions. To handle the input image as a condition, diffusion models' inversion ability (e.g., DDIM [40]) serves as an advantage in faithfully restoring the contents. For the label condition, since there are two fundamentally different methods in diffusion, namely classifier guidance and classifier-free guidance, we propose different techniques for them to better differentiate OODs. We demonstrate the proposed methods respectively in the following of this section.
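At a high level, the detection procedure can be summarized by the following pseudostructure; `invert`, `synthesize`, and `similarity` are placeholders for the components detailed in Secs. 3.2.1, 3.2.2, and 4.1.

```python
def diffguard_score(x0, classifier, invert, synthesize, similarity):
    """Score an input by the similarity between the input and its
    label-conditioned synthesis; a low similarity suggests OOD."""
    y_pred = classifier(x0).argmax(dim=-1)   # label condition from the classifier-under-protection
    x_T = invert(x0)                         # image condition via DDIM inversion
    x0_syn = synthesize(x_T, y_pred)         # conditional synthesis from the latent
    return similarity(x0, x0_syn)
```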
#### 3.2.1 Diffusion with Classifier Guidance
In classifier-guided diffusion models, the classifier trained on noisy data is the key to conditional generation, as shown in Eq. (4). However, directly using a classifier trained on such data for OOD detection is problematic. With a different training process, the classifier may predict differently from the classifier-under-protection, even on clean samples. As shown in Fig. 2 (A), when using a ResNet50 as the classifier-under-protection, differences in prediction exist in nearly 35% of the image samples.
The problem above hinders us from using a noisy classifier for guidance. In this section, we replace the noisy classifier \(\phi\) with the exact classifier-under-protection \(\phi_{n}\). Then, we propose two techniques for better utilization of the classifier for OOD detection.
**Tech #1: Clean Grad: using the gradient from a normal classifier.** At the right-hand side of Eq. (3), the first term can be interpreted as an estimation of \(\mathbf{x}_{0}\), i.e., \(\hat{\mathbf{x}}_{0}=\frac{\mathbf{x}_{t}-\sqrt{1-\alpha_{t}}\epsilon_{\theta}^{(t)}(\mathbf{x}_{t})}{\sqrt{\alpha_{t}}}\). In this case, we can use \(\hat{\mathbf{x}}_{0}\) as a substitute of \(\mathbf{x}_{t}\). Calculation of the gradient on \(\mathbf{x}_{t}\) in Eq. (4) can be transformed into that on \(\hat{\mathbf{x}}_{0}\), shown as follows:
\[\nabla_{\mathbf{x}_{t}}\log p_{\phi}(y|\mathbf{x}_{t}):=\nabla_{\mathbf{x}_{t}}\log p_{ \phi_{n}}(y|\hat{\mathbf{x}}_{0}(\mathbf{x}_{t})). \tag{7}\]
With such an \(\hat{\mathbf{x}}_{0}\) as input, the classifier can provide a correct gradient of log-probability for a wide range of \(t\), thus offering more accurate generation directions and leading to better semantic guidance.
To understand the operation, we plot the changes in classification accuracy with different time-steps \(t\), shown in Fig. 2 (B). The classification accuracy reflects the prediction quality of \(\log p\), and thus the quality of \(\nabla\log p\). With the noisy \(\mathbf{x}_{t}\) as input, the accuracy of the normal classifier degrades more dramatically than the noisy classifier. However, with \(\hat{\mathbf{x}}_{0}\) as input, the classification accuracy of the normal classifier reduces much slower than the other two cases.
Figure 3: Gradient visualizations of classifier-guided diffusion with (right) and without (left) cutout at \(t=600\). We use a normal ResNet50 classifier from ImageNet.
Figure 2: Different behavior between a noisy classifier and a normal ResNet50 classifier on ImageNet validation. (A) Conflicting predictions: nearly 35% of the predictions are different; (B) The accuracy degradation throughout the diffusion process.
Besides, we propose that data augmentation is important for successfully applying a normal classifier. Using \(\hat{\mathbf{x}}_{0}\), the gradient of a normal classifier is relatively small and flat, as shown in Fig. 3 left. Since the gradient is the only term representing the direction of semantics in Eq. (4), it is hard for a flat gradient to effectively change the semantics of the image during synthesis. To solve this problem, we propose to use data augmentations (_i.e._, random cutout) as follows:
\[\nabla_{\mathbf{x}_{t}}\log p_{\phi}(y|\mathbf{x}_{t}):=\nabla_{\mathbf{x}_{t}}\log p_{ \phi_{n}}(y|\operatorname{cutout}(\hat{\mathbf{x}}_{0}(\mathbf{x}_{t}))). \tag{8}\]
On the one hand, the gradient with augmentations is sharper and has higher amplitude (shown in Fig. 3 right), which is more effective for changing the semantics of the image than the gradient without augmentations. On the other hand, gradients corresponding to different augmentations can be accumulated to form more comprehensive guidance. To better interpret the effect, we provide a qualitative ablation study in Sec. 4.4.
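Putting Eqs. (7) and (8) together, the Clean Grad gradient can be sketched as follows. The sketch treats the noise prediction as a constant when differentiating (a simplifying assumption), and `cutout_fn` and `n_aug` are illustrative placeholders.

```python
import torch

def clean_grad(x_t, eps_pred, alpha_t, classifier, y, cutout_fn, n_aug=4):
    """Gradient of the label log-probability w.r.t. x_t, computed with the
    normal classifier-under-protection on the x_0 estimate (Eq. (7)) and
    accumulated over random cutout augmentations (Eq. (8))."""
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = (x_t - (1 - alpha_t) ** 0.5 * eps_pred.detach()) / alpha_t ** 0.5
    log_p = 0.0
    for _ in range(n_aug):
        logits = classifier(cutout_fn(x0_hat))
        log_prob = torch.log_softmax(logits, dim=-1)
        log_p = log_p + log_prob[torch.arange(y.shape[0]), y].sum()
    return torch.autograd.grad(log_p, x_t)[0] / n_aug
```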
**Tech #2: Adaptive Early-Stop (AES) of the diffusion process.** From Fig. 2 (B), we notice that both classifiers experience a sharp accuracy drop with increasing \(t\). This reminds us that there exists a \(t_{stop}\) such that the classifier cannot provide meaningful semantic guidance when \(t>t_{stop}\). Therefore, it is necessary to apply early-stop when performing image inversion.
Instead of setting a fixed step to stop, we propose to adaptively stop the inversion process according to the quality of the diffused image. Specifically, we use distance metrics (e.g., Peak Signal-to-Noise Ratio (PSNR) and DISTS [5]) to measure the pixel-level difference between the diffused image and the corresponding image input. If the quality degradation exceeds a given threshold, we stop the diffusion and start the synthesis (denoising) process. Empirically, such a threshold is located around \(t=600=3/5T\), as evidenced from Fig. 2 (B).
The principle behind using the adaptive early-stop lies in the trade-off between consistency and controllability. The early-stop technique has been adopted in several prior works [23, 26], as image inversion through DDIM occasionally fails to guarantee a coherent reconstruction. Specifically, fewer inversion/generation steps lead to better consistency but lower controllability, and vice versa [29]. For example, LPIPS is used in [23] as a measure to balance image editing strength and generation quality.
For OOD detection tasks, we observe that InD and OOD samples have different patterns of quality degradation through the inversion process, especially reflected by PSNR and DISTS. Fig. 4 shows such a phenomenon. The empirical fact that InD data has faster quality degradation rates than OOD data acts as a good property to monitor the diffusion process. As a result, we can set a proper threshold with different purposes for InD and OOD samples. The threshold generally corresponds to fewer diffusion steps on InD samples, ensuring faithful reconstruction. Simultaneously, it also leads to greater steps on OOD samples, ensuring better controllability towards label conditions, and thus more significant differences compared with the inputs.
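A sketch of the adaptive stopping loop is given below; `invert_step` and `quality_fn` are placeholders for a single DDIM inversion step and an image-quality measure such as PSNR, and the threshold value is only an example taken from Table 4.

```python
def invert_with_aes(x0, invert_step, quality_fn, threshold, max_steps):
    """Adaptive Early-Stop (Tech #2): run DDIM inversion step by step and
    stop once the diffused image's quality w.r.t. the input drops below
    `threshold` (e.g., PSNR < 6.39, the underlined value in Table 4)."""
    x_t = x0
    for t in range(max_steps):
        x_t = invert_step(x_t, t)
        if quality_fn(x0, x_t) < threshold:
            return x_t, t        # start the conditional synthesis from here
    return x_t, max_steps
```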
#### 3.2.2 Diffusion with Classifier-Free Guidance
Classifier-free guidance [16] relies on a trained conditional diffusion model. Benefiting from the conditional training process, it is not necessary to further apply an external classifier. In addition, the attention-based condition injection results in better coherence between the synthesis and the given label condition [33]. However, we find that the guidance scale (\(\omega\) in Eq. (5)) of the condition is a double-edged sword. For semantic mismatch, we rely on the differences between the syntheses given consistent and inconsistent conditions. A small guidance scale cannot provide semantic changes large enough to drive the OOD samples towards the inconsistent predictions, while a large guidance scale drastically changes both InD and OOD samples, also increasing the difficulty in differentiation. Therefore, it is critical to reach a trade-off with this single parameter.
Figure 4: PSNR changes throughout the diffusion process. The data is collected from the ImageNet validation set and 4 OOD datasets, with a ResNet50 classifier.
Figure 5: Illustration of the classifier-free guidance with CAM. CAM helps to utilize information given by the classifier-under-protection. For areas with high activation, we conduct label-guided generation; for areas with low activation, we drop the label guidance and perform the original DDIM-based reconstruction.
**Tech #3: Distinct Semantic Guidance (DSG).** To solve the issue stated above, we apply the Class Activation Map (CAM [38]) of the classifier-under-protection to impose restrictions on the generation process. Specifically, we apply classifier-free guidance to high-activation areas while applying the vanilla unconditional generation to other areas, with the procedure shown in Fig. 5.
While using masks to guide the generation process differently has been a common practice in image editing [33, 14], CAM in DSG associates the label guidance with the classifier-under-protection, which provides crucial information to construct and highlight semantic mismatch.
According to the CAM, high-activation areas are crucial to prediction, thus having a direct correlation with the predicted label, and vice versa. For InD samples, applying the guidance to high-activation areas effectively limits its scope of effect, thus mitigating unwanted distortions; for OOD cases, guidance on these areas leads to high inconsistency, as they are forced to embed target semantics that they do not originally have. As a result, we can easily differentiate OOD cases by similarity measurements.
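A sketch of how the CAM-restricted guidance could be combined per pixel is given below; `cam_mask` is assumed to be the CAM resized to the resolution of the noise prediction and normalized to [0, 1].

```python
def dsg_eps(eps_cond, eps_uncond, cam_mask, omega, cut_point=0.2):
    """Distinct Semantic Guidance (Tech #3): apply classifier-free
    guidance (Eq. (5)) only in high-activation CAM regions and keep the
    unconditional prediction elsewhere."""
    guided = eps_uncond + omega * (eps_cond - eps_uncond)   # Eq. (5)
    mask = (cam_mask > cut_point).float()                   # high-activation regions
    return mask * guided + (1.0 - mask) * eps_uncond
```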
## 4 Experiments
### Experimental Setups
**Benchmarks and Datasets.** We evaluate DiffGuard following a widely adopted semantic OOD detection benchmark, OpenOOD [50]. OpenOOD unifies technical details in evaluation (e.g., image pre-processing procedures and classifiers) and proposes a set of evaluation protocols for OOD detection methods. For each InD dataset, it categorizes OODs into different types (i.e., near-OOD and far-OOD) for detailed analyses.
In this paper, we employ Cifar-10 [21] and ImageNet [22] as the InD datasets. Cifar-10 is widely adopted for evaluation, despite its small scale. For Cifar-10, we choose the near-OODs Cifar-100 [21] and TinyImageNet [24]. For large-scale evaluation, we set ImageNet as InD. OODs are also selected from the near-OODs in OpenOOD: Species [11], iNaturalist [18], ImageNet-O [13], and OpenImage-O [46].
**Metrics.** Following OpenOOD, we adopt the Area Under Receiver Operating Characteristic curve (AUROC) as the main metric for evaluation. AUROC reflects the overall detection capability of a detector. Besides, we consider FPR@95, which evaluates the False Positive Rate (FPR) at 95% True Positive Rate (TPR). The widely used 95% TPR effectively reflects the performance in practice.
**Baselines.** For comparison, we consider two types of baselines. For classification-based methods, we involve recently proposed well-performing methods, including EBO [28], KNN [44], MLS [11] and ViM [46]. EBO uses energy-based functions based on the output from the classifier. KNN performs OOD detection by calculating the non-parametric nearest-neighbor distance. MLS identifies the value of the maximum logit scores (without softmax). ViM proposes to combine logits with features for OOD detection. All of them are strong baselines according to OpenOOD. For generation-based methods, we consider a recent method, DiffNB [27], which utilizes the denoising ability of diffusion and performs reconstruction within the neighborhood. Following OpenOOD, all the classification-based baselines work on both Cifar-10 and ImageNet. For DiffNB, we use the official implementation and only compare with it on Cifar-10.
**Diffusion Models.** To evaluate the OOD detection ability of DiffGuard, we consider different types of diffusion models. Specifically, on Cifar-10, we use the same pre-trained model as DiffNB, which is a conditional DDPM [15] with classifier-free guidance. On ImageNet, we use the unconditional Guided Diffusion Model (GDM) [4] and apply classifier guidance. GDM is an advanced version of DDPM [15] with optimizations on the architecture and the model training process. Besides, we also adopt the Latent Diffusion Model (LDM) [35] as an example of classifier-free guidance. As stated in Sec. 2, LDM is a prevailing diffusion model in text-guided image generation [53] due to its efficient architecture.
**Classifiers-under-protection.** We directly apply the off-the-shelf settings of classifiers-under-protection established by OpenOOD. Specifically, we use ResNet18 [10] trained on Cifar-10 and ResNet50 [10] trained on ImageNet. The pre-trained weights for both can be found in OpenOOD's GitHub2.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Cifar-100} & \multicolumn{2}{c|}{TinyImageNet} & \multicolumn{2}{c}{average} \\ \cline{2-7} & AUROC & FPR@95 & AUROC & FPR@95 & AUROC & FPR@95 \\ & \(\uparrow\) & \(\downarrow\) & \(\uparrow\) & \(\downarrow\) & \(\uparrow\) & \(\downarrow\) \\ \hline EBO [28] & 86.19 & _51.32_ & 88.61 & _44.89_ & 87.41 & _48.11_ \\ KNN [44] & 89.62 & 52.19 & 91.48 & 46.18 & 90.55 & 49.19 \\ MLS[11] & 86.14 & 52.04 & 88.53 & 45.38 & 87.34 & 48.71 \\ ViM[46] & 87.16 & 56.81 & 88.85 & 52.89 & 88.01 & 54.85 \\ MC-Dropout[7] & 86.74 & 61.49 & 88.32 & 58.44 & 87.53 & 59.97 \\ Deep Ens.[9] & 89.97 & 54.61 & 91.31 & 51.23 & 90.64 & 52.92 \\ ConfidNet*[2] & 85.92 & 72.37 & 87.16 & 69.75 & 86.54 & 71.06 \\ \hline DiffNB [27] & 89.79 & 53.23 & 91.77 & 45.88 & 90.78 & 49.56 \\ \hline Ours & 89.88 & 52.67 & 91.88 & 45.48 & 90.88 & 49.08 \\ Ours-ERO & **89.93** & **50.77** & _91.95_ & **43.58** & _90.94_ & **47.18** \\ Ours-Deep Ens. & **90.40** & 52.51 & **91.98** & 45.04 & **91.19** & 48.78 \\ Ours(Oracle) & 98.43 & 7.94 & 98.52 & 7.11 & 98.43 & 7.53 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The OOD detection performance with Cifar-10 as InD. The diffusion model uses classifier-free guidance. All the values are in percentages. \(\uparrow\)\(/\downarrow\) indicates that a higher/lower value is better. The best results are in **bold**, and the second best results are in _underlined italic_. (Oracle) indicates we use an Oracle InD classifier. * use VGG16 classifier.
**Similarity Metric.** For simplicity, we use generic similarity metrics across different diffusion models and different OODs. Specifically, we choose the \(\ell_{1}\) distance on logits between the input image and its synthetic counterpart for the Cifar-10 benchmark, as in [27], and the DISTS distance [19] for the ImageNet benchmark (except in Table 4).
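For instance, the logit-space score used on Cifar-10 can be sketched as follows (illustrative; a larger distance indicates a more likely OOD sample).

```python
import torch

@torch.no_grad()
def ood_score_logits_l1(classifier, x_in, x_syn):
    """l1 distance between the classifier's logits on the input image
    and on its conditional synthesis (the Cifar-10 similarity metric)."""
    return (classifier(x_in) - classifier(x_syn)).abs().sum(dim=-1)
```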
Note that all diffusion models are pre-trained only with InD data. We do not fine-tune them. For more implementation details, please refer to the supplementary material.
### Results on Cifar-10
Table 1 shows the results on Cifar-10. DiffGuard outperforms or is at least on par with other methods on these two near-OOD datasets. In terms of AUROC, DiffGuard performs better than all other baselines. DiffGuard inherits the merit of image-space differentiation from generation-based methods, which makes it better than classification-based ones. By highlighting the semantic mismatch, it further outperforms the generation-based DiffNB even with the same diffusion model. In terms of FPR@95, DiffGuard also outperforms DiffNB. Although classification-based methods perform slightly better than ours, we show that DiffGuard can work with them to establish new SOTAs. Specifically, the combined method only trusts samples with high detection confidence from the baselines, while resorting to DiffGuard for hard cases. As an example, DiffGuard + Deep Ensemble performs best on AUROC and DiffGuard + EBO performs best on FPR@95 in the near-OOD benchmark for Cifar-10.
Note that the semantic mismatch utilized by DiffGuard comes from the predicted label and the input image. Wrong prediction from the classifier on InDs may affect the performance of the framework. To avoid such negative effects, we establish a hypothetical oracle classifier, as shown in the last row of Table 1. Specifically, this oracle classifier outputs the ground-truth labels for InDs and random labels for OODs. We notice both results get improved by a large margin. Especially, DiffGuard can reach a 95% TPR with very low FPRs. In practice, such a phenomenon reminds us of a common property in OOD detection [52, 46]: the performance (of DiffGuard) can improve with the increasing accuracy of the classifier.
### Results on ImageNet
Table 2 shows the results on the ImageNet benchmark. ImageNet is hard for OOD detection due to both its large scale and the difficulty of semantic differentiation. We investigate the ability of DiffGuard to differentiate hard OOD cases. For example, on Species, none of the baselines perform well, while using GDM with DiffGuard outperforms all baselines in terms of both AUROC and FPR@95. On ImageNet-O, many baseline methods tend to assign higher scores to OODs rather than InDs, as indicated by AUROC \(<50\%\), which shows that they fail to detect OODs. However, DiffGuard can still keep its performance and achieve the best FPR@95 with LDM.
We further validate the performance of DiffGuard on hard samples by combining it with some classification-based methods. We use the same combination method as on Cifar-10 (stated in Sec. 4.2). The performance improvements are shown in the last 5 rows of Table 2. In particular, DiffGuard rescues the worst-case performance of the baselines. For example, on ImageNet-O, DiffGuard brings considerable improvement to MLS. Besides, in terms of average performance, DiffGuard helps to reach SOTA on this benchmark.
Another comparison shown in Table 2 is between GDM and LDM in DiffGuard. We notice that GDM performs better in general when used both alone and with other base
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c|c c} \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Species} & \multicolumn{2}{c|}{iNaturalist} & \multicolumn{2}{c|}{OpenImage-O} & \multicolumn{2}{c|}{ImageNet-O} & \multicolumn{2}{c}{Average} \\ \cline{2-11} & AUROC \(\uparrow\) & FPR@95 \(\downarrow\) & AUROC \(\uparrow\) & FPR@95 \(\downarrow\) & AUROC \(\uparrow\) & FPR@95 \(\downarrow\) & AUROC \(\uparrow\) & FPR@95 \(\downarrow\) & AUROC \(\uparrow\) & FPR@95 \(\downarrow\) \\ \hline EBO [28] & 72.04 & 82.33 & 90.61 & 53.83 & 89.15 & 57.10 & 41.91 & 100.00 & 73.43 & 73.31 \\ KNN [44] & 76.38 & 76.19 & 85.12 & 68.41 & 86.45 & 57.56 & 75.37 & 84.65 & 80.83 & 71.70 \\ ViM [46] & 70.68 & 83.94 & 88.40 & 67.85 & 89.63 & 57.56 & 70.88 & 85.30 & 79.90 & 73.66 \\ MLS [11] & 72.89 & 80.87 & 91.15 & 50.80 & 89.26 & 57.11 & 40.85 & 100.00 & 73.54 & 72.20 \\ \hline Ours(GDM) & 73.19\(\pm\)0.18 & 83.68\(\pm\)0.22 & 85.81\(\pm\)0.16 & 71.23\(\pm\)0.54 & 82.32\(\pm\)0.30 & 74.80\(\pm\)0.38 & 65.23\(\pm\)0.19 & 87.74\(\pm\)0.20 & 76.64\(\pm\)0.13 & 79.36\(\pm\)0.12 \\ Ours(LDM) & 65.87 & 91.70 & 75.64 & 79.06 & 73.92 & 81.19 & 68.57 & 84.35 & 71.00 & 84.08 \\ Ours(GDM)\(\rightarrow\)ECN & **77.81\(\pm\)**1.40 & 71.04\(\pm\)5.15 & 90.19\(\pm\)6.87 & 48.79\(\pm\)0.62 & 87.80\(\pm\)0.35 & 52.80\(\pm\)0.36 & **75.68\(\pm\)**0.32 & **80.85\(\pm\)**3.30 & **82.87\(\pm\)**2.04 & 63.37\(\pm\)**3.33 \\ Ours(GDM)\(\rightarrow\)ECN & 74.48\(\pm\)**3.30 & 72.26\(\pm\)1.68 & 92.50\(\pm\)0.49 & 39.09\(\pm\)3.76 & **91.11\(\pm\)**1.48 & 45.02\(\pm\)**1.24 & 72.42\(\pm\)1.54 & 82.30\(\pm\)3.30 & 82.63\(\pm\)2.75 & 59.67\(\pm\)4.40 \\ Ours(LDM)\(\rightarrow\)ECN & 71.08\(\pm\)0.40 & 82.20\(\pm\)1.74 & 89.39\(\pm\)0.49 & 61.01\(\pm\)0.84 & 89.65\(\pm\)0.02 & 58.83\(\pm\)0.13 & 74.85\(\pm\)3.97 & 81.95\(\pm\)3.35 & 81.24\(\pm\)1.38 & 70.25\(\pm\)3.41 \\ Ours(GDM)\(\rightarrow\)ECN & 75.95\(\pm\)0.36 & **70.31\(\pm\)**0.56 & **93.03\(\pm\)**1.38 & **30.74\(\pm\)**0.06 & 90.74\(\pm\)**1.48 & **40.61\(\pm\)**0.50 & 65.72\(\pm\)**0.487 & 87.05\(\pm\)2.95 & 81.36\(\pm\)**0.72 & **57.18\(\pm\)**5.02 \\ Ours(LDM)\(\rightarrow\)ECN & 73.69\(\pm\)**0.80 & 75.91\(\pm\)0.96 & 91.55\(\pm\)**0.40 & 43.56\(\pm\)**7.24 & 89.61\(\pm\)**0.35 & 50.61\(\pm\)**5.09 & 69.33\(\pm\)**2.58 & 84.00\(\pm\)**6.00 & 81.05\(\pm\)**7.51 & 63.52\(\pm\)**6.68 \\ \hline \end{tabular}
\end{table}
Table 2: The OOD detection performance with ImageNet as InD. GDM uses classifier guidance, while LDM uses classifier-free guidance. All the values are in percentages. \(\uparrow\)/\(\downarrow\) indicates that a higher/lower value is better. The best results are in **bold**. We highlight the comparisons with colors when combining DiffGuard with other baselines. For AUROC with Ours(GDM), we present the average and standard deviation over four runs. There is no randomness in LDM.
lines, while LDM stays at a close level and sometimes has a lower FPR@95. As long as the diffusion model can synthesize high-quality images, DiffGuard can use it to detect OODs. Beyond OOD detection performance, the choice of diffusion models can be made according to other properties. For example, GDM has a simpler architecture [4], while LDM uses fewer DDIM timesteps (as evidenced in Sec. 4.4), and thus is faster in inference. For different techniques proposed for both classifier guidance and classifier-free guidance, we provide ablation studies to analyze their effectiveness in Sec. 4.4.
### Ablation Study
In this section, we provide some in-depth analyses regarding the effectiveness of each technique proposed for DiffGuard. For more qualitative analyses such as failure cases, please refer to the supplementary material.
**Comparisons of Clean Grad.** We ablate the usage of either \(\hat{x}_{0}\) or data augmentation in the proposed Clean Grad on classifier guidance, and show how AUROC changes in Table 3. As can be seen, both techniques bring significant improvements. The best results are achieved by combining them together. Besides, we qualitatively validate their effectiveness, as shown in Fig. 6. For simplicity, we only show InD samples and use false-label guidance to show the effects on OODs. First, we identify the difficulty in manipulating noisy semantics with a normal classifier. Without \(\hat{\mathbf{x}}_{0}\) or cutout, the diffusion model fails to make visually perceptible modifications. Then, by adding either \(\hat{\mathbf{x}}_{0}\) or cutout, we can identify differences to some extent. Finally, after applying both \(\hat{x}_{0}\) and data augmentations, the generation results manage to reflect the given label. As a comparison, the model can guarantee faithful reconstruction and negligible distortion when synthesizing with ground-truth labels (last row in Fig. 6). Such results show the effectiveness of our techniques in benefiting similarity measurements, and thus OOD detection.
**Early-stop Metrics in AES.** As shown in Fig. 4, early-stop contributes to the differentiation ability of DiffGuard. In Table 4, we show the effect of different early-stop metrics and how AUROC varies with their thresholds. We note that different metrics perform differently on different OODs. Specifically, PSNR performs better on Species, while DISTS performs better on the others. In practice, we can combine them together to reach better average-case performance (as shown in the last row of Table 4).
In practice, choosing a proper threshold for each distance is straightforward. As stated in Sec. 3.2.1, the intuition of early-stop is to ensure meaningful semantic guidance by the classifier. Therefore, one can choose an initial threshold according to the change of the classification accuracy shown in Fig. 2. Here, we pick the values at \(t=3/5T\) and vary them slightly to show the effectiveness, as shown in Table 4.
**CAM Cut-point in DSG.** In Sec. 3.2.2, we propose to use CAM to identify semantic-intensive regions, where the cut-point of the CAM is a hyperparameter. Typically, the cut-point can be set around 0.2 for various image synthe
\begin{table}
\begin{tabular}{c c|c c c c|c} \hline \hline PSNR & DISTS & Species & iNaturalist & OpenImage-O & ImageNet-O & Avg. \\ \hline
5.89 & - & 63.05 & 63.64 & 62.86 & 50.54 & 60.02 \\
6.39 & - & **72.29** & 72.2 & 68.18 & 52.28 & 66.23 \\ \hline - & 0.39 & 61.20 & 75.43 & 75.32 & **67.73** & 69.92 \\ - & 0.37 & 61.24 & 76.49 & 75.33 & 67.42 & 69.83 \\
6.39 & 0.37 & 69.91 & **81.06** & **77.43** & 60.66 & **72.27** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation on early-stop metrics (PSNR and DISTS) for GDM (classifier guidance). We report the best AUROC calculated with DISTS, GMSD [49] and \(\ell_{2}\) distance for each OODs from the ImageNet benchmark. The best results are in **bold**. We choose underlined thresholds at \(t=3/5T\), as stated in Sec. 3.2.1
\begin{table}
\begin{tabular}{c c|c c c|c} \hline \hline Method & \multicolumn{2}{c|}{Species iNaturalist OpenImage-O ImageNet-O} & \multicolumn{1}{c}{Average} \\ \hline w/o \(\hat{x}_{0}\), w/o aug & 66.45 & 64.80 & 48.80 & 42.30 & 55.59 \\ only w/ aug & 71.16 & 85.77 & 74.17 & 56.06 & 71.79 \\ only w/ \(\hat{x}_{0}\) & 71.95 & 84.11 & 80.72 & 63.82 & 75.15 \\ \hline Ours & **73.19** & **85.81** & **82.32** & **65.23** & **76.64** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation for Clean Grad on GDM with ImageNet as InD. We show AUROC with different OODs. The related settings are the same as in Table 2.
Figure 6: Ablation study to show the effectiveness of using \(\hat{x}_{0}\) and data augmentations. The images are from the ImageNet validation set. There is a clear difference between the ground-truth-guided syntheses and the false-label-guided ones.
sis settings. To investigate the impact of the CAM cut-point, we use LDM with both DDIM-25 (_i.e._, DDIM with 25 timesteps) and DDIM-50 for image synthesis and calculate the average AUROC. As shown in Fig. 7 left, the optimal cut-point stays around 0.2 regardless of the number of timesteps used for image synthesis. A larger cut-point implies a smaller area for conditional generation. Setting too small an area for conditional generation is insufficient for highlighting semantic mismatch, while applying label guidance globally to all pixels of the image is also unsatisfactory. Empirically, balancing the conditional and unconditional generation at \(\mathrm{CAM}\approx 0.2\) achieves the best performance.
**Different Diffusion Timesteps.** Since the generation process of diffusion models includes multi-step iterations, the number of timesteps is the key to the trade-off between quality and speed. For all diffusion-model-based methods including DiffGuard, the trade-off still exists even with the DDIM sampler [40]. To analyze such a trade-off, we test the average AUROC of LDM with different timesteps ranging from 5 to 100. As shown in Fig. 7 right, the AUROC has a non-monotonic correlation with the number of time-steps, and the optimal AUROC is achieved by DDIM-25 empirically. Although more timesteps generally lead to better synthesis quality, in our case, the timesteps also affect the impact of label guidance. More guidance steps lead to more significant semantic changes towards the label, potentially leading to more severe distortions. This could explain why fewer steps may perform better for OOD detection. In addition, it is beneficial to have fewer time-steps for faster inference in practice.
## 5 Limitations and Future Work
Our method uses a diffusion model for inference, which inherently has a low inference speed due to its iterative nature. Using NVIDIA V100 32GB, GDM (60 steps) and LDM (25 steps) achieve speeds of 0.05 and 0.53 images/s/GPU respectively. Given that DiffGuard relies on diffusion models for both noise addition and denoising, future optimizations should focus on speed improvement in both processes.
## 6 Conclusion
In this paper, we investigate the utilization of pre-trained diffusion models for detecting OOD samples through semantic mismatch. A novel OOD detection framework named DiffGuard is proposed, which is compatible with all diffusion models with either classifier guidance or classifier-free guidance. By guiding the generation process of diffusion models with semantic mismatch, DiffGuard accentuates the disparities between InDs and OODs, thus enabling better differentiation. Moreover, we propose several techniques to enhance different types of diffusion models for OOD detection. Experimental results show that DiffGuard performs well on both Cifar-10 and hard cases from the ImageNet benchmark, without the need for fine-tuning pre-trained diffusion models.
**Acknowledgment**. This work was supported in part by the General Research Fund of the Hong Kong Research Grants Council (RGC) under Grant No. 14203521, and in part by the Innovation and Technology Fund under Grant No. MRP/022/20X. We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and the Ascend AI Processor used for this research.
|
2306.00444
|
Chiral Transport Phenomena and Compact Stars
|
I will review the main chiral transport phenomena arising in systems made up
of (almost) massless fermions, associated with the quantum chiral anomaly. These
quantum effects might have relevant implications in compact stars, and I will
review some relevant works that reveal so. I will also show how a conservation
law that has the same form as the chiral anomaly also emerges in perfect
classical fluids, which expresses a conservation law of magnetic, fluid and
mixed helicities for isentropic fluids, and why this should also be relevant in
compact stars.
|
Cristina Manuel
|
2023-06-01T08:34:00Z
|
http://arxiv.org/abs/2306.00444v1
|
# Chiral Transport Phenomena and Compact Stars _Journal of Physics: Conference Series_
###### Abstract
I will review the main chiral transport phenomena arising in systems made up of (almost) massless fermions, associated with the quantum chiral anomaly. These quantum effects might have relevant implications in compact stars, and I will review some relevant works that reveal so. I will also show how a conservation law that has the same form as the chiral anomaly also emerges in perfect classical fluids, which expresses a conservation law of magnetic, fluid and mixed helicities for isentropic fluids, and why this should also be relevant in compact stars.
## 1 Introduction
Understanding the role of symmetries, and also the violation of a symmetry, has played a pivotal role in different branches of physics. More particularly, in the development and construction of the Standard Model of particle physics the study of chiral symmetry was crucial. The weak force was found to interact only with left-handed fermions (or right-handed antifermions), revealing a breaking of parity. Electromagnetic interactions were known to respect the chirality of the charged particles. However, at the quantum level the chiral symmetry was found to be no longer preserved. This discovery had a deep impact on the construction of theoretical particle physics models, as one had to understand the criteria under which some symmetries could be broken by quantum effects.
In particle physics the effects of the quantum chiral anomaly appear mainly in the explanation of different anomalous decays, such as that of the neutral pion into two photons. In many-body systems, the quantum chiral anomaly also has several relevant macroscopic effects, as it has been found to lead to a wide variety of dissipationless transport phenomena that have a relevant impact on the dynamical evolution of chiral systems. In this talk I will discuss and review the most relevant aspects of chiral transport phenomena, and then focus on why this is also relevant for the study of compact stars. My intention is not to give a complete review of this topic (see the existing excellent reviews in the literature, Refs. [1, 2, 3, 4, 5]), so I apologize if I cannot cover all existing works on this growing topic in this talk. I will rather pinpoint the most relevant concepts and focus on some works that reveal its relevance in compact stars.
## 2 Chiral magnetic and vortical effects
Chiral transport phenomena refer to quantum transport effects associated with the chirality of fermions and related to the quantum chiral anomaly. One considers the situation of massless, or quasi-massless, fermions, as chirality is only a well-defined quantum number in that case; otherwise, the mass mixes the dynamical evolution of the different fermion chiralities.
I will concentrate on discussing the chiral magnetic and vortical effects. The chiral magnetic effect (CME) [6] is a phenomenon in which a magnetic field generates an electric current in a conducting material, such as a plasma or a fluid, that contains a population of fermions with a chiral imbalance. One intuitive way of understanding the CME comes from realizing that for massless fermions the spin and momentum are either parallel or antiparallel, depending on whether they are right-handed or left-handed (the opposite for the antifermions). In a magnetic field all the spins are aligned, and this implies that all particles then move in the direction of the magnetic field. In the presence of an imbalance among chiralities, these currents parallel/antiparallel to the magnetic field are not counterbalanced, and this is ultimately what creates the effect. A similar situation occurs in the presence of vorticity: since spins also align with the vorticity, there is a current parallel to the vorticity vector, and one speaks of the chiral vortical effect (CVE).
The CME or CVE currents depend on the chiral chemical potential, the parameter that accounts for the chiral imbalance. That the effects are related to the chiral anomaly can be shown in different ways, but I will highlight the one first used in [7], based on effective field theory methods and valid at zero temperature. In the Hamiltonian of a system, a chemical potential \(\mu\) and a chiral chemical potential \(\mu_{5}\) enter as new pieces that go as \(\delta H=\mu n+\mu_{5}n_{5}\), where \(n/n_{5}\) are the charge/chiral charge densities. Very often one says that the chemical potentials act as a fictitious zero component of a vector gauge potential. In a moving system with velocity \(u^{\mu}\) those terms would rather be \(\delta H=\mu u_{\mu}j^{\mu}+\mu_{5}u_{\mu}j_{5}^{\mu}\), where \(j^{\mu},j_{5}^{\mu}\) are the (classically conserved) vector and axial vector currents, respectively. Thus, one sees that \(\mu u^{\mu}\) couples to matter as a real vector gauge field potential \(A^{\mu}\) does, while \(\mu_{5}u^{\mu}\) couples as if it were an axial vector gauge potential. We can push these analogies further, and compute the quantum anomaly in the presence of the fictitious vector and axial vector fields. At the quantum level, we can then use the (covariant) quantum anomalies [3] for the vector and axial currents, assuming that the vector gauge field is \(eA^{\mu}+\mu u^{\mu}\), while the axial vector field is \(\mu_{5}u^{\mu}\)
\[\partial_{\mu}j_{5}^{\mu} = -\frac{1}{4\pi^{2}}\epsilon_{\mu\nu\alpha\beta}\left(\partial^{ \mu}(eA^{\nu}+\mu u^{\nu})\partial^{\alpha}(eA^{\beta}+\mu u^{\beta})+\partial ^{\mu}(\mu_{5}u^{\nu})\partial^{\alpha}(\mu_{5}u^{\beta})\right)\, \tag{1}\] \[\partial_{\mu}j^{\mu} = -\frac{1}{2\pi^{2}}\epsilon_{\mu\nu\alpha\beta}\left(\partial^{ \mu}(eA^{\nu}+\mu u^{\nu})\partial^{\alpha}(\mu_{5}u^{\beta})\right). \tag{2}\]
It turns out that all the pieces with a chemical and chiral chemical potential in the quantum anomaly equations can be absorbed by modifying the expressions of the vector and axial-vector currents. After integrating by parts and discarding surface terms, one can rewrite the above equations as
\[\partial_{\mu}\left(j_{5}^{\mu}+\frac{e\mu}{2\pi^{2}}B^{\mu}+ \frac{\mu^{2}+\mu_{5}^{2}}{2\pi^{2}}\omega^{\mu}\right) = -\frac{e^{2}}{16\pi^{2}}\epsilon_{\mu\nu\alpha\beta}F^{\mu\nu}F^{ \alpha\beta}\, \tag{3}\] \[\partial_{\mu}\left(j^{\mu}+\frac{e\mu_{5}}{\pi^{2}}B^{\mu}+\frac {\mu\mu_{5}}{\pi^{2}}\omega^{\mu}\right) = 0\, \tag{4}\]
where we have defined
\[\omega^{\mu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}u_{\nu}\partial_{\alpha}u _{\beta}\,\qquad B^{\mu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}u_{\nu}F_{\alpha \beta}\, \tag{5}\]
which in the rest frame of the system represent the vorticity and magnetic field vectors, respectively. In a moving frame, however, the zero components of these vectors are associated with the fluid and mixed helicities (we will discuss this later on). We then see that at \(T=0\) one can read off the conservation law of the electromagnetic current by multiplying Eq. (4) by \(e\), and thus identify the CME and CVE currents in this way. Please note that induced chiral currents proportional to \(B^{\mu}\) and \(\omega^{\mu}\) are also produced; these are called the chiral separation and chiral vortical effects, respectively.
An interesting remark is that all the chiral transport phenomena that originate in the chiral quantum anomaly are dissipationless, and thus they do not imply an increase of entropy. Note also that the effects are not corrected perturbatively, as they rely on the quantum chiral anomaly.
There are several different systems where all these ideas might be applied. In some materials, like the Weyl or Dirac semimetals, there are quasiparticles that behave as massless fermions. In these systems the CME has been detected [9]. In systems at extreme conditions of temperature and/or density, one might expect that most fermions can be considered as massless whenever their mass \(m\) is such that \(m\ll T\) and/or \(m\ll\mu\), where \(T\) is the temperature. Thus, one can also expect to find chiral transport phenomena in the quark-gluon plasma phase studied with heavy-ion collisions. Great efforts have been carried out by the different experimental collaborations of both the LHC and RHIC (see for example [11, 10]), but so far the effect has not been detected, while there are debates on what the criteria to claim detection should be [12]. We can also expect that these ideas might be relevant in several cosmological and astrophysical scenarios, as we will encounter several extreme conditions where one can take the ultrarelativistic limit to describe the corresponding quasiparticles.
## 3 Chiral hydrodynamics and chiral kinetic theory
Relativistic hydrodynamics has been naturally applied to a variety of cosmological, astrophysical and nuclear physics scenarios. The hydrodynamical equations are the expressions of the conservation laws of a system. In the presence of chiral fermions, it thus seems natural to incorporate the quantum chiral anomaly equation into the hydrodynamics [8]; the resulting framework is known as chiral or anomalous hydrodynamics.
An interesting result is that even if the quantum chiral anomaly requires the computation of one-loop Feynman diagrams, the famous triangle diagrams, for many-body systems it is possible to account for it with semi-classical methods. Chiral or anomalous hydrodynamics might be derived from the so-called chiral kinetic or transport theory (CKT) [13, 14, 15]. There are several derivations of CKT; I will focus on those that I am most familiar with. The physics associated with these chiral imbalanced systems, governed by the chiral quantum anomaly, might be deduced by incorporating the first quantum corrections to classical transport equations. One can do that by taking the Dirac Hamiltonian, in the presence of electromagnetic fields, and diagonalizing it for the particle and antiparticle degrees of freedom in an expansion in \(\hbar\), the Planck constant [16], treating the resulting expression semi-classically. It is also possible to derive an effective field theory that does the same at the quantum field theory level [17]. Alternatively, one can derive transport equations from quantum field theory and expand the resulting equations in \(\hbar\)[18]. There are some subtleties when one uses one framework over the other to derive CKT, which have to do with the semi-classical definition of a quasiparticle (and getting rid of the so-called _Zitterbewegung_ effect [19]), but I will not enter into discussing this issue here.
The transport equation obeyed by the distribution function \(f_{p}\) associated to a fermion of chirality \(\chi\) can be written down as (see Refs. [13, 16] )
\[\frac{\partial f_{p}}{\partial t}+(1+e{\bf B}\cdot{\bf\Omega})^{-1}\left\{\left[\tilde{\bf v}+e\,\tilde{\bf E}\times{\bf\Omega}+e\,{\bf B}(\tilde{\bf v}\cdot{\bf\Omega})\right]\cdot\frac{\partial f_{p}}{\partial{\bf r}}+e\left[\tilde{\bf E}+\tilde{\bf v}\times{\bf B}+e{\bf\Omega}\,(\tilde{\bf E}\cdot{\bf B})\right]\cdot\frac{\partial f_{p}}{\partial{\bf p}}\right\}=0\,.\]
Here \({\bf\Omega}=\chi{\bf p}/p^{3}\) is the so called Berry-curvature, and we have defined
\[\tilde{\bf E}={\bf E}-{1\over e}{\partial\epsilon_{\bf p}\over\partial{\bf r }}\,\ \ \ \ \ \ \ \tilde{\bf v}={\partial\epsilon_{\bf p}\over\partial{\bf p}}\,\]
where \(\epsilon_{\bf p}\) is the particle's energy. Although we work in natural units, upon restoring dimensions one can check that all pieces that contain the Berry curvature are proportional to \(\hbar\); they are pure quantum effects that correct the classical terms of the transport equation. The particle density associated with these chiral fermions reads
\[n=\int{d^{3}p\over(2\pi)^{3}}(1+e\,{\bf B}\cdot{\bf\Omega})f_{p}\,\]
while the current reads
\[{\bf j}=-\int{d^{3}p\over(2\pi)^{3}}\left[\epsilon_{p}{\partial f_{p}\over \partial{\bf p}}+e{\bf\Omega}\cdot{\partial f_{p}\over\partial{\bf p}}\epsilon _{p}{\bf B}+\epsilon_{p}{\bf\Omega}\times{\partial f_{p}\over\partial{\bf r}} -ef_{p}{\bf E}\times{\bf\Omega}\right]\.\]
After integrating the kinetic equation one then obtains
\[{\partial n\over\partial t}+\nabla\cdot{\bf j}=-e^{2}\int{d^{3}p\over(2\pi)^ {3}}\left({\bf\Omega}\cdot{\partial f_{p}\over\partial{\bf p}}\right){\bf E} \cdot{\bf B}\.\]
Now considering the two possible chiralities, one can construct both the vector current and the chiral current by combining the contributions of the two chiralities. Taking an equilibrium form of the distribution function, and taking into account both particle and antiparticle degrees of freedom, one can then reproduce the quantum chiral anomaly, Eq. (10), while obtaining the conservation of the vectorial current. Written in this form, one sees the clear quantum origin of the non-conservation of the current: if the quantum corrections to the classical transport equation are neglected, the chiral current would be conserved.
It is interesting to study the dynamical evolution of the CME, allowing the electromagnetic fields to be dynamical. Let us assume that the system is at rest. Integrating the chiral anomaly equation over space in a closed volume \(V\) leads to a quantum conservation law that relates the chiral fermion density \(Q_{5}={1\over V}\int d^{3}x\,n_{5}(x)\) to the magnetic helicity of the system
\[{dQ_{5}\over dt}={e^{2}\over 2\pi^{2}}{1\over V}\int d^{3}x\,{\bf E}\cdot{\bf B }=-{e^{2}\over 2\pi^{2}}{d{\cal H}\over dt},\]
where
\[{\cal H}={1\over V}\int d^{3}x\,{\bf A}\cdot{\bf B}\]
is the magnetic helicity density. This quantity is gauge invariant provided that the magnetic field vanishes at, or is parallel to, the boundary of \(V\). The magnetic helicity gives a measure of the linkage and twists of the magnetic field lines. The above equation tells us that the chirality of the fermions can be transformed into magnetic helicity, and/or vice versa, and thus generate/destroy different non-trivial topological field configurations.
As the chiral symmetry is only an approximate symmetry, one should also add to the chiral anomaly equation a chirality-flipping rate \(\Gamma_{f}\), which typically accounts for scattering processes that allow a change of chirality due to the existence of a small mass. Thus, one should rather write
\[\frac{dQ_{5}}{dt}+\frac{e^{2}}{2\pi^{2}}\frac{d{\cal H}}{dt}=-\Gamma_{f}n_{5}. \tag{13}\]
The chiral anomaly coupled to Maxwell's equations governs the dynamical evolution of the chiral medium. The chirality of the fermions can be transformed into magnetic helicity, and vice versa, on time scales \(t\ll 1/\Gamma_{f}\). An interesting property is that some electromagnetic modes are unstable, and can grow exponentially. Let us explain why. Assuming the presence of a CME current along the \(z\) direction of an applied magnetic field, a chiral magnetic instability arises [21, 22]. According to Ampere's law, a magnetic field is generated in the \(\theta\) direction. This induced field, in turn, generates a current in the same direction through the CME effect, which would then generate a field in the \(z\) direction. This process results in an amplified value of the initial magnetic field in the \(z\) direction, leading to a runaway mechanism that causes the instability. The unstable modes occur for wavenumbers below \(k\sim e^{2}\mu_{5}\). The time scale of growth in an electromagnetic plasma is of the order \(\sim 1/e^{4}\mu_{5}\)[22], while for a conductor with electrical conductivity \(\sigma\) it is \(t_{\rm ins}\sim\sigma/ke^{2}\mu_{5}\)[23]. In the presence of an initial chiral chemical potential there can be a generation of magnetic helicity through a sort of inverse cascade phenomenon, in which the helicity is transferred from the highest to the lowest modes [23].
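As a rough, purely illustrative cartoon of this interplay, the following toy integration couples Eq. (13) to an assumed helicity growth law \(dH/dt = k\,Q_{5}H\); this growth law and all parameter values are invented for the illustration (arbitrary units, \(e=1\)) and are not taken from Refs. [21, 22, 23].

```python
import numpy as np

# Toy integration of Eq. (13): dQ5/dt = -(e^2 / 2 pi^2) dH/dt - Gamma_f * Q5.
# The growth law dH/dt = k * Q5 * H is NOT from the text; it is an assumed
# cartoon of the chiral instability, chosen only so that helicity grows while
# a chiral imbalance is present.
e2_over_2pi2 = 1.0 / (2 * np.pi**2)   # e set to 1 in these arbitrary units
gamma_f = 0.05                         # assumed chirality-flipping rate
k = 1.0                                # assumed instability strength

dt, steps = 1e-3, 20000
Q5, H = 1.0, 1e-3                      # arbitrary initial chiral charge and seed helicity
for _ in range(steps):
    dH = k * Q5 * H * dt
    Q5 += -e2_over_2pi2 * dH - gamma_f * Q5 * dt
    H += dH

print(f"final Q5 = {Q5:.4f}, final helicity H = {H:.4f}")
```

In this cartoon the chiral charge is drained both by the flipping rate and by the transfer into magnetic helicity, which then saturates once the imbalance is exhausted, in line with the qualitative picture described above.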
In the presence of a small fermion mass, the chiral anomaly equation is also affected by the presence of quantum coherent mixtures of mixed helicities, as seen in Ref.[24]. The effect of these genuine quantum states has, however, not yet been studied.
## 4 Chiral anomaly equation and compact stars
A couple of weeks before giving this talk, a review article on chiral transport phenomena in astrophysics and cosmology appeared on arXiv [5], which contains an exhaustive list of references on this topic. I cannot cover all works in this talk, but I recommend that review for a much more complete set of references.
Let us start by mentioning that in cosmology the use of the quantum chiral anomaly has been extensive, as all the baryogenesis models rely on it [25]. The CME and the chiral instabilities have also been used to explain the generation of primordial magnetic fields with magnetic helicity (see, for example, [21, 23]). There are also closely related works on cosmology and axions. Initially axions were proposed to solve the strong CP problem, and nowadays they are serious dark matter candidates [26]. Axions and axion-like particles are predicted in many models as pseudoscalar particles that couple both to fermions and to photons. Most of the experiments to detect axions rely on how they couple to the electromagnetic fields. Several cosmological models of axions assume that the axion field \(a\) can be treated as a coherent classical field \(a(t)=\frac{\sqrt{2\rho_{\rm DM}}}{m_{a}}\sin m_{a}t\), where \(\rho_{\rm DM}\) is the local dark matter density and \(m_{a}\) is the axion mass. Then, its time derivative acts as a chiral chemical potential for the fermions, implying that they produce a CME current. A new proposal to detect axions [28], LACME (low temperature axion chiral magnetic effect), is based on this fact.
Let me briefly mention how all these ideas are relevant for compact stars. There has not been the same amount of work on the impact of the chiral anomaly in compact stars as in cosmology, and most of the discussions are rather qualitative, simply providing estimates of the scales involved to assess how relevant chiral transport phenomena could be. Definitely, much more work is needed in this area of research.
It has been suggested [29] that the CME could explain the magnetic helicity of neutron stars, whose origin remains unknown. In a proto-neutron star one could create a large chiral imbalance in the population of electrons, which can be taken as quasi-massless since \(\mu_{e}\sim 100\) MeV. The chiral imbalance can be created in the neutronization process, as only left-handed leptons participate in this electroweak process. More particularly, when the neutron star is formed, the neutronization or electron capture process \(p+e_{L}\to n+\nu_{L}^{e}\), which is not yet counterbalanced by neutron decay, would lead to a high population of right-handed electrons. Initial estimates of the chiral chemical potential assumed it to be close to the QCD scale, creating a very large magnetic field, of the order of \(B\sim 10^{18}\) G. However, it was later argued [30] that the chirality-flipping rate of Rutherford scattering of electrons off protons would damp the chiral plasma instability. A more careful analysis [31] however reveals that the electron capture rate depends strongly on the temperature, and that for high \(T\), but still \(\mu_{e}\gg T\), this mechanism is operative, creating magnetic fields of the order of \(B\sim 10^{14}\) G.
Another set of ideas concerns how field configurations with parallel electric and magnetic fields could create a chiral imbalance. In particular, these configurations occur in the magnetospheres of supermassive black holes and of pulsars. In Ref. [32] estimates of the chiral densities generated in those cases are presented. While for supermassive black holes the effect seems to be negligible, the authors of Ref. [32] suggest that it could be substantial for pulsars, estimating the chiral density created with the large-time solution of the chiral anomaly, with a chirality-flipping rate due to electron-electron scattering. The effect could then be detected by checking that the electromagnetic radiation in a specific range of frequencies is circularly polarized, as it propagates in a chiral medium.
It has also been considered that chiral effects might be relevant for the explanation of the so-called pulsar kicks [33]. Neutron stars typically have a velocity greater than that of their progenitors, and many different sorts of mechanisms have been considered to explain those kicks. The existence of the CME and CVE could lead to a possible explanation of the kick [34, 35].
Chiral effects should also be relevant for studying the dynamics of neutrinos in core-collapse supernovae. Most of the energy released in these supernova explosions is in the form of neutrinos. The chirality of neutrinos should play a relevant role, and recently a chiral radiation transport theory has been put forward in Refs. [36, 37, 38].
## 5 Helicity conservation law
Studies of the relevance of chiral transport phenomena in compact stars are in their infancy, and typically focus on order-of-magnitude estimates. The whole set of hydrodynamical equations, and not only the chiral anomaly, should be taken into consideration to assess its impact on the physics of compact stars.
In this regard, an interesting claim has been made in the literature on classical hydrodynamics. There is an equation analogous to the chiral anomaly equation, valid for classical barotropic fluids [39, 40, 41], which expresses the conservation law of a combination of all the helicities that can be defined in the fluid. In collaboration with Juan Torres-Rincon, we have presented our own derivation in Ref. [42], which extends the original derivation to the isentropic and finite-temperature cases. It is easier to show it for relativistic fluids. It only requires deriving hydrodynamical equations for the vectors \(\omega^{\mu}\) and \(B^{\mu}\) from the Euler equation obeyed by \(u^{\mu}\), and using some thermodynamical relations. More particularly, for isentropic fluids one finds
\[\partial_{\mu}(h^{2}\omega^{\mu}+heB^{\mu})=-\frac{e^{2}}{4}\epsilon_{\mu\nu \alpha\beta}F^{\mu\nu}F^{\alpha\beta}\, \tag{14}\]
where \(h\) is the enthalpy density. Thus, for isentropic fluids, there is an equation very similar to the chiral anomaly equation. This is a classical conservation law, though. Let us stress that the zero component of the \(\omega^{\mu}\) vector gives the fluid helicity, which measures the linkage of the fluid lines, and the zero component of the \(B^{\mu}\) vector gives the mixed helicity, which measures the linkage of magnetic and fluid lines. This equation expresses that there is a combination of the fluid, magnetic and mixed helicities which is conserved for isentropic fluids. As these three
helicities measure linkage, the equation describes a conservation of some topological properties of the system. It would be interesting to see how different dissipative effects might affect this conservation law.
We have also formulated how this equation is modified in the presence of a chiral imbalance, which then reads
\[\partial_{\mu}(h^{2}\omega^{\mu}+heB^{\mu})=-\frac{e^{2}}{4}\epsilon_{\mu\nu \alpha\beta}F^{\mu\nu}F^{\alpha\beta}+(2h\omega^{\mu}+eB^{\mu})(T\partial_{ \mu}\bar{s}+\mu_{5}\partial_{\mu}x_{5})\, \tag{15}\]
where \(\bar{s}=s/n\) is the entropy per particle, \(x_{5}=n_{5}/n\) is the chiral fraction. Even for isentropic fluids (\(\partial_{\mu}\bar{s}=0\)), a space-time dependent chiral misbalance modifies the previous helicity conservation law. However, in the presence of chiral misbalance, Eq. (15) has to be combined with the chiral anomaly equation, Eq. (3). The chiral anomaly equation in the presence of a chiral chemical potential also involves the fluid and mixed helicities, through the axial vortical and chiral separation effects. Sometimes the chiral anomaly equation has been taken as a sort of helicity conservation law [43].
Let us stress that all estimates of the relevance of the chiral anomaly presented in the previous section assumed that the fluid is at rest. However, this is certainly not the case in the astrophysical settings of interest, where we may expect the presence of vorticity and helicities, so it might be interesting to revise all those estimates taking into consideration all hydrodynamical equations. More particularly, it would be interesting to review the generation of magnetic fields in proto-neutron stars, or the magnetic field dynamics in the magnetospheres of magnetars. We hope to report on this issue in the near future.
## Acknowledgments
I thank the organizers of this wonderful and unique workshop for the invitation to give this talk and for financial support. This work was supported by Ministerio de Ciencia, Investigacion y Universidades (Spain) under the project PID2019-110165GB-I00 (MCI/AEI/FEDER, UE), Generalitat de Catalunya by the project 2017-SGR-929 (Catalonia). This work was also partly supported by the Spanish program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M, financed by MCIN/AEI/10.13039/501100011033
|
2305.06565
|
Realization RGBD Image Stylization
|
This research paper explores the application of style transfer in computer
vision using RGB images and their corresponding depth maps. We propose a novel
method that incorporates the depth map and a heatmap of the RGB image to
generate more realistic style transfer results. We compare our method to the
traditional neural style transfer approach and find that our method outperforms
it in terms of producing more realistic color and style. The proposed method
can be applied to various computer vision applications, such as image editing
and virtual reality, to improve the realism of generated images. Overall, our
findings demonstrate the potential of incorporating depth information and
heatmap of RGB images in style transfer for more realistic results.
|
Bhavya Sehgal, Vaishnavi Mendu, Aparna Mendu
|
2023-05-11T04:49:37Z
|
http://arxiv.org/abs/2305.06565v1
|
# Realization RGBD Image Stylization
###### Abstract
This research paper explores the application of style transfer in computer vision using RGB images and their corresponding depth maps. We propose a novel method that incorporates the depth map and a heatmap of the RGB image to generate more realistic style transfer results. We compare our method to the traditional neural style transfer approach and find that our method outperforms it in terms of producing more realistic color and style. The proposed method can be applied to various computer vision applications, such as image editing and virtual reality, to improve the realism of generated images. Overall, our findings demonstrate the potential of incorporating depth information and heatmap of RGB images in style transfer for more realistic results.
style transfer, computer vision, depth information, RGB-D images, neural networks, image processing, image editing, image colorization, heatmap, artistic style transfer
## I **Introduction**
Neural Style Transfer (NST) is a widely used technique for artistic stylization of various forms of data, including images, videos, and 3D models. It involves synthesizing an output image that preserves the content of the input image while adopting the style of another image. While deep neural networks have significantly improved the performance of NST algorithms, there are still technical barriers in 3D photo stylization, such as blurry or inconsistent stylized images and monocular depth estimation leading to holes and artifacts when synthesizing stylized images of novel views.
Previous studies have proposed various NST algorithms that use a pre-trained VGG-19 network to extract high-level features from the content image and calculate the style loss using Gram matrices [1]. However, these approaches did not consider depth preservation and coherence of details, which are crucial for evaluating the visual quality of the NST results. To address this limitation, recent works have proposed a system that integrates depth estimation and reconstruction loss in the training of the transformation network as shown in Fig. 1.
In this project, we propose a novel approach to neural style transfer that incorporates depth heatmap information to generate more realistic and visually pleasing style transfer outputs. Our proposed method takes RGB and depth images as inputs along with the heatmap of the RGB image and generates stylized outputs that are more faithful to the original content image. We compared our method with traditional neural style transfer approaches and found that our method produces more accurate color and style information.
Our proposed method has significant potential applications in various areas, such as computer graphics and image editing. It provides an adjustable way to control the structure of the artistically stylized result, focusing on the depth map and image edges. Furthermore, our approach can be extended to retain or enhance the structure of the artistically stylized result, which is an essential factor in evaluating the visual quality of the results.
Fig. 1: Pipeline for 3D stylization using RGBD images
## II **Related Works**
### _3D Neural Style Transfer_
The paper [2] discusses an exploration of extensions of neural style transfer to three dimensions using iterative style transfer. It investigates applications of depth-aware neural style transfer to images where depth is either available as a fourth channel, or estimated via deep learning. The paper formulates depth-dependent style transfer by implementing a depth-based mask to the style and content loss functions typically used in neural style transfer and depth estimation network and loss function which tries to match the depth of the pastiche with the estimated depth of the content image. Single artistic styles are transferred as well as blending multiple styles. Various experiments were conducted and different methods for using depth to augment neural style transfer were showcased.
### _Depth-aware Neural Style Transfer using Instance Normalization_
The paper [3] provides an overview of various methods for neural style transfer in computer graphics and computer vision. The review covers different approaches, including those based on convolutional neural networks (CNNs), generative adversarial networks (GANs), and patch-based methods. The review also highlights the strengths and weaknesses of each approach and discusses their respective applications. In addition, the review provides a critical analysis of the challenges associated with neural style transfer, including the need for better algorithms to improve the visual quality of stylized images. Overall, the literature review serves as a comprehensive guide for researchers and practitioners interested in neural style transfer in computer graphics and computer vision.
### _3D Photo Stylization: Learning to Generate Stylized Novel Views from a Single Image_
The paper [4] introduces a new method called '3D photo stylization' that aims at synthesizing stylized novel views from a single content image with arbitrary styles. The method learns 3D geometry-aware features on a point cloud representation of the scene for consistent stylization across views without using 2D image features. The approach jointly models style transfer and view synthesis and doesn't require ground-truth depth maps for training. The method demonstrates superior results and supports several interesting applications.
### _Depth-aware Neural Style Transfer_
The paper [5] describes a novel approach for neural style transfer that integrates depth preservation as additional loss, preserving overall image layout while performing style transfer. It points out the limitation of existing deep neural network based image style transfer methods which fail to provide satisfactory results when stylizing the images containing multiple objects potentially at different depths. The proposed approach adds depth reconstruction loss to supplement it. The experimental results validate that the proposed approach retains the essential layout of the content image.
### _Semantic Image Synthesis with Spatially-Adaptive Normalization_
The paper [6] describes a new method called Spatially-Adaptive Normalization for synthesizing photorealistic images using an input semantic layout. The proposed method modulates the activations in normalization layers with a spatially-adaptive, learned transformation that effectively propagates the semantic information throughout the network. This results in improved image synthesis compared
to several state-of-the-art methods, as demonstrated by experiments on several challenging datasets. The proposed method also supports multi-modal and style-guided image synthesis, enabling controllable, diverse outputs.
## III **Project Plan**
The aim of this project is to propose and evaluate a novel approach for 3D style transfer using RGBD images that incorporates depth heatmap information to generate more realistic and visually pleasing stylized outputs. Style transfer is a technique that involves transferring the style of one image onto another image while preserving its content. In this project, we will focus on transferring the style of a 2D image onto a 3D RGBD image, which poses technical challenges due to the additional dimension of depth information (see Fig. 4).
We conducted a thorough review of existing literature on neural style transfer algorithms and their extensions to 3D photo stylization. Based on our findings, we proposed a novel approach that integrates depth heatmap information into the style transfer process to generate more realistic and visually pleasing stylized outputs. Our method takes RGB and depth images as inputs along with the heatmap of the RGB image, and we evaluated its performance in comparison to traditional neural style transfer approaches. The results showed that our proposed method outperformed the traditional approach, producing more realistic and visually pleasing stylized outputs.
To evaluate the proposed method, we will conduct experiments on a dataset of RGBD images and measure the quality of the stylized outputs in terms of visual fidelity, color accuracy, and coherence of details. We will also compare our method's performance with traditional neural style transfer approaches and analyze the impact of depth information on the stylization results.
## IV **Dataset**
Our proposed neural style transfer approach is highly flexible as it does not require a specific dataset to run the code. Instead, it can utilize any content image and any style image provided by the user. This feature makes our model highly adaptable and versatile, allowing for a wide range of creative possibilities. By removing the need for a specific dataset, our approach eliminates the constraints imposed by limited or biased datasets. The versatility of our model is one of its key advantages. It provides users with the ability to apply their preferred artistic style to any content image. This flexibility allows for a broader range of applications, from artistic expression to visual content creation for industries such as mobile photography and AR/VR applications. Additionally, the ability to use any content and style images reduces the time and resources needed for pre-processing and allows for faster experimentation with different styles and content.
Fig. 2: Example image
## V **Methodology**
We employed a two-code file approach to perform RGB-D image generation and style transfer. The methodology involves the following steps:
### _RGB-D image generation and style transfer:_
In the first step, we generate an RGB-D image from a given input image using the MiDaS (MidasNet) model for depth estimation. This involves installing the required packages and libraries, loading the input image and preprocessing it, loading the pre-trained MiDaS depth model and creating a depth map, merging the input image and the depth map to generate an RGB-D image, applying a heatmap to visualize the depth information on the image, and finally displaying the result and saving the heatmap image.
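A minimal sketch of this first step is shown below. It assumes the publicly available MiDaS weights on the PyTorch Hub and OpenCV's colormap utilities; the exact model variant, blending weights and file names are illustrative assumptions, since the paper does not specify them.

```python
import cv2
import numpy as np
import torch

# Load a (small) MiDaS depth-estimation model and its matching transform.
model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
model.eval()

img = cv2.cvtColor(cv2.imread("content.jpg"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)          # preprocess to the model's input size

with torch.no_grad():
    depth = model(batch)
    depth = torch.nn.functional.interpolate(
        depth.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze().numpy()

# Normalize depth to 8 bits, colour it as a heatmap and blend it with the RGB image.
depth_u8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
heatmap = cv2.applyColorMap(depth_u8, cv2.COLORMAP_JET)
blended = cv2.addWeighted(img, 0.6, cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB), 0.4, 0)

cv2.imwrite("heatmap.png", heatmap)
cv2.imwrite("blended_rgbd.png", cv2.cvtColor(blended, cv2.COLOR_RGB2BGR))
```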
### _Style Transfer :_
In the second step, we apply style transfer to the generated RGB-D image using the VGG19 model for feature extraction and a pre-trained TensorFlow Hub model for style transfer. We define content and style representations using the VGG19 model, calculate style and content loss, and run gradient descent to optimize the combined image. The stylized image is then saved.
Our approach leverages the rich style information extracted from a pre-trained CNN model such as VGG-19, and utilizes the depth information from MiDaS to create an RGB-D image. The style transfer step uses a pre-trained TensorFlow Hub model, and calculates both style and content loss to optimize the combined image. Our approach is a novel way to perform 3D style transfer that preserves spatial relationships and important features while transferring the style from a 2D image to a 3D scene.
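The sketch below illustrates the fast style-transfer path using a pre-trained TensorFlow Hub module; the magenta arbitrary-image-stylization model is assumed here, as the paper does not name the exact module, the file names are placeholders, and the VGG19 loss-based optimization variant is omitted for brevity.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    """Read an image file into a float32 tensor in [0, 1] with a batch axis."""
    img = tf.image.decode_image(tf.io.read_file(path), channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    scale = max_dim / tf.cast(tf.reduce_max(tf.shape(img)[:-1]), tf.float32)
    new_shape = tf.cast(tf.cast(tf.shape(img)[:-1], tf.float32) * scale, tf.int32)
    return tf.image.resize(img, new_shape)[tf.newaxis, ...]

# Pre-trained fast arbitrary style-transfer model from TensorFlow Hub.
hub_model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load_image("blended_rgbd.png")   # e.g. the RGB-D/heatmap blend from the previous step
style = load_image("style.jpg")
stylized = hub_model(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img("stylized.png", np.squeeze(stylized.numpy(), axis=0))
```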
### _Advantage of using depth and heatmap :_
The use of depth and heat maps in image stylization provides several advantages. First, depth maps provide a more accurate representation of the 3D structure of an image, which allows for more realistic stylization of objects in the scene. By taking into account the depth information and the heatmap of an image, stylization techniques can better preserve the spatial relationships between objects and their relative depth. This can be especially important for stylization techniques that rely on edge detection or color manipulation, as they can often lead to unnatural or distorted results if applied to objects with complex 3D geometry.
Fig. 4: Blended Image (Depth + Heat Map).
Fig. 3: Content Image (left), Style Image (right).
## VI **Results**
To evaluate the effectiveness of our proposed method, we compared it with the traditional neural style transfer approach. We found that our method produces more realistic and visually pleasing style transfer outputs than the traditional method. Our approach incorporated depth heatmap information, which provided an adjustable way to control the structure of the artistically stylized result while focusing on the depth map and image edges. The proposed method improved the accuracy of color and style information in the stylized images.
Our method can be applied to various computer vision applications such as image editing and virtual reality, where improved realism of generated images is crucial. Our approach has significant potential applications in computer graphics and image editing, as it can be extended to retain or enhance the structure of the artistically stylized result, which is an essential factor in evaluating the visual quality of the results. Overall, our findings demonstrate the potential of incorporating depth information and heatmap of RGB images in style transfer for more realistic results.
## VII **Applications**
The proposed approach for 3D style transfer using RGBD images has several potential applications in the field of computer vision and graphics. One application is in the field of virtual reality and augmented reality, where realistic 3D stylized images can enhance the user's immersive experience. The proposed approach can also be applied in the field of architectural visualization, where architects and designers can visualize their designs in 3D with different styles, textures, and colors.
Another potential application is in the field of entertainment and animation, where the proposed approach can be used to create artistic stylized 3D animations and movies. Additionally, the proposed approach can also be used in the field of robotics, where robots can use 3D stylized images for object recognition and scene understanding. Overall, the proposed approach has several potential applications in various fields and can benefit researchers and practitioners in the field of computer vision and graphics.
## VIII **Challenges**
The proposed approach for 3D style transfer using RGBD images faced several challenges during its implementation. One of the main challenges was the lack of a large-scale RGBD dataset suitable for training the model. Another challenge was the difficulty of preserving the depth information of the input images while transferring the style. The proposed approach also faced challenges related to the complexity and computational requirements of the deep learning model.
While the proposed method shows promise in improving the realism of generated images, the time required to generate an image can be a limiting factor for real-time applications such as AR/VR. Developing more efficient methods for style transfer can address this challenge.
Additionally, the proposed approach required careful tuning of hyperparameters to achieve optimal results, which posed a challenge during the implementation. Other challenges include handling missing or incomplete depth information, dealing with artifacts and inconsistencies in the stylized images, and ensuring the realism and coherence of the output images.
Fig. 5: Content Image (a), Style Image (b), Heat & Depth map (c), Stylized Image (d).
## IX **Future Works**
There are several directions for future work on this project. One approach worth investigating is the use of generative adversarial networks (GANs) to boost the quality and realism of the stylized images. Future research might also look into transferring multiple styles to a single image, or transferring styles between different modalities, such as from an image to a 3D model.
Future work should also focus on creating new evaluation criteria and benchmarks to rate the performance of 3D style transfer systems. In addition, research can concentrate on enhancing the computational effectiveness and scalability of 3D style transfer models to allow their use in practical applications.
## X **Conclusion**
To conclude, this research paper explored the application of style transfer in computer vision using RGB images and their corresponding depth maps. Our proposed method incorporated depth maps and a heatmap of the RGB image to generate more realistic style transfer results. Our approach outperformed traditional neural style transfer methods in terms of producing more accurate color and style information. The proposed method can have significant applications in various areas, such as computer graphics and image editing, to improve the realism of generated images. Future research can be focused on optimizing the proposed method for real-time processing and exploring its potential in virtual and augmented reality applications.
|
2307.07002
|
Classical Out-of-Distribution Detection Methods Benchmark in Text
Classification Tasks
|
State-of-the-art models can perform well in controlled environments, but they
often struggle when presented with out-of-distribution (OOD) examples, making
OOD detection a critical component of NLP systems. In this paper, we focus on
highlighting the limitations of existing approaches to OOD detection in NLP.
Specifically, we evaluated eight OOD detection methods that are easily
integrable into existing NLP systems and require no additional OOD data or
model modifications. One of our contributions is providing a well-structured
research environment that allows for full reproducibility of the results.
Additionally, our analysis shows that existing OOD detection methods for NLP
tasks are not yet sufficiently sensitive to capture all samples characterized
by various types of distributional shifts. Particularly challenging testing
scenarios arise in cases of background shift and randomly shuffled word order
within in domain texts. This highlights the need for future work to develop
more effective OOD detection approaches for the NLP problems, and our work
provides a well-defined foundation for further research in this area.
|
Mateusz Baran, Joanna Baran, Mateusz Wójcik, Maciej Zięba, Adam Gonczarek
|
2023-07-13T18:06:12Z
|
http://arxiv.org/abs/2307.07002v1
|
# Classical Out-of-Distribution Detection Methods Benchmark
###### Abstract
State-of-the-art models can perform well in controlled environments, but they often struggle when presented with out-of-distribution (OOD) examples, making OOD detection a critical component of NLP systems. In this paper, we focus on highlighting the limitations of existing approaches to OOD detection in NLP. Specifically, we evaluated eight OOD detection methods that are easily integrable into existing NLP systems and require no additional OOD data or model modifications. One of our contributions is providing a well-structured research environment that allows for full reproducibility of the results. Additionally, our analysis shows that existing OOD detection methods for NLP tasks are not yet sufficiently sensitive to capture all samples characterized by various types of distributional shifts. Particularly challenging testing scenarios arise in cases of background shift and randomly shuffled word order within in domain texts. This highlights the need for future work to develop more effective OOD detection approaches for the NLP problems, and our work provides a well-defined foundation for further research in this area.
## 1 Introduction
Systems based on artificial intelligence (AI) have to be safe and trustworthy (Amodei et al., 2016). Ensuring user reliance on these systems requires a cautious approach in making predictions. AI tools should avoid decisions on examples that significantly deviate from the training data. This is especially risky when the classifier shows excessive confidence in its incorrect decisions, leading to the propagation of errors in the system pipeline (Commission et al., 2019). However, current models are often trained under the closed-world assumption, limited to specific domains (Park et al., 2022). Test sets drawn from the same domain for evaluation may not reflect real-world scenarios accurately (Teney et al., 2020). This poses challenges when deploying such models in production environments (Schrouff et al., 2022).
Real-world data is often completely different from the training data. The change in data distribution can be caused by several factors such as user behavior, legal regulations, market trends or seasonal changes. In an _open-world_ scenario, the AI-based system can even be exposed to inputs that deviate from the trained task. A significant risk that may arise is the possibility of model overconfidence while predicting data of this nature. As a result, there is a business need for detecting examples outside the domain (Hendrycks and Gimpel, 2017). Out-of-distribution (OOD) detection techniques can be well applied in a production system with human-in-the-loop technology (Wu et al., 2022), where it is important to quickly identify whether an input sample is characterized by a distributional shift. Such an example should then be handled by a human expert in order to avoid potential misclassification by the model. The essence of such systems is to find a trade-off between accuracy and automation (Mosqueira-Rey et al., 2022) (Figure 1). This way, the model can achieve the highest possible performance on in-distribution (ID) data and difficult shifted data can be given to human verification, thus increasing the credibility of the overall system. The bottleneck here is a well-designed OOD detection method, which must be sensitive enough to capture all examples outside the domain.
Figure 1: Trustworthy mechanism in document processing platform. Classification models need additional method to detect OOD samples and provide them to human review.
The problem of OOD identification is mainly investigated for vision classification tasks (Yang et al., 2022; Kuan and Mueller, 2022), whereas in the field of NLP, studies on this topic are limited. We fill this gap by proposing a comprehensive analysis of existing OOD approaches for text classification tasks. In this work, we focus on the **post-hoc** techniques which are most suitable for business applications, i.e. they have to fulfil the requirement of smooth integration into existing systems, without the need for additional OOD training data or any changes in model architecture. Ultimately, we evaluated eight methods in two different scenarios. The first one groups test data into three splits according to their similarity to the in-distribution set: _Near-OOD_, _Far-OOD_ and _Distinct-OOD_ (Yang et al., 2021). The AI system is evaluated based on the degree of domain difference between training and test samples. The second scenario considers the division of datasets according to the type of distribution shift (Arora et al., 2021). There are many categories of distribution shift (Hupkes et al., 2022), but in this study, we consider two types - semantic and background. **Semantic shift** occurs when new labels appear, which may be due to the lack of a sufficient number of classes representing the training data or the emergence of new classes over time. In contrast, the **background shift** is class-independent. It appears when the characteristic features of text change (e.g. source origin, writing style), which can happen even within the same class. The reason may be language evolution, regional conditions, etc. - such factors are difficult to predict and adequately address in the training set. By preparing data separated into different kinds of shift, we gain in-depth insight into the kinds of data on which a particular OOD detection method performs better or worse.
We also provide a well-structured research environment that allows the full reproducibility of the achieved outcomes and evaluation of another NLP models. The source code is available on GitHub1. To summarize, our contribution is as follows:
Footnote 1: [https://github.com/mateuszbaransanok/TrustworthyAI](https://github.com/mateuszbaransanok/TrustworthyAI)
* we adjust the existing OOD detection techniques to the text classification problems,
* we comprehensively evaluate the revised methods using two different scenarios tailored to the NLP domain,
* we deliver the complete experimental framework for evaluating the OOD methods.
## 2 Related Work
In recent years, there has been a growing interest in developing robust methods that can detect out-of-distribution examples. The work of Hendrycks and Gimpel (2017) has played a significant role in advancing this field. Their Maximum Softmax Probability (MSP) method, which relies on the softmax output of a neural network, has become a reference for subsequent research and still remains as the solid baseline approach (Zhang et al., 2023). The benefit of the MSP was its independence from the specific task domain. Since then, many researchers have extended this method or proposed novel techniques to address the challenge of detecting OOD data.
The first to popularize the interest in the OOD topic were computer vision (CV) researchers (Bengio et al., 2011). The emerged techniques in this field were summarized in a survey by Yang et al. (2021). The authors proposed a unified framework that groups OOD detection methods into categories based on their common underlying mechanisms. Among them, the following ones can be distinguished: (1) **output-based**(Liu et al., 2020; Liang et al., 2018) techniques which detect OOD samples based on output vector obtained by classification model for given input; (2) **gradient-based**(Huang et al., 2021) focus on analyzing the fluctuation of the gradient flow through the model layers to verify that the input is OOD; (3) **density-based**(Zong et al., 2018) methods involve modeling a density function from the training set and then determining whether a new example belongs to the same distribution; (4) **distance-based**(Sun et al., 2022; Ren et al., 2021) measure the dissimilarity between a new input and the training data by computing standard metrics such as cosine similarity, Euclidean or Mahalanobis distance. Another work of Yang et al. (2022) provides a comprehensive evaluation of 13 methods for OOD detection in CV. Notably, the experimental results show that simple preprocessing techniques can be highly effective, outperforming even more sophisticated methods in identifying OOD examples. In addition, post-hoc methods have demonstrated considerable effectiveness in OOD detection and have made significant impact in this task. The NLP community is also more and more interested in addressing the challenge of OOD detection data, especially after the appearance of text processing automation systems. Despite the expectation that pre-trained language models (PLMs)
would generalize well to unseen data, many existing transformer-based architectures perform poorly in an open-world assumption setup. This was proven by the work Yang et al. (2022) where the authors created the GLUE-X benchmark to reliably test the robustness of PLMs against OOD samples exposure, without using any of the previously mentioned techniques dedicated to OOD. Their achieved results confirm the necessity of further development of OOD detection methods. Currently, researchers are continuously proposing techniques tailored for the NLP tasks Rawat et al. (2021); Zhou et al. (2021), revisiting existing ones Podolskiy et al. (2021) or designing completely novel approaches that can address specific shifts in data distribution Arora et al. (2021); Chen et al. (2023). The latter two publications particularly highlight the importance of dividing datasets into semantic and background shift sets, as they provide valuable findings and a better understanding of how the model works on different data types.
Evidently, there have been several NLP articles addressing OOD detection, but their comparison to existing methods has been limited. A comprehensive study which evaluates various OOD detection approaches on a larger scale and addressing the specific needs of businesses is still lacking. To fill this gap, we have developed a benchmark that provides a fair comparison of these techniques while testing their performance across different distributional shift scenarios. All the selected methods have been inspired by CV achievements, and we have specifically chosen those that can be easily integrated into an existing AI system with minimal complexity.
## 3 Benchmark Outline
This section provides an overview of the datasets and the model architecture, with a detailed description of the techniques reimplemented in our benchmark for detecting out-of-domain examples. The metrics used for evaluating the effectiveness of the detection methods are also presented.
### Datasets
**News Category Dataset**Misra (2022) is one of the biggest news datasets. It contains around 210k news headlines from HuffPost published between 2012 and 2022. The dataset comprises 42 classes that are heavily imbalanced; therefore, the most similar classes were combined to avoid confusion between them. Ultimately, we obtained 17 representative classes.
**Twitter Topic Classification**Antypas et al. (2022) is a topic classification dataset collected from Twitter posts. It consists of 3184 high-quality tweets that have been assigned to one of six classes.
**SST-2** (The Stanford Sentiment Treebank) Socher et al. (2013) is a corpus with fully labeled parse trees that allows for an analysis of the compositional effects in language sentiment. The corpus includes almost 70k sentences extracted from movie reviews. Sentences were annotated with regard to their polarity (positive or negative).
**IMDB**Maas et al. (2011) is a large collection of movie reviews from the Internet Movie Database created for the binary sentiment classification task. According to the original 10-point movie rating scale from the website, the dataset samples were filtered to include only highly polarized texts annotated as positive (\(\geq 7\)) or negative (\(\leq 4\)).
**Yelp Polarity Review**Zhang et al. (2015) dataset includes almost 600k customer reviews which are labeled as positive or negative based on the number of stars given by the reviewer. Specifically, texts with \(\leq 2\) stars are labeled as negative, while those with \(\geq 3\) are labeled as positive. Due to the large size of the dataset, we created a smaller version by randomly selecting a subset of 75k reviews.
**Language Detection Dataset**Saji (2021) is a small dataset for the language detection task. It contains texts in 17 different languages. For benchmark purposes, we filtered out languages that do not use the Latin alphabet. We also excluded English texts to create a clear out-of-distribution dataset. Finally, the dataset consists of around 6k samples, all of which are used for OOD evaluation.
**20 Newsgroups**McGraw Hill (1995) consists of around 18k newsgroup posts on 20 topics. It is divided into two sets for training and evaluation. Moreover, we allocated an additional subset from the training set for validation purposes.
### Model
In all experiments, we used the transformer-based Vaswani et al. (2017) RoBERTa\({}_{\text{base}}\) Liu et al. (2019) model as a backbone with a fully connected layer as the classification head. The model was pretrained on English corpora, but it supports multiple languages.
### Methods
We decided to compare **post-hoc** methods, which can be applied to already trained models. They mainly use information based on model statistics, such as intermediate layer values, gradients, or the non-deterministic properties of dropout regularization. Their implementation is technically straightforward and independent of the type of model used.
An overview of our benchmark methodology is outlined in Figure 2. In addition to label prediction, we obtain a real-valued _confidence_ score that indicates the level of confidence that the model has in whether the given sample belongs to the ID data. We reimplemented eight OOD detection techniques and adapted them to the NLP classification pipeline.
(1) **Maximum Softmax Probability (MSP)**(Hendrycks and Gimpel, 2017) employs the softmax score to check the certainty of whether an example belongs to a domain - we refer to it as the baseline method in our work.
(2) **Energy-based**(Liu et al., 2020) uses an energy score function to indicate model confidence.
(3) **Rectified Activations (ReAct)**(Sun et al., 2021) is a simple technique for reducing model overconfidence on OOD examples by truncating the high activations during evaluation.
(4) **KL-Matching (KLM)**(Hendrycks et al., 2022) calculates the minimum KL-divergence between the softmax probabilities and the mean class-conditional distributions.
(5) **GradNorm**(Huang et al., 2021) utilizes information obtained from the gradient space of model's classification layer. This approach uses the vector norm of gradients to distinguish between ID and OOD samples, with the assumption that higher norm values correspond to in-distribution data.
(6) **Directed Sparsification (DICE)**(Sun and Li, 2022) selectively chooses a subset of weights through sparsification, which helps to eliminate irrelevant information from the output.
(7) **Virtual-logit Matching (ViM)**(Wang et al., 2022) combines information from feature space (PLM embedding) and output logits, providing both class-agnostic and class-dependent knowledge simultaneously for better separation of OOD data.
(8) **K-nearest neighbors (KNN)**(Sun et al., 2022) computes the distance between the embedding of an input example and the embeddings of the training set, and uses it to determine whether the example belongs to the ID or not.
The first four methods use signals originating from the output layer of the model. GradNorm focuses solely on the gradients that flow through the classification head, while methods from 6 to 8 operate on the embedding of a PLM. Most techniques (specifically no. 3-4, 6-8) need an initial configuration on the training or validation set to estimate the required statistics for ID data. To ensure consistency in the benchmarking process, the hyperparameters for the above methods were set to the values recommended in their original papers.
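To make the output-based scores concrete, the following is a minimal sketch (not the benchmark implementation itself) of how the MSP and energy confidence scores can be computed from a classifier's logits; the example logits and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum Softmax Probability: higher means more likely in-distribution."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Negative free energy (Liu et al., 2020): higher means more likely in-distribution."""
    return temperature * torch.logsumexp(logits / temperature, dim=-1)

# Example: logits for a batch of two inputs and three classes.
logits = torch.tensor([[4.0, 0.5, -1.0],   # confident prediction -> likely ID
                       [0.3, 0.2, 0.1]])   # flat prediction -> possibly OOD
print(msp_score(logits), energy_score(logits))
```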
### Metrics
To compare the chosen methods, we used the three most common metrics for OOD detection.
**AUROC** calculates the area under the Receiver Operating Characteristic (ROC) curve. The ROC curve plots the true positive rate against the false positive rate, and a larger area under the curve indicates better performance. This was used as our primary evaluation metric.
**AUPR-IN** measures the area under the Precision-Recall (PR) curve. The PR curve displays how well the method can identify true positives with high precision, and AUPR provides a measure of overall performance. The _"IN"_ suffix indicates that this metric pertains to in-distribution data.
**FPR@95** is the false positive rate when the true positive rate is set to 95%. Lower scores indicate better performance.
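For reference, all three metrics can be derived from per-example confidence scores; the sketch below uses scikit-learn, treats ID as the positive class, and approximates AUPR-IN with average precision. It is an illustrative assumption rather than the exact evaluation code of the benchmark, and the score arrays are hypothetical inputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

def ood_metrics(scores_id: np.ndarray, scores_ood: np.ndarray) -> dict:
    """Compute AUROC, AUPR-IN and FPR@95 from confidence scores (higher = more ID)."""
    y_true = np.concatenate([np.ones_like(scores_id), np.zeros_like(scores_ood)])
    y_score = np.concatenate([scores_id, scores_ood])
    auroc = roc_auc_score(y_true, y_score)
    aupr_in = average_precision_score(y_true, y_score)   # ID is the positive class
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fpr_at_95 = fpr[np.argmax(tpr >= 0.95)]               # FPR at the first TPR >= 95%
    return {"AUROC": auroc, "AUPR-IN": aupr_in, "FPR@95": fpr_at_95}
```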
Figure 2: Benchmark schema – fine-tuned PLM-based classifier followed by OOD detection method.
## 4 Data Preparation
In our study, we have paid particular attention to providing a complete and unbiased comparison of OOD detection methods. To achieve this goal, we adopted two diverse perspectives: one inspired by the field of computer vision Yang et al. (2022) and the other drawn from works dedicated to the NLP domain Rawat et al. (2021); Arora et al. (2021).
### Scenario 1
The first perspective is intended to provide a detailed analysis of the considered techniques based on the similarity between OOD examples and the training set. The degree of similarity is defined here in a human-intuitive way, taking into account factors such as thematic proximity, task dissimilarity, or sentence correctness.
As the base in-distribution data, we chose the _News Category_ dataset, using the seven most popular classes (**NC/I**). The remaining classes were treated as the out-of-distribution split (**NC/O**), which represents data under a close semantic shift. The _Twitter Topic Classification_ dataset has categories similar to those in the _News Category_ dataset, but its sentence construction is significantly different. Together, these two sets form the **Near-OOD** setup. Another collection, **Far-OOD**, includes datasets with reviews of movies, hotels and restaurants that are vastly different from the _NC/I_ data - it is a combination of _SST-2_, _Yelp_ and _IMDB_. Additionally, we prepared one more group named **Distinct-OOD**, containing the _Language Detection_ dataset. By including non-English texts there, we obtain a distinct set of tokens that the RoBERTa model has not encountered before, creating a dataset completely separate from the in-distribution data.
Finally, we also designed two collections derived from the _News Category_ dataset by randomly shuffling the words available within each category; a sketch of this construction is given below. The new dataset, called _News Category Random_, retained the original number of examples and the number of words in each sample. These sets aim to examine how the classification system behaves when presented with input sentences that are completely detached from their original context. The previous partition into ID (**NCR/I**) and OOD (**NCR/O**) subsets was maintained.
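The following sketch shows one plausible way to build such a shuffled collection; the exact sampling procedure used for _News Category Random_ may differ, so the helper below is illustrative only.

```python
import random

def shuffle_within_category(texts_by_category: dict, seed: int = 0) -> dict:
    """Rebuild each example from words drawn at random from its category's word pool,
    keeping the original number of examples and words per example."""
    rng = random.Random(seed)
    randomized = {}
    for category, texts in texts_by_category.items():
        pool = [word for text in texts for word in text.split()]   # all words in the category
        randomized[category] = [
            " ".join(rng.sample(pool, len(text.split()))) for text in texts
        ]
    return randomized
```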
### Scenario 2
This scenario investigated the performance of detection methods for OOD examples under semantic and background shift. For semantic shift, we utilized the _20 Newsgroups_ dataset, which is a hierarchical collection of documents. Among the four top-level categories, we selected three - **Computer**, **Sports**, and **Politics** - as training sets for the model, while excluding the _"misc"_ category due to potential data leakage issues. Subsequently, we generated various combinations of these categories, treating each one in turn as the in-distribution set, while considering the others as OOD data. For example, the model could be trained on samples from the Computer class (ID dataset) and evaluated later on Sports and Politics (OOD).
In order to test the impact of background shift, we took three sentiment classification datasets - _IMDB_, _SST-2_ and _Yelp_, which are based on user reviews and represent different domains. Although these datasets have similar linguistic properties, the topics they address are distinct. Again, we constructed various combinations of these collections by treating each one as the ID set and the others as OOD sets.
## 5 Experiments
In this section, we describe the details of a training procedure and present the outcomes from the experiments.
### Training Setup
PLM fine-tuning ran for at most \(100\) epochs with an early stopping mechanism Raskutti et al. (2011) applied (patience \(=10\) epochs). This technique allowed us to conserve computational resources while still obtaining high-performing models. The learning rate was always set to \(2e-5\). To prevent overfitting and enhance the model's generalization capabilities, we used a weight decay of \(w_{d}=0.01\) with
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Dataset** & **\#Classes** & **Train / Val / Test** & **Avg. words** \\ \hline NC/I & 7 & 66223 / 26475 / 39688 & 9.95 \\ NC/O & 10 & - / - / 48522 & 9.77 \\ Twitter & 6 & - / - / 3184 & 29.80 \\ IMDB & 2 & 25000 / 50000 / 20000 & 231.15 \\ SST-2 & 2 & 43221 / 5000 / 20000 & 9.53 \\ Yelp & 2 & 50000 / 5000 / 20000 & 133.11 \\ Language & 9 & - / - / 5864 & 19.08 \\ NCR/I & 7 & - / - / 39688 & 9.95 \\ NCR/O & 10 & - / - / 48522 & 9.77 \\ Computer & 5 & 2965 / 456 / 1460 & 218.63 \\ Politics & 4 & 1959 / 315 / 979 & 406.53 \\ Sports & 4 & 2363 / 432 / 1182 & 224.43 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets setup for experiments.
the Adam optimizer (Zhang, 2018). The best-performing model was selected based on the F1-score achieved on the validation set, and the final results were reported on the test set (see Appendix A). To minimize the influence of randomness on the outcomes, we trained the PLM five times for each task using different initial seeds.
During each experiment, the PLM was fine-tuned on ID data, which consisted of training and validation splits. The evaluation of the OOD detection methods themselves was performed on predefined test data. A complete overview of the split sizes along with the number of classes in all data collections is presented in Table 1.
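A minimal sketch of the described fine-tuning setup with the Hugging Face `transformers` Trainer is given below; the dataset objects and the `compute_metrics` function are assumed to be prepared elsewhere, and the argument names follow the library versions we are familiar with.

```python
from transformers import (AutoModelForSequenceClassification, TrainingArguments,
                          Trainer, EarlyStoppingCallback)

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=7)

args = TrainingArguments(
    output_dir="./ood-classifier",
    num_train_epochs=100,                 # upper bound; early stopping usually ends sooner
    learning_rate=2e-5,
    weight_decay=0.01,                    # Adam(W) is the Trainer's default optimizer
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",           # model selection by validation F1-score
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,               # placeholder: tokenized ID training split
    eval_dataset=val_ds,                  # placeholder: tokenized ID validation split
    compute_metrics=compute_metrics,      # placeholder: returns {"f1": ...}
    callbacks=[EarlyStoppingCallback(early_stopping_patience=10)],
)
trainer.train()
```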
### Results
The outcomes of the experiments on data prepared in the first scenario (Section 4.1) are shown in Table 2. _KNN_ clearly outperformed the other OOD detection techniques on all three data groups. The _Energy-based_ method also stands out with good results, as does _ViM_, except on the IMDB and Yelp datasets (worse than the _MSP_ baseline). As expected, the evaluation metrics on the NC/O dataset were the lowest among the _Near-OOD_ and _Far-OOD_ divisions. This dataset was split off from the original dataset used in training, making it the most difficult to properly identify as OOD due to the distributional closeness. The most challenging of the _Far-OOD_ collections appeared to be _SST-2_, probably because of the small average number of words per example. _Language_ turned out to be the easiest dataset for detecting OOD samples, and almost all methods performed well on it. Two approaches performed worst on the presented NLP tasks, namely _DICE_ and _KLM_. Their scores were always worse than _MSP_, sometimes even nearly random (a little above 50%) - _DICE_ on NC/O and _KLM_ on Twitter.
Interesting results can be seen in the last part of Table 2. Randomizing the words of the NC/O dataset (which created NCR/O) significantly increased the model's confidence in detecting OOD examples compared with the initial NC/O samples. However, the OOD methods could not cope well with the shuffled in-domain _News Category_ data (NCR/I), which a human would recognize as OOD.
Table 3 presents AUROC scores obtained from the second scenario (Section 4.2) evaluation. The results demonstrate that the _ViM_ method is more effective in detecting OOD samples with semantic shift to ID data. However, for background shift data, _ViM_ is not always the best and is outperformed by _KNN_ on IMDB and Yelp datasets. The SST-2 dataset proved to be problematic again, but
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Near-OOD**} & \multicolumn{3}{c}{**Far-OOD**} & \multicolumn{3}{c}{**Distinct-OOD**} \\ \cline{2-9}
**Method** & **NC/O** & **Twitter** & **IMDB** & **SST-2** & **Yelp** & **Language** & **NCR/I** & **NCR/O** \\ \hline MSP & 74.2\(\pm\)0.3 & 74.8\(\pm\)2.4 & 96.6\(\pm\)3.1 & 84.2\(\pm\)3.3 & 95.3\(\pm\)1.5 & 95.1\(\pm\)1.9 & 59.0\(\pm\)0.8 & 80.5\(\pm\)0.6 \\ Energy & 77.6\(\pm\)0.4 & 84.8\(\pm\)1.9 & 99.6\(\pm\)0.5 & 92.6\(\pm\)2.6 & 98.6\(\pm\)0.7 & 98.7\(\pm\)0.6 & 60.1\(\pm\)1.0 & 84.9\(\pm\)0.7 \\ GradNorm & 77.2\(\pm\)0.5 & 81.8\(\pm\)2.7 & 99.0\(\pm\)1.1 & 90.8\(\pm\)2.2 & 97.8\(\pm\)0.8 & 97.8\(\pm\)0.7 & 60.5\(\pm\)1.4 & 85.0\(\pm\)0.8 \\ KLM & 62.9\(\pm\)0.4 & 54.0\(\pm\)3.8 & 92.5\(\pm\)6.2 & 67.7\(\pm\)4.6 & 88.9\(\pm\)3.7 & 86.7\(\pm\)3.9 & 50.6\(\pm\)0.1 & 68.5\(\pm\)0.6 \\ ReAct & 77.5\(\pm\)0.4 & 84.5\(\pm\)2.0 & 99.6\(\pm\)0.5 & 92.4\(\pm\)2.8 & 98.6\(\pm\)0.7 & 98.7\(\pm\)0.6 & 60.0\(\pm\)1.0 & 84.7\(\pm\)0.7 \\ DICE & 58.2\(\pm\)0.6 & 60.9\(\pm\)3.2 & 76.6\(\pm\)5.8 & 60.9\(\pm\)1.4 & 84.4\(\pm\)2.2 & 69.3\(\pm\)2.8 & 51.2\(\pm\)0.9 & 60.4\(\pm\)1.4 \\ KNN & **80.1\(\pm\)0.2** & **92.9\(\pm\)1.2** & **99.8\(\pm\)0.1** & **96.4\(\pm\)1.1** & **99.5\(\pm\)0.1** & **99.6\(\pm\)0.1** & **67.6\(\pm\)1.3** & **88.7\(\pm\)0.5** \\ ViM & 79.9\(\pm\)0.2 & 89.2\(\pm\)1.5 & 90.6\(\pm\)3.1 & 96.0\(\pm\)0.9 & 92.9\(\pm\)1.6 & 98.1\(\pm\)0.8 & 60.7\(\pm\)0.8 & 86.1\(\pm\)0.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: AUROC (%) and standard deviations for methods evaluated on datasets from first scenario.
Figure 3: The performance of the methods is presented in AUROC depending on the type of distribution shift. The baseline method and its asymptotes are highlighted in pink color to facilitate comparison with other methods.
only when used as a training set. It is worth noting that the average text length in SST-2 differs considerably from the IMDB and Yelp collections, which mainly contain longer texts. These observations suggest that _KNN_ is more stable across different data characteristics. To further emphasize the importance of comparing methods based on the type of shift, we created a visualization in Figure 3. The _ReAct_, _Energy_, and _GradNorm_ techniques turned out to be better than the baseline, but only in the semantic shift case.
To summarize, either _KNN_ or _ViM_ is the preferred choice among all the analyzed OOD detection approaches. Other reported metric values (AUPR-IN and FPR@95) from all experiments are attached in Appendix B.
### Computational Resources
All experiments were conducted on a workstation equipped with a mid-range _Nvidia RTX 3060_ GPU with 12GB of memory, a high-end _Intel(R) Core(TM) i9-10900X_ CPU with 20 cores and 40 threads, and 256 GB RAM. These resources provided sufficient capacity for running the experiments and training the models used in this work, including analysis and processing of large datasets. In total, we trained 35 models, taking 222 GPU-hours while evaluation alone lasted 124 GPU-hours.
## 6 Conclusions
The latest advancements in OOD detection techniques have surpassed the conventional _MSP_ baseline. In this work, we applied some of them to NLP classification problems, selecting only post-hoc approaches because of their easy integration with an already trained PLM. Most of the examined techniques achieved better results than _MSP_, but their performance varied when subjected to different types of distributional shift. Background shift proved particularly challenging, with the majority of methods struggling to properly distinguish OOD examples. The _KNN_ and _ViM_ methods were found to be the most effective, and their performance was also stable. Hence, they are better alternatives to _MSP_ for out-of-distribution detection. However, it should be kept in mind that the _ViM_ method is likely sensitive to cases where the language model was trained on short texts and later exposed to long out-of-domain texts.
Our analysis of the unique _Distinct-OOD_ scenario allowed us to draw interesting findings. The tested methods were able to identify texts in different languages very easily as OOD examples, but they had problems detecting OOD examples in the _News Category Random_ data with shuffled words. This means that PLMs, despite their ability to detect contextual nuances in text, still tend to behave like a Bag-of-Words model (Zhang et al., 2010) in text classification tasks. Business-wise, such structurally disturbed examples should not be further processed by AI systems. Therefore, OOD methods employed in NLP should better address semantic disorders in input sentences.
In conclusion, the overall performance of current OOD detection techniques is still low and unsatisfactory, particularly when presented with the _Near-OOD_ samples. Further research is necessary for the development of OOD detection methods, es
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline
**ID** & **OOD** & **MSP** & **Energy** & **GradNorm** & **KLM** & **ReAct** & **DICE** & **KNN** & **ViM** \\ \hline \multirow{4}{*}{Computer} & Politics & 91.5\({}_{\pm 1.9}\) & 96.3\({}_{\pm 1.1}\) & 95.4\({}_{\pm 0.9}\) & 78.0\({}_{\pm 7.3}\) & 96.2\({}_{\pm 1.2}\) & 34.6\({}_{\pm 13.2}\) & 97.0\({}_{\pm 0.5}\) & **98.6\({}_{\pm 0.3}\)** \\ & Sports & 89.8\({}_{\pm 2.7}\) & 94.9\({}_{\pm 1.6}\) & 94.1\({}_{\pm 1.6}\) & 74.5\({}_{\pm 4.6}\) & 94.6\({}_{\pm 1.7}\) & 51.9\({}_{\pm 6.9}\) & 95.7\({}_{\pm 0.9}\) & **97.7\({}_{\pm 0.6}\)** \\ \multirow{4}{*}{Politics} & Computer & 94.4\({}_{\pm 0.8}\) & 96.0\({}_{\pm 0.6}\) & 95.5\({}_{\pm 0.7}\) & 82.8\({}_{\pm 4.6}\) & 95.9\({}_{\pm 0.6}\) & 63.9\({}_{\pm 3.2}\) & 96.9\({}_{\pm 0.2}\) & **98.3\({}_{\pm 0.2}\)** \\ & Sports & 91.4\({}_{\pm 1.1}\) & 93.4\({}_{\pm 0.9}\) & 92.9\({}_{\pm 1.0}\) & 72.3\({}_{\pm 5.6}\) & 93.3\({}_{\pm 0.9}\) & 58.6\({}_{\pm 2.4}\) & 95.3\({}_{\pm 0.4}\) & **97.3\({}_{\pm 0.3}\)** \\ \multirow{4}{*}{Sports} & Computer & 95.7\({}_{\pm 0.6}\) & 97.0\({}_{\pm 0.9}\) & 96.8\({}_{\pm 0.5}\) & 81.6\({}_{\pm 3.9}\) & 96.9\({}_{\pm 0.9}\) & 58.1\({}_{\pm 7.6}\) & 97.6\({}_{\pm 0.4}\) & **98.5\({}_{\pm 0.2}\)** \\ & Politics & 95.3\({}_{\pm 0.2}\) & 96.5\({}_{\pm 0.6}\) & 96.4\({}_{\pm 0.5}\) & 79.9\({}_{\pm 2.5}\) & 96.5\({}_{\pm 0.7}\) & 52.4\({}_{\pm 11.5}\) & 97.2\({}_{\pm 0.3}\) & **98.0\({}_{\pm 0.1}\)** \\ \hline \multirow{4}{*}{IMDB} & SST-2 & 85.3\({}_{\pm 0.8}\) & 84.3\({}_{\pm 1.8}\) & 77.8\({}_{\pm 3.0}\) & 61.2\({}_{\pm 1.7}\) & 84.5\({}_{\pm 1.9}\) & 84.6\({}_{\pm 3.3}\) & **97.8\({}_{\pm 1.2}\)** & 97.3\({}_{\pm 0.7}\) \\ & Yelp & 76.0\({}_{\pm 3.3}\) & 74.9\({}_{\pm 1.1}\) & 66.2\({}_{\pm 3.6}\) & 32.0\({}_{\pm 1.0}\) & 75.3\({}_{\pm 3.4}\) & 49.6\({}_{\pm 8.6}\) & 97.5\({}_{\pm 1.1}\) & **98.4\({}_{\pm 0.8}\)** \\ \multirow{4}{*}{SST-2} & IMDB & 83.2\({}_{\pm 1.4}\) & 82.7\({}_{\pm 2.2}\) & 70.3\({}_{\pm 3.2}\) & 55.0\({}_{\pm 2.7}\) & 83.3\({}_{\pm 2.4}\) & 34.5\({}_{\pm 10.7}\) & **87.2\({}_{\pm 1.7}\)** & 83.9\({}_{\pm 3.3}\) \\ \multirow{4}{*}{Yelp} & Yelp & 75.7\({}_{\pm 2.2}\) & 75.0\({}_{\pm 3.1}\) & 61.3\({}_{\pm 2.7}\) & 51.3\({}_{\pm 3.0}\) & 75.7\({}_{\pm 3.4}\) & 35.4\({}_{\pm 8.4}\) & **87.8\({}_{\pm 0.4}\)** & 80.1\({}_{\pm 2.8}\) \\ \multirow{4}{*}{Yelp} & IMDB & 79.5\({}_{\pm 0.5}\) & 79.2\({}_{\pm 1.6}\) & 71.7\({}_{\pm 1.9}\) & 38.6\({}_{\pm 1.3}\) & 79.5\({}_{\pm 1.6}\) & 26.8\({}_{\pm 5.1}\) & 84.7\({}_{\pm 0.8}\) & **88.6\({}_{\pm 0.7}\)** \\ \multirow{4}{*}{Yelp} & SST-2 & 91.6\({}_{\pm 0.5}\) & 91.5\({}_{\pm 0.9}\) & 86.1\({}_{\pm 1.0}\) & 59.9\({}_{\pm 2.5}\) & 91.7\({}_{\pm 0.9}\) & 55.8\({}_{\pm 8.5}\) & 98.5\({}_{\pm 0.3}\) & **99.0\({}_{\pm 0.1}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: AUROC (%) and standard deviations for methods evaluated on datasets from the second scenario. The first part of the table refers to semantic shift, while the second part refers to background shift.
pecially in the field of NLP, where more and more document-processing automation systems are being developed and where ensuring reliability is important for users. Our work addresses the need for a comprehensive framework to evaluate the quality of OOD detection and provides easy extensibility to emerging methods.
## 7 Limitations
While our study provides valuable insights, it is important to keep in mind its limitations. Firstly, it was confined to text classification and did not include other NLP problems such as Named Entity Recognition (NER) [22], Question Answering (QA) [20], etc. Expanding this research to a wider range of tasks would provide a better understanding of the methods' performance in diverse data scenarios. Additionally, the inclusion of a task shift could be valuable, where the model is trained on a single task but the OOD data come from a totally different prediction problem.
Secondly, we conducted our experiments using only the RoBERTa model. We chose a widely used language model for text classification, but there are several other architectures worth testing, especially large language models (LLMs) [23], which are now becoming extremely popular. A more comprehensive evaluation of models and methods could provide more insight into whether the development of transformer-based methods contributes to better detection of OOD data.
Finally, due to restricted computational time, we did not perform a hyperparameter search for either the model or the methods. We simply used the recommended values from the original publications. This may have affected the obtained results, and it is certainly an aspect worth investigating in the future.
## 8 Ethics Statement
The authors believe that their work does not raise any ethical questions of harm or discrimination. Moreover, they acknowledge that the benchmark has a wide range of potential applications and want to make it clear that they are not responsible for any unethical applications of their work.
## Acknowledgements
The research was conducted under the Implementation Doctorate programme of Polish Ministry of Science and Higher Education (DWD/6/0322/2022) with cooperation of the Artificial Intelligence Department at Wroclaw University of Science and Technology. It was partially co-funded by the European Regional Development Fund within the Priority Axis 1 "Enterprises and innovation", Measure 1.2. "Innovative enterprises, sub-measure 1.2.1. "Innovative enterprises - horizontal competition" as part of ROP WD 2014-2020, support contract no. RPDS.01.02.01-02-0063/20-00. The work conducted by Maciej Zieba was supported by the National Centre of Science (Poland) Grant No. 2021/43/B/ST6/02853.
|
2302.13871
|
Iterated Filters for Nonlinear Transition Models
|
A new class of iterated linearization-based nonlinear filters, dubbed
dynamically iterated filters, is presented. Contrary to regular iterated
filters such as the iterated extended Kalman filter (IEKF), iterated unscented
Kalman filter (IUKF) and iterated posterior linearization filter (IPLF),
dynamically iterated filters also take nonlinearities in the transition model
into account. The general filtering algorithm is shown to essentially be a
(locally over one time step) iterated Rauch-Tung-Striebel smoother. Three
distinct versions of the dynamically iterated filters are especially
investigated: analogues to the IEKF, IUKF and IPLF. The developed algorithms
are evaluated on 25 different noise configurations of a tracking problem with a
nonlinear transition model and linear measurement model, a scenario where
conventional iterated filters are not useful. Even in this "simple" scenario,
the dynamically iterated filters are shown to have superior root mean-squared
error performance as compared with their respective baselines, the EKF and UKF.
Particularly, even though the EKF diverges in 22 out of 25 configurations, the
dynamically iterated EKF remains stable in 20 out of 25 scenarios, only
diverging under high noise.
|
Anton Kullberg, Isaac Skog, Gustaf Hendeby
|
2023-02-27T15:22:47Z
|
http://arxiv.org/abs/2302.13871v3
|
# Iterated Filters for Nonlinear Transition Models
###### Abstract
A new class of iterated linearization-based nonlinear filters, dubbed dynamically iterated filters, is presented. Contrary to regular iterated filters such as the iterated extended Kalman filter (IEKF), iterated unscented Kalman filter (IUKF) and iterated posterior linearization filter (IPLF), dynamically iterated filters also take nonlinearities in the transition model into account. The general filtering algorithm is shown to essentially be a (locally over one time step) iterated Rauch-Tung-Striebel smoother. Three distinct versions of the dynamically iterated filters are especially investigated: analogues to the IEKF, IUKF and IPLF. The developed algorithms are evaluated on 25 different noise configurations of a tracking problem with a nonlinear transition model and linear measurement model, a scenario where conventional iterated filters are not useful. Even in this "simple" scenario, the dynamically iterated filters are shown to have superior root mean-squared error performance as compared with their respective baselines, the EKF and UKF. Particularly, even though the EKF diverges in 22 out of 25 configurations, the dynamically iterated EKF remains stable in 20 out of 25 scenarios, only diverging under high noise.
## I Introduction
State estimation in dynamical systems is a universal problem occurring in the fields of engineering, robotics, economics, etc. State estimation requires a system model describing the dynamical evolution of the system and a measurement model relating the measured quantities to the state of the system. If the model is affine with additive Gaussian noise, the most well-known state estimation algorithm is the analytically tractable Kalman filter, which is the optimal estimator in the _mean-squared error_ (mse) sense [1].
In many practical problems, a nonlinear system model is necessary to accurately describe the system. This means that the state estimation problem is no longer analytically tractable and approximate inference techniques must be used. Approximate inference in state-space models is a well-studied field in signal processing, machine learning, etc. Here, we shall focus on linearization-based approximate inference techniques. These inference techniques linearize the nonlinear model locally (in each time instance) and then employ the Kalman filter. Analytical linearization leads to the _extended Kalman filter_ (ekf), while sigma-point filters, such as the _unscented Kalman filter_ (ukf) and the _cubature Kalman filter_ (ckf), can be thought of as statistical linearization filters [1, 2, 3].
General (Gaussian) state-space models, in the form of a transition model and a measurement model, may equivalently be probabilistically interpreted as a transition density and a measurement density. Under this interpretation, the linearization-based approximate inference techniques can be thought of as approximating the transition and measurement densities, e.g.,
\[\begin{aligned}\mathbf{x}_{k+1}&=\mathbf{f}(\mathbf{x}_{k},\mathbf{w}_{k})&\quad\to&\quad p(\mathbf{x}_{k+1}|\mathbf{x}_{k})\overset{a}{\approx}q(\mathbf{x}_{k+1}|\mathbf{x}_{k})\\ \mathbf{y}_{k}&=\mathbf{h}(\mathbf{x}_{k},\mathbf{e}_{k})&\quad\to&\quad p(\mathbf{y}_{k}|\mathbf{x}_{k})\overset{a}{\approx}q(\mathbf{y}_{k}|\mathbf{x}_{k}),\end{aligned}\]
where \(p(\mathbf{x}_{k+1}|\mathbf{x}_{k})\) and \(p(\mathbf{y}_{k}|\mathbf{x}_{k})\) are the transition and measurement density and \(q(\mathbf{x}_{k+1}|\mathbf{x}_{k})\) and \(q(\mathbf{y}_{k}|\mathbf{x}_{k})\) the corresponding approximations. Particularly, the linearization-based filters assume affine Gaussian densities for \(q(\mathbf{x}_{k+1}|\mathbf{x}_{k})\) and \(q(\mathbf{y}_{k}|\mathbf{x}_{k})\) and the Kalman filter is then applied to this "auxiliary" model. The quality of the auxiliary model, and in extension the estimation performance of linearization-based filters, is thus highly dependent on the point (distribution in the statistical case) about which the models are linearized. Typically, the linearization point (distribution) is chosen to be the mean (distribution) of the current state estimate. However, a large error in the state estimate can lead to significant linearization errors that may cause even larger estimation errors in the next time step. This may, in the worst case, cause the filter to diverge. To alleviate such issues, several variants of iterated filters have been developed, such as the _iterated extended Kalman filter_ (iekf), the _iterated unscented Kalman filter_ (iukf) and the _iterated posterior linearization filter_ (iplf) [4, 5, 6, 7, 8]. These filters essentially iterate the measurement update, where each iteration the measurement model is re-linearized with the "latest" iterate. The research efforts within the field of iterated filters have particularly focused on finding a better linearization point for the measurement model, which is motivated by the fact that nonlinearities in the measurement model (likelihood) affect the resulting state estimate to a greater extent than nonlinearities in the transition model (prior). Nevertheless, these methods are for instance not useful in the case of a nonlinear transition model but linear measurement model.
In this paper, we seek to fill this gap by developing a class of iterated filters encompassing both the transition model and the measurement model in the iterative process, which we dub dynamically iterated filters. Note that a dynamically iterated filter based on posterior linearization was first derived in [9] for models with non-additive state transition noise. Further, the L-scan iplf in [10] is somewhat similar to the dynamical iplf developed here, but requires access to past observations
and is thus not strictly a filter. In this paper, we particularly focus on additive noise models and treat both analytical and statistical linearization in a common framework. The algorithms developed here are essentially dynamically iterated analogues of the iekf, iukf and iplf, as well as other iterated sigma-point filters, and thus do not require access to past observations. These new iterative algorithms encompass both the transition model and the measurement model. Thereby, the proposed algorithms constitute a generalization of conventional iterated filters. To illustrate the benefits of the proposed algorithms, it is empirically shown that iterating over the transition linearization improves the estimation performance even in the case of a linear measurement model. Thus, the contributions are twofold:
* A detailed derivation of dynamically iterated filters
* An extensive numerical evaluation of the developed algorithms as compared to standard nonlinear filters
The paper is organized as follows. In Section II, analytical and statistical linearization as well as the (affine) Kalman smoother equations are restated for completeness. In Section III, the state estimation problem is formulated in terms of approximate transition and measurement densities. Section IV derives the dynamically iterated filters and connects the final solution to iterated (affine) smoothers. Lastly, Section V provides a numerical example of the developed algorithm in a tracking scenario where conventional iterated filters are not useful.
## II Background
For clarity, we here present analytical and statistical linearization in a common framework, as well as restate the well-known Kalman smoother equations.
### _(Affine) Kalman Smoother_
The well-known Kalman filter and _Rauch-Tung-Striebel_ (rts) smoother equations are repeated here for clarity in terms of a time update, measurement update, and a smoothing step. These can for instance be found in [11]. Assume an affine state-space model with additive Gaussian noise, such as
\[\mathbf{x}_{k+1} =\mathbf{A}_{\mathbf{f}}\mathbf{x}_{k}+\mathbf{b}_{\mathbf{f}}+ \tilde{\mathbf{w}}_{k} \tag{1a}\] \[\mathbf{y}_{k} =\mathbf{A}_{\mathbf{h}}\mathbf{x}_{k}+\mathbf{b}_{\mathbf{h}}+ \tilde{\mathbf{e}}_{k}. \tag{1b}\]
Here, \(\tilde{\mathbf{w}}_{k}\sim\mathcal{N}(\tilde{\mathbf{w}}_{k};\mathbf{0}, \mathbf{Q}+\boldsymbol{\Omega}_{\mathbf{f}})\) and \(\tilde{\mathbf{e}}_{k}\sim\mathcal{N}(\tilde{\mathbf{e}}_{k};\mathbf{0}, \mathbf{R}+\boldsymbol{\Omega}_{\mathbf{h}})\) are assumed to be mutually independent. Note that usually, \(\boldsymbol{\Omega}_{\mathbf{f}}=\boldsymbol{\Omega}_{\mathbf{h}}=\boldsymbol{ 0}\). For this model, the (affine) Kalman smoother update equations are given by Algorithm 1.
### _Analytical and Statistical Linearization_
Given a nonlinear model
\[\mathbf{z}=\mathbf{g}(\mathbf{x}),\]
we wish to find an affine representation
\[\mathbf{g}(\mathbf{x})\approx\mathbf{A}\mathbf{x}+\mathbf{b}+\eta, \tag{5}\]
with \(\eta\sim\mathcal{N}(\eta;\boldsymbol{0},\boldsymbol{\Omega})\). In this affine representation, there are three free parameters: \(\mathbf{A},\mathbf{b}\) and \(\boldsymbol{\Omega}\). Analytical linearization through first-order Taylor expansion selects the parameters as
\[\mathbf{A}=\frac{d}{d\mathbf{x}}\mathbf{g}(\mathbf{x})|_{\mathbf{x}=\bar{ \mathbf{x}}},\quad\mathbf{b}=\mathbf{g}(\mathbf{x})|_{\mathbf{x}=\bar{ \mathbf{x}}}-\mathbf{A}\bar{\mathbf{x}},\quad\boldsymbol{\Omega}=\boldsymbol{ 0}, \tag{6}\]
where \(\bar{\mathbf{x}}\) is the point about which the function \(\mathbf{g}(\mathbf{x})\) is linearized. Note that \(\boldsymbol{\Omega}=\boldsymbol{0}\) essentially implies that the linearization is assumed to be error free.
Statistical linearization instead linearizes w.r.t. a distribution \(p(\mathbf{x})\). Assuming that such a distribution \(p(\mathbf{x})=\mathcal{N}(\mathbf{x};\bar{\mathbf{x}},\mathbf{P})\) is given, statistical linearization selects the affine parameters as
\[\mathbf{A} =\Psi^{\top}\mathbf{P}^{-1} \tag{7a}\] \[\mathbf{b} =\bar{\mathbf{z}}-\mathbf{A}\bar{\mathbf{x}}\] (7b) \[\boldsymbol{\Omega} =\Phi-\mathbf{A}\mathbf{P}\mathbf{A}^{\top}\] (7c) \[\bar{\mathbf{z}} =\mathbb{E}[\mathbf{g}(\mathbf{x})]\] (7d) \[\Psi =\mathbb{E}[(\mathbf{x}-\hat{\mathbf{x}})(\mathbf{g}(\mathbf{x}) -\bar{\mathbf{z}})^{\top}]\] (7e) \[\Phi =\mathbb{E}[(\mathbf{g}(\mathbf{x})-\bar{\mathbf{z}})(\mathbf{g} (\mathbf{x})-\bar{\mathbf{z}})^{\top}], \tag{7f}\]
where the expectations are taken w.r.t. \(p(\mathbf{x})\). The major difference from analytical linearization is that \(\boldsymbol{\Omega}\neq 0\), implying that the error in the linearization is captured.
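The moment expressions in (7) can be approximated numerically; the sketch below uses Monte Carlo samples from \(p(\mathbf{x})\) (a sigma-point rule could be substituted without changing the formulas) and is meant as an illustration rather than the exact implementation used later.

```python
import numpy as np

def statistical_linearization(g, x_bar, P, n_samples=10_000, rng=None):
    """Affine fit g(x) ~ A x + b + eta, eta ~ N(0, Omega), w.r.t. p(x) = N(x_bar, P).

    The expectations in (7) are approximated with Monte Carlo samples.
    """
    rng = np.random.default_rng(rng)
    xs = rng.multivariate_normal(x_bar, P, size=n_samples)    # samples from p(x)
    zs = np.array([g(x) for x in xs])
    z_bar = zs.mean(axis=0)
    Psi = (xs - x_bar).T @ (zs - z_bar) / n_samples           # cross-covariance (7e)
    Phi = (zs - z_bar).T @ (zs - z_bar) / n_samples           # covariance of g(x) (7f)
    A = Psi.T @ np.linalg.inv(P)                              # (7a)
    b = z_bar - A @ x_bar                                     # (7b)
    Omega = Phi - A @ P @ A.T                                 # linearization error (7c)
    return A, b, Omega
```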
```
1: Time update \[\hat{\mathbf{x}}_{k+1|k} =\mathbf{A}_{\mathbf{f}}\hat{\mathbf{x}}_{k|k}+\mathbf{b}_{ \mathbf{f}}\] (2a) \[\mathbf{P}_{k+1|k} =\mathbf{A}_{\mathbf{f}}\mathbf{P}_{k|k}\mathbf{A}_{\mathbf{f}}^ {\top}+\mathbf{Q}+\boldsymbol{\Omega}_{\mathbf{f}}.\] (2b)
2: Measurement update \[\hat{\mathbf{x}}_{k|k} =\hat{\mathbf{x}}_{k|k-1}+\mathbf{K}_{k}(\mathbf{y}_{k}-\mathbf{A }_{\mathbf{h}}\hat{\mathbf{x}}_{k|k-1}-\mathbf{b}_{\mathbf{h}})\] (3a) \[\mathbf{P}_{k|k} =\mathbf{P}_{k|k-1}-\mathbf{K}_{k}\mathbf{A}_{\mathbf{h}}\mathbf{ P}_{k|k-1}\] (3b) \[\mathbf{K}_{k} \triangleq\mathbf{P}_{k|k-1}\mathbf{A}_{\mathbf{h}}^{\top}( \mathbf{A}_{\mathbf{h}}\mathbf{P}_{k|k-1}\mathbf{A}_{\mathbf{h}}^{\top}+ \mathbf{R}+\boldsymbol{\Omega}_{\mathbf{h}})^{-1}.\] (3c)
3: Smoothing step \[\tilde{\mathbf{x}}_{k|K}^{s} =\hat{\mathbf{x}}_{k|k}+\mathbf{G}_{k}(\tilde{\mathbf{x}}_{k+1| K}^{s}-\tilde{\mathbf{x}}_{k+1|k})\] (4a) \[\mathbf{P}_{k|K}^{s} =\mathbf{P}_{k|k}+\mathbf{G}_{k}(\mathbf{P}_{k+1|K}^{s}-\] (4b) \[\mathbf{A}_{\mathbf{f}}\mathbf{P}_{k|k}\mathbf{A}_{\mathbf{f}}^ {\top}-\mathbf{Q}-\boldsymbol{\Omega}_{\mathbf{f}})\mathbf{G}_{k}^{\top}\] (4c) \[\mathbf{G}_{k} \triangleq\mathbf{P}_{k|k}\mathbf{A}_{\mathbf{f}}^{\top}\big{(} \mathbf{A}_{\mathbf{f}}\mathbf{P}_{k|k}\mathbf{A}_{\mathbf{f}}^{\top}+ \mathbf{Q}+\boldsymbol{\Omega}_{\mathbf{f}}\big{)}^{-1}\] (4d)
```
**Algorithm 1** (Affine) Kalman smoother
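For concreteness, Algorithm 1 translates almost line by line into numpy; the following sketch implements the time update (2), measurement update (3) and smoothing step (4) for the affine model (1).

```python
import numpy as np

def time_update(x, P, A_f, b_f, Q, Om_f):
    x_pred = A_f @ x + b_f
    P_pred = A_f @ P @ A_f.T + Q + Om_f
    return x_pred, P_pred                                         # eq. (2)

def measurement_update(x_pred, P_pred, y, A_h, b_h, R, Om_h):
    S = A_h @ P_pred @ A_h.T + R + Om_h
    K = P_pred @ A_h.T @ np.linalg.inv(S)                         # eq. (3c)
    x_post = x_pred + K @ (y - A_h @ x_pred - b_h)                # eq. (3a)
    P_post = P_pred - K @ A_h @ P_pred                            # eq. (3b)
    return x_post, P_post

def smoothing_step(x_filt, P_filt, x_pred_next, P_pred_next,
                   x_smooth_next, P_smooth_next, A_f):
    # P_pred_next = A_f P_filt A_f' + Q + Om_f, i.e. the prediction from (x_filt, P_filt)
    G = P_filt @ A_f.T @ np.linalg.inv(P_pred_next)               # eq. (4d)
    x_smooth = x_filt + G @ (x_smooth_next - x_pred_next)         # eq. (4a)
    P_smooth = P_filt + G @ (P_smooth_next - P_pred_next) @ G.T   # eq. (4b)
    return x_smooth, P_smooth
```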
To set the stage for the algorithm development, the general state estimation problem is described here with a probabilistic viewpoint. To that end, consider a discrete-time state-space model (omitting a possible input \(\mathbf{u}_{k}\) for notational brevity) given by
\[\mathbf{x}_{k+1} =\mathbf{f}(\mathbf{x}_{k})+\mathbf{w}_{k} \tag{8a}\] \[\mathbf{y}_{k} =\mathbf{h}(\mathbf{x}_{k})+\mathbf{e}_{k}\] (8b) \[p(\mathbf{w}_{k}) =\mathcal{N}(\mathbf{w}_{k};\boldsymbol{0},\mathbf{Q}),\quad p( \mathbf{e}_{k})=\mathcal{N}(\mathbf{e}_{k};\boldsymbol{0},\mathbf{R}). \tag{8c}\]
Here, \(\mathbf{x}_{k},\ \mathbf{y}_{k},\ \mathbf{w}_{k}\) and \(\mathbf{e}_{k}\) denote the state, the measurement, the process noise and the measurement noise at time \(k\), respectively. It is further assumed that \(\mathbf{x}_{k}\in\mathcal{X},\forall k\) and that \(\mathbf{w}_{k}\) and \(\mathbf{e}_{k}\) are mutually independent. Note that (8a) and (8b) can equivalently be written as a _transition density_ and a _measurement density_ as
\[p(\mathbf{x}_{k+1}|\mathbf{x}_{k}) =\mathcal{N}(\mathbf{x}_{k+1};\mathbf{f}(\mathbf{x}_{k}),\mathbf{ Q}) \tag{9a}\] \[p(\mathbf{y}_{k}|\mathbf{x}_{k}) =\mathcal{N}(\mathbf{y}_{k};\mathbf{h}(\mathbf{x}_{k}),\mathbf{ R}). \tag{9b}\]
Further, the initial state distribution is assumed to be given by
\[p(\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{0};\hat{\mathbf{x}}_{0|0},\mathbf{ P}_{0|0}). \tag{10}\]
Given the transition and measurement densities and a sequence of measurements \(\mathbf{y}_{1:k}=\left[\mathbf{y}_{1}^{\top}\ \ldots\ \ \mathbf{y}_{k}^{\top}\right]^{\top}\), the state estimation problem consists of computing the posterior of the state sequence (trajectory), i.e., computing
\[p(\mathbf{x}_{0:k}|\mathbf{y}_{1:k})=\frac{1}{\mathbf{Z}_{1:k}}p(\mathbf{x}_{ 0})\prod_{i=1}^{k}p(\mathbf{y}_{i}|\mathbf{x}_{i})p(\mathbf{x}_{i}|\mathbf{x} _{i-1}), \tag{11}\]
where
\[\mathbf{Z}_{1:k}=\int_{\mathcal{X}}p(\mathbf{x}_{0})\prod_{i=1}^{k}p(\mathbf{ y}_{i}|\mathbf{x}_{i})p(\mathbf{x}_{i}|\mathbf{x}_{i-1})d\mathbf{x}_{0}\cdots d \mathbf{x}_{k},\]
is the marginal likelihood of \(\mathbf{y}_{1:k}\). The posterior (11) is commonly referred to as the joint _smoothing_ distribution which, in the case of linear \(\mathbf{f}\) and \(\mathbf{h}\), can be analytically found through the Kalman smoother, e.g., the rts smoother [11].
In the setting considered here, i.e., in _filtering_ applications, the densities of interest are rather the _marginal_ posteriors
\[p(\mathbf{x}_{k}|\mathbf{y}_{1:k})=\frac{p(\mathbf{y}_{k}|\mathbf{x}_{k}) \int_{\mathcal{X}}p(\mathbf{x}_{k}|\mathbf{x}_{k-1})p(\mathbf{x}_{k-1}| \mathbf{y}_{1:k-1})d\mathbf{x}_{k-1}}{\mathbf{Z}_{k}}, \tag{12}\]
for all times \(k\), where
\[\mathbf{Z}_{k}=\int_{\mathcal{X}}p(\mathbf{y}_{k}|\mathbf{x}_{k})p(\mathbf{x} _{k}|\mathbf{x}_{k-1})p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})d\mathbf{x}_{k-1} d\mathbf{x}_{k}.\]
Again, in the case of linear \(\mathbf{f}\) and \(\mathbf{h}\), the (analytical) solution is given by the Kalman filter [1].
In the general case, the marginal posteriors can not be computed analytically. Inspecting (12), there are two integrals that require attention. We turn first to the Chapman-Kolmogorov equation
\[p(\mathbf{x}_{k}|\mathbf{y}_{1:k-1})=\int_{\mathcal{X}}p(\mathbf{x}_{k}| \mathbf{x}_{k-1})p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})d\mathbf{x}_{k-1}. \tag{13}\]
Assuming that \(p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})\) is Gaussian, (13) has a closed form solution given by (2), _if_\(p(\mathbf{x}_{k}|\mathbf{x}_{k-1})\) is Gaussian and (8a) is affine. Therefore, as (9a) is Gaussian, we seek an affine approximation of the transition function \(\mathbf{f}\) as
\[\mathbf{f}(\mathbf{x}_{k-1})\approx\mathbf{A}_{\mathbf{f}}\mathbf{x}_{k-1}+ \mathbf{b}_{\mathbf{f}}+\eta_{\mathbf{f}}, \tag{14}\]
with \(p(\eta_{\mathbf{f}})=\mathcal{N}(\eta_{\mathbf{f}};\mathbf{0},\mathbf{\Omega }_{\mathbf{f}})\). Hence, the transition density \(p(\mathbf{x}_{k}|\mathbf{x}_{k-1})\) is approximated by \(q(\mathbf{x}_{k}|\mathbf{x}_{k-1})\) as
\[q(\mathbf{x}_{k}|\mathbf{x}_{k-1})=\mathcal{N}(\mathbf{x}_{k};\mathbf{A}_{ \mathbf{f}}\mathbf{x}_{k-1}+\mathbf{b}_{\mathbf{f}},\mathbf{Q}+\mathbf{\Omega }_{\mathbf{f}}). \tag{15}\]
If \(\mathbf{A}_{\mathbf{f}},\mathbf{b}_{\mathbf{f}}\) and \(\mathbf{\Omega}_{\mathbf{f}}\) are chosen to be the analytical linearization of \(\mathbf{f}\) around the mean of the posterior \(p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})\), the ekf time update is recovered through (2). Similarly, statistical linearization around the posterior at time \(k-1\) recovers the sigma-point filter time updates. This yields an approximate predictive distribution \(q(\mathbf{x}_{k}|\mathbf{y}_{1:k-1})\), which can then be used to approximate the second integral of interest (and subsequently, the posterior at time \(k\)). Explicitly, the second integral is given by
\[\mathbf{Z}_{k}\approx\int_{\mathcal{X}}p(\mathbf{y}_{k}|\mathbf{x}_{k})q( \mathbf{x}_{k}|\mathbf{y}_{1:k-1})d\mathbf{x}_{k}. \tag{16}\]
Similarly to (14), (16) has a closed form solution if \(p(\mathbf{y}_{k}|\mathbf{x}_{k})\) is Gaussian and (8b) is affine. Thus, as (9b) is Gaussian, we seek an affine approximation of the measurement function \(\mathbf{h}\) as
\[\mathbf{h}(\mathbf{x}_{k})\approx\mathbf{A}_{\mathbf{h}}\mathbf{x}_{k}+ \mathbf{b}_{\mathbf{h}}+\eta_{\mathbf{h}}, \tag{17}\]
with \(p(\eta_{\mathbf{h}})=\mathcal{N}(\eta_{\mathbf{h}};\mathbf{0},\mathbf{\Omega }_{\mathbf{h}})\). Hence, the measurement density \(p(\mathbf{y}_{k}|\mathbf{x}_{k})\) is approximated by \(q(\mathbf{y}_{k}|\mathbf{x}_{k})\) as
\[q(\mathbf{y}_{k}|\mathbf{x}_{k})=\mathcal{N}(\mathbf{y}_{k};\mathbf{A}_{ \mathbf{h}}\mathbf{x}_{k}+\mathbf{b}_{\mathbf{h}},\mathbf{R}+\mathbf{\Omega }_{\mathbf{h}}), \tag{18}\]
which leads to an analytically tractable integral. With (15) and (18), the (approximate) marginal posterior (12) is now given by
\[q(\mathbf{x}_{k}|\mathbf{y}_{1:k})=\frac{q(\mathbf{y}_{k}|\mathbf{x}_{k})q( \mathbf{x}_{k}|\mathbf{y}_{1:k-1})}{\int_{\mathcal{X}}q(\mathbf{y}_{k}| \mathbf{x}_{k})q(\mathbf{x}_{k}|\mathbf{y}_{1:k-1})d\mathbf{x}_{k}}, \tag{19}\]
which is analytically tractable and given by (3). Note that analytical linearization of (17) around the mean of \(q(\mathbf{x}_{k}|\mathbf{y}_{1:k-1})\) recovers the ekf measurement update whereas statistical linearization recovers the sigma-point measurement update(s).
The quality of the approximate marginal posterior (19) thus directly depends on the quality of the approximations (15) and (18). The quality of (15) and (18) in turn directly depends on the choice of linearization points or densities, which is typically chosen to be the approximate predictive and previous approximate posterior distributions. This choice is of course free and iterative filters such as the iekf, iukf or iplf have been proposed to improve these approximations [4, 5, 6, 12]. These filters can be thought of as finding an approximate posterior \(q^{i}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\) which is then used to re-linearize the function \(\mathbf{h}\), producing a new approximation \(q^{i+1}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\). This is then iterated until some convergence criterion is satisfied; typically until a fixed point is reached or a maximum number of iterations has been reached.
However, none of these algorithms, except [9], encompass the approximate density (15), even though this approximation directly affects the approximate marginal posterior as well. This is motivated by the fact that nonlinearities in the likelihood affect the posterior approximation to a greater extent than the prior. Nevertheless, standard iterated filters are for instance not useful in the case of a nonlinear transition function \(\mathbf{f}\) but linear measurement function \(\mathbf{h}\), even though the linearization of \(\mathbf{f}\) also affects the quality of the approximate posterior. Next, a general linearization-based algorithm encompassing both the transition density as well as the measurement density approximations is developed.
## IV Dynamically Iterated Filter
To derive an algorithm encompassing both the transition density (15), as well as the observation density (18), at time \(k\), we naturally need to seek an approximate posterior over both \(\mathbf{x}_{k-1}\) as well as \(\mathbf{x}_{k}\). To do so, we generalize the derivation in [8] to extend backwards one step. Define two auxiliary variables, \(\mathbf{g}_{k},\ \mathbf{g}_{k-1}\) as
\[\mathbf{g}_{k-1} =\mathbf{f}(\mathbf{x}_{k-1})+\psi \tag{20a}\] \[\mathbf{g}_{k} =\mathbf{h}(\mathbf{x}_{k})+\phi\] (20b) \[p(\psi) =\mathcal{N}(\mathbf{0},\alpha\mathbf{I}),\quad p(\phi)=\mathcal{ N}(\mathbf{0},\beta\mathbf{I}), \tag{20c}\]
where \(\psi\) and \(\phi\) are independent of each other as well as the process noise \(\mathbf{w}\) and the measurement noise \(\mathbf{e}\). Note that as \(\alpha,\ \beta\to 0\), \(\mathbf{g}_{k-1}\to\mathbf{f}(\mathbf{x}_{k-1})\) and \(\mathbf{g}_{k}\to\mathbf{h}(\mathbf{x}_{k})\). Now, the true joint posterior of \(\mathbf{x}_{k-1},\ \mathbf{x}_{k},\ \mathbf{g}_{k-1}\) and \(\mathbf{g}_{k}\) is given by
\[p(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k})\propto\] \[p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})p(\mathbf{g}_{k}|\mathbf{x }_{k})p(\mathbf{g}_{k-1}|\mathbf{x}_{k-1}). \tag{21}\]
Following [8], we assume that the approximate posterior can be decomposed in the same manner, i.e.,
\[q_{\theta}(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k })\approx\] \[q_{\theta}(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})q_{\theta}(\mathbf{ g}_{k}|\mathbf{x}_{k})q_{\theta}(\mathbf{g}_{k-1}|\mathbf{x}_{k-1}), \tag{22}\]
where \(\theta\) are the parameters of the affine approximation of the transition model and measurement model, i.e., \(\theta=[\mathbf{A}_{\mathbf{f}},\mathbf{b}_{\mathbf{f}},\mathbf{\Omega}_{ \mathbf{f}},\mathbf{A}_{\mathbf{h}},\mathbf{b}_{\mathbf{h}},\mathbf{\Omega}_{ \mathbf{h}}]\).
We now seek a \(\theta\) such that \(q_{\theta}(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k})\) is close to \(p(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k})\), in some sense. Formally, the optimal parameters \(\theta^{*}\), and hence the optimal affine approximations of \(\mathbf{f}\) and \(\mathbf{h}\), are found through
\[\theta^{*}= \operatorname*{arg\,min}_{\theta}\mathcal{L}(\theta). \tag{23}\]
The loss \(\mathcal{L}(\theta)\) is free to choose, but a natural choice of dissimilarity measure between distributions is the Kullback-Leibler (kl) divergence, which we pursue here. The kl divergence between the true joint posterior and the approximate joint posterior is given by
\[\operatorname{KL}\left(p(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}| \mathbf{y}_{1:k})\right)\!\left\|q_{\theta}(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1 :k}|\mathbf{y}_{1:k})\right\|=\\ \operatorname{KL}\left(p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k}) \right)\!\left\|q_{\theta}(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\right\|+\\ \operatorname{\mathbb{E}}\left[\operatorname{KL}(p(\mathbf{g}_{k}| \mathbf{x}_{k})\|q_{\theta}(\mathbf{g}_{k}|\mathbf{x}_{k}))\right]+\\ \operatorname{\mathbb{E}}\left[\operatorname{KL}(p(\mathbf{g}_{k- 1}|\mathbf{x}_{k-1})\|q_{\theta}(\mathbf{g}_{k-1}|\mathbf{x}_{k-1}))\right] \triangleq\mathcal{L}(\theta). \tag{24}\]
See Appendix A for the derivation. Note that the expectations in \(\mathcal{L}(\theta)\) are taken with respect to the true joint posterior \(p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\). It is noteworthy that \(\mathcal{L}(\theta)\) can be decomposed into three distinct terms, each dealing with each respective factor of (22). The first term is simply the kl divergence between the true and approximate joint posterior of the states at time \(k\) and \(k-1\). The second and third terms are the expected kl divergences of the affine approximation of the measurement model and transition model, respectively, where the expectation is taken with respect to the true joint posterior \(p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\).
It is impractical to minimize (24), seeing as the expectations are taken w.r.t. the true joint posterior \(p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\). Nevertheless, an iterative procedure may be used to approximately solve this minimization problem.
### _Iterative Solution_
To practically optimize (23), we assume access to an \(i\):th approximation to the state joint posterior \(p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\approx q_{\theta}^{i}(\mathbf{x}_{k-1:k }|\mathbf{y}_{1:k})\). We then use \(q_{\theta}^{i}(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\) in place of \(p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\) in (24) and thus optimize an approximate loss, i.e., the approximate optimization problem is given by
\[\theta^{*}=\operatorname*{arg\,min}_{\theta}\operatorname{KL}\left( q_{\theta}^{i}(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\|q_{\theta}^{i+1}( \mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\right)+\\ \operatorname{\mathbb{E}}_{q_{\theta}^{i}(\mathbf{x}_{k-1:k}| \mathbf{y}_{1:k})}\left[\operatorname{KL}(p(\mathbf{g}_{k}|\mathbf{x}_{k})\|q_{ \theta}^{i+1}(\mathbf{g}_{k}|\mathbf{x}_{k}))\right]+\\ \operatorname{\mathbb{E}}_{q_{\theta}^{i}(\mathbf{x}_{k-1:k}| \mathbf{y}_{1:k})}\left[\operatorname{KL}(p(\mathbf{g}_{k-1}|\mathbf{x}_{k-1}) \|q_{\theta}^{i+1}(\mathbf{g}_{k-1}|\mathbf{x}_{k-1}))\right], \tag{25}\]
where the expectations are now over \(q_{\theta}^{i}(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\). Sufficiently close to a fixed point, the first kl term is approximately 0 and the final optimization problem is thus given by
\[\theta^{*}=\operatorname*{arg\,min}_{\theta}\operatorname{\mathbb{E }}_{q_{\theta}^{i}(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})}\biggl{[}\operatorname{ KL}(p(\mathbf{g}_{k}|\mathbf{x}_{k})\|q_{\theta}^{i+1}(\mathbf{g}_{k}|\mathbf{x}_{k}))\\ +\operatorname{KL}(p(\mathbf{g}_{k-1}|\mathbf{x}_{k-1})\|q_{ \theta}^{i+1}(\mathbf{g}_{k-1}|\mathbf{x}_{k-1}))\biggr{]}. \tag{26}\]
Technically, the optimal \(\theta^{*}\) is given by statistical linearization of \(\mathbf{f}\) and \(\mathbf{h}\) w.r.t. the current approximation \(q_{\theta}^{i}(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\), see e.g., [8]. Note that statistical linearization of \(\mathbf{f}\) w.r.t. \(q_{\theta}^{i}(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\) only requires the marginal \(q_{\theta}^{i}(\mathbf{x}_{k-1}|\mathbf{y}_{1:k})\). Similarly, statistical linearization of \(\mathbf{h}\) only requires the marginal \(q_{\theta}^{i}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\). Thus, the algorithm conceptually amounts to predicting forward in time, performing a measurement update and smoothing backwards in time in order to provide new linearization points (densities) for both the transition density as well as the measurement density simultaneously. These steps are then iterated until fixed point convergence, finally providing an approximate posterior \(q(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\). The general algorithm is summarized in Algorithm 2 and schematically depicted in Fig. 1.
Fig. 1: Schematic illustration of a dynamically iterated filter. Ordinary iterated filters, marked in dotted orange, only re-linearize the measurement update. Dynamically iterated filters also re-linearize the time update through a smoothing step, marked in blue. The time update (TU) and the smoothing step (S) are linearized w.r.t. the smoothed distribution \(q(\mathbf{x}_{k-1}|\mathbf{y}_{1:k})\). The measurement update (MU) is linearized w.r.t. the current posterior \(q(\mathbf{x}_{k}|\mathbf{y}_{1:k})\). The steps are iterated until some convergence criterion is met.
Note that the algorithm is applicable to all possible combinations of models with linear and nonlinear \(\mathbf{f}\) and \(\mathbf{h}\). Further, even though the developed solution is essentially an iplf also encompassing the transition density, by changing the linearization method from statistical to analytical, an "extended" version is recovered in similar spirit to the iekf. Furthermore, an iukf version, similar to [5], may also be recovered by "freezing" the covariance matrices \(\mathbf{P}_{k-1|k}^{i}=\mathbf{P}_{k-1|k-1}\) and \(\mathbf{P}_{k|k}^{i}=\mathbf{P}_{k|k-1}^{i}\) and only updating these during the last iteration. It is also worthwhile to point out that the dynamically iterated filters are essentially "local" iterated smoothers, analogous to the _iterated extended Kalman smoother_ (ieks) [13] and the _iterated posterior linearization smoother_ (ipls) [10], operating on just one time instance and observation. Therefore, as noted in [9], a byproduct of the algorithm is a one-step smoothed state estimate and the method can thus be thought of as an iterated one-step fixed-lag smoother as well.
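As an illustration, one time step of an analytically linearized dynamically iterated filter (the diekf variant) might be sketched as follows, reusing the affine Kalman routines and the Jacobian-based linearization (6) from above; the fixed iteration count is a simplification of the stopping criteria discussed next.

```python
import numpy as np

def linearize(g, jac_g, x_bar):
    """Analytical linearization (6): A, b around x_bar, with Omega = 0."""
    A = jac_g(x_bar)
    b = g(x_bar) - A @ x_bar
    return A, b, np.zeros((A.shape[0], A.shape[0]))

def diekf_step(x_prev, P_prev, y, f, jac_f, h, jac_h, Q, R, n_iter=5):
    """One step of a dynamically iterated EKF: predict, update, smooth, re-linearize."""
    x_lin_prev, x_lin = x_prev, None                 # linearization points for f and h
    for _ in range(n_iter):
        A_f, b_f, Om_f = linearize(f, jac_f, x_lin_prev)
        x_pred, P_pred = time_update(x_prev, P_prev, A_f, b_f, Q, Om_f)
        if x_lin is None:
            x_lin = x_pred                           # iteration 0 reduces to the EKF
        A_h, b_h, Om_h = linearize(h, jac_h, x_lin)
        x_post, P_post = measurement_update(x_pred, P_pred, y, A_h, b_h, R, Om_h)
        x_sm, _ = smoothing_step(x_prev, P_prev, x_pred, P_pred, x_post, P_post, A_f)
        x_lin_prev, x_lin = x_sm, x_post             # new linearization points
    return x_post, P_post
```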
All that is left is to determine a stopping criterion for the iterations. Similarly to [8], a stopping criterion for the iterative updates may be formed on the basis of the kl divergence between two successive approximations of the posterior, i.e.,
\[\mathrm{KL}(q^{i}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\|q^{i+1}(\mathbf{x}_{k}| \mathbf{y}_{1:k}))<\gamma.\]
Another possibility to check for fixed-point convergence is to instead use the smoothed density \(q(\mathbf{x}_{k-1}|\mathbf{y}_{1:k})\) in a similar manner as the posterior. This is not investigated in detail here. Instead, in the numerical example in Section V, a fixed number of iterations are used for simplicity.
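The stopping criterion only requires the KL divergence between two Gaussian approximations, for which the standard closed form can be used; a small sketch:

```python
import numpy as np

def gauss_kl(mu0, P0, mu1, P1):
    """KL( N(mu0, P0) || N(mu1, P1) ) for multivariate Gaussians."""
    d = mu0.shape[0]
    P1_inv = np.linalg.inv(P1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(P1_inv @ P0) + diff @ P1_inv @ diff
                  - d + np.log(np.linalg.det(P1) / np.linalg.det(P0)))

# Iterate until gauss_kl(mu_i, P_i, mu_next, P_next) < gamma.
```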
## V Numerical Examples
To demonstrate the application of the dynamically iterated filters, we provide an illustrative example demonstrating the iterative procedure of the algorithm. We also provide a numerical example of maneuvering target tracking with a nonlinear transition model but a _linear_ measurement model.
### _Illustrative example_
To illustrate the iterative procedure of the algorithm, we use an example similar to that in [12] but alter it to include a dynamical model. Therefore, let the model be given by
\[\mathbf{x}_{k+1} \sim\mathcal{N}(\mathbf{x}_{k+1};a\mathbf{x}_{k}^{3},Q)\] \[\mathbf{y}_{k} \sim\mathcal{N}(\mathbf{y}_{k};\mathbf{x}_{k},R),\]
with \(a=0.01,Q=0.1\) and \(R=0.1\). We assume that a prior is given at time \(k-1\) as \(p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})=\mathcal{N}(\mathbf{x}_{k-1};3,4)\). We then apply an analytically linearized version of the dynamically iterated filter to this model and plot the intermediate and final approximate predictive, posterior, and smoothed densities. The true posterior is found simply by evaluating the posterior density over a dense grid. The example is illustrated in Fig. 2, where two iterations are enough for the posterior approximation to be accurate.
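The grid evaluation of the reference posterior can be reproduced along the following lines; the measurement value and grid limits are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm

a, Q, R = 0.01, 0.1, 0.1
y_k = 1.0                                     # assumed measurement value for illustration
x_prev = np.linspace(-5, 10, 2000)            # grid over x_{k-1}
x_k = np.linspace(-5, 10, 2000)               # grid over x_k

prior = norm.pdf(x_prev, loc=3.0, scale=2.0)  # p(x_{k-1} | y_{1:k-1}) = N(3, 4)
# Predictive p(x_k | y_{1:k-1}) via the Chapman-Kolmogorov equation on the grid.
trans = norm.pdf(x_k[:, None], loc=a * x_prev[None, :] ** 3, scale=np.sqrt(Q))
predictive = trans @ prior * (x_prev[1] - x_prev[0])
# Posterior p(x_k | y_{1:k}), normalized numerically.
posterior = norm.pdf(y_k, loc=x_k, scale=np.sqrt(R)) * predictive
posterior /= np.trapz(posterior, x_k)
```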
### _Maneuvering Target Tracking_
We consider a numerical example of maneuvering target tracking with a nonlinear transition model but a _linear_ measurement model. This is a typically "easy" tracking scenario where standard filters generally do well.
Three versions of the dynamically iterated filters are evaluated: an extended version (diekf), an unscented version (diukf), and a posterior linearization version (diplf) based on the unscented transform. These are compared to their respective non-iterated counterparts, i.e., the ekf and the ukf. For the unscented filters, we use the tuning parameters
Fig. 2: Illustration of the (extended) dynamically iterated filter. The black curves in the top and bottom plots are the prior \(p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k-1})\) and true posterior \(p(\mathbf{x}_{k}|\mathbf{y}_{1:k})\), respectively. The blue curves from top to bottom illustrate the approximate smoothed, predictive and posterior densities at iteration 0, respectively. The orange curves illustrate the same densities during the second iteration of the filter. The filter thus moves from the prior (top) to the predictive (middle) to the posterior (bottom) and back up to the smoothed (top). The time update, measurement update and smoothing step are indicated similarly to Fig. 1. Notice that iteration 0 exactly corresponds to an ekf.
\(\alpha=\sqrt{3/n_{x}},\ \kappa=\frac{n_{x}(3/2-\alpha^{2})}{\alpha^{2}}\) and \(\beta=2\), where \(n_{x}\) is the dimension of \(\mathbf{x}\). This tuning corresponds to a weighting of \(1/3\) on the central sigma point.
We consider a target maneuver in a plane and describe the target using the state vector \(\mathbf{x}_{k}^{\top}=\begin{bmatrix}p_{k}^{x}&v_{k}^{x}&p_{k}^{y}&v_{k}^{y}& \omega_{k}\end{bmatrix}\). Here, \(p_{k}^{x},\;\;p_{k}^{y},\;v_{k}^{x},\;v_{k}^{y}\) are the Cartesian coordinates and velocities of the target, respectively, and \(\omega_{k}\) is the turn rate at time \(k\). The transition model is thus given by
\[\mathbf{x}_{k+1}=\mathbf{F}(\omega_{k})\mathbf{x}_{k}+\mathbf{w}_{k}, \tag{26}\]
where
\[\mathbf{F}(\omega_{k})=\begin{bmatrix}1&\frac{\sin(T\omega_{k})}{\omega_{k}}& 0&-\frac{(1-\cos(T\omega_{k}))}{\omega_{k}}&0\\ 0&\cos(T\omega_{k})&0&-\sin(T\omega_{k})&0\\ 0&\frac{(1-\cos(T\omega_{k}))}{\omega_{k}}&1&\frac{\sin(T\omega_{k})}{\omega_ {k}}&0\\ 0&\sin(T\omega_{k})&0&\cos(T\omega_{k})&0\\ 0&0&0&0&1\end{bmatrix},\]
\(T\) is the sampling period and \(\mathbf{w}_{k}\sim\mathcal{N}(\mathbf{w}_{k};\mathbf{0},\mathbf{Q})\) is the process noise at time \(k\), with
\[\mathbf{Q}=\begin{bmatrix}q_{1}\frac{T^{3}}{3}&q_{1}\frac{T^{2}}{2}&0&0&0\\ q_{1}\frac{T^{2}}{2}&q_{1}T&0&0&0\\ 0&0&q_{1}\frac{T^{3}}{3}&q_{1}\frac{T^{2}}{2}&0\\ 0&0&q_{1}\frac{T^{2}}{2}&q_{1}T&0\\ 0&0&0&0&q_{2}\end{bmatrix},\]
where \(q_{1}\) and \(q_{2}\) are parameters of the model.
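For illustration, a minimal sketch of constructing \(\mathbf{F}(\omega_{k})\) in (26) and \(\mathbf{Q}\) from \(T\), \(q_{1}\) and \(q_{2}\); note that the turn-rate expressions are singular at \(\omega_{k}=0\), where the constant-velocity limit should be substituted:

```python
import numpy as np

def ct_transition(omega, T):
    """Coordinated-turn transition matrix F(omega) of (26), state [px, vx, py, vy, omega]."""
    s, c = np.sin(T * omega), np.cos(T * omega)
    return np.array([[1.0, s / omega,         0.0, -(1.0 - c) / omega, 0.0],
                     [0.0, c,                 0.0, -s,                 0.0],
                     [0.0, (1.0 - c) / omega, 1.0, s / omega,          0.0],
                     [0.0, s,                 0.0, c,                  0.0],
                     [0.0, 0.0,               0.0, 0.0,                1.0]])

def ct_process_noise(q1, q2, T):
    """Process noise covariance Q: a white-acceleration block per axis and q2 on the turn rate."""
    block = q1 * np.array([[T**3 / 3.0, T**2 / 2.0],
                           [T**2 / 2.0, T]])
    Q = np.zeros((5, 5))
    Q[0:2, 0:2] = block
    Q[2:4, 2:4] = block
    Q[4, 4] = q2
    return Q
```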
In order to isolate the benefits of iterating over the time update, a linear positional measurement model is used, i.e.,
\[\mathbf{y}_{k}=\mathbf{H}\mathbf{x}_{k}+\mathbf{e}_{k}, \tag{27}\]
with \(\mathbf{H}=\mathrm{diag}\begin{bmatrix}1&0&1&0&0\end{bmatrix}\) and \(\mathbf{e}_{k}\sim\mathcal{N}(\mathbf{e}_{k};\mathbf{0},\sigma^{2}\mathbf{I})\).
The prior at time \(0\) is given by
\[p(\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{0};\hat{\mathbf{x}}_{0|0},\mathbf{ P}_{0|0}),\]
with \(\hat{\mathbf{x}}_{0|0}^{\top}=\begin{bmatrix}130&35&-20&-20&-4\frac{\pi}{180}\end{bmatrix}\) and \(\mathbf{P}_{0|0}=\mathrm{diag}\begin{bmatrix}\sigma_{p_{x}}^{2}&\sigma_{v_{x}}^{2}&\sigma_{p_{y}}^{2}&\sigma_{v_{y}}^{2}&\sigma_{\omega}^{2}\end{bmatrix}\), with \(\sigma_{p_{x}}^{2}=\sigma_{v_{x}}^{2}=\sigma_{p_{y}}^{2}=\sigma_{v_{y}}^{2}=5\) and \(\sigma_{\omega}^{2}=10^{-2}\). The initial states for the ground-truth trajectories are drawn from this prior.
We fix \(q_{2}=10^{-2},T=1\) and sweep over all pairs of
\[q_{1} =\{10^{-4},10^{-3},10^{-2},10^{-1},10^{0}\}\] \[\sigma^{2} =\{10^{-2},10^{-1},10^{0},10^{1},10^{2}\},\]
i.e., 25 different noise configurations. For each noise configuration, we simulate \(10\) individual targets along \(20\) different trajectories of length \(K=130\) time steps, for a total of \(200\) simulations per configuration. Note that the \(20\) trajectories are different for each noise configuration and that the \(10\) targets for each trajectory differ only in their measurement noise realization. However, the trajectories and measurement noise realizations are exactly the same for each algorithm. Five example trajectories along with one measurement sample from each trajectory for a specific noise configuration are depicted in Fig. 3.
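A minimal sketch of generating one ground-truth trajectory and its measurements for a single noise configuration, reusing the helper functions sketched above; the random seed and the chosen \((q_{1},\sigma^{2})\) pair are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)                       # arbitrary seed
K, T, q1, q2, sigma2 = 130, 1.0, 1e-2, 1e-2, 1e0     # one of the 25 noise configurations
x = rng.multivariate_normal(
    np.array([130.0, 35.0, -20.0, -20.0, -4.0 * np.pi / 180.0]),
    np.diag([5.0, 5.0, 5.0, 5.0, 1e-2]))             # initial state drawn from the prior
H = np.diag([1.0, 0.0, 1.0, 0.0, 0.0])
Q = ct_process_noise(q1, q2, T)

traj, meas = [], []
for _ in range(K):
    x = ct_transition(x[4], T) @ x + rng.multivariate_normal(np.zeros(5), Q)
    traj.append(x)
    meas.append(H @ x + rng.normal(0.0, np.sqrt(sigma2), size=5))
traj, meas = np.array(traj), np.array(meas)
```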
To evaluate the performance of each dynamically iterated filter, we calculate the average position and velocity rmse (separately) over the simulations for each of the filters and their corresponding baselines. We also compute a "relative" rmse, relative the non-iterated counterpart, i.e.,
\[V=\frac{\text{RMSE}_{\text{iter}}}{\text{RMSE}_{\text{base}}}, \tag{28}\]
where, clearly, \(V\in[0,\infty)\) and lower is better. A relative score of \(0.9\) thus translates to a \(10\%\) lower rmse as compared to the baseline. This yields a "quick glance" picture of the expected rmse performance improvement in each particular noise configuration for each respective algorithm. For the diekf the non-iterated baseline is the ekf, whereas for both the diukf and diplf the baseline is the ukf.
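For concreteness, a small helper showing how the rmse and the relative score (28) could be computed from stacked estimates and ground truth; the assumed array layout (runs by time steps by components) is an illustrative choice:

```python
import numpy as np

def rmse(estimates, truth):
    """Root mean squared error over all runs and time steps (last axis holds the components)."""
    return np.sqrt(np.mean(np.sum((estimates - truth) ** 2, axis=-1)))

def relative_rmse(rmse_iterated, rmse_baseline):
    """Relative score V of (28); values below one favour the dynamically iterated filter."""
    return rmse_iterated / rmse_baseline
```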
The results are presented as \(5\times 5\) matrices where each cell corresponds to a particular noise configuration for a particular pair of algorithms, e.g., the results for the diekf and ekf are summarized in one matrix. The results can be found in Fig. 4, where the position and velocity rmse are presented in Fig. 4(a) and Fig. 4(b), respectively. The leftmost matrix in each of the figures corresponds to the diekf and ekf. The middle matrix contains the results for the diukf and ukf and the rightmost matrix for the diplf and ukf. The top number in each cell is the rmse for the dynamically iterated filter whereas the bottom number corresponds to the baseline. The color of each cell represents the rmse of the dynamically iterated filter relative to its baseline, according to (28). A deeper green color thus indicates a more substantial improvement than a lighter green. Lastly, an algorithm is considered to have diverged if its position rmse is approximately larger than \(\sigma\), where \(\sigma\) is the measurement noise standard deviation, as a position rmse of \(\sigma\) can be expected by just using the raw measurements. Divergence is illustrated by a "\(-\)" in the corresponding cell in the matrices.
From Fig. 4, it is clear that even though all of the dynamically iterated filters improve upon their baselines, the
Fig. 3: Five example trajectories from one noise configuration of the considered tracking problem. Each trajectory is depicted as a separate color. The black smaller dots are a specific measurement realization along each trajectory.
analytically linearized diekf benefits the most from the iterative procedure. Astonishingly, the ekf diverges for 22 out of 25 configurations whereas the diekf manages to lower that to 5 out of 25 and only diverges in the high noise scenario (\(\sigma^{2}=10^{2}\)). The performance increase in position rmse is more modest for the diukf and diplf but still sees improvement, particularly for low process noise regimes. For the velocity rmse in Fig. 4(b), the improvement for all of the three dynamically iterated filters is substantial. For low process noise regimes the improvement is up to 10-fold for the diekf and 5-fold for the diukf and diplf. Even for modest noise levels, the diukf and diplf roughly manage a 2-fold performance improvement. For the high noise scenario (\(\sigma^{2}=10^{2}\)), the diukf and diplf show a 10-fold performance improvement and bring the velocity rmse down to reasonable levels where the rmse for the ukf is very high.
## VI Conclusion
Dynamically iterated filters, a new class of iterated nonlinear filters, have been presented. The dynamically iterated filters, as opposed to previous iterated filters, are applicable to all possible combinations of (Gaussian) linear and nonlinear transition and measurement models. The filters were evaluated against their respective non-iterated baselines in a numerical example
\[\mathrm{KL}\left(p(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k})\|q(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k})\right)=\int p(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k})\log\frac{p(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k})}{q(\mathbf{x}_{k-1:k},\mathbf{g}_{k-1:k}|\mathbf{y}_{1:k})}\,d\mathbf{x}_{k-1:k}\,d\mathbf{g}_{k-1:k}\]
\[=\int p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})p(\mathbf{g}_{k}|\mathbf{x}_{k})p(\mathbf{g}_{k-1}|\mathbf{x}_{k-1})\left[\log\frac{p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})}{q(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})}+\log\frac{p(\mathbf{g}_{k}|\mathbf{x}_{k})}{q(\mathbf{g}_{k}|\mathbf{x}_{k})}+\log\frac{p(\mathbf{g}_{k-1}|\mathbf{x}_{k-1})}{q(\mathbf{g}_{k-1}|\mathbf{x}_{k-1})}\right]d\mathbf{x}_{k-1:k}\,d\mathbf{g}_{k-1:k}\]
\[=\mathrm{KL}\left(p(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\|q(\mathbf{x}_{k-1:k}|\mathbf{y}_{1:k})\right)+\mathbb{E}_{p(\mathbf{x}_{k}|\mathbf{y}_{1:k})}\left[\mathrm{KL}(p(\mathbf{g}_{k}|\mathbf{x}_{k})\|q(\mathbf{g}_{k}|\mathbf{x}_{k}))\right]+\mathbb{E}_{p(\mathbf{x}_{k-1}|\mathbf{y}_{1:k})}\left[\mathrm{KL}(p(\mathbf{g}_{k-1}|\mathbf{x}_{k-1})\|q(\mathbf{g}_{k-1}|\mathbf{x}_{k-1}))\right]\triangleq\mathcal{L}(\theta).\]
|
2305.18983
|
SO(2)-Equivariant Downwash Models for Close Proximity Flight
|
Multirotors flying in close proximity induce aerodynamic wake effects on each
other through propeller downwash. Conventional methods have fallen short of
providing adequate 3D force-based models that can be incorporated into robust
control paradigms for deploying dense formations. Thus, learning a model for
these downwash patterns presents an attractive solution. In this paper, we
present a novel learning-based approach for modelling the downwash forces that
exploits the latent geometries (i.e. symmetries) present in the problem. We
demonstrate that when trained with only 5 minutes of real-world flight data,
our geometry-aware model outperforms state-of-the-art baseline models trained
with more than 15 minutes of data. In dense real-world flights with two
vehicles, deploying our model online improves 3D trajectory tracking by nearly
36% on average (and vertical tracking by 56%).
|
H. Smith, A. Shankar, J. Gielis, J. Blumenkamp, A. Prorok
|
2023-05-30T12:27:47Z
|
http://arxiv.org/abs/2305.18983v3
|
# SO(2)-Equivariant Downwash Models for Close Proximity Flight
###### Abstract
Multirotors flying in close proximity induce aerodynamic wake effects on each other through propeller downwash. Conventional methods have fallen short of providing adequate 3D force-based models that can be incorporated into robust control paradigms for deploying dense formations. Thus, _learning_ a model for these downwash patterns presents an attractive solution. In this paper, we present a novel learning-based approach for modelling the downwash forces that exploits the latent geometries (i.e. symmetries) present in the problem. We demonstrate that when trained with only \(5\) minutes of real-world flight data, our geometry-aware model outperforms state-of-the-art baseline models trained with more than \(15\) minutes of data. In dense real-world flights with two vehicles, deploying our model online improves 3D trajectory tracking by nearly \(36\,\%\) on average (and vertical tracking by \(56\,\%\)).
## I Introduction
Multi-robot tasks often require aerial robots to fly in close proximity to each other. Such situations occur during collaborative mapping and exploration missions, which may require the robots to navigate constricted areas [1, 2], or when the task is constrained in a more limited workspace from the outset (ex. indoors) [3]. In some cases, such as aerial docking and payload transport [4, 5, 6], a close approach to another multirotor is indeed intended. The aerodynamic interference from other vehicles in all these cases is an additional risk and constraint for motion planners.
While it is possible to extract computational fluid models for multirotors that capture aerodynamic interactions over the entire state-space of the problem, such high-fidelity models [7, 8] are often too expensive and restrictive (computational time and run-time memory), or simply unnecessary (dynamically transitioning flight modes). To enable complex and fluid flight missions, we require a fast and accurate model of these exogenic forces that can facilitate onboard controllers robust to these disturbances.
In this work, we present a novel learning-based approach for estimating the downwash forces produced by a single multirotor. Unlike previous learning-based approaches, our _equivariant downwash model_ makes assumptions on the geometry present in the underlying downwash function. To encode these assumptions in our model, we extract _invariant geometric features_ from the input data, which decreases the dimensionality of the learning problem. Whereas traditional machine learning algorithms often require large amounts of training data to accurately learn the underlying function [9], our geometry-aware algorithm is sample-efficient.
We train the equivariant downwash model on real-world flight data collected by two multirotors using a baseline model-based controller (a linear quadratic regulator, LQR). When deployed online within the controller, our model achieves state-of-the-art (SOTA) performance on a variety of challenging experiments. To our knowledge, the equivariant downwash model is the first learning-based approach to uncover consistent patterns in the downwash in the lateral plane. Further, we empirically validate that our model is more sample-efficient than SOTA learning-based approaches.
### _Related Work_
**Multi-UAS Flights.** Despite recent and increasing interest in the problem of tight formations in aerial swarms [10], there is a dearth of work that attempts to _deploy_ teams in close proximity. Downwash-induced forces, and the resulting deviations from planned trajectories, can be trivially minimized by enforcing hard inter-agent separation constraints that are large enough for simplified motion planners [11]. Such approaches severely limit achievable swarm density by naively excluding navigable airspace, and worse, may fail their intended mission objectives due to the chaotic and directional nature of both single- and multi-agent aerodynamic interactions [12, 13].
Recent work has combined physics-based nominal dynamics models with deep neural networks (DNN) to predict exogenous forces from ground effect [14, 15] or from neighboring multirotors [16, 17, 15]. The predicted forces are then counteracted within an interaction-aware controller. We consider this model the SOTA for learning-based downwash prediction. However, these works model only the vertical downwash forces and do not exploit the problem geometry; the latter means that they suffer from comparatively low sample efficiency (see Section V).
**Geometric Deep Learning.** At its core, geometric deep learning involves imposing inductive biases (called "geometric priors") on the learning algorithm via one's knowledge of the problem geometry. As we will discuss in Section III, these geometric priors are represented using assumptions of
invariance and equivariance on the underlying function being learned [18, 19, 20]. The purpose of geometric priors is that they intuitively correspond to "parameter sharing," and thus have been shown to improve sample efficiency across various applications [21, 22, 23, 24].
Within the field of robotics, equivariant reinforcement learning has been employed for low-level quadrotor control [25] and robot manipulation tasks [26, 27]. In particular, recent research has developed equivariant deep-\(Q\) learning and soft actor-critic algorithms [23, 28] capable of learning complex manipulation tasks (ex. grasping, picking up and pushing blocks) using only a couple of hours of training data [26, 27]. These learning algorithms were performed entirely on physical robots (i.e. on-robot learning). To our knowledge, there has been no previous work that utilizes geometric priors to efficiently learn multirotor downwash forces.
### _Contributions_
The key contributions of our work are as follows:
1. We propose an equivariant model for multirotor downwash that makes assumptions on the downwash field geometry. This geometry-aware model represents data in a lower-dimensional space in order to satisfy the assumed rotational equivariance of our system.
2. We provide real-world experimental results that showcase the sample efficiency of our equivariant downwash model. Using only \(5\) minutes of flight data, we learn the downwash function with greater accuracy than SOTA learning-based approaches do with \(15\) minutes of data.
3. When deployed online within an optimal feedback controller, our model's predictions reduce vertical tracking errors by \(56\,\%\) and lateral tracking errors by \(36\,\%\).
## II Problem Formulation
Throughout the paper, we consider two identical multirotor vehicles, referred to as _Alpha_ (\(\mathcal{A}\)) and _Bravo_ (\(\mathcal{B}\)), operating in close proximity of one another. They have similar estimation and control stacks onboard, with the only difference being in their reference states/trajectories as well as the additional force correction terms. We will assume _Alpha_ is a "leader" aircraft, while _Bravo_ is a "follower" that suffers under the propeller downwash generated by _Alpha_.
**Notation.** We use \(\mathsf{SO}(n)\) to refer to the special orthogonal group. The elements in \(\mathsf{SO}(2)\) and \(\mathsf{SO}(3)\) represent two- and three-dimensional rotations about a point and a line, respectively. Unless explicitly specified, we will assume that all frames follow the North-East-Down (NED) convention with a right-hand chirality. We will let \(\mathcal{A}=\{a_{1},a_{2},a_{3}\}\) denote the body frame of _Alpha_ and \(\mathcal{M}=\{\hat{e}_{1},\hat{e}_{2},\hat{e}_{3}\}\) denote the inertial frame. The matrix \(R^{\mathcal{C}}_{\mathcal{M}}\in\mathsf{SO}(3)\) will denote the change-of-basis transformation from the inertial frame \(\mathcal{M}\) to a body frame \(\mathcal{C}\). Each \(R^{\mathcal{C}}_{\mathcal{M}}\) is a unitary transformation, meaning that it satisfies \((R^{\mathcal{C}}_{\mathcal{M}})^{\top}=(R^{\mathcal{C}}_{\mathcal{M}})^{-1}\).
Additionally, we will use the notation \(\mathbf{x}\) for vectors, and \(\boldsymbol{x}\) for vector-valued functions of time. For an \(n\)-dimensional vector \(\mathbf{x}\), we will let \([\mathbf{x}]_{i}\) denote its \(i\)th component. All vectors are assumed to be in the inertial frame, \(\mathcal{M}\). The position and velocity vectors corresponding to _Alpha_ and _Bravo_ will be written with the superscripts \(\mathcal{A}\) and \(\mathcal{B}\), respectively. We will abbreviate sine and cosine functions as \(s(\cdot)\) and \(c(\cdot)\).
Lastly, for the frame of the "leader," _Alpha_, we define the subspaces \(\mathcal{S}_{\mathcal{A}}\equiv\text{span}\{a_{1},a_{2}\}\) and \(\mathcal{S}^{\perp}_{\mathcal{A}}=\text{span}\{a_{3}\}\). The operator \(\text{proj}_{S}:\mathbb{R}^{3}\to\mathbb{R}^{3}\) maps each vector in \(\mathbb{R}^{3}\) to its orthogonal projection in subspace \(\mathcal{S}\).
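As a small illustration of this notation, a sketch of the two projections given the change-of-basis matrix \(R_{\mathcal{M}}^{\mathcal{A}}\); NumPy is used here purely for illustration:

```python
import numpy as np

def proj_lateral(w, R_MA):
    """Orthogonal projection of w onto S_A = span{a1, a2}, returned in the inertial frame."""
    w_body = R_MA @ w                                  # express w in Alpha's body frame
    w_body = np.array([w_body[0], w_body[1], 0.0])     # keep only the a1, a2 components
    return R_MA.T @ w_body

def proj_vertical(w, R_MA):
    """Orthogonal projection of w onto S_A^perp = span{a3}."""
    return w - proj_lateral(w, R_MA)
```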
**Multirotor Dynamics/Control.** We model a multirotor as a rigid body \(\mathcal{C}\) with six degrees of freedom with mass \(m\), and dynamics in an inertial NED frame governed by
\[m\mathbf{a}=-R^{\mathcal{C}}_{\mathcal{M}}T+\hat{e}_{3}mg, \tag{1}\]
where \(T\) is the collective thrust produced by the rotors and \(g\) is the acceleration due to gravity. The matrix \(R^{\mathcal{C}}_{\mathcal{M}}\) is composed from the Euler roll (\(\phi\)), pitch (\(\theta\)) and yaw (\(\psi\)) angles of the body in Z-Y-X rotation order. A nominal controller for this system generates the control targets \(\boldsymbol{u}=[\phi,\theta,\dot{\psi},T]^{\top}\) using a non-linear inversion map on (1) to affect a desired acceleration, \(\mathbf{a}\in\mathbb{R}^{3}\). This allows us to write the system of equations in a linear form,
\[\dot{\boldsymbol{x}}=A\boldsymbol{x}+B\boldsymbol{u},\text{ and, }y=C \boldsymbol{x} \tag{2}\]
with \(\boldsymbol{x}(t)=[p_{n},p_{e},p_{d},v_{n},v_{e},v_{d},\psi]^{\top}\equiv[\boldsymbol{p},\boldsymbol{v},\psi]^{\top}\) representing the state vector, \(\boldsymbol{u}(t)=[a_{n},a_{e},a_{d},\dot{\psi}]^{\top}\equiv[\mathbf{a},\dot{\psi}]^{\top}\) the feedback-linearized control input, and
\[A=\begin{pmatrix}0_{3\times 3}&\mathbb{I}_{3\times 3}&0_{3\times 1}\\ 0_{3\times 3}&0_{3\times 3}&0_{3\times 1}\\ 0_{1\times 3}&0_{1\times 3}&0\end{pmatrix},B=\begin{pmatrix}0_{3\times 3}&0_{3 \times 1}\\ \mathbb{I}_{3\times 3}&0_{3\times 1}\\ 0_{1\times 3}&1\end{pmatrix},C=\mathbb{I}_{7\times 7}.\]
Since \((A,B)\) is controllable, it is then straightforward to derive an optimal stabilizing control law, \(\boldsymbol{u}(t)=-K(\boldsymbol{x}(t)-\boldsymbol{x}_{\boldsymbol{r}}(t))\), that regulates this second-order plant to a reference state \(\boldsymbol{x}_{\boldsymbol{r}}\). The gain matrix, \(K\), is designed with a linear quadratic regulator (LQR) to produce a high gain margin.
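For illustration, a minimal sketch of this design for the feedback-linearised model in (2); the LQR weights below are assumptions, since the tuning used on the vehicles is not listed:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# State x = [p, v, psi] in R^7, input u = [a, psi_dot] in R^4, as in (2).
A = np.zeros((7, 7))
A[0:3, 3:6] = np.eye(3)
B = np.zeros((7, 4))
B[3:6, 0:3] = np.eye(3)
B[6, 3] = 1.0

Qw = np.diag([10.0, 10.0, 10.0, 1.0, 1.0, 1.0, 5.0])   # illustrative state weights
Rw = np.eye(4)                                          # illustrative input weights

P = solve_continuous_are(A, B, Qw, Rw)
K = np.linalg.solve(Rw, B.T @ P)                        # u(t) = -K (x(t) - x_r(t))
```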
**Downwash Model.** In this work, we model the aerodynamic downwash effects, \(\mathbf{f}_{\text{ext}}\in\mathbb{R}^{3}\), experienced by a multirotor as additive exogenic forces (or equivalently, accelerations) acting on (2). We assume that these forces can be written as \(\mathbf{f}_{\text{ext}}\equiv\mathbf{f}_{\text{ext}}(\mathbf{x})\), where \(\mathbf{x}=[\mathbf{p}^{\mathcal{A}},\mathbf{p}^{\mathcal{B}},\mathbf{v}^{\mathcal{A}},\mathbf{v}^{\mathcal{B}}]\) contains the instantaneous state information of _Alpha_ and _Bravo_. The second-order model described above abstracts the torques produced by per-motor thrust differentials and delegates the regulation of angular states to a well-tuned low-level autopilot. This method of successive loop-closure [29] allows us to model the short-term torque dynamics induced by aerodynamic interactions as collective forces, thereby generalizing the method to other types of aircraft. Hence, (2) can now be rewritten as \(\dot{\boldsymbol{x}}=A\boldsymbol{x}+B\boldsymbol{u}+B\mathbf{f}_{\text{ext}}=A\boldsymbol{x}+B(\boldsymbol{u}+\mathbf{f}_{\text{ext}})\). Since our control, \(\boldsymbol{u}\), is designed with very high gain margins, we can use this linear separability to adapt the feedback control to compensate for this effect as \(\boldsymbol{u}_{\mathbf{f}}=\boldsymbol{u}-\mathbf{f}_{\text{ext}}(\mathbf{x})\). We note that, in the general case where the controller's stability margin may be narrow, this compensation can be incorporated through appropriate constraint-based methods (e.g. model predictive control).
**Problem.** Our objective is to learn a sample-efficient model that predicts \(\mathbf{f}_{\text{ext}}(\mathbf{x})\) such that a closed-loop controller can compensate for predicted exogenic disturbances online.
## III Establishing Geometric Priors
In order to efficiently and accurately model the downwash forces experienced by _Bravo_, we first make assumptions on the geometry present in \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\). Our assumptions are formalized using the group-theoretic definitions of invariance and equivariance.
### _Geometric Invariance and Equivariance_
The geometric properties of functions are described in terms of group actions. To be specific, let \(G\) be a group and \(\mathcal{X}\) be a set. The _action_ of group \(G\) on set \(\mathcal{X}\) is a mapping \(\star:G\times\mathcal{X}\to\mathcal{X}\) which associates with each group element and set element a corresponding set element. The group action \(\star\) must satisfy certain properties [18, 19]. In this case, we say that "\(G\) acts on \(\mathcal{X}\) according to \(\star\)."
For instance, if \(G=\mathsf{SO}(2)\) and \(\mathcal{X}=\mathbb{R}^{2}\), then \(G\) can act on \(\mathcal{X}\) according to matrix multiplication: \(G_{\omega}\star\mathbf{w}=G_{\omega}\mathbf{w}\), where \(G_{\omega}\in\mathsf{SO}(2)\) is the rotation matrix corresponding to the angle \(\omega\in[0,2\pi)\) and \(\mathbf{w}\in\mathbb{R}^{2}\) is an arbitrary vector.1
Footnote 1: This group action is technically a group representation: an action of \(G\) on a vector space by invertible linear transformations.
Using group actions, one can define _invariant_ and _equivariant_ functions:
**Definition 1** (Invariance, Equivariance).: _Let \(G\) be a group and \(\mathcal{X},\mathcal{Y}\) be two sets. Suppose that \(G\) acts on \(\mathcal{X}\) according to \(\star_{1}\) and on \(\mathcal{Y}\) according to \(\star_{2}\)._
_A function \(f:\mathcal{X}\to\mathcal{Y}\) is **invariant** with respect to \(\star_{1}\) if it satisfies_
\[f(x)=f(g\star_{1}x),\quad\forall x\in\mathcal{X},\forall g\in G.\]
\(f\) _is **equivariant** with respect to \(\star_{1}\) and \(\star_{2}\) if it satisfies_
\[g\star_{2}f(x)=f(g\star_{1}x),\quad\forall x\in\mathcal{X},\forall g\in G.\]
Intuitively, invariance states that the output of \(f\) should be preserved regardless of whether or not \(g\in G\) acts on the input. Equivariance, on the other hand, states that \(g\in G\) acting on the input \(x\) according to \(\star_{1}\) is equivalent to \(g\) acting on the output of \(f\), \(f(x)\), according to \(\star_{2}\).
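These definitions can be made concrete with a toy numerical check for \(G=\mathsf{SO}(2)\) acting on \(\mathbb{R}^{2}\) by matrix multiplication; the example functions are ours, chosen only to illustrate the two properties:

```python
import numpy as np

def rot2(omega):
    """An element of SO(2), acting on R^2 by matrix multiplication."""
    return np.array([[np.cos(omega), -np.sin(omega)],
                     [np.sin(omega),  np.cos(omega)]])

f_invariant = lambda w: np.linalg.norm(w)      # depends only on the norm of w
f_equivariant = lambda w: 2.0 * w              # commutes with rotations

rng = np.random.default_rng(1)
w_vec, omega = rng.normal(size=2), 0.7
G = rot2(omega)

assert np.isclose(f_invariant(G @ w_vec), f_invariant(w_vec))            # invariance
assert np.allclose(G @ f_equivariant(w_vec), f_equivariant(G @ w_vec))   # equivariance
```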
### _Geometric Assumptions on \(\mathbf{f}_{\mathrm{ext}}\)_
Now that we have detailed the geometric properties a function may have, we consider the particular structure of the interaction forces \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\) that _Alpha_ exerts on _Bravo_.
Foremost, we know that \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\) should not depend on positional shifts in the input space:
**Assumption 1** (Translation Invariance).: _Define the group \(\mathbb{T}\) consisting of all translations in \(\mathbb{R}^{3}\). \(\mathbb{T}\) is isomorphic to \(\mathbb{R}^{3}\). We assume \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\) is invariant with respect to the group action \(\mathbf{t}\star_{1}\mathbf{x}=[\mathbf{t}+\mathbf{p}^{\mathcal{A}},\mathbf{t} +\mathbf{p}^{\mathcal{B}},\mathbf{v}^{\mathcal{A}},\mathbf{v}^{\mathcal{B}}]\) for \(\mathbf{t}\in\mathbb{T}\)._
Equivalently, Assumption 1 states that \(\mathbf{f}_{\mathrm{ext}}\) must be a function of \(\Delta\mathbf{p}\), \(\mathbf{v}^{\mathcal{A}}\), and \(\mathbf{v}^{\mathcal{B}}\) only. From here on, we will redefine \(\mathbf{x}\equiv[\Delta\mathbf{p}\), \(\mathbf{v}^{\mathcal{A}}\), \(\mathbf{v}^{\mathcal{B}}]\in\mathbb{R}^{9}\). Translation invariance is standard in downwash models [16, 15, 17].
However, beyond translation invariance, previous downwash models have failed to consider the geometry present in \(\mathbf{f}_{\mathrm{ext}}\). In particular, in many flight regimes, it is reasonable to assume that once the downward direction \(a_{3}\) of the "leader" _Alpha_ is fixed, how one defines the north and east directions is arbitrary. This is the subject of the following assumption:
**Assumption 2** (Rotational Equivariance).: _Define the group \(H_{\mathcal{A}}\leq\mathsf{SO}(3)\) containing all rotations that fix \(a_{3}\), the down direction in the body frame of Alpha:_
\[H_{\mathcal{A}}\!=\!\big{\{}H\in\mathsf{SO}(3)\mid\text{proj}_{\mathcal{S}^{ \perp}_{\mathcal{A}}}\left(H\mathbf{w}\right)\!=\!\text{proj}_{\mathcal{S}^{ \perp}_{\mathcal{A}}}\left(\mathbf{w}\right)\!,\!\forall\mathbf{w}\in\mathbb{R }^{3}\big{\}}. \tag{3}\]
\(H_{\mathcal{A}}\) _is isomorphic to the two-dimensional rotation group, \(\mathsf{SO}(2)\). Define the action of \(H_{\mathcal{A}}\) on the input space by \(H\star_{1}\)_\(\mathbf{x}=[H\Delta\mathbf{p},\mathbf{v}^{\mathcal{A}},H\mathbf{v}^{\mathcal{B}}]\) _and on the output space by \(H\star_{2}\mathbf{w}=H\mathbf{w},\mathbf{w}\in\mathbb{R}^{3}\) for \(H\in H_{\mathcal{A}}\). Then we assume \(\mathbf{f}_{\mathrm{ext}}=\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\) is equivariant with respect to these group actions._
This rotational equivariance assumption is illustrated in Figure 1. Intuitively, Assumption 2 states that in the frame of the leader vehicle _Alpha_, rotating the relative position vector and the velocity vector of _Bravo_ in the \(\{a_{1},a_{2}\}\) axes by an angle of \(\omega\in[0,2\pi)\) is equivalent to rotating the force vector \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\) of _Bravo_ by the same angle \(\omega\).
We clarify that the true downwash function will not necessarily satisfy Assumption 2 in all cases. This is the case when _Alpha_'s rotor speeds are highly asymmetric (ex. during aggressive maneuvering) [13]. However, as we will demonstrate in Section V, imposing this geometric prior on the learning algorithm results in significant improvements in sample efficiency without incurring a large bias. This result is in line with recent research [21] suggesting that even when the assumed geometric priors do not exactly match the underlying symmetry (i.e. "approximate" equivariances), they can still yield gains in sample efficiency while outperforming non-equivariant models.
## IV Geometry-Aware Learning
Now that we have stated our assumptions on \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\), we encode them as geometric priors in our learning algorithm.
Fig. 1: An illustration of Assumption 2 on the downwash function \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\). On the left, we provide two combinations of \((\Delta\mathbf{p},\mathbf{v}^{\mathcal{B}})\) that are related under the rotational equivariance property.
### _Rotationally Equivariant Model_
In order to present our model for \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\), we first define a feature mapping \(h:\mathbb{R}^{9}\rightarrow\mathbb{R}^{6}\)
\[h(\mathbf{x})=\bigg{(}\frac{\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\Delta\mathbf{p})^{\top}\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\mathbf{v}^{\mathcal{B}})}{\|\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\Delta\mathbf{p})\|_{2}\|\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\mathbf{v}^{\mathcal{B}})\|_{2}},\;\|\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\Delta\mathbf{p})\|_{2},\;\|\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\mathbf{v}^{\mathcal{B}})\|_{2},\;\|\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\mathbf{v}^{\mathcal{A}})\|_{2},\;\big{[}R_{\mathcal{M}}^{\mathcal{A}}\Delta\mathbf{p}\big{]}_{3},\;\big{[}R_{\mathcal{M}}^{\mathcal{A}}\mathbf{v}^{\mathcal{B}}\big{]}_{3}\bigg{)} \tag{4}\]
The mapping \(\mathbf{x}\mapsto h(\mathbf{x})\) transforms each input vector \(\mathbf{x}\) in Euclidean space into an invariant representation with respect to the action of \(H_{\mathcal{A}}\). It does so by separating each of the inputs \(\Delta\mathbf{p}\), \(\mathbf{v}^{\mathcal{A}}\), and \(\mathbf{v}^{\mathcal{B}}\) into their components in the subspaces \(\mathcal{S}_{\mathcal{A}}\) and \(\mathcal{S}_{\mathcal{A}}^{\perp}\).
In particular, because the components of \(\Delta\mathbf{p}\) and \(\mathbf{v}^{\mathcal{B}}\) contained in \(\mathcal{S}_{\mathcal{A}}^{\perp}\), \(\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}^{\perp}}(\Delta\mathbf{p})\) and \(\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}^{\perp}}(\mathbf{v}^{\mathcal{B}})\), are unaffected by the action of \(H_{\mathcal{A}}\), then our model has the freedom to operate on them arbitrarily. These components can be rewritten in the frame of _Alpha_ as \(\big{[}R_{\mathcal{M}}^{\mathcal{A}}\Delta\mathbf{p}\big{]}_{3}\) and \(\big{[}R_{\mathcal{M}}^{\mathcal{A}}\mathbf{v}^{\mathcal{B}}\big{]}_{3}\). The components contained in \(\mathcal{S}_{\mathcal{A}}\), on the other hand, \(\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\Delta\mathbf{p})\) and \(\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\mathbf{v}^{\mathcal{B}})\), are affected by the action \(\star_{1}\) of \(H_{\mathcal{A}}\). Therefore, we only consider the magnitudes of these vectors as well as the angles between them [30, 31]. Formal verifications of these statements are given in the proof of Theorem 1.
While the feature mapping (4) we proposed is invariant with respect to the action of \(H_{\mathcal{A}}\), we ultimately want our model for \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\) to be equivariant with respect to \(\star_{1}\) and \(\star_{2}\). We achieve this by taking into account the polar angle that \(\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\Delta\mathbf{p})\) forms with the positive \(a_{1}\) axis in the subspace \(\mathcal{S}_{\mathcal{A}}\), which we denote by \(\varphi(\mathbf{x})\in[0,2\pi)\).
Now, for any neural network function \(f_{\Theta}:\mathbb{R}^{6}\rightarrow\mathbb{R}^{2}\) with parameters \(\Theta\), we will approximate the downwash forces \(\mathbf{f}_{\mathrm{ext}}\) felt by _Bravo_ as \(F_{\Theta}:\mathbb{R}^{9}\rightarrow\mathbb{R}^{3}\):
\[F_{\Theta}(\mathbf{x})\!=\!(R_{\mathcal{M}}^{\mathcal{A}})^{\top}\!\bigg{(}[f_{\Theta}(h(\mathbf{x}))]_{1}\cdot[c(\varphi(\mathbf{x})),s(\varphi(\mathbf{x}))],[f_{\Theta}(h(\mathbf{x}))]_{2}\bigg{)}. \tag{5}\]
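A minimal PyTorch sketch of (4)-(5). The hidden width, the ordering of the invariant features, and the small constant guarding the normalised dot product are illustrative assumptions rather than the authors' exact choices:

```python
import torch

class EquivariantDownwash(torch.nn.Module):
    """Invariant features feed a small MLP whose two outputs are mapped back to a 3D force:
    one component along the lateral direction of Delta p, one along Alpha's vertical axis."""

    def __init__(self, hidden=32):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(6, hidden), torch.nn.ReLU(), torch.nn.Linear(hidden, 2))

    def forward(self, dp, vA, vB, R_MA):
        # Express all vectors in Alpha's body frame; the first two components span S_A.
        dp_a, vA_a, vB_a = R_MA @ dp, R_MA @ vA, R_MA @ vB
        dp_lat, vB_lat, vA_lat = dp_a[:2], vB_a[:2], vA_a[:2]
        eps = 1e-9                                    # guards against division by zero
        cos_ang = dp_lat @ vB_lat / (dp_lat.norm() * vB_lat.norm() + eps)
        h = torch.stack([cos_ang, dp_lat.norm(), vB_lat.norm(),
                         vA_lat.norm(), dp_a[2], vB_a[2]])
        f = self.mlp(h)                               # [lateral magnitude, vertical component]
        phi = torch.atan2(dp_lat[1], dp_lat[0])       # polar angle of proj_{S_A}(Delta p)
        out_body = torch.stack([f[0] * torch.cos(phi), f[0] * torch.sin(phi), f[1]])
        return R_MA.T @ out_body                      # rotate back to the inertial frame
```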
### _Proof of Equivariance_
**Theorem 1**.: _The model \(F_{\Theta}(\mathbf{x})\) proposed in (5) for \(\mathbf{f}_{\mathrm{ext}}(\mathbf{x})\) satisfies Assumption 2._
Proof.: By Definition 1 of equivariance, we need to prove that for each \(H\in H_{\mathcal{A}}\),
\[H\star_{2}F_{\Theta}(\mathbf{x})=F_{\Theta}(H\star_{1}\mathbf{x}),\]
where \(\star_{1}\) and \(\star_{2}\) are the group actions in Assumption 2.
First, we point out the fact that
\[H_{\mathcal{A}}=\left\{(R_{\mathcal{M}}^{\mathcal{A}})^{\top} \begin{pmatrix}c(\omega)&-s(\omega)&0\\ s(\omega)&c(\omega)&0\\ 0&0&1\end{pmatrix}R_{\mathcal{M}}^{\mathcal{A}}\ \bigg{|}\ \omega\in[0,2\pi)\right\}. \tag{6}\]
In other words, each \(H\in H_{\mathcal{A}}\) can be parameterized by the angle of rotation \(\omega\in[0,2\pi)\) about the axis \(a_{3}\). Let \(H=(R_{\mathcal{M}}^{\mathcal{A}})^{\top}\Omega R_{\mathcal{M}}^{\mathcal{A}}\), where \(\Omega\) is the rotation matrix in (6).
As we previously discussed, we will first show that the feature mapping (4) is _invariant_ to the action of \(H_{\mathcal{A}}\) on the input space, \(\star_{1}\). Since \(H\) is a rotation which fixes \(a_{3}\), then
\[\big{[}R_{\mathcal{M}}^{\mathcal{A}}H\mathbf{w}\big{]}_{3}=\big{[}\Omega R_{ \mathcal{M}}^{\mathcal{A}}\mathbf{w}\big{]}_{3}=\big{[}R_{\mathcal{M}}^{ \mathcal{A}}\mathbf{w}\big{]}_{3},\ \forall\mathbf{w}\in\mathbb{R}^{3}.\]
Also, notice that
\[R_{\mathcal{M}}^{\mathcal{A}}\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(\mathbf{w}) =\begin{pmatrix}\mathbb{I}_{2\times 2}&0\\ 0&0\end{pmatrix}R_{\mathcal{M}}^{\mathcal{A}}\mathbf{w}.\]
But for any vector \(\mathbf{w}\in\mathbb{R}^{3}\),
\[R_{\mathcal{M}}^{\mathcal{A}}\mathrm{proj}_{\mathcal{S}_{\mathcal{A} }}(H\mathbf{w}) =\begin{pmatrix}\mathbb{I}_{2\times 2}&0\\ 0&0\end{pmatrix}R_{\mathcal{M}}^{\mathcal{A}}H\mathbf{w}\] \[=\Omega R_{\mathcal{M}}^{\mathcal{A}}\mathrm{proj}_{\mathcal{A}}( \mathbf{w}).\]
Since \(\Omega\) and \(R_{\mathcal{M}}^{\mathcal{A}}\) are unitary, and the norm and dot product are preserved under unitary transformations, the previous two results imply \(h(H\star_{1}\mathbf{x})=h(\mathbf{x})\).
Hence, it only remains to consider the polar angle that \(\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(H\Delta\mathbf{p})\), or equivalently \(R_{\mathcal{M}}^{\mathcal{A}}\mathrm{proj}_{\mathcal{S}_{\mathcal{A}}}(H\Delta \mathbf{p})\), forms with the positive \(a_{1}\) axis in \(S_{\mathcal{A}}\). But \(R_{\mathcal{M}}^{\mathcal{A}}\mathrm{proj}_{\mathcal{A}}(H\Delta\mathbf{p})= \Omega R_{\mathcal{M}}^{\mathcal{A}}\mathrm{proj}_{\mathcal{A}}(\Delta\mathbf{p})\) is just \(R_{\mathcal{M}}^{\mathcal{A}}\mathrm{proj}_{\mathcal{A}}(\Delta\mathbf{p})\) rotated by \(\omega\). Therefore, we know that
\[\varphi(H\star_{1}\mathbf{x})\equiv\varphi(\mathbf{x})+\omega\quad(\text{mod }2\pi).\]
Let \(\Omega_{2\times 2}\in\mathbb{R}^{2\times 2}\) be the submatrix formed by the first two rows and columns of \(\Omega\). Using our established congruence,
\[\Omega_{2\times 2}[c(\varphi(\mathbf{x})),s(\varphi(\mathbf{x}))] =[c(\varphi(\mathbf{x})+\omega),s(\varphi(\mathbf{x})+\omega)]\] \[=[c(\varphi(H\star_{1}\mathbf{x})),s(\varphi(H\star_{1}\mathbf{x}))].\]
Altogether, we conclude that
\[F_{\Theta}(H\star_{1}\mathbf{x})\] \[= (R_{\mathcal{M}}^{\mathcal{A}})^{\top}\!\bigg{(}[f_{\Theta}(h( \mathbf{x}))]_{1}\Omega_{2\times 2}[c(\varphi(\mathbf{x})),s(\varphi(\mathbf{x}))],[f_{\Theta}(h( \mathbf{x}))]_{2}\bigg{)}\] \[= (R_{\mathcal{M}}^{\mathcal{A}})^{\top}\!\Omega\bigg{(}[f_{\Theta}(h( \mathbf{x}))]_{1}\cdot[c(\varphi(\mathbf{x})),s(\varphi(\mathbf{x}))],[f_{ \Theta}(h(\mathbf{x}))]_{2}\bigg{)}\] \[= H(R_{\mathcal{M}}^{\mathcal{A}})^{\top}\!\bigg{(}[f_{\Theta}(h( \mathbf{x}))]_{1}\cdot[c(\varphi(\mathbf{x})),s(\varphi(\mathbf{x}))],[f_{ \Theta}(h(\mathbf{x}))]_{2}\bigg{)}\] \[= H\star_{2}F_{\Theta}(\mathbf{x}).\qed\]
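The statement of Theorem 1 can also be spot-checked numerically with the sketch given after (5), here in the near-hover case \(R_{\mathcal{M}}^{\mathcal{A}}=\mathbb{I}_{3\times 3}\) and for an arbitrary rotation angle:

```python
import math
import torch

model = EquivariantDownwash()                      # sketch from Section IV-A
R_MA = torch.eye(3)                                # Alpha's frame aligned with the inertial frame
dp, vA, vB = torch.randn(3), torch.zeros(3), torch.randn(3)

c, s = math.cos(0.9), math.sin(0.9)                # rotation by 0.9 rad about a3
H = torch.tensor([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])

lhs = H @ model(dp, vA, vB, R_MA)                  # H *_2 F_Theta(x)
rhs = model(H @ dp, vA, H @ vB, R_MA)              # F_Theta(H *_1 x)
assert torch.allclose(lhs, rhs, atol=1e-5)
```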
### _Shallow Learning_
For our training pipeline, we collect time-stamped state and control information from real-world flights with _Alpha_ and _Bravo_, and compute the input data points \(\mathbf{x}\) offline. The labels that our model (5) learns to approximate are obtained from the feedback control equation (2), \(\mathbf{f}_{\mathrm{ext}}=\mathbf{a}-\boldsymbol{u}(t)\).
We justify our choice of a shallow neural network architecture via the \(\mathsf{SO}(2)\) invariant feature mapping (4). For a neural network trained only on the raw input data \(\mathbf{x}\), the model itself would be responsible for determining the geometries present in \(\mathbf{f}_{\text{ext}}\). However, for the equivariant model \(F_{\Theta}\), the feature mapping (4) encodes these geometries explicitly.
Since our equivariant model is not responsible for learning the geometry of \(\mathbf{f}_{\text{ext}}\), we can reduce the complexity of \(f_{\Theta}\) without sacrificing validation performance. We verify this claim empirically in Section V-B.
## V Real-world Flight Experiments
We conduct studies with our training procedure and present evaluations from real-world flight experiments. All tests are conducted indoors under partially controlled environments to limit the side-effects of external factors.
Consequently, we will consider the special case of the model (5) in which \(R_{\mathcal{M}}^{\mathcal{A}}=\mathbb{I}_{3\times 3}\). In other words, we assume _Alpha_'s frame is a translation of the inertial frame. This special case is realized when _Alpha_ is hovering or its instantaneous state is close to hovering. Since the leader's velocity is nearly zero, we omit the component \(\|\text{proj}_{\mathcal{S}_{\mathcal{A}}}(\mathbf{v}^{\mathcal{A}})\|_{2}\) from the feature mapping (4).
### _Sequential Data Collection_
In order to train our model, we first need to collect a dataset of real-world flights. This is a difficult task, since without a compensation model _a priori_, physical and control limitations will prevent the vehicles from flying in close proximity. To solve this problem, we adopt a sequential approach similar to [16] that splits data collection into different 'stages'. In stage-0, we fly the vehicles with a relatively large vertical separation, \(1.75\,\mathrm{m}\) to \(1.35\,\mathrm{m}\), so that the forces acting on _Bravo_ can be compensated for by a disturbance-rejecting nominal controller.
This controller has no prior knowledge of the exogenic forces, so it simply relies on its feedback control input, \(\mathbf{u}(t)\), to track the desired reference trajectories according to (2). However, in regions of strong downwash forces, the measured accelerations, \(\mathbf{a}\) will not correspond to the desired acceleration input, \(\mathbf{u}\). These deviations become the computed force labels (represented in mass-normalized acceleration units) for this stage.
After training a stage-0 model using (5), we deploy it online within the control loop such that \(\mathbf{u}_{\mathbf{f}}\) now provides the feedback regulation. As a result, we can now decrease the separation between the vehicles and obtain a new stage-1 dataset along with its labels (the residual disturbances our stage-0 model is unable to account for). The stage-1 dataset is concatenated with the stage-0 dataset and used to train the stage-1 model.
We repeat this process for a total of three stages, so that at stage-2 the vehicles have a vertical separation of only \(\approx 0.5\,\mathrm{m}\) (approx. two body-lengths). Our full training dataset corresponds to approximately \(17\) minutes of flight data. Note that we can always extract the correct force labels at stage-i by logging the model predictions \(F_{\Theta}(\mathbf{x})\) from stage-(i-1) and subtracting these from the control.
### _Study: Model Training_
We first study the effect of geometric priors on the learning algorithm for modelling downwash forces \(\mathbf{f}_{\text{ext}}(\mathbf{x})\).
**Non-equivariant Baselines.** In order to analyze the effect of our geometric priors, we must first introduce two models
Fig. 3: _Downward Force Predictions._ Downward force predictions [\(\mathrm{m}/\mathrm{s}^{2}\)] made by the equivariant model (top) and deep non-equivariant model (bottom). On the left (top-down view), _Alpha_ is hovering \(1\,\mathrm{m}\) above _Bravo_ at \((\hat{e}_{1},\hat{e}_{2})=(0,0)\). On the right (sagittal view), _Alpha_ is hovering \(0.1\,\mathrm{m}\) east of _Bravo_ at \((\hat{e}_{1},\hat{e}_{3})=(0,0)\). In each plot, _Bravo_ is moving with velocity \(\mathbf{v}^{\mathcal{B}}=[0.5,0,0]^{\top}\).
Fig. 2: _Sample Efficiency and Accuracy._ Top: A visualization of the validation RMSE of the equivariant and non-equivariant models as a function of the training flight time. For each training time, we compute the average validation RMSE across \(5\) trials. Bottom: Summary statistics for the equivariant and non-equivariant models. Position and velocity tracking errors are reported for models trained on the full training dataset.
to which we can compare our \(\mathsf{SO}(2)\)-equivariant model (5). These "non-equivariant" models should not exploit the known geometry of \(\mathbf{f}_{\mathrm{ext}}\) delineated in Assumption 2.
The first non-equivariant model we propose has the same architecture as the shallow neural network (7), with the exception that the input to the network is \([\Delta\mathbf{p},\mathbf{v}^{\mathcal{B}}]\in\mathbb{R}^{6}\) rather than invariant feature representation \(h(\mathbf{x})\). Note that \(\mathbf{v}^{\mathcal{A}}\) is not included because of the near-hover assumption that we specified at the beginning of the section.
We also compare our equivariant model against the SOTA eight-layer, non-equivariant neural network discussed in Section I [16, 15]. During training, we bound the singular values of the weight matrices to be at most \(2\). This normalization technique, called "spectral normalization," constrains the Lipschitz constant of the neural network [16, 15].
**Efficiency of Geometric Learning.** As we suggested in Section I, the primary benefit of imposing geometric priors on a learning algorithm is that they have been empirically shown to improve sample efficiency [21, 22, 23, 24].
We investigate the sample efficiency of our \(\mathsf{SO}(2)\)-equivariant downwash model by considering the validation root mean-squared error (RMSE) as a function of the length of our training flights. We shorten the full training dataset by shortening each stage of data collection proportionally (ex. a total training time of three minutes corresponds to one minute of flight for each stage). Our validation dataset is roughly equal in size to the full training dataset.
In Figure 2, we observe that although the validation loss of the shallow non-equivariant network plateaus after approximately \(10\) minutes of training flight data, it cannot represent the downward aerodynamic forces as accurately as the other models (i.e. greater bias). Conversely, while the deep non-equivariant network accurately learns the downward forces, it requires much more training data to do so (i.e. lower sample efficiency). Neither non-equivariant model learns the lateral forces as accurately as the equivariant model.
Our equivariant model (5), on the other hand, displays both high sample efficiency and low bias. With only \(5\) minutes of flight data, it learns the lateral and downward forces more accurately than both non-equivariant models do with \(15\) minutes of data.
**Visualizing Downwash Predictions.** In Figure 3, we visualize the force predictions that our equivariant model \(F_{\Theta}(\mathbf{x})\) makes in the \(\hat{e}_{3}\) direction. When _Bravo_ passes through the downwash region of _Alpha_, there is a highly repeatable pattern in which it is first subjected to a positive force, which pushes it towards the ground, followed by a negative force, which pulls it upwards. The magnitudes of these positive and negative forces are dependent upon _(i) Bravo_'s speed as it passes through the downwash region, and _(ii)_ the distance of _Bravo_ from _Alpha_ in both \(\mathcal{S}_{\mathcal{A}}\) and \(\mathcal{S}_{\mathcal{A}}^{\perp}\). Similar patterns have been documented by previous downwash models [16].
From Figure 3(b), we observe that \(F_{\Theta}\) also uncovers consistent patterns in the lateral axes \(\hat{e}_{1}\) and \(\hat{e}_{2}\). When _Bravo_ translates laterally underneath _Alpha_, it is first pushed radially outwards, then pulled inwards immediately upon passing under _Alpha_, and lastly, pushed radially outwards once it has passed _Alpha_. These inwards forces are strongest when _Bravo_ is traveling at a high speed. In Figure 3(a), we show that the model's predictions are consistent with the observed deviations of _Bravo_ from its trajectory.
We believe that we are the first to demonstrate these lateral force patterns via a machine learning approach.
### _Real-World Experiments_
We evaluate the performance of our trained equivariant model (5) in two real-world experiments, and contrast it against a baseline controller as well as the deep non-equivariant model. Each model is trained on the full training dataset as described in Section V-A.
Our tests use two identical quadrotor platforms that are custom-built using commercial off-the-shelf parts. These span \(0.24\,\mathrm{m}\) on the longest body-diagonal, and weigh \(0.65\,\mathrm{kg}\) (including batteries). Each platform is equipped with a Raspberry Pi 4B (8GB memory) on which we run our control, estimation and model evaluations. The model-based LQG flight control and estimation (2) is performed by _Freyja_ [32], while the neural-network encapsulation is done through PyTorch. The controller and the model evaluations are performed at \(50\,\mathrm{Hz}\) and \(45\,\mathrm{Hz}\), respectively.
Fig. 4: _Lateral Forces._ Force predictions and errors during a transition under _Alpha_ with a vertical separation of \(0.8\,\mathrm{m}\). _Alpha_ is hovering at \((\hat{e}_{1},\hat{e}_{2})=(0,0)\).
Prior to conducting tests with _Bravo_ in motion, we first ensure that a stationary hover under _Alpha_ is stable when the model's predictions are incorporated into _Bravo_'s control loop. This is essential to validate empirically that the predictions made by the model do not induce unbounded oscillations on _Bravo_. The table in Figure 2 lists the quantitative results averaged across all our experiments, compared against baseline methods.
**Lemniscate Trajectory.** We now evaluate the model deployed in a more dynamic scenario where _Bravo_ is commanded to follow a lemniscate trajectory ('figure-eight') under _Alpha_. This exposes the model to many different regions of the state space, while also requiring _Bravo_ to make continuous changes to its accelerations.
Figure 5 shows tracking results from executing one complete period of this trajectory. We observe that deploying our model produces a significant shift in the distribution of both position and velocity errors. Without any model, _Bravo_ loses vertical position tracking (top row) twice as it makes the two passes directly beneath _Alpha_, seen near \(8\,\mathrm{s}\) and \(20\,\mathrm{s}\). These spikes, also noticeable as vertical velocity errors (second row), are "absorbed" due to the predictions made by our equivariant model (as well as the baseline deep non-equivariant model). The equivariant model produces an improvement of nearly \(51\,\mathrm{\char 37}\) in both position and velocity tracking, whereas the non-equivariant model is still able to provide almost \(36\,\mathrm{\char 37}\) and \(46\,\mathrm{\char 37}\) improvement, respectively.
Our model's ability to represent geometric patterns in the lateral plane is also apparent when considering the full 3D errors (third and fourth rows). The non-equivariant model already improves position tracking by \(24\,\mathrm{\char 37}\) (\(0.137\,\mathrm{m}\) from \(0.181\,\mathrm{m}\)) and velocity tracking by nearly \(28\,\mathrm{\char 37}\) (\(0.091\,\mathrm{m}\mathrm{/s}\) from \(0.128\,\mathrm{m}\mathrm{/s}\)). The equivariant model decreases position errors further (down to \(0.113\,\mathrm{m}\), \(37\,\mathrm{\char 37}\) improvement), and also reduces velocity tracking errors (down to \(0.081\,\mathrm{m}\mathrm{/s}\), a \(36\,\mathrm{\char 37}\) improvement).
**Translation Trajectory.** Next, we perform an analysis of _Bravo_'s tracking performance and the model's responses while executing a horizontal transect under _Alpha_. This trajectory is the same as the one shown in Figure 3(a), and is useful because it drives _Bravo_ rapidly through regions of near-zero to peak disturbances.
Figure 6 illustrates key results from one back-and-forth trajectory parallel to the \(\hat{e}_{2}\) axis. \(\hat{e}_{1}=0.2\) is fixed, and _Bravo_ is at a fixed vertical separation of \(0.6\,\mathrm{m}\) with _Alpha_ hovering at \(\mathbf{p}^{\mathcal{A}}=[0,0,-2.5]\). The first row shows the actual trajectory executed by _Bravo_ with our equivariant model deployed (green circles), with an overlay of the force predictions made by the model (solid black arrows). We first point out that the pattern is similar to the one found in Figure 3(a), but the peak errors have decreased significantly.
The distributions of errors shown in the second row demonstrate that the magnitudes of these predictions are also justified. Even though _Bravo_ is not directly underneath _Alpha_ in these tests, it is still well within _Alpha_'s downwash region. Across experiments, we observe a reduction in the mean 3D positioning error to \(0.098\,\mathrm{m}\) (from \(0.154\,\mathrm{m}\)), corresponding to an improvement of almost \(36\,\mathrm{\char 37}\) (the peak error is also reduced similarly). Velocity tracking error also shows a similar trend, with an average improvement of \(34\,\mathrm{\char 37}\). Considering
Figure 5: _Lemniscate._ An evaluation of _Bravo_’s trajectory tracking performance with _Alpha_ hovering at \(\hat{e}_{3}=-2.5\,\mathrm{m}\). The first two rows show the evolution of the position and velocity tracking errors, as well as their distributions. The last two rows show the same statistics in the 3D space. Compared against a baseline controller (without any model) and a non-equivariant model, our model shifts the distribution of full 3D errors in position (\(37\,\mathrm{\char 37}\) reduction in mean error) and velocity (\(36\,\mathrm{\char 37}\) reduction in mean). The non-equivariant model only offers \(24\,\mathrm{\char 37}\) and \(28\,\mathrm{\char 37}\) reductions, respectively.
Figure 6: _Translation._ An evaluation of _Bravo_’s flight performance over a lateral transect beneath _Alpha_. This trajectory is replicated from Figure 3(a). Solid black arrows represent the force predictions made by the equivariant model. Applying these predictions, our approach decreases both the mean position and velocity tracking errors by almost \(36\,\mathrm{\char 37}\) (see Figure 2). The gains are greatest in the \(\hat{e}_{3}\) axis, where the tracking performance is improved by over \(55\,\mathrm{\char 37}\).
only the vertical tracking performance in these tests (not shown in figures), these statistics jump to \(55\,\mathrm{\char 37}\) and \(49\,\mathrm{\char 37}\) (for position and velocity, respectively).
## VI Conclusion
This article proposes a sample-efficient learning-based approach for modelling the downwash forces produced by a multirotor on another. In comparison to previous learning-based approaches that have tackled this problem, we make the additional assumption that the downwash function is rotationally equivariant about the vertical axis of the leader vehicle. This "geometric prior" that we impose on the learning algorithm allows us to first encode our input data into a lower-dimensional space using an invariant feature mapping, before passing it as the input to a neural network.
Through a number of real-world experiments, we demonstrate that the equivariant model outperforms baseline feedback control as well as SOTA learning-based approaches. The advantage of our equivariant model is greatest in regimes where training data is limited.
In the future, we will further explore the potential of our equivariant model through flight regimes with larger force magnitudes. This includes outdoor flights, where the leader and follower can move at sustained greater speeds. Finally, we will consider how to model the geometries present in a multi-vehicle system. One naive approach to modelling multi-vehicle downwash would be to employ our two-vehicle model and sum the individual force contributions produced by each multirotor in the system. However, it may be the case that individual downwash fields interact in highly nonlinear ways, in which case a more complex model of the multi-vehicle geometry would be necessary.
## Acknowledgment
We thank Heedo Woo for his contributions to the construction of the quadrotors and Wolfgang Honig for his clarifications about the sequential data collection in [16].
|
2306.01982
|
Relativistic stochastic mechanics I: Langevin equation from observer's
perspective
|
Two different versions of relativistic Langevin equation in curved spacetime
background are constructed, both are manifestly general covariant. It is argued
that, from the observer's point of view, the version which takes the proper
time of the Brownian particle as evolution parameter contains some conceptual
issues, while the one which makes use of the proper time of the observer is
more physically sound. The two versions of the relativistic Langevin equation
are connected by a reparametrization scheme. In spite of the issues contained
in the first version of the relativistic Langevin equation, it still permits to
extract the physical probability distributions of the Brownian particles, as is
shown by Monte Carlo simulation in the example case of Brownian motion in
$(1+1)$-dimensional Minkowski spacetime.
|
Yifan Cai, Tao Wang, Liu Zhao
|
2023-06-03T02:29:21Z
|
http://arxiv.org/abs/2306.01982v3
|
# Relativistic stochastic mechanics I: Langevin equation from observer's perspective
###### Abstract
Two different versions of relativistic Langevin equation in curved spacetime background are constructed, both are manifestly general covariant. It is argued that, from the observer's point of view, the version which takes the proper time of the Brownian particle as evolution parameter contains some conceptual issues, while the one which makes use of the proper time of the observer is more physically sound. The two versions of the relativistic Langevin equation are connected by a reparametrization scheme. In spite of the issues contained in the first version of the relativistic Langevin equation, it still permits to extract the physical probability distributions of the Brownian particles, as is shown by Monte Carlo simulation in the example case of Brownian motion in \((1+1)\)-dimensional Minkowski spacetime.
## 1 Introduction
General relativity and non-equilibrium statistical physics are two important frontiers of modern theoretical physics. In spite of the significant progresses in their respective fields, the study on the overlap between these two fields remains inactive. However, owing to the development in astrophysics, there are more and more scenarios in which both general relativity and non-equilibrium statistical physics are important. Therefore, it becomes necessary and of utmost importance to take the combination of general relativity and non-equilibrium statistical physics more seriously.
There are two major branches in non-relativistic non-equilibrium statistical physics, i.e. kinetic theory and stochastic mechanics. The study of kinetic theory started from Boltzmann's works, and its relativistic version also has a long history (which can be traced back to Jüttner's works in 1911 [1]). Currently, the framework of relativistic kinetic theory looks fairly complete [2, 3, 4, 5]. In contrast, the study of relativistic stochastic mechanics is still far from being accomplished. Since the relativistic Ornstein-Uhlenbeck process was proposed about 20 years ago [6], there appeared some attempts in relativistic stochastic mechanics [7, 8, 9, 10, 11, 12, 13, 14, 15], mostly in the special relativistic regime. However, apart from Herrmann's [14, 15] and Haba's [16] works, the manifest covariance of stochastic mechanics is typically absent. Some work [11] considered concrete curved spacetime backgrounds without paying particular attention to general covariance. There are also some other works which focus on the covariance of stochastic thermodynamics [17, 18], but those works have nothing to do with relativity.
The random motion of heavy particles began to attract scientific interests in the late 19th and early 20th centuries, as it provides a simple example for the diffusion phenomena. Einstein [19, 20] and Smoluchowski [21] showed that the random motion is closely related to macroscopic environment, however, the microscopic description of the random motion has not been established. Later, Langevin [22] wrote down the first equation of motion for a Brownian particle by his physical intuition, which inspired subsequent explorations about the microscopic mechanisms of Brownian motion. In the 1960-1970s, a series of models [23, 24, 25] were proposed in this direction, which made it clear why the disturbance from the heat reservoir could be viewed as Gaussian noises, and hence a bridge between microscopic mechanical laws and non-equilibrium macroscopic phenomena is preliminarily established in the non-relativistic regime. Since the 1990s, the so-called stochastic thermodynamics based on top of Langevin equation was established [26, 27].
To some extent, the challenge in constructing a covariant Langevin equation arises from an underestimation of the role of the observer. Unlike general relativity, which concentrates mainly on universal, observer-independent laws about spacetime, statistical physics concentrates more on observational or phenomenological aspects, which are bound to be observer dependent. The lack of manifest covariance in some of the works on the relativistic Langevin equation, e.g. [6, 7, 8, 9, 10], stems from the choice of the coordinate time as evolution parameter. As exceptional examples, Herrmann's [14, 15] and Haba's [16] works adopted the proper time of the Brownian particle as evolution parameter, and the corresponding versions of the Langevin equation are indeed manifestly covariant. Nevertheless, the role of the observer is still not sufficiently stressed in those works, and it will become clear that, from the observer's point of view, the proper time of the Brownian particle should not be thought of as an appropriate evolution parameter. The present work aims to improve the situation by reformulating the relativistic Langevin equation from the observer's perspective and taking the observer's proper time as evolution parameter. In this way, we obtain a general relativistic Langevin equation which is both manifestly general covariant and explicitly observer dependent.
This work is Part I of a series of two papers under the same main title "Relativistic stochastic mechanics". Part II will be concentrated on the construction of Fokker-Planck equations associated with the Langevin equations presented here.
This paper is organized as follows. In Sec.2, we first clarify certain conceptual aspects of relativistic mechanics which are otherwise absent in the non-relativistic context. These include an explanation of the role of observers, the choice of time, and the conventions on the space of micro states (SoMS). Sec.3 is devoted to a first attempt at the construction of a general relativistic Langevin equation. To make the discussion self-contained, we start from a brief review of the non-relativistic Langevin equation, and then pay special attention to the form of the damping and additional stochastic forces in the relativistic regime. As the outcome of this analysis, we write down a first candidate for the relativistic Langevin equation [referred to as \(\mathrm{LE}_{\tau}\)], which is manifestly general covariant. It is checked that the stochastic motion of the Brownian particle following this version of the Langevin equation does not break the mass shell condition. However, since this version of the relativistic Langevin equation employs the proper time \(\tau\) of the Brownian particle as evolution parameter, there are still some issues involved in it, because, from the observer's point of view, \(\tau\) itself is a random variable, which is inappropriate as an evolution parameter. The problem with \(\mathrm{LE}_{\tau}\) is resolved in Sec.4 by introducing a reparametrization scheme, which yields another version of the relativistic Langevin equation [\(\mathrm{LE}_{t}\)], which employs the proper time \(t\) of the prescribed observer as evolution parameter. In Sec.5, the stochastic motion of Brownian particles in \((1+1)\)-dimensional Minkowski spacetime, driven by a single Wiener process and subject to an isotropic homogeneous damping force, is analyzed by means of Monte Carlo simulation. It is shown that, in spite of the issues mentioned above, \(\mathrm{LE}_{\tau}\) still permits the extraction of the physical probability distributions, and the resulting distributions are basically identical to those obtained from \(\mathrm{LE}_{t}\). Finally, we present some brief concluding remarks in Sec.6.
## 2 Observers, time, and the SoMS
As mentioned earlier, we are interested in describing the stochastic motion of Brownian particles in a generic spacetime manifold \(\mathcal{M}\). To achieve this goal, a fully general covariant description for the SoMS and equations of motion are essential.
Determining a micro state of a classical physical system requires the simultaneous determination of the concrete position and momentum of each individual particle at a given instance of time. In Newtonian mechanics, there is an absolute time; therefore, there is no ambiguity as to what constitutes a "given instance of time". However, in the relativistic regime, the concept of simultaneity becomes relative, and in order to assign a proper meaning to a micro state, one needs to introduce a concrete time slicing (or temporal foliation) of the spacetime first. There are two approaches to do so, i.e. (1) choosing some coordinate system and making use of the coordinate time as the slicing parameter; (2) introducing some properly aligned observer field and choosing the proper velocity \(Z^{\mu}\) (\(Z^{\mu}Z_{\mu}=-1\)) of the observer field as the normalized normal vector field of the spatial hypersurfaces consisting of "simultaneous events", each of which is also referred to as a configuration space. Let us recall that an observer in a generic \((d+1)\)-dimensional spacetime manifold \(\mathcal{M}\) is represented by a timelike curve with normalized future-directed tangent vector \(Z^{\mu}\), which is identified as the proper velocity of the observer. An observer field is a densely populated collection of observers whose worldlines span the full spacetime. The second slicing approach is always possible because each observer naturally carries a Frenet frame with orthonormal basis \(e^{\mu}{}_{\hat{\rho}}\) with \(e^{\mu}{}_{\hat{0}}:=Z^{\mu}\), and, as one of the basis vector fields, \(Z^{\mu}\) naturally satisfies the Frobenius theorem
\[Z_{[\mu}\nabla_{\nu}Z_{\rho]}=0,\]
which in turn implies the existence of spacelike hypersurfaces which take \(Z^{\mu}\) as normal vector field.
In practice, the two time slicing approaches can be made identical: one only needs to choose the specific observer field whose proper velocity covector field \(Z_{\mu}\) is proportional to \((\mathrm{d}x^{0})_{\mu}\). However, such an identification often obscures the role of the observer field and creates the illusion that the corresponding description is necessarily coordinate dependent and lacks spacetime covariance. Therefore, it is preferable not to bind the coordinate system and the observer field together, and to focus on explicit general covariance.
While an observer field can be used to identify which events happen simultaneously, it cannot uniquely specify the timing of the configuration spaces. To achieve this, we need to pick a single observer, referred to as _Alice_, from the set of observers. The integral curve of this particular observer can be denoted as \(x^{\mu}(t)\), where \(t\) represents the proper time of this single observer. In principle, we can extend \(t\) into a smooth scalar field \(t(x)\) over the whole spacetime manifold, such that we can label the configuration space at the proper time \(t\) of Alice unambiguously as the hypersurface \(\mathcal{S}_{t}:=\{x\in\mathcal{M}|t(x)=t=\text{const.}\}\). The union of \(\mathcal{S}_{t}\) at all possible \(t\) covers \(\mathcal{M}\). Notice that, in general, \(t\), \(x^{0}\) (the zeroth component of the coordinate system) and \(\tau\) (the proper time of the Brownian particle) can all be different entities.
The momentum of a relativistic particle is a tangent vector of the spacetime manifold \(\mathcal{M}\). Accordingly, the momentum space of a particle should be a subset of the tangent bundle \(T{\cal M}\) of the spacetime, because the momentum must obey the mass shell condition

\[S(x,p):=p^{\mu}p_{\mu}+m^{2}=g_{\mu\nu}(x)p^{\mu}p^{\nu}+m^{2}=0. \tag{1}\]

Footnote: The metric provides an isomorphism between tangent and cotangent vectors at any event; therefore, we are free to take the tangent space description instead of the cotangent space description in this work.
Moreover, the momentum of a massive particle must be a future-directed timelike vector, i.e. \(p^{\mu}Z_{\mu}(x)<0\). Putting these requirements together, we conclude that the SoMS of a massive relativistic particle must be a subspace of the future mass shell bundle \(\Gamma^{+}_{m}\),
\[\Gamma^{+}_{m}:=\{(x,p)\in T{\cal M}|\ g_{\mu\nu}(x)p^{\mu}p^{\nu}=-m^{2}\ \mbox{and}\ p^{\mu}Z_{\mu}(x)<0\}.\]
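As a concrete illustration of the mass shell and future-directedness conditions (this sketch and its numerical values are our own additions, not part of the original derivation), one can construct an on-shell momentum from its spatial components; the metric and observer below are placeholders:

```python
import numpy as np

def on_shell_momentum(g, p_spatial, m):
    """Solve g_{mu nu} p^mu p^nu = -m^2 for p^0, given the spatial components p^i,
    and pick the future-directed root (p^mu Z_mu < 0 for an observer Z ~ partial_t)."""
    a = g[0, 0]
    b = 2.0 * g[0, 1:] @ p_spatial
    c = p_spatial @ g[1:, 1:] @ p_spatial + m**2
    roots = np.roots([a, b, c])              # quadratic equation in p^0
    return np.concatenate(([max(roots.real)], p_spatial))

g = np.diag([-1.0, 1.0, 1.0, 1.0])           # placeholder metric at the event x
p = on_shell_momentum(g, np.array([0.3, -0.2, 0.5]), m=1.0)
print(p @ g @ p)                              # ~ -m^2, i.e. S(x, p) = 0
```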
The geometry of the future mass shell bundle is determined by the Sasaki metric [28], and its associated volume element is
\[\eta_{\Gamma^{+}_{m}}=\frac{\det(g)}{p_{0}}{\rm d}x^{0}\wedge{\rm d}x^{1} \wedge...\wedge{\rm d}x^{d}\wedge{\rm d}p^{1}\wedge...\wedge{\rm d}p^{d}. \tag{2}\]
The momentum space at the event \(x\in{\cal M}\) is simply the fiber of the future mass shell bundle \(\Gamma^{+}_{m}\) at the base point \(x\),
\[(\Gamma^{+}_{m})_{x}:=T_{x}{\cal M}\cap\Gamma^{+}_{m}.\]
Please be aware that the SoMS of a massive relativistic particle is _not_ the full future mass shell bundle \(\Gamma^{+}_{m}\), because the configuration space is only the spacelike hypersurface \({\cal S}_{t}\) consisting of simultaneous events with respect to the proper time \(t\) of Alice. Therefore, the actual SoMS should be the proper subspace
\[\Sigma_{t}:=\bigcup_{x\in{\cal S}_{t}}(\Gamma^{+}_{m})_{x}=\{(x,p)\in\Gamma^{ +}_{m}|x\in{\cal S}_{t}\}\]
of the full future mass shell bundle \(\Gamma^{+}_{m}\).
Due to the mass shell condition, the actual momentum space \((\Gamma^{+}_{m})_{x}\) has one less dimension than the tangent space \(T_{x}{\cal M}\). It will be appropriate to think of \((\Gamma^{+}_{m})_{x}\) as a codimension one hypersurface in \(T_{x}{\cal M}\) defined via eq.(1), and its (unnormalized) normal covector can be defined as \({\cal N}_{\mu}:=\frac{\partial}{\partial p^{\mu}}S(x,p)=2p_{\mu}\). In view of this, any tangent vector field \({\cal V}\in T[(\Gamma^{+}_{m})_{x}]\) must be normal to \(p_{\mu}\), i.e., \({\cal V}^{\mu}p_{\mu}=0\). Using this property, we can select a basis for the tangent space of the momentum space, i.e.
\[\frac{\partial}{\partial\ddot{p}^{i}}:=\frac{\partial}{\partial p^{i}}-\frac{ p_{i}}{p_{0}}\frac{\partial}{\partial p^{0}}.\]
Therefore, the tangent vector field \({\cal V}\in T[(\Gamma^{+}_{m})_{x}]\) acquires two component-representations, one in the basis \(\frac{\partial}{\partial p^{\mu}}\) of \(T(T_{x}{\cal M})\), and one in the basis \(\frac{\partial}{\partial\ddot{p}^{i}}\) of \(T[(\Gamma^{+}_{m})_{x}]\). It is easy to check that these two representations are equivalent,
\[{\cal V}^{i}\frac{\partial}{\partial\ddot{p}^{i}}={\cal V}^{i}\frac{\partial} {\partial p^{i}}-{\cal V}^{i}\frac{p_{i}}{p_{0}}\frac{\partial}{\partial p^{0} }={\cal V}^{i}\frac{\partial}{\partial p^{i}}+{\cal V}^{0}\frac{\partial}{ \partial p^{0}}={\cal V}^{\mu}\frac{\partial}{\partial p^{\mu}}. \tag{3}\]
Therefore, when describing a vector in \(T[(\Gamma^{+}_{m})_{x}]\), the two component-representations \({\cal V}^{i}\) and \({\cal V}^{\mu}\) can be used interchangeably.
## 3 The covariant Langevin equation
### A short review of Langevin equation in non-relativistic setting
The main focus of this section is to construct a covariant Langevin equation in a generic spacetime. Before delving into the detailed construction, it is helpful to make a brief review of the Langevin equation in the non-relativistic setting.
The non-relativistic Langevin equation describes the motion of a Brownian particle in a fixed heat reservoir. The initial intuitive construction of Langevin equation is simply based on the second law of Newtonian mechanics, in which the motion of the Brownian particle is driven by drift and damping forces \(F_{\rm drift}(x)\), \(F_{\rm damp}(p)\) together with a random force \(\xi(t)\). The drift force \(F_{\rm drift}(x)\) is provided by a conservative potential and hence is dependent on the coordinate position \(x\) of the Brownian particle. The damping force \(F_{\rm damp}(p)\), however, is dependent on the momentum of the particle. In the absence of the drift force, the Langevin equation describing one-dimensional Brownian motion can be intuitively written as
\[\frac{{\rm d}p}{{\rm d}t}=F_{\rm damp}(p)+\xi(t). \tag{4}\]
However, it was soon realized that this intuitive picture cannot be mathematically correct, because, under the impact of the random force, the momentum of the Brownian particle is not differentiable with respect to the time \(t\), and hence the Langevin equation cannot actually be regarded as a differential equation.
The modern understanding of Langevin equation is as follows. Consider a scenario in which a large number of light particles exist in the heat reservoir, and they randomly collide with the heavy Brownian particle, causing the momentum of the latter to alter with each collision. If the mass ratio between the Brownian particle and the particle from the heat reservoir is sufficiently large, there will be a timescale \({\rm d}t\) during which a sufficiently large number of independent collisions happen. Since the Brownian particle is heavy, its state changes very little within this timescale. According to the central limit theorem, the probability distribution of the variations of the momentum during \({\rm d}t\) follows a Gaussian distribution. The average value of this distribution yields the damping force \(F_{\rm damp}\), while the remaining (rapid) portion is viewed as a stochastic force. Thus, the classical Langevin equation in one-dimensional space can be expressed as
\[{\rm d}\tilde{x}_{n} =\frac{\tilde{p}_{n}}{m}{\rm d}t\] \[{\rm d}\tilde{p}_{n} =R{\rm d}\tilde{w}_{n}+F_{\rm damp}{\rm d}t, \tag{5}\]
where the suffix \(n\) represents the \(n\)-th time step and \({\rm d}\tilde{w}_{n}\) is a random variable obeying the Gaussian distribution
\[\Pr[\mathrm{d}\tilde{w}_{n}=\mathrm{d}w_{n}]=\frac{1}{\sqrt{2\pi\mathrm{d}t}} \mathrm{e}^{-\frac{1}{2}\frac{\mathrm{d}w_{n}^{2}}{\mathrm{d}t}}.\]
The coefficient \(R\) appearing in eq.(5) is called the stochastic amplitude. Notice that the variance of the above Gaussian distribution equals \(\mathrm{d}t\). In this paper, tilded variables such as \(\tilde{x},\tilde{p}\) represent random variables, and the corresponding un-tilded symbols (e.g. \(x,p\)) represent their concrete realizations.
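As a minimal numerical sketch of the discrete-time update (5) (our own illustration; the linear damping \(F_{\rm damp}=-Kp\) and all parameter values are assumptions made purely for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
m, K, R, dt, n_steps = 1.0, 1.0, 1.0, 1e-3, 50_000   # placeholder parameters

x, p = 0.0, 0.0
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt))    # Gaussian increment with variance dt
    x += (p / m) * dt                    # first line of eq.(5)
    p += R * dw - K * p * dt             # second line of eq.(5), with F_damp = -K p assumed
print(x, p)                               # one realization of the random variables
```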
The Langevin equation as presented above is technically a system of discrete-time difference equations, as the time scale \(\mathrm{d}t\) must be large enough to permit a sufficient number of collisions to happen during this time interval. However, if \(\mathrm{d}t\) is far smaller than the relaxation time, it can be effectively thought of as infinitesimal. In the continuum limit, \(\tilde{w}_{n}\) becomes a Wiener process \(\tilde{w}_{t}\), and there is an ambiguity in the coupling rule between the stochastic amplitude \(R\) and the Wiener process if \(R\) is dependent on the momentum. Unlike in ordinary calculus, in the continuum limit,
\[R(\tilde{p}_{n}+a\mathrm{d}\tilde{p}_{n})\mathrm{d}\tilde{w}_{n}\xrightarrow{ \mathrm{d}t\to 0}R(\tilde{p}_{t})\circ_{a}\mathrm{d}\tilde{w}_{t} \tag{6}\]
depends on the value of \(a\in[0,1]\)[29]. The continuum version of Langevin equation with the above coupling rule reads
\[\mathrm{d}\tilde{x}_{t} =\frac{\tilde{p}_{t}}{m}\mathrm{d}t\] \[\mathrm{d}\tilde{p}_{t} =R(\tilde{p}_{t})\circ_{a}\mathrm{d}\tilde{w}_{t}+F_{\mathrm{damp}}\mathrm{d}t. \tag{7}\]
In particular, the coupling rule with \(a=0\) is known as Ito's rule and is denoted as \(R\circ_{I}\mathrm{d}\tilde{w}_{t}\), while the rule with \(a=1/2\) is known as Stratonovich's rule and is denoted as \(R\circ_{S}\mathrm{d}\tilde{w}_{t}\).
Ito's rule allows the Langevin equation to be understood as an equation describing a Markov process, making it easier to analyze. However, since \(\mathrm{d}t\) is equal to the variance of the Wiener process, \(\mathrm{d}\tilde{w}_{t}\) should be of the same order of magnitude as \(\sqrt{\mathrm{d}t}\). This fact leads to some profound consequences. For instance, it can be easily verified that any coupling rule \(\circ_{a}\) can be related to Ito's rule via
\[R(\tilde{p}_{t})\circ_{a}\mathrm{d}\tilde{w}_{t}=R(\tilde{p}_{t})\circ_{I} \mathrm{d}\tilde{w}_{t}+aR\frac{\partial}{\partial p}R\,\mathrm{d}t,\]
which is a straightforward consequence of eq.(6). Moreover, it can also be checked that Ito's rule breaks the chain rule of calculus,
\[\mathrm{d}h(\tilde{p}_{t}) =\frac{\partial h}{\partial p}\mathrm{d}\tilde{p}_{t}+\frac{1}{2}\frac{\partial^{2}h}{\partial p^{2}}\mathrm{d}\tilde{p}_{t}^{2}\] \[=\left(\frac{\partial h}{\partial p}F_{\mathrm{damp}}+\frac{1}{2}\frac{\partial^{2}h}{\partial p^{2}}R^{2}\right)\mathrm{d}t+\frac{\partial h}{\partial p}R(\tilde{p}_{t})\circ_{I}\mathrm{d}\tilde{w}_{t}\neq\frac{\partial h}{\partial p}\mathrm{d}\tilde{p}_{t}.\]
On the other hand, Stratonovich's rule is the unique rule that preserves the chain rule,
\[\mathrm{d}h(\tilde{p}_{t}) =\frac{\partial h}{\partial p}\mathrm{d}\tilde{p}_{t}+\frac{1}{2}\frac{\partial^{2}h}{\partial p^{2}}\mathrm{d}\tilde{p}_{t}^{2}\] \[=\frac{\partial h}{\partial p}\left(R(\tilde{p}_{t})\circ_{a}\mathrm{d}\tilde{w}_{t}+F_{\mathrm{damp}}\mathrm{d}t\right)+\frac{1}{2}\frac{\partial^{2}h}{\partial p^{2}}\left(R(\tilde{p}_{t})\circ_{a}\mathrm{d}\tilde{w}_{t}+F_{\mathrm{damp}}\mathrm{d}t\right)^{2}\] \[=\left(\frac{\partial h}{\partial p}F_{\mathrm{damp}}+a\frac{\partial h}{\partial p}R\frac{\partial}{\partial p}R+\frac{1}{2}\frac{\partial^{2}h}{\partial p^{2}}R^{2}\right)\mathrm{d}t+\frac{\partial h}{\partial p}R(\tilde{p}_{t})\circ_{I}\mathrm{d}\tilde{w}_{t}\] \[\xlongequal{a=1/2}\frac{\partial h}{\partial p}\left(R(\tilde{p}_{t})\circ_{S}\mathrm{d}\tilde{w}_{t}+F_{\mathrm{damp}}\mathrm{d}t\right)=\frac{\partial h}{\partial p}\mathrm{d}\tilde{p}_{t}.\]
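In a numerical integration, the \(a\)-dependent coupling (6) has to be realized explicitly. A common way to do this (our own sketch, not a scheme taken from the paper) is to evaluate the Stratonovich (\(a=1/2\)) coupling with a midpoint predictor, or equivalently to use Ito (pre-point) coupling plus the conversion drift \(\frac{1}{2}R\,\partial_{p}R\) quoted above; the example amplitude and damping are placeholders:

```python
import math

def step_stratonovich(p, R, F_damp, dt, dw):
    """One update dp = R(p) o_S dw + F_damp(p) dt, with a midpoint (a = 1/2)
    predictor for the point at which the noise amplitude is evaluated."""
    p_pred = p + R(p) * dw + F_damp(p) * dt                # Euler-Maruyama predictor
    return p + R(0.5 * (p + p_pred)) * dw + F_damp(p) * dt

def step_ito_equivalent(p, R, dRdp, F_damp, dt, dw):
    """The same process in Ito form: pre-point coupling plus the drift
    (1/2) R dR/dp, cf. the relation between o_a and o_I with a = 1/2."""
    return p + R(p) * dw + (0.5 * R(p) * dRdp(p) + F_damp(p)) * dt

# placeholder example: R(p) = sqrt(1 + p^2), linear damping F_damp(p) = -p
R = lambda p: math.sqrt(1.0 + p * p)
dRdp = lambda p: p / math.sqrt(1.0 + p * p)
F = lambda p: -p
```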
If the damping force is linear in the momentum, \(F_{\rm damp}=-Kp\), with a damping coefficient \(K\) independent of \(p\), the stochastic amplitude \(R\) can be easily derived using the thermal equilibrium between the Brownian particle and the heat reservoir, yielding
\[D:=R^{2}=2\,TK,\]
where \(T\) is the temperature of the reservoir. This relation is known as the Einstein relation. However, for nonlinear damping forces, the situation becomes much more complicated. Ref. [33] demonstrated that, provided \(R\) is momentum dependent, there exists a non-zero force \(\frac{1}{2}\partial(R^{2})/\partial p\) acting on the Brownian particle even if its momentum vanishes. This extra force term is also a consequence of the thermal equilibrium between the Brownian particle and the heat reservoir. There are two options for interpreting this extra force. The first option is to consider it as a part of the damping force, so that the full damping force takes the form
\[F_{\rm damp}=R\frac{\partial}{\partial p}R-Kp=-K_{\rm eff}\,p,\]
where the effective damping coefficient reads
\[K_{\rm eff}:=K-\frac{R}{p}\frac{\partial}{\partial p}R.\]
Consequently, there will be a modified Einstein relation
\[R^{2}=2T\left[K_{\rm eff}+\frac{R}{p}\frac{\partial}{\partial p}R\right].\]
This option seems to have several issues: (1) it looks strange that a damping force still exists when the momentum is zero; (2) more importantly, we cannot define an effective damping coefficient in higher spatial dimensions using this approach. The second option is to split the extra force term \(\frac{1}{2}\partial R^{2}/\partial p\) into two equal halves: one half is to be combined with Ito's coupling to give rise to Stratonovich's coupling with Gaussian noises, and the other half is understood as an "additional stochastic force"
\[F_{\rm add}:=\frac{1}{2}R\frac{\partial}{\partial p}R.\]
Hence, the more general Langevin equation in \(d\)-dimensional flat space can be written as
\[{\rm d}\tilde{x}^{i}_{t} =\frac{\tilde{p}^{i}_{t}}{m}{\rm d}t\] \[{\rm d}\tilde{p}^{i}_{t} =\left[R^{i}{}_{a}\circ_{S}{\rm d}\tilde{w}^{a}_{t}+F^{i}_{\rm add }{\rm d}t\right]-K^{i}{}_{j}\tilde{p}^{j}_{t}{\rm d}t,\]
where the indices \(i,j\) label different spatial dimensions and \(\mathfrak{a},\mathfrak{b}\) are used to distinguish independent Gaussian noises. It should be remarked that the number \(\mathfrak{d}\) of Gaussian noises is independent of the dimension \(d\) of the space. The additional stochastic force now reads
\[F^{i}_{\rm add}=\frac{\delta^{\mathfrak{a}\mathfrak{b}}}{2}R^{i}{}_{a}\frac {\partial}{\partial p^{j}}R^{j}{}_{b}. \tag{8}\]
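A small numerical sketch of eq.(8) (our own illustration): the divergence \(\partial R^{j}{}_{\mathfrak{b}}/\partial p^{j}\) is approximated by central finite differences, and the momentum-dependent amplitude used in the example is an arbitrary placeholder:

```python
import numpy as np

def F_add(R, p, eps=1e-6):
    """Evaluate F^i_add = (1/2) delta^{ab} R^i_a (d/dp^j) R^j_b by finite differences.
    R(p) must return an array of shape (d, n_noise)."""
    d, n_noise = R(p).shape
    divR = np.zeros(n_noise)                   # (d/dp^j) R^j_b, one entry per noise index b
    for j in range(d):
        dp = np.zeros(d)
        dp[j] = eps
        divR += (R(p + dp)[j] - R(p - dp)[j]) / (2.0 * eps)
    return 0.5 * R(p) @ divR                   # contract the noise index with delta^{ab}

R = lambda p: np.sqrt(1.0 + p @ p) * np.eye(len(p))   # placeholder amplitude
print(F_add(R, np.array([0.3, -0.1, 0.2])))
```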
The discussion made so far in this subsection has been restricted to non-relativistic situations. In the next subsection, it will become clear that the mass shell condition in the relativistic setting requires the damping coefficients to be momentum dependent. Therefore, the additional stochastic force should also appear in the relativistic Langevin equation. To derive the concrete expression for this additional stochastic force, we need to make use of the Fokker-Planck equation and the relativistic Einstein relation. However, since the focus of the present work is on the relativistic Langevin equation, we will provide the detailed derivation in Part II of this series of works. At present, we simply provide the result,
\[\mathcal{F}^{\mu}_{\text{add}}=\frac{\delta^{\text{\text{\text{ \text{ab}}}}}}{2}\mathcal{R}^{\mu}{}_{a}\nabla^{(h)}_{i}\mathcal{R}^{i}{}_{b}, \tag{9}\]
where \(h\) refers to the metric on the mass shell \((\Gamma^{+}_{m})_{x}\). It is important to note that the stochastic amplitudes \(\mathcal{R}^{\mu}{}_{a}\) in the relativistic Langevin equation should be a set of vectors on the curved Riemannian manifold \((\Gamma^{+}_{m})_{x}\), i.e. \(\mathcal{R}^{\mu}{}_{a}\in T[(\Gamma^{+}_{m})_{x}]\). As such, the derivative operator \(\partial/\partial p^{i}\) that appeared in eq.(8) needs to be replaced by the covariant derivative \(\nabla^{(h)}_{i}\) on the momentum space \((\Gamma^{+}_{m})_{x}\).
### Relativistic damping force
Recall that the damping force arises from the interaction between the Brownian particle and the heat reservoir, but only accounts for a portion of the total interaction, neglecting thermal fluctuations. We can directly measure the damping force when the thermal fluctuations can be ignored and establish a phenomenological model. It is reasonable to expect that the damping force should vanish if the Brownian particle comoves with the heat reservoir. Hence, the damping force can be regarded as an excitation of the relative velocity, with the damping coefficients serving as response factors. This idea was also explored in previous works [6, 8] in the special relativistic context. Here we shall extend the construction to the general relativistic case and point out some crucial subtleties which need to be taken care of.
In the relativistic context (be it special or general), the concept of velocity is replaced by the proper velocity. However, the relative velocity _cannot_ be defined simply as the difference between two proper velocities, because the temporal component of the difference should not be considered part of the relative velocity but rather as an energy difference. In order to have an appropriate definition of the relative velocity, one must project one of the two proper velocities onto the directions orthogonal to the other. Let \(U^{\mu}\) be the proper velocity of the heat reservoir and \(p^{\mu}\) be the momentum of the Brownian particle. One can associate with the Brownian particle an orthogonal projection tensor
\[\Delta^{\mu}{}_{\nu}(p):=\delta^{\mu}{}_{\nu}+\frac{p^{\mu}p_{\nu}}{m^{2}}\]
which obeys
\[\Delta^{\mu}{}_{\nu}(p)p^{\nu}=0.\]
Then the relative velocity between the Brownian particle and the heat reservoir can be defined as \(\Delta^{\mu}{}_{\nu}(p)U^{\nu}\). This definition has two important features, i.e. (1) when the Brownian particle is comoving with the heat reservoir, the relative velocity vanishes; (2) the relative velocity is always normal to \(p_{\mu}\), so that it is a vector in \(T[(\Gamma_{m}^{+})_{x}]\).
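Both defining features of the relative velocity can be checked numerically; a minimal sketch of our own (the Minkowski metric and the numerical values are placeholders):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])                 # placeholder metric at the event x
m = 1.0
p_sp = np.array([0.4, -0.1, 0.2])
p = np.concatenate(([np.sqrt(m**2 + p_sp @ p_sp)], p_sp))   # on-shell momentum p^mu
U = np.array([1.0, 0.0, 0.0, 0.0])                 # heat reservoir at rest (placeholder)

p_low = g @ p                                       # p_mu
Delta = np.eye(4) + np.outer(p, p_low) / m**2       # Delta^mu_nu = delta^mu_nu + p^mu p_nu / m^2
v_rel = Delta @ U                                   # relative velocity Delta^mu_nu U^nu

print(Delta @ p)        # ~ 0: the projector annihilates p^mu
print(p_low @ v_rel)    # ~ 0: the relative velocity is normal to p_mu, i.e. tangent to the mass shell
```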
The relativistic damping force needs to have the following properties. First, it must contain the relative velocity as a factor, and a tensorial damping coefficient \(\mathcal{K}^{\mu\nu}\) as another factor, i.e.
\[\mathcal{F}^{\mu}_{\text{damp}}=\mathcal{K}^{\mu\nu}\Delta_{\nu}{}^{\rho}(p)U_ {\rho}.\]
Second, the damping force needs to be a tangent vector of the momentum space \((\Gamma_{m}^{+})_{x}\), i.e. \(\mathcal{F}^{\mu}\in T[(\Gamma_{m}^{+})_{x}]\). This latter requirement implies that \(\mathcal{K}^{\mu\nu}\) must satisfy the relation
\[\mathcal{K}^{\mu\nu}(x,p)=\Delta^{\mu}{}_{\alpha}(p)\mathcal{K}^{\alpha\beta} (x,p)\Delta_{\beta}{}^{\nu}(p). \tag{10}\]
In the light of eq.(10) and the idempotent property of the projection tensor, the relativistic damping force can be simply rewritten as
\[\mathcal{F}^{\mu}_{\text{damp}}=\mathcal{K}^{\mu\nu}U_{\nu}. \tag{11}\]
The constraint condition (10) over the tensorial damping coefficient has a very simple special solution
\[\mathcal{K}^{\mu\nu}=\kappa(x,p)\Delta^{\mu\nu}(p),\]
where \(\kappa(x,p)\) is a scalar function on the SoMS \(\Sigma_{t}\) and is referred to as the friction coefficient. This particular choice of damping coefficient corresponds to isotropic damping force. If \(\kappa(x,p)\) is constant, then damping force will become homogeneous. Therefore, the isotropic homogeneous damping force can be written as
\[\mathcal{F}^{\mu}_{\text{damp}}=\kappa\Delta^{\mu}{}_{\nu}(p)U^{\nu}.\]
Let \(e^{\mu}{}_{\hat{i}}\) be the spatial comoving frame vectors associated with the Brownian particle and \(E^{\hat{i}}{}_{\nu}\) be the dual covectors. Then the projection tensor \(\Delta^{\mu}{}_{\nu}(p)\) can be written as
\[\Delta^{\mu}{}_{\nu}(p)=e^{\mu}{}_{\hat{i}}E^{\hat{i}}{}_{\nu}.\]
The isotropic homogeneous damping force can be re-expressed as
\[\mathcal{F}^{\mu}_{\text{damp}}=\kappa e^{\mu}{}_{\hat{i}}E^{\hat{i}}{}_{\nu} U^{\nu}=\kappa U^{\hat{i}}e^{\mu}{}_{\hat{i}},\]
or more concisely as
\[\mathcal{F}_{\text{damp}}^{\hat{\text{i}}}=\kappa U^{\hat{\text{i}}},\]
where \(\mathcal{F}_{\text{damp}}^{\hat{\text{i}}}=\mathcal{F}_{\text{damp}}^{\mu}E_{\ \mu}^{\hat{\text{i}}}\), which represents the spatial components of the damping force under the comoving frame. This equation has the same form as the one given in [8, 9]. However, our expression (11) for the damping force is more general and does not rely on a particular choice of frame basis.
### Covariant relativistic Langevin equation: a first attempt
Although the intuitive form (4) of the Langevin equation is mathematically unsound, it is still inspiring when considering the extension of the Langevin equation to generic spacetime manifolds. One can imagine that the relativistic Langevin equation should arise as the free geodesic motion of the Brownian particle perturbed by the extra damping and stochastic forces. Taking the proper time \(\tau\) of the Brownian particle as evolution parameter, the geodesic equation can be rearranged in the form
\[\text{d}x_{\tau}^{\mu} =\frac{p_{\tau}^{\mu}}{m}\text{d}\tau,\] \[\text{d}p_{\tau}^{\mu} =-\frac{1}{m}\Gamma^{\mu}{}_{\alpha\beta}p_{\tau}^{\alpha}p_{ \tau}^{\beta}\text{d}\tau,\]
where \(\Gamma^{\mu}{}_{\alpha\beta}\) is the usual Christoffel connection on the spacetime manifold \(\mathcal{M}\). Therefore, with the supplementation of Stratonovich's coupling with Gaussian noises, the additional stochastic force (9) and the relativistic damping force (11), we can write down, as a first attempt, the following set of equations as candidate of relativistic Langevin equation,
\[\text{d}\tilde{x}_{\tau}^{\mu} =\frac{\tilde{p}_{\tau}^{\mu}}{m}\text{d}\tau, \tag{12}\] \[\text{d}\tilde{p}_{\tau}^{\mu} =[\mathcal{R}^{\mu}{}_{\text{a}}\circ_{S}\text{d}\tilde{w}_{\tau }^{\text{a}}+\mathcal{F}_{\text{add}}^{\mu}\text{d}\tau]+\mathcal{K}^{\mu\nu} U_{\nu}\text{d}\tau-\frac{1}{m}\Gamma^{\mu}{}_{\alpha\beta}\tilde{p}_{\tau}^{\alpha} \tilde{p}_{\tau}^{\beta}\text{d}\tau. \tag{13}\]
As previously mentioned, Stratonovich's rule is the unique coupling rule which preserves the chain rule of calculus. Meanwhile, we have been very careful while introducing the damping and stochastic forces, so that each of the first three force terms appearing on the right hand side of eq.(13) is a tangent vector of the momentum space \((\Gamma_{m}^{+})_{x}\). With all these considerations combined, eqs.(12)-(13) are guaranteed to be general covariant and take the damping and stochastic impacts from the heat reservoir into account. Moreover, since \(\mathcal{R}^{\mu}{}_{\text{a}},\ \mathcal{F}_{\text{add}}^{\mu}\) and \(\mathcal{K}^{\mu\nu}\) are all tensorial objects on the future mass shell \((\Gamma_{m}^{+})_{x}\), one can easily check that, provided the initial state is on the mass shell, all future states evolving from eqs.(12)-(13) will remain on the mass shell, because, for any \((\tilde{x}_{\tau},\tilde{p}_{\tau})\) obeying the mass shell condition
\[\tilde{S}_{\tau}=S(\tilde{x}_{\tau},\tilde{p}_{\tau})=g_{\mu\nu}(\tilde{x}_{ \tau})\,\tilde{p}_{\tau}^{\mu}\tilde{p}_{\tau}^{\nu}+m^{2}=0,\]
we have
\[\mathrm{d}\tilde{S}_{\tau} =\frac{\partial S}{\partial x^{\mu}}\mathrm{d}\tilde{x}_{\tau}^{\mu} +\frac{\partial S}{\partial p^{\mu}}\mathrm{d}\tilde{p}_{\tau}^{\mu}\] \[=\frac{1}{m}\partial_{\mu}g_{\alpha\beta}\,\tilde{p}_{\tau}^{\mu} \tilde{p}_{\tau}^{\alpha}\tilde{p}_{\tau}^{\beta}\,\mathrm{d}\tau+2g_{\mu\rho} \tilde{p}_{\tau}^{\rho}\left(\mathcal{R}^{\mu}{}_{\mathrm{a}}\circ_{S}\mathrm{ d}\tilde{w}_{\tau}^{\mathrm{a}}+\mathcal{F}^{\mu}\mathrm{d}\tau-\frac{1}{m} \Gamma^{\mu}{}_{\alpha\beta}\tilde{p}_{\tau}^{\alpha}\tilde{p}_{\tau}^{\beta} \mathrm{d}\tau\right)\] \[=2(\tilde{p}_{\mu})_{\tau}\Big{(}\mathcal{R}^{\mu}{}_{\mathrm{a}} \circ_{S}\mathrm{d}\tilde{w}_{\tau}^{\mathrm{a}}+\mathcal{F}^{\mu}\mathrm{d} \tau\Big{)}=0, \tag{14}\]
where we have denoted \(\mathcal{F}^{\mu}=\mathcal{F}^{\mu}_{\mathrm{add}}+\mathcal{K}^{\mu\nu}U_{\nu}\) for short. Eq.(14) implies that the \((d+1)\) components of \(\tilde{p}^{\mu}\) are not all independent, and there is a redundancy contained in eq.(13), which does no harm for the reason explained by eq.(3). In the end, it is reasonable to consider eqs.(12)-(13) as a viable candidate for the relativistic Langevin equation in curved spacetime, and we will henceforth refer to this system of equations as \(\mathrm{LE}_{\tau}\).
In the next section, we shall show that, from the phenomenological point of view, \(\mathrm{LE}_{\tau}\) still contains some issues which need to be resolved. The crucial point is that, while considering the stochastic distribution of Brownian particles, one cannot rely on a comoving frame or observer. If we change to the viewpoint of a regularly moving observer, the proper time \(\tau\) of the Brownian particle itself becomes a random variable and hence is inappropriate for parametrizing the stochastic motion of the system. Thus we need a reparametrization scheme to rewrite the relativistic Langevin equation in terms of the observer's proper time \(t\) instead of \(\tau\).
## 4 Reparametrization
Recall that the configuration space \(\mathcal{S}_{t}\) is inherently connected with a concrete choice of observer and is defined as the level set \(t(x)=t\). Therefore, \(\partial_{\mu}t\) must be proportional to the unit normal covector \(Z_{\mu}\) (i.e. the proper velocity of the chosen observer). Let
\[\lambda:=\sqrt{-g^{\mu\nu}\partial_{\mu}t\partial_{\nu}t}=|\nabla t|,\]
we can write
\[\partial_{\mu}t=-\lambda Z_{\mu}. \tag{15}\]
Therefore, on the worldline of the Brownian particle, we have
\[\mathrm{d}t=\partial_{\mu}t\mathrm{d}\tilde{x}^{\mu}=-\lambda Z_{\mu}\mathrm{ d}\tilde{x}^{\mu}=-\lambda Z_{\mu}\frac{\mathrm{d}\tilde{x}^{\mu}}{\mathrm{d} \tau}\mathrm{d}\tau=-\lambda\frac{Z_{\mu}\tilde{p}^{\mu}}{m}\mathrm{d}\tau. \tag{16}\]
Since \(Z_{\mu}\tilde{p}^{\mu}<0\), the last equality explains the sign convention that appeared in eq.(15). Let
\[\gamma(\tilde{x},\tilde{p}):=-\frac{\lambda Z_{\mu}\tilde{p}^{\mu}}{m}, \tag{17}\]
the relation (16) can be rewritten as:
\[\mathrm{d}t=\gamma(\tilde{x},\tilde{p})\mathrm{d}\tau. \tag{18}\]
\(\gamma(\tilde{x},\tilde{p})\) plays the role of a local Lorentz factor. Since \(\tilde{x}^{\mu},\tilde{p}^{\mu}\) are both random, the regularity of the prescribed observer implies that the proper time \(\tau\) of the Brownian particle becomes essentially a random variable. This poses a serious challenge to understanding eqs.(12)-(13) as the relativistic Langevin equation, because Langevin equation requires a regular evolution parameter.
In spite of the challenge just mentioned, we still wish to make some sense of eqs.(12)-(13) and try to find a resolution of the problem that we encountered. For this purpose, we temporarily adopt a comoving description for the Brownian particle but nevertheless let Alice be bound together with the coordinate system, so that the coordinate time \(x^{0}\) equals the proper time \(t\) of Alice. Let us stress that binding the observer with the coordinate system is not an absolutely necessary step, but it indeed simplifies the following discussions about the probability distributions. In this description, \(\tau\) appears to be a regular variable, but then the spacetime position \(\tilde{x}^{\mu}\) (which contains the observer's proper time as a component) and momentum \(\tilde{p}^{i}\) will become random variables depending on \(\tau\). Due to the mass shell condition, there is no need to include \(\tilde{p}^{0}\) in the set of micro state variables.
Unlike regular variables, random variables do not have a definite value, but rather a probability distribution. Thus \(\mathrm{LE}_{\tau}\) provides insight into the evolution of the probability distribution of the random variables involved. The reason that \(\mathrm{LE}_{\tau}\) can provide a probability distribution relies on the fact that Stratonovich's coupling can be turned into Ito's coupling and that a stochastic differential equation with Ito's coupling can be viewed as a Markov process. Writing \(X:=(x^{\mu},p^{i})\), the Markov process described by \(\mathrm{LE}_{\tau}\) provides the transition probability
\[\Pr[\tilde{X}_{\tau+\mathrm{d}\tau}=X_{\tau+\mathrm{d}\tau}|\tilde{X}_{\tau}=X _{\tau}] \tag{19}\]
during an infinitesimal proper time interval \([\tau,\tau+\mathrm{d}\tau]\). Over a finite period of time, this will amount to the joint probability of the state of the Brownian particle and the observer's proper time at a given \(\tau\),
\[\Phi_{\tau}(t,x^{i},p^{i}):=\Pr[\tilde{x}^{0}_{\tau}=t,\tilde{x}^{i}_{\tau}=x ^{i},\tilde{p}^{i}_{\tau}=p^{i}], \tag{20}\]
which is normalized in the whole future mass shell \(\Gamma^{+}_{m}\) under the measure provided by the volume element (2). \(\Phi_{\tau}(t,x^{i},p^{i})\) is connected with the transition probability (19) via
\[\Phi_{\tau+\mathrm{d}\tau}(X)=\int\mathrm{d}Y\Pr[\tilde{X}_{\tau+\mathrm{d} \tau}=X|\tilde{X}_{\tau}=Y]\Phi_{\tau}(Y).\]
Although there is a clear logic and corresponding mathematical tools to deal with the evolution equation of \(\Phi_{\tau}\) from the comoving description of the Brownian particle, the probability function \(\Phi_{\tau}\) is not a suitable object in statistical mechanics. Recall that the physically viable distribution in statistical mechanics must be a probability distribution on the SoMS, while the definition of the SoMS, especially the configuration space \(\mathcal{S}_{t}\), relies on the choice of observer. The problem with the probability distribution (20) is that, for fixed \(\tau\), different realizations \(x^{\mu}\) of \(\tilde{x}^{\mu}\) do not necessarily fall in the same configuration space \(\mathcal{S}_{t}\).
Nevertheless, as we shall show in the next section by Monte Carlo simulation in the example case of \((1+1)\)-dimensional Minkowski spacetime, we can still extract the physical probability distribution out of the result of eqs.(12)-(13). The point lies in that one should not look at the distribution of the end points of each realization of the Brownian motion after the fixed proper time period \(\tau\). Rather, one should look at the distribution of the intersection points of the stochastic worldlines with the physical configuration space \(\mathcal{S}_{t}\) (as will be shown in Fig.1). The latter distribution is physical, but it looks challenging to obtain such a distribution by means of analytical analysis.
A better way to obtain the physical probability distribution for the Brownian particle is to introduce a reparametrization for the Langevin equation, replacing the random parameter \(\tau\) by the regular proper time \(t\) of Alice, as will be discussed in the following. Let us mention that Dunkel _et al_[13] have attempted to use reparametrization to make their special relativistic Langevin equation covariant. However, a general discussion of the necessity of reparametrization has not yet been pursued.
At first sight, the reparametrization could be accomplished simply by substituting eq.(18) into eqs.(12)-(13). However, things are not that simple. In order to get a physically viable Langevin equation, one needs to ensure that the resulting equation describes a Markov process driven by a set of Wiener processes. To achieve this goal, we propose to first use discrete time nodes \(t_{n}\) to treat the stochastic process as a Markov process, and then take the continuity limit. Here the proper time \(t\) need _not_ be identified with the coordinate time \(x^{0}\). By defining a sequence of random variables using the discrete time nodes \(t_{n}\), namely
\[\tilde{\tau}_{n}:=\tilde{\tau}_{t_{n}},\qquad\tilde{y}_{n}^{\mu}:=\tilde{x}_{ \tilde{\tau}_{n}}^{\mu},\qquad\tilde{k}_{n}^{\mu}:=\tilde{p}_{\tilde{\tau}_{n }}^{\mu},\qquad\tilde{Y}_{n}:=\tilde{X}_{\tilde{\tau}_{n}}, \tag{21}\]
we can calculate their discrete time differences,
\[\mathrm{d}\tilde{\tau}_{n} =\gamma^{-1}(\tilde{Y}_{n})\mathrm{d}t_{n}, \tag{22}\] \[\mathrm{d}\tilde{y}_{n}^{\mu} =\tilde{x}_{\tilde{\tau}_{n+1}}^{\mu}-\tilde{x}_{\tilde{\tau}_{n }}^{\mu}=\frac{\tilde{k}_{n}^{\mu}}{m}\gamma^{-1}(\tilde{Y}_{n})\mathrm{d}t_{n},\] (23) \[\mathrm{d}\tilde{k}_{n}^{\mu} =\tilde{p}_{\tilde{\tau}_{n+1}}^{\mu}-\tilde{p}_{\tilde{\tau}_{n }}^{\mu}=F^{\mu}(\tilde{Y}_{n})\gamma^{-1}(\tilde{Y}_{n})\mathrm{d}t_{n}+{ \mathcal{R}^{\mu}}_{a}(\tilde{Y}_{n})\mathrm{d}\tilde{w}_{\tilde{\tau}_{n}}^{a}. \tag{24}\]
In deriving eq.(24), we have changed Stratonovich's rule in eq.(13) into Ito's rule before introducing the discretization, so that the total force \(F^{\mu}\) reads
\[F^{\mu}=\frac{\delta^{\text{ab}}}{2}\mathcal{R}^{\nu}{}_{\text{a}}\frac{\partial} {\partial p^{\nu}}\mathcal{R}^{\mu}{}_{\text{b}}+\mathcal{F}^{\mu}_{\text{add} }+\mathcal{K}^{\mu\nu}U_{\nu}-\frac{1}{m}\Gamma^{\mu}{}_{\alpha\beta}p^{\alpha} p^{\beta}.\]
It is remarkable that, although eqs.(22)-(24) appear to be complicated, they bear the enlightening feature that, at each time step, the increments of \((\tilde{\tau}_{n},\tilde{y}^{\mu}_{n},\tilde{k}^{\mu}_{n})\) depend only on their values at the nearest preceding time step. Therefore, we can understand these equations as describing a Markov process. However, these equations are not yet the sought-for reparametrized Langevin equation, because \(\tilde{w}^{\text{a}}_{\tilde{\tau}_{n}}\) is no longer a Wiener process after the reparametrization.
Fortunately, we can define a stochastic process
\[\tilde{W}^{\text{a}}_{n}:=\sum_{i=0}^{n}\gamma^{1/2}(\tilde{Y}_{i})\text{d} \tilde{w}^{\text{a}}_{\tilde{\tau}_{i}}, \tag{25}\]
whose increment at the \(n\)-th time step reads
\[\text{d}\tilde{W}^{\text{a}}_{n}=\gamma^{1/2}(\tilde{Y}_{n})\text{d}\tilde{w} ^{\text{a}}_{\tilde{\tau}_{n}}.\]
The conditional probability \(\Pr[\text{d}\tilde{W}^{\text{a}}_{n}=\text{d}W^{\text{a}}_{n}|\tilde{Y}_{n}=Y _{n}]\) can be easily calculated as
\[\Pr[\text{d}\tilde{W}^{\text{a}}_{n}=\text{d}W^{\text{a}}_{n}|\tilde{Y}_{n}=Y_{n}] =\frac{1}{\gamma^{\mathfrak{d}/2}(Y_{n})}\frac{1}{(2\pi\text{d}\tau_{n})^{\mathfrak{d}/2}}\exp\left[-\frac{1}{2}\frac{\delta_{\text{ab}}\text{d}w^{\text{a}}_{n}\text{d}w^{\text{b}}_{n}}{\text{d}\tau_{n}}\right]\] \[=\frac{1}{(2\pi\text{d}t_{n})^{\mathfrak{d}/2}}\exp\left[-\frac{1}{2}\frac{\delta_{\text{ab}}\text{d}W^{\text{a}}_{n}\text{d}W^{\text{b}}_{n}}{\text{d}t_{n}}\right].\]
Since the above conditional probability is independent of the realization of \(\tilde{Y}_{n}\), we can drop the condition,
\[\Pr[\text{d}\tilde{W}^{\text{a}}_{n}=\text{d}W^{\text{a}}_{n}] =\int\text{d}Y_{n}\Pr[\text{d}\tilde{W}^{\text{a}}_{n}=\text{d}W^{\text{a}}_{n}|\tilde{Y}_{n}=Y_{n}]\Pr[\tilde{Y}_{n}=Y_{n}]\] \[=\frac{1}{(2\pi\text{d}t_{n})^{\mathfrak{d}/2}}\exp\left[-\frac{1}{2}\frac{\delta_{\text{ab}}\text{d}W^{\text{a}}_{n}\text{d}W^{\text{b}}_{n}}{\text{d}t_{n}}\right]\int\text{d}Y_{n}\Pr[\tilde{Y}_{n}=Y_{n}]\] \[=\frac{1}{(2\pi\text{d}t_{n})^{\mathfrak{d}/2}}\exp\left[-\frac{1}{2}\frac{\delta_{\text{ab}}\text{d}W^{\text{a}}_{n}\text{d}W^{\text{b}}_{n}}{\text{d}t_{n}}\right].\]
Thus, in the continuum limit \(\text{d}t_{n}\rightarrow\text{d}t\), \(\tilde{W}^{\text{a}}_{n}\) becomes a standard Wiener process \(\tilde{W}_{t}\) with variance \(\text{d}t\). In the end, we obtain the following stochastic differential equations as the continuum limit of eqs.(23) and (24),
\[\text{d}\tilde{y}^{\mu}_{t} =\frac{\tilde{k}^{\mu}_{t}}{m}\gamma^{-1}\text{d}t, \tag{26}\] \[\text{d}\tilde{k}^{\mu}_{t} =\left[\hat{\mathcal{R}}^{\mu}{}_{\text{a}}\circ_{S}\text{d} \tilde{W}^{a}_{t}+\hat{\mathcal{F}}^{\mu}_{\text{add}}\text{d}t\right]+\hat{ \mathcal{K}}^{\mu\nu}U_{\nu}\text{d}t-\frac{1}{m}\Gamma^{\mu}{}_{\alpha\beta} \tilde{k}^{\alpha}_{t}\tilde{k}^{\beta}_{t}\gamma^{-1}\text{d}t, \tag{27}\]
in which we introduced the following notations,
\[\hat{\mathcal{R}}^{\mu}{}_{\mathfrak{a}}:=\gamma^{-1/2}\mathcal{R}^{\mu}{}_{ \mathfrak{a}},\qquad\hat{\mathcal{K}}^{\mu\nu}:=\gamma^{-1}\mathcal{K}^{\mu\nu },\qquad\hat{\mathcal{F}}^{\mu}_{\rm add}:=\gamma^{-1}\mathcal{F}^{\mu}_{\rm add }-\frac{\delta^{\mathfrak{a}\mathfrak{b}}}{2}\mathcal{R}^{\mu}{}_{\mathfrak{a }}\mathcal{R}^{j}{}_{\mathfrak{b}}(\gamma^{-1/2}\nabla^{(h)}_{j}\gamma^{-1/2}).\]
Notice that we have changed the coupling rule once again into Stratonovich's rule, which guarantees that the resulting equations (26)-(27) are manifestly general covariant. Moreover, after the reparametrization, eqs.(26)-(27) still describe a stochastic process driven by Wiener noises, and are now parametrized by the regular evolution parameter \(t\) instead of the random variable \(\tau\). Therefore, eqs.(26)-(27) fulfill all of our anticipations, and we will refer to this system of equations as \(\mathrm{LE}_{t}\).
Please be reminded that, although the observer's proper time \(t\) need not be identical with the coordinate time \(y^{0}\), they _can be made_ identical by the artificial choice of an observer which is at rest in the coordinate system. On such occasions, \(y^{0}\) and \(t\) should be treated as equal, and we need to check that the zeroth component of eq.(26) represents an identity. According to eq.(15), when \(y^{0}=t\), we have
\[Z_{\mu}=-\lambda^{-1}\partial_{\mu}t=-\lambda^{-1}\delta_{\mu}{}^{0}.\]
Thus we have
\[\gamma^{-1}=-\frac{m}{\lambda Z_{\mu}\tilde{k}^{\mu}_{t}}=\frac{m}{\tilde{k} ^{0}_{t}}.\]
Inserting this result into eq.(26), one sees that the zeroth component yields an identity \({\rm d}\tilde{y}^{0}_{t}={\rm d}t\).
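The key property used in eq.(25), namely that rescaling the noise by \(\gamma^{1/2}\) while the time step is rescaled by \(\gamma^{-1}\) yields increments of variance \(\mathrm{d}t\) independently of the state, can be illustrated numerically; in the following sketch (our own addition) the \(\gamma\) values are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
gamma = rng.uniform(1.0, 5.0, size=n)      # placeholder state-dependent Lorentz factors
dtau = dt / gamma                           # d tau_n = gamma^{-1} dt, cf. eq.(22)
dw = rng.normal(0.0, np.sqrt(dtau))         # noise with variance d tau_n
dW = np.sqrt(gamma) * dw                    # rescaled increments, eq.(25)

print(dW.var(), dt)                         # sample variance ~ dt, independent of gamma
```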
## 5 Monte Carlo simulation in the Minkowski case
As advocated in the last section, it is possible to extract reasonable information about the physical distribution on the SoMS from \({\rm LE}_{\tau}\), although the evolution parameter \(\tau\) itself is a random variable from the observer's perspective. In this section, we shall exemplify this possibility by studying the simple case of stochastic motion of Brownian particles in \((1+1)\)-dimensional Minkowski spacetime, driven by a single Wiener process and subject to an isotropic homogeneous damping force.
To be more concrete, we use the orthonormal coordinates \(x^{\mu}=(t,x)\) and let \(E:=p^{0}\) and \(p:=p^{1}\), so that the mass shell condition becomes \(E=\sqrt{p^{2}+m^{2}}\). For isotropic thermal perturbations, the stochastic amplitude should satisfy
\[\mathcal{R}^{\mu}\mathcal{R}^{\nu}=D\Delta^{\mu\nu}(p)=\frac{D}{m^{2}}\left[ \begin{array}{cc}p^{2}&Ep\\ Ep&E^{2}\end{array}\right],\]
where \(D\) is the diffusion coefficient. It's obvious that the stochastic amplitude should read
\[\mathcal{R}^{\mu}=\frac{\sqrt{D}}{m}\begin{bmatrix}p\\ E\end{bmatrix}.\]
If we put the observer and the heat reservoir at rest, i.e. \(Z=U=\partial_{t}\), the coordinate time will be automatically the proper time of the observer, and the isotropic damping force should be
\[\mathcal{F}^{\mu}_{\rm damp}=\kappa\Delta^{\mu\nu}(p)U_{\nu}=-\frac{\kappa}{m^ {2}}\begin{bmatrix}p^{2}\\ Ep\end{bmatrix}.\]
The above choice of observer implies \(\gamma=E/m\).
Since the projection tensor \(\Delta_{\mu\nu}(p)\) is simultaneously the metric of the momentum space \((\Gamma_{m}^{+})_{x}\) with "determinant"
\[\det\Delta_{ij}=\Delta_{11}=\frac{m^{2}}{E^{2}},\]
the additional stochastic force can be evaluated to be
\[\mathcal{F}^{\mu}_{\rm add}=\frac{1}{2}\mathcal{R}^{\mu}\nabla_{i}^{(h)} \mathcal{R}^{i}=\frac{1}{2}\mathcal{R}^{\mu}\frac{1}{\sqrt{\Delta_{11}}} \frac{\partial}{\partial p}\left(\sqrt{\Delta_{11}}\frac{\sqrt{D}}{m}E\right) =0.\]
With the above preparation, we can now write down the two systems of Langevin equations with evolution parameters \(\tau\) and \(t\) as
\[\mathrm{d}\tilde{x}_{\tau}=\frac{\tilde{p}_{\tau}}{m}\mathrm{d}\tau,\qquad \mathrm{d}\tilde{p}_{\tau}=\frac{\sqrt{D}}{m}E\circ_{I}\mathrm{d}\tilde{w}_{ \tau}+\frac{D\tilde{p}_{\tau}}{2m^{2}}\mathrm{d}\tau-\frac{\kappa E\tilde{p}_ {\tau}}{m^{2}}\mathrm{d}\tau, \tag{28}\]
and
\[\mathrm{d}\tilde{y}_{t}=\frac{\tilde{k}_{t}}{E}\mathrm{d}t,\qquad\mathrm{d} \tilde{k}_{t}=\sqrt{\frac{DE}{m}}\circ_{I}\mathrm{d}\tilde{W}_{t}+\frac{D \tilde{k}_{t}}{2Em}\mathrm{d}t-\frac{\kappa}{m}\tilde{k}_{t}\mathrm{d}t. \tag{29}\]
Since the observer is now bound together with the coordinate system, the temporal components of the Langevin equations become either trivial or redundant. Therefore, in eqs.(28) and (29), only the spatial components are presented.
We are now ready to perform numerical simulations based on the above two systems of equations. The (initial) values of the simulation parameters are listed in Tab.1.
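For concreteness, a stripped-down Python version of the simulation of \(\mathrm{LE}_{t}\), eq.(29), is sketched below. The Euler-Maruyama discretization (applicable here because eq.(29) is written with Ito coupling), the random seed and the reduced number of trajectories are our own choices; the physical parameters follow Tab.1:

```python
import numpy as np

D, kappa, m, dt, t_end = 1.0, 1.0, 1.0, 0.02, 20.0     # parameters as in Tab.1
n_paths, n_steps = 100_000, int(t_end / dt)             # fewer paths than the 10^6 used in the paper
rng = np.random.default_rng(42)

y = np.zeros(n_paths)      # positions,       y|_{t=0} = 0
k = np.zeros(n_paths)      # spatial momenta, k|_{t=0} = 0
for _ in range(n_steps):
    E = np.sqrt(k**2 + m**2)                             # mass shell: E = sqrt(k^2 + m^2)
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    y += (k / E) * dt
    k += np.sqrt(D * E / m) * dW + (D * k / (2 * E * m) - kappa * k / m) * dt

# y and k now sample the configuration and momentum space distributions on S_20
print(y.var(), k.var())
```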
The left picture in Fig.1 depicts 50 random worldlines generated by eq.(28) after a fixed "evolution time" \(\tau=20\). The end point of each random worldline is marked by a solid triangle, and the horizontal line at \(t=20\) represents the configuration space \(\mathcal{S}_{20}\). We can see that all worldlines fall strictly in the future lightcone of the initial event \((t,x)=(0,0)\), and the end points of different random worldlines do not fall in the same configuration space.
Nevertheless, we can extract the intersection point of each worldline with the configuration space \(\mathcal{S}_{20}\) (marked with round dots) and try to identify their distribution.
The right picture in Fig.1 depicts 50 random worldlines generated by eq.(29) after the fixed evolution time \(t=20\). Since \(t\) is the regular evolution parameter, the end points of all worldlines automatically fall in the same configuration space \(\mathcal{S}_{20}\) and are marked with round dots. This gives an intuitive illustration of the power of the reparametrization introduced in the last section. One can see how similarly the round points in both pictures in Fig.1 are distributed.
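For the worldlines generated by eq.(28), the intersection with \(\mathcal{S}_{20}\) can be obtained, for instance, by linear interpolation between the last recorded point before and the first point after \(t=20\). The following is only a possible implementation sketched by us, not necessarily the procedure used for the figures:

```python
import numpy as np

def crossing(t_path, x_path, p_path, t_star=20.0):
    """Interpolate a single worldline, sampled at increasing coordinate times t_path,
    at the hypersurface t = t_star (assumes the worldline has already crossed t_star)."""
    i = np.searchsorted(t_path, t_star)                  # first index with t >= t_star
    w = (t_star - t_path[i - 1]) / (t_path[i] - t_path[i - 1])
    return ((1 - w) * x_path[i - 1] + w * x_path[i],
            (1 - w) * p_path[i - 1] + w * p_path[i])
```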
With a little more effort, we have generated \(10^{6}\) random phase trajectories using both eq.(28) and eq.(29) and, in the case of eq.(28), collected the data for the intersection points of the random worldlines with the configuration space \(\mathcal{S}_{20}\).
\begin{table}
\begin{tabular}{c c c c|c c c c|c c} \hline \hline Parameters & \(D\) & \(\kappa\) & \(m\) & \(\tilde{x}|_{\tau=0}\) & \(\tilde{p}|_{\tau=0}\) & \(\tilde{y}|_{t=0}\) & \(\tilde{k}|_{t=0}\) & \(\mathrm{d}t\) & \(\mathrm{d}\tau\) \\ \hline (Initial) values & 1.0 & 1.0 & 1.0 & 0 & 0 & 0 & 0 & 0.02 & 0.02 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The (initial) values of the simulation parameters
Figure 1: Random worldlines generated by eq.(28) (left) and eq.(29) (right)
Figure 2: The distributions of Brownian particles in configuration space \(\mathcal{S}_{20}\) (left) and momentum space (right) from the two systems of equations (28) and (29)
Using the data thus collected, we can depict separately the configuration space and momentum space distributions of the Brownian particles and make comparisons between the results that follow from eq.(28) and eq.(29). As can be seen in Fig.2, the results from eq.(28) and eq.(29) are almost identical.
We also generated the joint distributions in positions and momenta from both eqs.(28) and (29) at \(t=20\). The results are presented in Fig.3. We can hardly find any differences between the two pictures.
As a more serious comparison between the distributions generated by eqs.(28) and (29), the Pearson \(\chi^{2}\)-test was utilized, with the assistance of Wolfram Language, to determine whether the distributions are indeed identical. The resulting P-values were found to be 0.774 for the distributions presented in the left plots of Fig.2, 0.967 for the distributions presented in the right plots of Fig.2, and 0.972 for the two distributions presented in Fig.3. These results provide solid evidence for the expectation that the distributions generated by the two systems of equations (28) and (29) are identical.
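The paper performs the Pearson \(\chi^{2}\)-test in Wolfram Language; an equivalent check on the binned samples could, for instance, be done with SciPy (a sketch with hypothetical sample arrays; the binning choice is ours):

```python
import numpy as np
from scipy.stats import chi2_contingency

def same_distribution_pvalue(sample_a, sample_b, n_bins=40):
    """Pearson chi^2 test of homogeneity between two samples, after common binning."""
    edges = np.histogram_bin_edges(np.concatenate([sample_a, sample_b]), bins=n_bins)
    counts = np.array([np.histogram(sample_a, edges)[0],
                       np.histogram(sample_b, edges)[0]])
    counts = counts[:, counts.sum(axis=0) > 0]           # drop bins that are empty in both samples
    chi2, p, dof, expected = chi2_contingency(counts)
    return p

# e.g. compare the momentum samples produced by eq.(28) and eq.(29)
```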
## 6 Concluding remarks
We have thus formulated two different versions of the relativistic Langevin equation, i.e. \(\mathrm{LE}_{\tau}\) and \(\mathrm{LE}_{t}\) in a generic curved spacetime background, which are both manifestly general covariant. The two versions differ from each other in that \(\mathrm{LE}_{\tau}\) takes the proper time \(\tau\) of the Brownian particle as evolution parameter, while \(\mathrm{LE}_{t}\) takes the proper time \(t\) of the prescribed observer Alice as evolution parameter.
Figure 3: Joint probability distributions in positions and momenta at \(t=20\): The left picture arises from eq.(28), and the right one arises from eq.(29). The distributions presented in Fig.2 correspond to vertical and horizontal projections of these joint distributions.
The importance of the prescribed regularly moving observer is stressed throughout the analysis, especially while clarifying the SoMS of the Brownian particle and interpreting the probability distributions of the Brownian particle. It is argued that, in order to get the physical probability distribution, \(\mathrm{LE}_{t}\) is preferable to \(\mathrm{LE}_{\tau}\). We also discussed the conditions which the relativistic damping coefficients need to obey, and clarified the concept of relative velocity in the relativistic context.
We also demonstrated, by means of Monte Carlo simulation in the particular example case of Brownian motion in \((1+1)\)-dimensional Minkowski spacetime, that although \(\mathrm{LE}_{\tau}\) contains some conceptual issues, it is indeed possible to extract physically reasonable probability distributions from it. However, since the Brownian particles after a fixed proper time \(\tau\) do not fall in the same configuration space, it would be more difficult to obtain the physical probability distributions from \(\mathrm{LE}_{\tau}\).
This work is the first of our attempts for a systematic study of general relativistic stochastic mechanics. In a forthcoming work, we will proceed to formulate the corresponding Fokker-Planck equations and discuss the physical consequences. In particular, the general relativistic variant of Einstein relation will be considered, and the relationship between different probability distributions will be clarified.
Before ending this paper, let us mention that there is another complementary approach, i.e. the 2-jet bundle approach [31, 32] using Ito's formalism, for describing covariant Brownian motion [34, 35, 36, 37, 38]; see also [39] for a more recent review. Our formalism does not need to make use of the jet bundle, and the resulting equations are more in line with the original intuitive construction of Langevin. There is some other recent work [40] which focuses on the heat distribution in Minkowski spacetime and has some overlap in research subjects with the present work.
## Acknowledgement
This work is supported by the National Natural Science Foundation of China under the grant No. 12275138.
## Data Availability Statement
All data used in this research was generated from eqs.(28) and (29) by use of numeric programs written in C++ and python. The P-values given in Sec.5 are calculated using Wolfram Language. All programs are available upon request.
## Declaration of competing interest
The authors declare no competing interest.
arXiv:2303.12846
Chengcheng Xin, Zoltan Haiman, Rosalba Perna, Yihan Wang, Taeho Ryu
2023-03-22T18:06:45Z
http://arxiv.org/abs/2303.12846v1
# Tidal Peeling Events: low-eccentricity tidal disruption of a star by a stellar-mass black hole
###### Abstract
Close encounters between stellar-mass black holes (BHs) and stars occur frequently in dense star clusters and in the disks of active galactic nuclei (AGNs). Recent studies have shown that in highly eccentric close encounters, the star can be tidally disrupted by the BH (micro-tidal disruption event, or micro-TDE), resulting in rapid mass accretion and possibly bright electromagnetic signatures. Here we consider a scenario in which the star might approach the stellar-mass BH in a gradual, nearly circular inspiral, under the influence of dynamical friction on a circum-binary gas disk or three-body interactions in a star cluster. We perform hydro-dynamical simulations of this scenario using the smoothed particle hydrodynamics code PHANTOM. We find that the mass of the star is slowly stripped away by the BH. We call this gradual tidal disruption a "tidal-peeling event", or a TPE. Depending on the initial distance and eccentricity of the encounter, TPEs might exhibit significant accretion rates and orbital evolution distinct from those of a typical (eccentric) micro-TDE.
## 1 Introduction
Stars and their compact remnants, which include stellar-mass black holes (BHs), are expected to be abundant in dense stellar clusters of all kinds (Mackey et al., 2007; Strader et al., 2012), and they can also be found in the disks of Active Galactic Nuclei (AGNs). Dynamical interactions between compact objects and stars in clusters are frequently expected (Rodriguez et al., 2016; Kremer et al., 2018). As a result, stars in a cluster will inevitably undergo close encounters with stellar-mass BHs. These close encounters between stars and BHs, which are of particular interest here, can lead to binary formation or to tidal disruption of the star by the BH (the so-called micro-TDEs, Perets et al., 2016).
Stars and stellar-mass BHs found in an AGN disk are likely the result of two mechanisms: _(i)_ Capture from the nuclear star cluster (Artymowicz et al., 1993), which consists mostly of massive stars (e.g. O- and B-type stars with masses \(\gtrsim\) 2-15\(M_{\odot}\)). These stars' orbits will eventually align with the AGN disk after a number of crossings of the disk (Yang et al., 2020). _(ii)_ In-situ formation: Gravitational instabilities in the outer parts of the disk trigger star formation (Goodman, 2003; Dittmann and Miller, 2020), and those stars, as well as their remnant compact objects, remain embedded in the disk. The unusual disk environment causes stars to accrete and grow in mass (Cantiello et al., 2021; Jermyn et al., 2021), which makes BH remnants a common outcome upon their death. Once trapped in the AGN disk, BHs can go through radial migration and undergo close encounters with stars or compact objects (e.g., Tagawa et al., 2020). Therefore, micro-TDEs can also occur in AGN disks, in addition to the stellar cluster environment.
Micro-TDEs are expected to be ultra-luminous events, and their accretion rates and electromagnetic (EM) features have recently begun to be investigated in more detail via smoothed particle hydrodynamics (SPH)
simulations (Lopez et al., 2019; Kremer et al., 2021; Wang et al., 2021; Kremer et al., 2022; Ryu et al., 2022) and moving-mesh simulations (Ryu et al., 2023). Existing studies have performed numerical experiments to investigate nearly parabolic encounters with eccentricity \(e\sim 1\). Kremer et al. (2022) recently presented a variety of hydrodynamical simulations of the typical micro-TDE with parabolic orbits to show that stars _in vacuum_ can experience different degrees of tidal disruption depending on the pericenter distance and stellar mass, while the peak luminosity of the EM emission might be super-Eddington when the pericenter distance is within \(\sim 2R_{t}\), where \(R_{t}=(M_{\rm BH}/M_{\rm s})^{1/3}R_{\rm s}\) is the order-of-magnitude estimate of the tidal radius for a star with mass \(M_{\rm s}\) and radius \(R_{\rm s}\) disrupted by a BH with mass \(M_{\rm BH}\).
On the other hand, low-eccentricity micro-TDEs in compact orbits are of particular interest in this paper for the following reasons. First, observational work has suggested that binaries in clusters have lower eccentricity as they become more compact (Meibom and Mathieu, 2005; Hwang et al., 2022). 3D hydro-simulations by Ryu et al. (2023) further suggest that three-body interactions in clusters such as encounters between binary stars and stellar-mass BHs can also lead to eventual close interactions between one star in the original binary and the BH, where, in some cases, a low-eccentricity micro-TDE in a close orbit can form if the star becomes bound to the BH. Additionally, star-BH binaries in an AGN disk can become tightly bound due to external torques exerted by the dynamical friction of the AGN disk gas. Hydrodynamical simulations have shown that a circumbinary disk tends to shrink the orbit of the binary within an AGN disk (Li et al., 2021; Kaaz et al., 2021; Li and Lai, 2022) and drive it to low eccentricity, either \(e\to 0\) or \(e\to 0.45\), depending on the initial value (Munoz et al., 2019; D'Orazio and Duffell, 2021; Zrake et al., 2021).
Unlike the abrupt disruption that the star experiences in a parabolic TDE or micro-TDE, lower-eccentricity micro-TDEs gradually strip mass from the star, typically over many orbital times, analogous to the extreme-mass-ratio inspiral of a white dwarf (WD) and an intermediate-mass BH, in which the WD loses mass periodically during the inspiral (Zalamea et al., 2010; Chen et al., 2022). We call this a "tidal-peeling event" (TPE) in this paper.
In this paper, we numerically model the general case of TPEs with SPH simulations using PHANTOM, without including the low-density background gas such as the AGN disk. We focus on exploring the BH mass accretion rate and orbital evolution in TPEs under different assumptions for the initial mass of the star, eccentricity and pericenter distance of the encounter.
We organize this paper as follows. We describe our simulation models, analysis method and a resolution study in § 2, 3 and 4, respectively. In § 5, we show the morphological evolution of the TPEs. § 6 illustrates our prediction for the EM signatures of TPEs, based on the computation of the BH mass accretion rates, stellar mass loss via tidal interactions and the orbit evolution of the remnant. In § 7, we explore the effect of having more massive stars undergoing TPEs. Finally, we discuss some implications of our results in § 8, and we summarize our conclusions in § 9.
## 2 Simulation Methods
We perform SPH simulations of TPEs of stars by a \(10M_{\odot}\) BH using PHANTOM (Price et al., 2018). We run simulations for (4 stellar masses) \(\times\) (4 eccentricities) \(\times\) (6 penetration factors) = 96 models in total, where the penetration factor \(\beta\) is defined as the ratio between the tidal radius and the pericenter distance, or \(R_{t}/r_{p}\). We consider main-sequence (MS) stars with four different masses, \(M_{\rm s}\) = 1, 5, 10 or 15 \(M_{\odot}\), and investigate the dependence of the outcomes on the initial eccentricity by considering \(e_{0}\) = 0.0, 0.2, 0.4 and 0.6. We begin all simulations by placing the star at the apocenter of the orbit. Finally, we consider the following penetration factors: \(\beta=R_{t}/r_{p}\) = 1, 0.67, 0.5, 0.4, 0.33 and 0.25, which correspond to pericenter distances \(r_{p}\) = 1, 1.5, 2, 2.5, 3 and 4 times the tidal radius. For simplicity, we introduce the notation \(\mathcal{M}(M_{\rm s},e_{0},\beta)\) to denote any specific model, where \(M_{\rm s}\) is given in units of \(M_{\odot}\). We fix the BH mass in all the simulation models at \(M_{\rm BH}=10M_{\odot}\).
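To make the parameter grid concrete, the short sketch below enumerates the 96 models and converts each penetration factor into pericenter and apocenter distances via the order-of-magnitude tidal radius \(R_{t}=(M_{\rm BH}/M_{\rm s})^{1/3}R_{\rm s}\). It is only an illustrative reconstruction of the setup; the stellar radii listed are rough main-sequence placeholders, whereas the actual radii in our models come from the MESA profiles described below.

```python
import itertools

M_BH = 10.0  # BH mass [Msun], fixed in all models

# Rough main-sequence radii [Rsun]; placeholders only -- the actual radii come
# from the MESA profiles used to build the SPH stars.
stellar_radius = {1.0: 1.0, 5.0: 2.6, 10.0: 4.0, 15.0: 5.0}

stellar_masses = [1.0, 5.0, 10.0, 15.0]        # Msun
eccentricities = [0.0, 0.2, 0.4, 0.6]
betas = [1.0, 0.67, 0.5, 0.4, 0.33, 0.25]      # beta = R_t / r_p

def tidal_radius(M_s, R_s, M_bh=M_BH):
    """Order-of-magnitude tidal radius R_t = (M_BH / M_s)**(1/3) * R_s."""
    return (M_bh / M_s) ** (1.0 / 3.0) * R_s

models = []
for M_s, e0, beta in itertools.product(stellar_masses, eccentricities, betas):
    R_t = tidal_radius(M_s, stellar_radius[M_s])
    r_p = R_t / beta                     # pericenter distance [Rsun]
    r_apo = r_p * (1 + e0) / (1 - e0)    # star is placed at apocenter initially
    models.append(dict(M_s=M_s, e0=e0, beta=beta, R_t=R_t, r_p=r_p, r_apo=r_apo))

print(len(models))  # 96 models in total
```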
We first use the 1D stellar evolution code MESA (Paxton et al., 2019) to generate the profile of each MS star with a core H fraction of 0.5, where we assume solar abundances for the composition, hydrogen and metal mass fractions \(X=0.74\) and \(Z=0.02\) respectively (helium mass fraction \(Y=1-X-Z\)), and mean molecular weight \(\mu\sim 0.59\) (fully ionized gas). For the stellar masses that we consider, MESA uses the OPAL and HELM tables for the equation of state (Paxton et al., 2019), which we adopt in the TPE simulations. We then take the density and internal energy profiles of the MESA MS stars to start the simulations in PHANTOM. We first map the 1D MESA model onto our 3D SPH grid and relax it for a few stellar dynamical times (\(t_{\rm dyn}=\sqrt{R_{s}^{3}/(GM_{s})}\)) until it reaches hydrostatic equilibrium. \(t_{\rm dyn}\) is typically 1 to a few hours depending on the mass and radius of the star.
In the TPE simulations with PHANTOM, we use an artificial viscosity varying between \(\alpha^{\rm AV}_{\rm min}=0.1\) and \(\alpha^{\rm AV}_{\rm max}=1\). This is the typical range over which \(\alpha^{\rm AV}\) evolves, and it contributes to shock capture (e.g. Coughlin et al., 2017). We
adopt an equation of state that includes radiation pressure assuming instantaneous local thermodynamic equilibrium. This assumption is valid because the gas in our simulations is expected to be optically thick. We employ \(10^{5}\) SPH particles in each simulation, which is justified in § 4, and each simulation uses up to 6,000 CPU hours on an Intel Xeon Gold 6226 2.9 GHz processor. For this resolution, the smallest spatial scale within which accretion can be resolved is \(r_{\rm acc}=100r_{g}\), where \(r_{g}=GM_{\rm BH}/c^{2}\). If an SPH particle falls within the "accretion" radius, it is accreted onto the BH. The particles are removed from the simulation once accreted by the BH; the removed mass is added to the mass of the sink particle.
## 3 Analysis
In this study, we focus on some key physical quantities, such as the amount of mass lost in TPEs and the accretion rate, directly measured from our simulation output. Also, we investigate their dependence on different initial conditions - the mass of the star (\(M_{s}\)), the initial eccentricity (\(e_{0}\)), and the penetration parameter (\(\beta\)) that is inversely proportional to the initial pericenter distance.
First, we measure the mass accretion onto the BH, \(M_{\rm acc}\), by evaluating the mass accreted onto the sink particle representing the BH. The BH accretion rate \(\dot{M}_{\rm BH}\) is computed as the finite difference of \(M_{\rm acc}\) divided by the time difference (\(\sim 0.4\) hours) between two adjacent outputs of the simulation.
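This measurement reduces to a finite difference once the snapshot data are tabulated; a minimal sketch follows, assuming the snapshot times and cumulative sink-particle masses have already been extracted into arrays (the variable names are illustrative, not PHANTOM output fields).

```python
import numpy as np

MDOT_EDD = 2.2e-7  # Eddington accretion rate for a 10 Msun BH (10% efficiency) [Msun/yr]

def bh_accretion_rate(t_snap, M_sink, M_bh0=10.0):
    """Finite-difference BH accretion rate from the cumulative sink-particle mass.

    t_snap : snapshot times [yr];  M_sink : sink-particle (BH) mass [Msun].
    Returns mid-point times, accreted mass M_acc, and the rate Mdot [Msun/yr].
    """
    t_snap, M_sink = np.asarray(t_snap), np.asarray(M_sink)
    M_acc = M_sink - M_bh0                      # mass accreted since t = 0
    Mdot = np.diff(M_acc) / np.diff(t_snap)     # rate between adjacent outputs
    t_mid = 0.5 * (t_snap[1:] + t_snap[:-1])
    return t_mid, M_acc, Mdot

# Expressing the rate in Eddington units, as in the figures: Mdot / MDOT_EDD.
```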
In a TPE, the star's mass is slowly stripped by the BH, which leads to the star being partially or totally disrupted. In past studies of TDEs or micro-TDEs using numerical simulations (e.g., Mainetti et al., 2017; Kremer et al., 2022), the mass bound to the star or BH is usually computed using an iterative process described in Lombardi, Jr. et al. (2006). However, since the iteration evaluates the specific binding energy of each particle, including a gravitational potential term, it assumes spherical geometry for the remnant, which is not always applicable in our TPE simulations, see Fig. 5 for example. Additionally, in some TPEs, the remnant is not isolated as it is connected with debris, for which the iterative process can lead to inaccurate identification of the remnant. Alternatively, we define the mass of the stellar remnant (\(M_{\rm rem}\)) as the total mass of particles within the initial radius of the star (measured from the densest point in the star).
In addition to the stellar material lost to \(M_{\rm acc}\), the star can also lose mass to the surroundings when stellar material becomes unbound during the disruptions. We measure the fraction of total mass removed from the star, \(f_{\rm rm}\). The mass removed consists of the mass accreted by the BH (\(M_{\rm acc}\)) and the mass ejected (total stellar mass minus remnant mass; \(M_{\rm s}-M_{\rm rem}\)). Note that the mass removed from the star includes mass that is unbound from the remnant but still bound to, and not yet accreted by, the BH. So \(f_{\rm rm}=(M_{\rm s}-M_{\rm rem}+M_{\rm acc})/M_{\rm s}\).
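A sketch of this bookkeeping for a single snapshot, assuming the SPH particle positions, masses and densities have been loaded into NumPy arrays (illustrative names, not the PHANTOM reader interface):

```python
import numpy as np

def removed_fraction(pos, m_part, rho, R_s_init, M_s, M_acc):
    """Remnant mass and removed fraction, f_rm = (M_s - M_rem + M_acc) / M_s.

    pos      : (N, 3) SPH particle positions [Rsun]
    m_part   : (N,)   particle masses [Msun]
    rho      : (N,)   SPH densities, used only to locate the remnant centre
    R_s_init : initial stellar radius [Rsun]
    M_s      : initial stellar mass [Msun]
    M_acc    : mass accreted by the BH so far [Msun]
    """
    centre = pos[np.argmax(rho)]                 # densest particle = remnant core
    r = np.linalg.norm(pos - centre, axis=1)
    M_rem = m_part[r < R_s_init].sum()           # mass within the initial stellar radius
    f_rm = (M_s - M_rem + M_acc) / M_s
    return M_rem, f_rm
```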
The orbital features of the stellar remnant can be described by the evolution of the orbital separation (\(r\)), semi-major axis (SMA; \(a\)) and eccentricity (\(e\)) over time. We define \(r\) to be the distance between the particle of highest density in the stellar remnant, typically at the core of the star (small deviations can occur due to oscillations in the star during the disruption), and the position of the sink particle (BH). The SMA and the eccentricity are calculated using the specific energy and specific angular momentum of the binary, adapted from the calculation in Munoz et al. (2019), where the equations of motion of the binary are evaluated including the external gravitational and accretion forces. In § 6, we evaluate the evolution of \(a\) and \(e\), as well as their change per orbit around the BH.
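A sketch of the corresponding two-body bookkeeping, assuming the remnant centre (densest particle) and the sink-particle positions and velocities are known; for simplicity this keeps only the Keplerian part and omits the external gravitational and accretion-force terms included in the full calculation.

```python
import numpy as np

G = 1.0  # gravitational constant in code units (illustrative)

def orbital_elements(r_star, v_star, r_bh, v_bh, m_star, m_bh):
    """Separation, semi-major axis and eccentricity of the remnant-BH orbit."""
    r_rel = np.asarray(r_star, float) - np.asarray(r_bh, float)
    v_rel = np.asarray(v_star, float) - np.asarray(v_bh, float)
    r = np.linalg.norm(r_rel)
    mu = G * (m_star + m_bh)

    eps = 0.5 * np.dot(v_rel, v_rel) - mu / r      # specific orbital energy
    h = np.cross(r_rel, v_rel)                     # specific angular momentum
    a = -mu / (2.0 * eps)                          # semi-major axis (bound orbit: eps < 0)
    e = np.sqrt(max(0.0, 1.0 + 2.0 * eps * np.dot(h, h) / mu**2))
    return r, a, e
```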
## 4 Resolution Tests for Initial Stellar Profile
A typical choice for resolutions of hydro-simulations of TDEs or micro-TDEs is \(N\sim 10^{5}\) particles (e.g. Mainetti et al., 2017; Kremer et al., 2022). We performed resolution tests to determine whether or not a higher resolution is needed, by using PHANTOM to model the initial stellar profile using different numbers of SPH particles \(N=10^{5},2\times 10^{5},4\times 10^{5},8\times 10^{5},10^{6}\). In particular, we compare the radial density profiles of the fully relaxed \(1M_{\odot}\) star with the numbers of SPH particles given above in Fig. 1. The gray region shows where the initial profile varies the most, which occurs at the surface of the star. We find that different resolutions only cause the density to fluctuate by \(\sim\)0.01%, which only takes place in less than 1% of the SPH particles by mass and \(\lesssim 0.2R_{\odot}\) by radius. Overall, the density profiles for resolutions from \(N=10^{5}\) to \(N=10^{6}\) particles show excellent agreement. Therefore, we run all TPE simulations, starting from their stellar profiles, with particle number \(N=10^{5}\). As a comparison, we also depict the polytropic star with \(\gamma=4/3\) of the same mass using a purple dashed line.
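The profile comparison itself is straightforward once the relaxed particle data are available; a sketch follows, assuming arrays of particle radii and SPH densities for each relaxed model (again with illustrative names).

```python
import numpy as np

def radial_density_profile(r, rho, n_bins=100):
    """Shell-averaged density profile from particle radii r and SPH densities rho."""
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
    prof = np.array([rho[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(n_bins)])
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, prof

# Overlaying the profiles of the relaxed N = 1e5 ... 1e6 models, each normalised
# to its central density, gives the kind of comparison shown in Fig. 1.
```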
## 5 Morphology of TPE
The stars in our TPE simulations encounter the BH in low-eccentricity (\(e=0-0.6\)) and ultra-compact (\(\beta=0.25-1\)) orbits. Depending on the initial conditions, the mass of the star can be slowly peeled by the BH, and stellar material is lost on the timescale of many orbital periods. In general, TPEs will have novel morphological evolution, e.g. distinct morphology from that seen in
TDEs or micro-TDEs, and in particular, 1) gradual tidal stripping and formation of spirals, 2) possible debris-star interactions, and 3) efficient circularization of debris into an accretion disk. Each of these is demonstrated in the following examples.
Fig. 2 shows the typical morphology of a TPE, where the column density of the gas particles is shown in the color bar and the BH is represented by the green dot. In this example (Model \(\mathcal{M}(1,0.4,1)\); recall the definition in § 2), the \(1M_{\odot}\) star on an eccentric orbit with \(\beta=1\) is "peeled" due to the tidal influence of the BH, which continues for four orbits before the star is totally disrupted (at the \(\sim\)4th orbit). The snapshots are taken at \(t=0,4.9,12.0,18.2,23.5\) and \(36.3\) hours since the onset of the simulation, where the orbital period is \(P\approx 5.7\) hrs. Some stellar debris circularizes and forms an accretion disk around the BH, while some becomes unbound and is ejected to infinity, including mass lost through the "L3" point; we show the initial equipotential surface of the binary in each panel. This can be seen more clearly in Fig. 3, which shows the edge-on view of Fig. 2. The disk is initially smaller than the pericenter distance of the orbit for a short period of time, before it inflates and puffs up later on due to radiation pressure and shock heating, similar to the findings of Wang et al. (2021).
Generally, tidal peeling is more violent for smaller orbital separations. All of our TPE simulations result in super-Eddington BH accretion rates. However, since a significant fraction of the star is tidally disrupted, most of the dense stellar material is left around the BH, resulting in a large optical depth that will likely delay and dim the EM emission from a TPE. In reality, the luminosity could be modulated by several mechanisms such as jet emission or wind outflow from the accretion disk, which are not included in this study. Additionally, in some configurations, such as \(\mathcal{M}(1,0.6,0.67)\) in Fig. 4, the star intersects its own tidal streams periodically, which will form a shock front that further modifies the luminosity from the TPE. In this model, the remnant remains intact for many orbits. In the second panel, the star encounters the tail of its own stream formed in the previous orbit, leaving behind a hot plume near the star, as seen in the last panel. Although these phenomena cannot be resolved in our simulations, in the following sections we will qualitatively discuss their implications for the overall EM signature of TPEs, in addition to the accretion rates that we measure directly from the simulations.
Finally, TPEs from the interaction of BHs with more massive stars are considered since stars near the galactic center (Genzel et al., 2003; Levin, 2003; Paumard et al., 2006) and those formed in an AGN disk (Levin, 2003; Goodman and Tan, 2004) are also thought to be preferentially massive, and they offer morphology different from TPEs with a solar-like star. In Fig. 5, we demonstrate the TPE between a \(5M_{\odot}\) star and the BH in circular orbit with the initial separation of one tidal radius. The surface of this star is almost in contact with the BH, \(a=r_{p}\approx 1.3R_{s}\). Compared to a solar-like star in the same initial orbit, a more massive star experiences more rapid tidal peeling. As a result, the spirals formed from the disrupted material are more closely packed, compared to those in Fig. 2. The snapshots of the TPE are taken at \(t=0,0.88,1.77,2.66,3.54\) and \(4.43\) hours, and this TPE model has orbital time \(P\approx 1\) hr. The massive star is totally disrupted within the first orbit, and the stellar material eventually circularizes into a smooth disk.
## 6 Accretion Rate and Orbital Evolution of Tpes
### Overview using two examples
Fig. 6 demonstrates six key features of mildly eccentric TPEs for the case of the \(10M_{\odot}\) BH and the \(1M_{\odot}\) star. This figure presents two models - \(\mathcal{M}(1,0.4,1)\) (left): initial eccentricity (\(e_{0}\)) is 0.4 and initial pericenter distance \(r_{p}/R_{t}\)=1 (\(\beta=1\)), and \(\mathcal{M}(1,0.6,0.67)\) (right): a more eccentric and less compact model with \(e_{0}\)=0.6 and \(r_{p}/R_{t}\)=1.5 (\(\beta=0.67\)). We show the time evolution of (i) the mass accreted onto the BH (\(M_{\rm acc}\)), (ii)
Figure 1: The radial density profile of a fully relaxed \(1M_{\odot}\) star in PHANTOM, using \(N=10^{5},2\times 10^{5},4\times 10^{5},8\times 10^{5},10^{6}\) SPH particles. The density is normalized to the core density \(\rho_{\rm c}\). Different resolutions yield converging initial density profiles for the star, apart from a small surface layer (R\(>0.8R_{\odot}\); gray region) containing \(\lesssim\)\(0.9\%\) of the stellar mass. This justifies our choice to use \(N=10^{5}\) particles throughout the simulations. As a sanity check, we overlay the analytical solution of a 4/3-polytrope (purple dashed line).
Figure 3: \(\mathcal{M}(1,0.4,1)\) – Same snapshots of the simulation as in Fig. 2 on the x-z plane, or edge-on view of the orbit and the accretion disk.
Figure 2: \(\mathcal{M}(1,0.4,1)\) – Tidal peeling morphology of a \(1M_{\odot}\) star and a \(10M_{\odot}\) BH, where the orbit is initially a low-eccentricity inspiral (\(e_{0}=0.4\)), and the pericenter distance between star and BH is 1 tidal radius (\(r_{p}=2.2R_{\odot}\); \(\beta=1\)). The color bar shows the projection of log-scale column density in the x-y plane. We overlay the _initial_ equipotential surface of the binary to show that the stellar material fills up the Roche Lobe around the BH, and the star loses mass through the Lagrangian points. The initial orbital period is quoted in parentheses; specifically, \(P\approx 5.7\) hours in this model. We show six time frames of the event that demonstrate the tidal “peeling” process, until the star is completely disrupted by the BH. The star orbits around the BH and passes through the pericenter four times until it is torn apart by the BH.
Figure 4: \(\mathcal{M}(1,0.6,0.67)\) – Tidal peeling of the same BH-star binary described in Fig. 2, but with initial eccentricity \(e_{0}=0.6\) and \(\beta=0.67\). The initial orbital period (in parentheses) is 19.2 hours.
Figure 5: \(\mathcal{M}(5,0.0,1)\) – Tidal peeling of BH-star with a higher stellar mass, \(M_{\rm s}=5M_{\odot}\), initially circular orbit (\(e_{0}\)=0) and pericenter distance equal to the tidal radius \(\beta=1\). The initial orbital period of the binary is \(\sim 1\) hour. The star is completely disrupted soon after the beginning of the simulation.
the mass accretion rate (\(\dot{M}_{\rm BH}\)) in Eddington luminosity \(\dot{M}_{\rm Edd}=L_{\rm Edd}/0.1c^{2}\), (iii) the fraction of mass removed from the star (\(f_{\rm rm}\)), (iv) the orbital separation (\(r\)), (v) the evolution of the SMA normalized to its initial value (\(a/a_{0}\)) and (vi) the evolution of eccentricity (\(e\)). The bottom four panels of Fig. 6 reflect the properties of the stellar remnant and are therefore only computed before total disruption; the time after total disruption of the star is labeled with hatched lines. Finally, we show the times of pericenter and apocenter passages with red-dashed lines and blue-solid lines, respectively.
In the first model, \(\mathcal{M}(1,0.4,1)\), the mass of the BH grows monotonically with time, while the accretion rate increases until it plateaus around 5 hours (\(\sim P\)), exceeding the Eddington limit by more than seven orders of magnitude. In fact, the values of \(\dot{M}_{\rm BH}\) that we find are typically super-Eddington within the first few orbits of disruption, if \(r_{p}\) is within \(\sim 3R_{t}\). In this model, the stellar remnant orbits around the BH on a \(\sim 5\) hr orbital timescale, during which the binary separation shrinks and the fraction of stellar mass removed becomes larger until the star is totally disrupted after approximately 4 orbital times. The large fluctuations in \(a\) and \(e\) indicate that the star-BH orbit is not Keplerian due to tidal effects and shocks, resulting in the dissipation of orbital energy and asymmetric mass loss.
For an initially less compact binary, e.g. \(\mathcal{M}(1,0.6,0.67)\) (right-hand side of Fig. 6), the stellar remnant does not undergo total disruption in the first few orbits. In fact, the mass accretion rate spikes after each pericenter passage (minima in \(r\)) with a small time delay, while the peak level decreases over time. Similar observations have been reported in simulations of binary stars, where the peak of mass transfer rate is found shortly after each binary orbit's pericenter (Lajoie & Sills, 2011). \(a\) and \(e\) show fluctuations unique to TPEs, discussed further in SS 8, indicating non-Keplerian orbital evolution, even for a slightly tidally disrupted star's orbit.
### Dependence on the initial conditions
In this section, we investigate the dependence of the six key quantities above on different initial conditions, namely \(M_{s}\), \(e_{0}\), and \(\beta\), providing characteristics of the EM emission of TPEs. We measure these quantities during the first three orbits of the remnant around the BH, from one apocenter to the next (between blue solid lines in Fig. 6). In particular, we compute the change per-orbit of mass accreted onto the BH, the BH accretion rate, and the fractional stellar mass removed, which are denoted by \(M_{\rm acc,a}\), \(\dot{M}_{\rm BH,a}\) and \(f_{\rm rm,a}\), respectively. This allows us to take into account any enhancements in \(\dot{M}_{\rm BH}\) during each orbit, including the peaks near the pericenters as seen in the right-hand side of Fig. 6. We also evaluate the total change in SMA (\(\Delta a/a_{0}\)) and eccentricity (\(\Delta e\)) each orbit.
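A sketch of this per-orbit bookkeeping follows, assuming the time series of separation, accreted mass, removed-mass fraction, SMA and eccentricity have already been extracted as arrays; apocenter passages are identified as local maxima of the separation, and the per-orbit accretion rate is represented here by a simple orbit average (how the rate is aggregated within an orbit is a choice not pinned down by this sketch).

```python
import numpy as np

def per_orbit_changes(t, r, M_acc, f_rm, a, e, n_orbits=3):
    """Per-orbit changes of the key quantities, measured apocenter to apocenter."""
    r = np.asarray(r)
    # Apocenter passages: local maxima of the separation r(t).
    apo = np.where((r[1:-1] > r[:-2]) & (r[1:-1] > r[2:]))[0] + 1
    out = []
    for k in range(min(n_orbits, len(apo) - 1)):
        i0, i1 = apo[k], apo[k + 1]
        out.append(dict(
            M_acc_a=M_acc[i1] - M_acc[i0],                     # mass accreted this orbit
            Mdot_a=(M_acc[i1] - M_acc[i0]) / (t[i1] - t[i0]),  # orbit-averaged rate
            f_rm_a=f_rm[i1] - f_rm[i0],                        # removed-mass fraction change
            da_over_a0=(a[i1] - a[i0]) / a[0],                 # change in SMA
            de=e[i1] - e[i0],                                  # change in eccentricity
        ))
    return out
```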
In comparison with a typical TDE or micro-TDE, where the star is on a parabolic orbit and more than half of its mass can be lost at the first pericenter passage (e.g. Mainetti et al., 2017; Bartos et al., 2017; Yang et al., 2020; Kremer et al., 2022), in a TPE, the star typically loses mass to the BH more gradually over many orbits around the BH. The degree of mass loss from the star and the mass accretion onto the BH can be different, depending on the choices of \(M_{s}\), \(e_{0}\), and \(\beta\).
Figure 6: Time evolution of key physical quantities characterizing TPEs, for the models \(\mathcal{M}(1,0.4,1)\) (left) and \(\mathcal{M}(1,0.6,0.67)\) (right). The six panels, from top to bottom, show the (i) mass accreted by the BH, (ii) accretion rate in Eddington units, (iii) the fraction of mass removed from the star, (iv) the separation between the remnant and the BH, (v) the evolution of the semi-major axis and (vi) the eccentricity. The pericenter and apocenter passages are labeled with red-dashed and blue-solid lines, respectively. The hatched regions represent total disruption of the star.
Fig. 7 shows the orbital change of mass accretion, \(M_{\rm acc,a}\), of TPEs with the \(1M_{\odot}\) star and the \(10M_{\odot}\) BH, under different assumptions for \(\beta\) (x-axis) and \(e_{0}\) (y-axis). In the most compact models (\(\beta=1\)), the star gets totally disrupted within the first three orbits, which are denoted with crosses. This is roughly consistent with the analytical expectation that the star undergoes tidal disruption when the pericenter distance of the orbit is comparable to the tidal radius, i.e. \(r_{p}/R_{t}\sim 1\) (red dot-dashed line). More generally, \(M_{\rm acc,a}\) is larger for initially more compact orbits, meaning smaller \(r_{p}\) (larger \(\beta\)) and smaller \(e_{0}\). The latter is equivalent to having smaller initial orbital separation, since we initially place the star at the apocenter distance \(r_{\rm apo}=a_{0}(1+e_{0})\). However, we see a smaller dependence of \(M_{\rm acc,a}\) on the initial eccentricity than on initial pericenter distance. The amount of mass accreted by the BH inevitably increases over time once mass transfer begins, resulting in the highest values of \(M_{\rm acc,a}\) in the third orbit. In the models with the largest pericenter distances, \(r_{p}/R_{t}\gtrsim 3\), there is no mass accretion onto the BH in the first three orbits, denoted by the open circles.
We see similar trends in the fraction of stellar material removed from the star (or \(f_{\rm rm,a}\); Fig. 8). Tidal peeling can remove stellar mass slowly over a few orbital times, which can be seen from the persistent increase of \(f_{\rm rm,a}\) over the first three orbits. Generally, a larger fraction of the star is removed when the initial orbit has smaller \(r_{p}\) and \(e_{0}\), and as time goes on. Note that even in the widest binaries (\(r_{p}/R_{t}\gtrsim 3\)), a small amount of stellar mass is removed under tidal effects, which is beyond the analytical prediction (e.g. Zalamea et al., 2010) for the onset of mass loss (red dot-dashed line), although the mass accretion onto the BH can be zero (as seen in Fig. 7). Finally, we again observe larger variations in \(f_{\rm rm,a}\) due to \(r_{p}\) than due to \(e_{0}\).
Fig. 9 shows that typically \(\dot{M}_{\rm BH,a}\) range from \(\sim 10^{4}\) to \(10^{8}\) times the Eddington accretion rate of the BH. The values of \(\dot{M}_{\rm BH,a}\) are overall higher when the initial binary orbit is more compact and less eccentric, although, like in Fig. 7 and 8, the impact of the initial value of \(r_{p}\) is larger than the impact of \(e_{0}\). Like the trend in both BH mass accretion and fraction of stellar mass loss, the values of \(\dot{M}_{\rm BH,a}\) tend to increase over time, except in some models with \(e_{0}\)=0.6, e.g. \(\mathcal{M}(1,0.6,0.67)\) in Fig. 6, where the tidal influence is the weakest due to the large initial separation between the star and the BH.
Most TPE models in our simulations indicate partial disruption of the star, which suggests EM emission from TPEs persisting over many orbital times. Although we only simulate the first few orbital times of TPEs in this work, we investigate the orbital evolution of the stellar remnant during this time, and we attempt to find patterns in the evolutions of the SMA and the eccentricity that could predict whether the binary separation widens or becomes more compact. Future work should investigate the long-term behavior of star-BH TPEs, in order to determine (1) the full duration of their EM emission, and (2) whether or not the star will be eventually totally disrupted by the BH.
In Fig. 10, we demonstrate the variations in SMA (\(\Delta a\)) per orbit evaluated during the first three orbits of TPEs with the \(1M_{\odot}\) star around the BH. We investigate the
Figure 7: The change of mass accretion onto the BH per orbit, \(M_{\rm acc,a}\), for TPEs with a \(1M_{\odot}\) star, as a function of initial \(e_{0}\) and \(\beta\), evaluated for the first three orbits of the stellar remnant around the BH. We show the pericenter distances corresponding to each \(\beta\) in the parentheses. The darker end of the color bar represents larger \(M_{\rm acc,a}\) values, which decrease as the initial orbit becomes wider and more eccentric. In the most compact configurations, the star is totally disrupted (crosses), while in the least compact orbits, zero mass is accreted by the BH (open circles). The onset of mass transfer is analytically expected to occur when \(r_{p}\approx R_{t}\) (red dotted line).
Figure 8: Similar to Fig. 7, but we show the orbit-averaged fraction of mass removed from the star and the BH, \(f_{\rm rm,a}=M_{\rm rm}/M_{s}\), where \(M_{\rm rm}=M_{s}-M_{\rm rem,s}+M_{\rm acc,BH}\), where \(M_{s}\) is the total mass of the star, \(M_{\rm rem,s}\) the remnant mass and \(M_{\rm acc,BH}\) the mass accreted onto the BH. The crosses represent total disruption. The red dot-dashed lines again represent the onset of mass transfer limit at \(r_{p}\approx R_{t}\).
Figure 10: The change of semi-major axis (\(\Delta a\)) normalized by its initial value (\(a_{0}\)) during the first, second and third orbit around the BH, given in percentages. The yellow colors represent (near) zero changes in SMA during the orbit, while the redder (bluer) points represent orbit expanding (shrinking).
Figure 9: Mass accretion rate onto the BH, \(\dot{M}_{\rm BH}\), as a function of initial eccentricity and penetration factor (the corresponding pericenter distances are quoted in parentheses), taken at the 1st to 3rd pericenter passages. \(\dot{M}_{\rm BH}\) is given in units of \(\dot{M}_{\rm Edd}\sim 2.2\times 10^{-7}M_{\odot}/\)yr for a \(10M_{\odot}\) BH. The open circles represent zero accretion rate. The crosses represent total disruption.
change in \(\Delta a\), normalized by the initial SMA \(a_{0}\) of each model, due to different initial conditions \(\beta\) and \(e_{0}\). The color bars show percentage values of \(\Delta a/a_{0}\), which typically fluctuate within \(\sim\)4%. We observe that in most models, \(\Delta a\) remains roughly zero (yellow points), corresponding to very small variation in \(a\) during one orbit, meaning that the orbital separation at one apocenter is not too different from the next one. The redder points in Fig. 10 correspond to the models where the orbits are widening (\(\Delta a>0\)); the bluer points corresponds to shrinking orbits (\(\Delta a<0\)). There is a lack of overall trends that dictates whether \(\Delta a\) increases or decreases with the two initial conditions, except that the most compact orbits tend to decay.
Fig. 11 shows the change of eccentricity \(\Delta e\) in the first three orbits for the same models in Fig. 10. Most models show small variations in \(\Delta e\) (yellow points), except for the initially circular models (bottom points) and the most compact models with different \(e_{0}\) (points in the first column), which is consistent with the behaviors in \(\Delta a\). The stars in these models are the most tidally influenced by the BH, where \(\Delta e\) shows significant fluctuations in all three orbits - some orbits become more eccentric then later circularize, and vice versa.
### Sources of luminosity
In a TPE, the super-Eddington accretion onto the BH powers outflow from the accretion disk. The EM emission from the TPE is delayed by the photon diffusion time (\(\tau_{\rm diff}\)), which dilutes the emission from the accretion disk. From our simulation, \(\tau_{\rm diff}=\tau H/c\sim 10^{5}\) years, similar to the photon diffusion time in the sun. In this relation, \(H\sim 1.5R_{\odot}\) is the thickness of the accretion disk formed from the TPE. \(\tau\) is the optical depth to electron scattering, computed assuming fully ionized gas as,
\[\tau=\int_{r}^{\infty}\rho(r^{\prime})\frac{\sigma_{T}}{m_{p}}dr^{\prime} \approx 10^{11}, \tag{1}\]
where \(\sigma_{T}\) is the electron scattering cross-section. \(\rho\) is the 3-dimensional density of the accretion disk taken directly from our simulations, which is typically very high since a large fraction of the star is stripped to form the disk in a TPE. Overall, the photon diffusion time \(\tau_{\rm diff}\) is much longer compared to the viscous timescale of the accretion disk (eq. 4 in D'Orazio et al., 2013),
\[\tau_{\rm visc}\simeq 1060\left(\frac{\mathcal{M}}{10}\right)^{2}\left(\frac{ 0.01}{\alpha}\right)t_{\rm orb}\approx 20\ {\rm days}, \tag{2}\]
where \(\mathcal{M}\) is the Mach number, \(\alpha\) is the Shakura-Sunyaev viscosity parameter, and \(t_{\rm orb}\) is the orbital time, which is typically a few hours. However, given the super-Eddington accretion rate of a TPE, a relativistic jet may be launched and break out from the disk, possibly allowing the TPE to shine through. Since \(\dot{M}_{\rm BH}\gg\dot{M}_{\rm Edd}\), there could be strong accretion disk outflow that might also modify the EM emission of a TPE. If the TPE is embedded in an AGN disk, the star and the BH will accrete mass from the disk. We use the calculations in Tagawa et al. (2020) to estimate that the mass accretion rates onto the star and the BH are both approximately \(10^{3}\dot{M}_{\rm Edd}\), with the BH's accretion rate \(\sim 5\%\) of the star's. We assume that the TPE is located at \(r\sim 10^{-2}\) pc from the central massive BH of mass \(10^{6}M_{\odot}\), where the disk density is \(\rho_{\rm AGN}\sim 10^{12}M_{\odot}/{\rm pc}^{3}\) and the aspect ratio is \(\simeq 10^{-3}\). The accretion rates from the AGN disk are also super-Eddington, although they are still a few orders of magnitude lower than \(\dot{M}_{\rm BH}\) in the TPE. Modeling these aspects of TPEs would require higher resolution, radiative transfer, and/or perhaps a different numerical code that can include the low-density background AGN disk, which could be addressed in future work.
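For orientation, the timescale estimates used in this subsection (eqs. 1 and 2) can be written down compactly; the sketch below uses CGS constants, and the disk density profile passed to the optical-depth integral is a stand-in for the simulation output.

```python
import numpy as np

SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
M_P     = 1.673e-24   # proton mass [g]
C       = 3.0e10      # speed of light [cm/s]
R_SUN   = 6.96e10     # solar radius [cm]
YEAR    = 3.15e7      # [s]

def optical_depth(r_grid, rho_grid):
    """tau = integral of rho * sigma_T / m_p dr (eq. 1), by the trapezoidal rule."""
    return np.trapz(rho_grid * SIGMA_T / M_P, r_grid)

def diffusion_time(tau, H=1.5 * R_SUN):
    """Photon diffusion time tau_diff = tau * H / c, in years."""
    return tau * H / C / YEAR

def viscous_time(t_orb_hours, mach=10.0, alpha=0.01):
    """tau_visc ~ 1060 (Mach/10)^2 (0.01/alpha) t_orb (eq. 2), in days."""
    return 1060.0 * (mach / 10.0) ** 2 * (0.01 / alpha) * t_orb_hours / 24.0
```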
## 7 Massive stars
Due to different stellar physics in massive stars, we investigate the behavior of TPEs where stars more massive than solar mass are involved, \(M_{s}=5,10\), and \(15M_{\odot}\). Fig. 12 shows (i) the properties of TPEs depending on the initial stellar mass and initial pericenter distance, at fixed \(e_{0}=0.4\) (left panels), and (ii) the same properties depending on initial \(M_{s}\) and \(e_{0}\), at fixed \(r_{p}\) (right panels). From top to bottom, we show the change in mass accretion onto the BH, the fraction of mass removed from the star, and mass accretion rate per orbit. The crosses indicate that more massive stars are more likely to undergo total disruption given the same initial orbital configurations. In Fig. 13, we see that this is because a more massive star's radius is closer to the pericenter, even though its density profile is steeper. Here we show the density profiles of the initial stars \(M_{s}=1,5,10\) and \(15M_{\odot}\), as labeled. The dashed lines (top x-axis) represent the ratio between stellar radius and tidal radius, which is larger for a more massive star.
In Fig. 12, we normalize the BH mass accretion by the initial mass of the star, \(M_{\rm acc,a}/M_{s}\), in the top two panels. Therefore, any change in \(M_{\rm acc,a}/M_{s}\), as well as in \(f_{\rm rm,a}\) and \(\dot{M}_{\rm BH}\), with the initial \(M_{s}\) (along the y-axes) reflects different interior structures of the stars due to different masses. There are minimal changes in \(M_{\rm acc,a}/M_{s}\), \(f_{\rm rm,a}\) and \(\dot{M}_{\rm BH}\) along the \(M_{s}\) axis, at any fixed \(r_{p}\) or \(e_{0}\), especially for \(M_{s}\geq 5M_{\odot}\). This indicates that the stellar interiors, mainly the envelopes that respond to the tidal stripping by the BH, are not significantly different for different stellar masses, unless the core of the star is also disrupted, i.e. the cases of total disruption.
Overall, these three quantities show more variation due to different initial \(r_{p}\) and \(e_{0}\), compared to the effect of stellar mass. At fixed \(e_{0}\), \(M_{\rm acc,a}/M_{\rm s}\), \(f_{\rm rm,a}\) and \(\dot{M}_{\rm BH}\) decrease as the initial pericenter distance becomes wider, where \(M_{\rm acc,a}/M_{\rm s}\) and \(\dot{M}_{\rm BH}\) reduce to zero (open circles) even for more massive stars. Similarly, at fixed \(r_{p}\), these quantities decrease as \(e_{0}\) gets larger, due to the fact that elliptical orbits with larger eccentricities (given the same pericenter distances) are longer orbits. Consistent with the \(M_{s}=1M_{\odot}\) cases, the impact of \(r_{p}\) is overall more significant than the impact of \(e_{0}\). Generally, having a more massive star in the TPE results in more mass accretion onto the BH and higher accretion rates. Our figures show the fractions of star lost or accreted by the BH, which indicates the importance of different stars' interior structures.
Finally, as a sanity check, we evaluate the mass loss rate of a \(1M_{\odot}\) star from the analytical solution described in Zalamea et al. (2010), and compare this solution to our simulation results. This analytical solution predicts the rate of mass loss of a white dwarf (WD) when it is tidally disrupted by an SMBH, which can be directly applied to our TPE scenario. Zalamea et al. (2010) predict that an outer shell of the star with thickness \(\Delta R\) is removed at each tidal stripping, as long as \(R_{s}<2R_{t}\), where \(\Delta R=R_{s}-R_{t}\ll R_{s}\). The only differences are that (i) our stellar density profile describes a solar-like MS star that is governed by gas+radiation pressure, instead of a WD governed by electron degeneracy pressure, and (ii) the pericenter is much closer to the tidal radius, since we have a stellar-mass BH rather than an SMBH. Adopting these changes, the analytical stellar mass loss rate (\(\dot{M}_{\rm loss}\); red) is shown in Fig. 14, along with the mass loss rate we evaluate from the simulation output (black). This figure shows reasonable consistency between the two, where the analytical solution is roughly half of the simulation result at first. However, the analytical solution shows a slower drop in amplitude.
## 8 Discussions
### Comparing TPEs to micro-TDEs, TDEs by intermediate-mass and supermassive BHs
The orbit of the star in a micro-TDE is typically expected to be parabolic when it is tidally disrupted by the BH. A recent study based on hydro-simulations of micro-TDEs (e.g. Kremer et al., 2022) suggests that they are likely ultra-luminous transients, similar to our finding for TPEs. We find that, similar to micro-TDEs, TPEs have super-Eddington accretion rates, up to \(\sim 10^{8}\dot{M}_{\rm Edd}\), which is comparable in order of magnitude to that of "normal" micro-TDEs (see Figure 11 in Kremer et al., 2022). However, the method that Kremer et al. (2022) use to measure the accretion rate, assuming that some disk mass is accreted by the BH within the viscous time (eq. 3), is different from our method of using a sink particle to measure the BH accretion rate. They assume:
\[\dot{M}_{\rm BH}\propto\Bigg{(}\frac{M_{\rm disk}}{t_{\rm visc}}\Bigg{)} \Bigg{(}\frac{R_{\rm in}}{R_{\rm disk}}\Bigg{)}^{s}. \tag{3}\]
In this relation, we choose an accretion disk with radius \(R_{\rm disk}=R_{t}=2.2R_{\odot}\) that includes particles within the initial Roche Lobe radius around the BH (see the last panel of Fig. 2). \(M_{\rm disk}\) is the disk mass, which eventually reaches \(0.8M_{s}\). \(t_{\rm visc}\) is the viscous timescale that we adopt from eq. 2, but using Mach number \(\mathcal{M}=3\) and \(\alpha=0.1\). \(R_{\rm in}\) is the inner edge of the disk - we choose \(R_{\rm in}=10r_{\rm sch}\), where \(r_{\rm sch}=2GM_{\rm BH}/c^{2}\). Finally, the choice of power-law index \(s\) accounts for different levels of mass loss due to outflows. In Fig. 15, we compare the accretion rates computed with eq. 3 to the rate found with the sink particle, for the TPE model \(\mathcal{M}(1,0.4,1)\). The top panel shows the mass accretion rate onto the sink
Figure 11: The change of eccentricity (\(\Delta e\)) during the first, second and third orbit around the BH. The yellow colors represent (near) zero changes in eccentricity during the orbit, while the redder (bluer) points represent orbit becoming more (less) eccentric.
Figure 12: Mass accreted by the BH (\(M_{\rm acc,a}\)) normalized to the stellar mass, the fraction of mass removed (\(f_{\rm rm,a}\)), and accretion rates onto the BH (\(\dot{M}_{\rm BH,a}\)) as functions of (1) stellar mass and penetration factor at fixed initial \(e_{0}=0.4\) (left column), and (2) stellar mass and eccentricity at fixed pericenter \(\beta=0.67\). These are evaluated in the first orbit of the simulation. As in previous figures, crosses indicate full disruption. In general, \(M_{\rm acc,a}\), \(f_{\rm rm,a}\) and \(\dot{M}_{\rm BH,a}\) decrease for larger initial separation and eccentricity. The more massive the star, the more likely it is to be completely disrupted, due to its larger stellar radius relative to the tidal radius. There is no clear trend in \(M_{\rm acc,a}\), \(f_{\rm rm,a}\) and \(\dot{M}_{\rm BH,a}\) with \(M_{s}\) for \(M_{s}>1M_{\odot}\), indicating that the stellar structures are not significantly different for those stars.
particle (\(\dot{M}_{\rm BH,sink}\)), and the bottom panel shows the accretion rate from the disk calculation (\(\dot{M}_{\rm BH,disk}\)) assuming three choices for the power-law index, \(s=0,0.5,1\). \(\dot{M}_{\rm BH,disk}\) is overall comparable to \(\dot{M}_{\rm BH,sink}\), while it rises earlier - some mass falls within \(R_{\rm disk}\) immediately after the simulation begins. We perform the same comparison for a parabolic micro-TDE model that was reported in Kremer et al. (2022), with \(M_{\rm BH}=10M_{\odot}\), \(M_{\rm s}=1M_{\odot}\), \(e_{0}=1\) and \(\beta=1\) (see Fig. 16). We adopt a disk with radius \(R_{\rm disk}\sim 3.7R_{\odot}\), the value used by Kremer et al. (2022), and \(t_{\rm visc}\) evaluated with Mach number \(\mathcal{M}=1\). The two methods again yield similar accretion rates.
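A sketch of the disk-based estimate of eq. 3 used in this comparison; the proportionality constant is absorbed into the prefactor \(M_{\rm disk}/t_{\rm visc}\), the geometric quantities follow the choices quoted above, and the viscous time passed in here is a placeholder rather than a value computed from eq. 2.

```python
G, C = 6.674e-8, 3.0e10                      # CGS
M_SUN, R_SUN = 1.989e33, 6.96e10             # [g], [cm]

def mdot_disk(M_disk, t_visc, R_in, R_disk, s):
    """Disk-based accretion-rate estimate, Mdot ~ (M_disk/t_visc) (R_in/R_disk)^s (eq. 3)."""
    return (M_disk / t_visc) * (R_in / R_disk) ** s

# Geometry for model M(1, 0.4, 1): R_disk = R_t = 2.2 Rsun and R_in = 10 r_sch.
r_sch = 2 * G * (10 * M_SUN) / C**2          # Schwarzschild radius of the 10 Msun BH [cm]
R_in, R_disk = 10 * r_sch, 2.2 * R_SUN       # [cm]

t_visc = 1.0  # placeholder viscous time (arbitrary units); the text uses eq. 2 with Mach = 3, alpha = 0.1
for s in (0.0, 0.5, 1.0):
    print(s, mdot_disk(M_disk=0.8, t_visc=t_visc, R_in=R_in, R_disk=R_disk, s=s))
```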
Despite having similar accretion rates, the orbital periods are generally shorter for a TPE, which are between a few to few tens of hours, compared to periods of days to weeks for a micro-TDE. Some micro-TDE models in Kremer et al. (2022), such as the model with a more massive \(M_{s}=5M_{\odot}\) star and a \(10M_{\odot}\) BH, show multiple passages and therefore periodic accretion onto the BH just like in TPE. However, the orbital period in this model is \(\sim\)4 days, significantly longer than TPE periods, so we will be able to distinguish that from a TPE.
Figure 16: Same comparison as Fig. 15, but for parabolic micro-TDE model \(\mathcal{M}(1,1,1)\) from Kremer et al. (2022). Similar to Fig. 15, the accretion rates from the sink and disk methods show roughly consistent results.
Figure 14: Comparing the rate of stellar mass loss from our simulation, \(\mathcal{M}(1,0.2,0.67)\) (black), to that predicted by the analytical solution (red) in Zalamea et al. (2010).
Figure 13: Initial density profiles of stars used in TPEs – \(M_{s}=1,5,10\) and \(15M_{\odot}\) as labeled in the legend. The log-scale density (y-axis) is normalized by the core density of the star, which is a function of radius (bottom x-axis; normalized by stellar radius). The dashed line (top x-axis) of corresponding colors indicate the ratio of stellar radius to the tidal radius of each star.
Figure 15: Accretion rates of the BH in TPE model \(\mathcal{M}(1,0.4,1)\) evaluated using (i) mass accreted onto the sink particle (top panel) and (ii) using mass accretion calculation in eq. 3, following Kremer et al. (2022) (bottom panel). The bottom panel adopts three choices of power-law index: \(s=0,0.5\) and \(1\).
But generally, stars in most micro-TDEs undergo tidal stripping only once, leaving very different morphological evolution, accretion and orbital signatures compared to a TPE.
Another important comparison should be made between TPEs and tidal disruptions of a solar-like star by an intermediate-mass BH (IMBH). Recent work by Kiroglu et al. (2022) finds, using hydro-simulations, that in all cases where a \(1M_{\odot}\) star is disrupted by an IMBH, the stellar remnant is eventually ejected and becomes unbound, either after the first pericenter or after many pericenter passages. In our TPE simulations, all stars remain in a binary with the BH, or are eventually completely disrupted by the BH. If the star survives for many pericenter passages with an IMBH, then the star is only partially disrupted and the accretion rate increases with the number of orbits. This is also not the case in TPEs (see the RHS of Fig. 6), where \(\dot{M}_{\rm BH}\) decreases with the number of orbits. Finally, the orbital periods of tidal disruptions by IMBHs typically span a wide range, from tens of hours to ten thousand years. The shortest-period events, with periods comparable to TPEs, correspond to the lowest BH mass (\(M_{\rm BH}\lesssim\)\(10M_{\odot}\)) and smallest pericenter distance (\(r_{p}/R_{t}\ll 1\)). Therefore, these events are essentially the micro-TDEs in Kremer et al. (2022), and their similarities and differences to TPEs are already discussed above.
The best indicator that a micro-TDE is present in an AGN, rather than a TDE of a solar-like star by an SMBH, is if the mass of the SMBH is above the Hills limit \(\gtrsim 10^{8}M_{\odot}\), beyond which the Schwarzschild radius of the BH is greater than the tidal radius. However, micro-TDEs or TPEs have distinguishable signatures even if they occur near a smaller SMBH. First, the spectra of micro-TDEs and TDEs are expected to be very different, because the remnant produced in micro-TDEs tends to be optically thick - this is even more so the case in TPEs, which lead to a hotter accretion disk that cools less efficiently (Wang et al., 2021) and result in emission at the higher-energy end of the X-rays. Additionally, as in most micro-TDEs, the SMBH in a TDE will typically disrupt the star only once and strip \(\sim\)half to all of its mass, while partial disruptions are more common in TPEs. Partial disruptions in TDEs, however, will produce periodic flares on a yearly scale, such as the recently observed repeated bursts in AT2018fyk (Wevers et al., 2022), much longer than the expected periods of micro-TDEs and TPEs.
Overall, our simulations show that TPEs are novel transient phenomena that can be distinguished from other ultra-luminous transients such as micro-TDEs, tidal disruptions by IMBHs and SMBHs, and partial disruptions in TDEs.
### Simulation caveats
Theoretical investigations of TPEs have many important implications, such as understanding interactions in compact star-BH binaries in star clusters or AGN disks and observations of ultra-luminous transient events especially those near the galactic center. Our results offer first-hand understanding of TPEs with simulations, while they should be treated as numerical experiments rather than accurate physical descriptions of TPEs in a cluster or embedded in an AGN disk. We list the following caveats of our simulations that should be improved in the future. First, we start the simulations with already very compact orbits, while in reality, they should be expected at the end of some dynamical process such as a long AGN-disk mediated inspiral or interactions between multiple stars or compact objects in a star cluster. Since the BH and star should approach each other from a much larger distance, we might expect the star to have already been partially disrupted by the BH, although no mass will be accreted by the BH beyond the separation of \(r_{p}\sim 3R_{t}\), as shown in our results. The binary could, however, accrete from the external AGN gas, if embedded in an AGN disk. In future work, we will investigate the effect of torques from the circumbinary gas on binary, which can shrink the orbital separation. Additionally, one could also include the low-density AGN disk gas as a background of the TPE simulations, instead of using vacuum. This is challenging with SPH simulations, but could instead be feasible with grid-based codes. Finally, it is also important to add radiation outflows from the optically-thick accretion disk and shock properties due to the relative motion of the star/BH and the debris, in order to more accurately describe TPEs. Future work should perform simulations or make analytical predictions for TPEs considering all of these additional factors above.
### Detectability of TPEs as transients in AGNs
AGNs are extremely dynamical environments that host luminous transients. Identifying TPEs among the different transient events in AGNs will require careful examination of their EM signatures. AGNs around heavy SMBHs (\(M_{\rm SMBH}\gtrsim 10^{8}M_{\odot}\)) have been shown to be the ideal place for identifying micro-TDEs (Yang et al., 2022) and similar transients. In order to be observed, TPEs need to outshine the AGN disk. Since our results show that TPEs produce super-Eddington accretion onto the BH, there could be super-luminous jet launching from the BH. Therefore, the EM emission from TPEs can be subject to jet modulation, among many other mechanisms such as accretion disk outflows and shocks, as mentioned in § 6.3. Even though the accretion disk formed
from the stellar remnant is optically thick, and the AGN can also trap the radiation, the emissions from TPEs can be more visible if (i) the jet can eject gas from the circumbinary disk (Tagawa et al., 2022), and (ii) stellar-mass BHs can open cavities in the AGN disk (Kimura et al., 2021) - both of these will reduce the opacity of the surrounding gas. Finally, if the AGN does not launch any jets, then TPEs can outshine the AGN more easily in the radio or in the gamma rays.
Here, we focus on the existing observational signatures of two micro-TDE candidates observed in AGNs that might also indicate TPE origins. Micro-TDE candidates in AGNs with an SMBH too massive for tidal disruption of a solar-type star (ASASSN-15lh and ZTF19aailpwl; Yang et al., 2022) have peak luminosities \(L_{\rm peak}\approx 5\times 10^{45}\) erg s\({}^{-1}\) and \(L_{\rm peak}\approx 10^{45}\) erg s\({}^{-1}\), respectively. Yang et al. (2022) hypothesize that the higher peak luminosity of ASASSN-15lh indicates a micro-TDE, unless it is a result of tidal disruption of a star more massive than solar. From our simulations, we see that TPEs with a more massive star also produce higher accretion rates. The observations of ZTF19aailpwl show a longer rise time than a typical TDE, indicating a more gradual tidal disruption than a TDE by an SMBH, e.g. produced by micro-TDEs with low eccentricity such as a tidal-peeling event. Finally, the rate of micro-TDEs is expected to be low in AGNs, at roughly 2 Gpc\({}^{-3}\) yr\({}^{-1}\) (Yang et al., 2022), and even lower in star clusters or stellar triple systems with BHs, although these predictions have large uncertainties. Only the brightest events are expected to be eventually observed, since the emission of most weaker micro-TDEs and TPEs will be dimmed significantly by the surrounding AGN gas. The mechanism by which the emission from an event like a TPE propagates through an AGN disk is analogous to the propagation of a GRB afterglow in a dense medium (Perna et al., 2021; Wang et al., 2022). Therefore, bright TPEs might have observational signatures similar to those of ultra-long GRBs.
## 9 Summary
In this paper, we perform the first hydro-simulations of TPEs with the SPH simulation code PHANTOM to investigate their morphology, accretion signature and orbital evolution. We explore a range of initial conditions, including stellar mass, initial eccentricity and penetration factor, which make up 96 simulation models in total. We examine the impacts of these initial parameters on the behaviors of TPEs.
First, we observe the "tidal peeling" feature from our simulations where a solar-like or massive star is slowly and periodically tidally disrupted by a stellar-mass BH and its mass is slowly removed over many orbits. Due to low eccentricity, the orbital periods of TPEs are generally shorter (\(P\sim\) few-few 10s of hours) compared to the micro-TDEs and TDEs. In the most compact orbits, \(r_{p}\approx R_{t}\), the star gets completely disrupted very quickly, after \(\sim\)1-4 orbits; otherwise, the star ends up being partially disrupted. Out of the three initial conditions, the penetration factor has the largest effect on the accretion and orbital signatures of interest, namely mass accreted onto the BH, accretion rate, the fraction of mass removed from the star, the orbital separation, semi-major axis and eccentricity. As the orbit becomes more compact, there is more mass accreted by the BH, higher accretion rate and higher fraction of mass removed from the star. Lower eccentricity has a similar effect, since lower \(e_{0}\) means that the orbit is shorter (recall that the star is placed at the apocenter at the start of the simulations). A few models with higher eccentricities show a periodic fluctuation in \(\dot{M}_{\rm BH}\) that peaks after each pericenter passage.
The orbital separation, semi-major axis and eccentricity demonstrate less obvious trends, especially when \(\beta<1\) (less compact systems). It is clear from the fluctuations in \(a\) and \(e\) that the orbit of a star in a TPE deviates from Keplerian due to the tidal influence and possibly also shocks from the stellar remnant encountering the tidal streams. In the most compact configurations, \(\beta=1\), the orbital separation always shrinks regardless of the choice of \(e_{0}\) and \(M_{s}\), so both the semi-major axis and eccentricity decrease with the number of orbits. In these cases, the star is always completely disrupted at the end, consistent with the analytical limit for the onset of mass loss by tidal stripping at \(\beta=1\) (e.g. Zalamea et al., 2010). Finally, if there is a more massive star in the TPE, the stellar radius is larger and, at fixed \(\beta\), it is closer to the tidal radius. Therefore, the disruption is more rapid and total disruption of the star is more common. There is higher mass loss from the star as well as more accretion by the BH. However, for stars more massive than \(1M_{\odot}\), the fraction of the initial stellar mass lost or accreted by the BH does not vary significantly with stellar mass. This indicates the similarity in the stellar structures of the more massive stars.
The resulting accretion rates of TPEs are typically highly super-Eddington, \(\dot{M}_{\rm BH}\sim 10^{4-8}\dot{M}_{\rm Edd}\). However, since the accretion disk formed from the dense stellar material around the BH is extremely opaque, the emission from TPEs will be affected by photon diffusion. Mechanisms other than the BH accretion rate might also modulate the luminosity of the TPE, such as relativistic jet launching from the BH and shocks due to the relative motion of the stellar remnant and the tidal streams. A jet might carve out a low-density cocoon around the TPE, possibly allowing the emission to be less affected by the thick accretion disk or the AGN disk. Our results are also subject to a few caveats due to the limitations of our simulations. Future work should address more realistic aspects of TPEs, such as radiation from the hot accretion disk, shocks, binary inspiral from a larger separation, and/or AGN background gas.
Finally, a better theoretical understanding of TPEs is highly motivated by the existing observations of abnormal flaring events from AGNs, such as ASASSN-15lh and ZTF19aailpwl, that cannot be well explained by AGN variability or by other luminous transients such as TDEs by SMBHs. AGNs are extremely dynamical playgrounds for interacting stars and compact objects. Our results suggest that identifying TPEs among many different ultra-luminous transients can be feasible, owing to the unique accretion signatures and orbital evolution that we find in this work.
## Acknowledgements
ZH acknowledges support from NASA grant 80NSSC22K082. RP acknowledges support by NSF award AST-2006839. YW acknowledges support from Nevada Center for Astrophysics. CX acknowledges the support from the Department of Astronomy at Columbia University for providing computational resources for this research.
|
2305.10307
|
FACE: Evaluating Natural Language Generation with Fourier Analysis of
Cross-Entropy
|
Measuring the distance between machine-produced and human language is a
critical open problem. Inspired by empirical findings from psycholinguistics on
the periodicity of entropy in language, we propose FACE, a set of metrics based
on Fourier Analysis of the estimated Cross-Entropy of language, for measuring
the similarity between model-generated and human-written languages. Based on an
open-ended generation task and the experimental data from previous studies, we
find that FACE can effectively identify the human-model gap, scales with model
size, reflects the outcomes of different sampling methods for decoding,
correlates well with other evaluation metrics and with human judgment scores.
|
Zuhao Yang, Yingfang Yuan, Yang Xu, Shuo Zhan, Huajun Bai, Kefan Chen
|
2023-05-17T15:44:57Z
|
http://arxiv.org/abs/2305.10307v4
|
# FACE: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy
###### Abstract
Measuring the distance between machine-produced and human language is a critical open problem. Inspired by empirical findings from psycholinguistics on the periodicity of entropy in language, we propose FACE, a set of metrics based on _F_ourier Analysis of the estimated _C_ross-_Entropy of language, for measuring the similarity between model-generated and human-written languages. Based on an open-ended generation task and the experimental data from previous studies, we find that FACE can effectively identify the human-model gap, scales with model size, reflects the outcomes of different sampling methods for decoding, correlates well with other evaluation metrics and with human judgement scores. FACE is computationally efficient and provides intuitive interpretations.
## 1 Introduction
The concept of _entropy_ from Information Theory is broadly applied in Natural Language Processing (NLP) technology and computational linguistic studies. The most notable example is the use of _cross-entropy_ in training and evaluating language models, where the exponentiation of cross-entropy, perplexity, is adopted to measure models' performance in the next-word (or masked-word) prediction task. However, low perplexity alone does not guarantee good performance in language generation tasks, which not only depend on model sizes but are also closely related to the sampling techniques used in the _decoding_ stage. The complexity of the generation task makes it especially important to have different metrics that can reflect the generation quality from multiple angles. One particular perspective is that the language generated by a good model should have a similar distribution of words/tokens as "natural" human language. For example, Zipf's law can be used to distinguish between human and model distributions [29].
Recent advances in psycholinguistics put forward new directions for developing more sophisticated metrics other than Zipf's coefficient. In particular, studies on temporal and spectral patterns in dialogue [7; 46] reveal that cross-entropy changes _periodically_ in natural language, which points out the potentials of using fine-grained transformation of cross-entropy to quantify the differences in language data (see Section 3 for a detailed review). It motivates the basic idea of this study: Can we
effectively quantify the _periodical_ pattern of the cross-entropy, and use it as an indicator to distinguish human and model-generated languages?
We summarize our contributions as follows:
1. We propose a set of metrics based on the frequency spectra obtained from the Fast Fourier Transform (FFT) of the cross-entropy sequences of language data, named FACE (Fourier Analysis of Cross-Entropy).
2. We empirically show FACE's performance on identifying the human-model gap and how it scales with model sizes in Section 4.1.
3. We explore FACE's correlations with sampling methods and human evaluation in Section 4.2 and Section 4.3, respectively.
4. We validate the statistical soundness of FACE in Section 4.4.
5. We discuss an intuitive interpretation of the metrics and how it reflects the characteristics of language use in Section 4.5.
## 2 Face
The basic idea of FACE is to obtain the spectra of cross-entropy from different data sources (human or models) and compute their similarities. The overall workflow is shown in Figure 1, which we describe in five steps:
1. Collect the datasets for human-written and model-generated texts, \(\mathcal{D}_{h}\) and \(\mathcal{D}_{m}\).
2. Use a third pre-trained language model \(m_{\text{est}}\) to estimate the cross-entropy of text in \(\mathcal{D}_{h}\) and \(\mathcal{D}_{m}\), resulting in two sequences of cross-entropy output, \(\mathcal{E}_{h}\) and \(\mathcal{E}_{m}\).
3. Obtain the frequency spectra for each cross-entropy sequences, \(\mathcal{E}_{h}\Rightarrow\mathcal{F}_{h}\) and \(\mathcal{E}_{m}\Rightarrow\mathcal{F}_{m}\).
4. Develop FACE metrics that quantify the spectral similarity between \(\mathcal{F}_{h}\) and \(\mathcal{F}_{m}\).
5. Evaluate FACE on different model types/sizes, sampling methods, and the correlations with other metrics for Natural Language Generation (NLG).
We describe the steps in detail from Section 2.1 to Section 2.3.
### Estimate cross-entropy
We use a pre-trained language model \(m_{\text{est}}\) as the estimator for cross-entropy, which runs in the evaluation model (no gradients produced). It takes as input a sequence of \(T\) tokens, \([t_{1},t_{2},\dots,t_{T}]\); for each position \(i=1,\dots,T\), it predicts the probability of the next token \(P(t_{i+1}|t_{1},\dots,t_{i})\); the cross-entropy between this probability and the ground truth token \(t_{i+1}\) is then computed, resulting in the cross-entropy sequence that consists of \(T-1\) real values \(\mathcal{E}=[c_{1},c_{2},\dots,c_{T-1}]\), as the first token is not predicted:
\[\mathcal{E}=[c_{1},c_{2},\dots,c_{T-1}]\triangleq[-\log P(t_{2}|t_{1}),-\log P (t_{3}|t_{1},t_{2}),\dots,-\log P(t_{T}|t_{1},t_{2},\dots,t_{T-1})] \tag{1}\]
Note that \(\sum c_{i}=-\sum_{i=2}^{T}\log P(t_{i}|t_{1}\dots t_{i-1})\) is exactly the definition of negative log-likelihood loss, i.e., cross-entropy loss, for training a language model, where \(c_{i}\) is the negative logarithm of the predicted probability for each token \(t_{i+1}\). In psycholinguistic studies, this \(c_{i}\) quantity is usually referred to by several different terms, including _surprisal_ [15; 16], _information density_ [24; 19; 47], and _entropy_ [13; 14; 45; 46], each of which has a specific theoretical flavor. There have been debates
Figure 1: Overall workflow of this study.
over the justifiability of using "entropy" to denote the negative log-likelihood, because it is not a weighted summation as originally defined in [36]. Nevertheless, we decide to use _cross-entropy_ as it is the most broadly communicated term and we believe it will not cause confusion as its mathematical form is clearly defined. Apparently, the choice for \(m_{\text{est}}\) will influence the next steps, because better language models produce lower perplexity scores, that is, lower cross-entropy. Therefore, we discuss how different choices for \(m_{\text{est}}\) affect our metrics in Section 4.4.
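As a rough illustration of this estimation step, the following sketch computes a per-token cross-entropy sequence with a HuggingFace causal LM; the use of GPT-2 as \(m_{\text{est}}\), the function name, and the specific library calls are our own choices, not prescribed above.

```python
# Minimal sketch of the cross-entropy estimation; GPT-2 as m_est is illustrative.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
m_est = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def cross_entropy_sequence(text: str) -> torch.Tensor:
    """Return [c_1, ..., c_{T-1}], the per-token negative log-probabilities."""
    ids = tokenizer(text, return_tensors="pt").input_ids          # (1, T)
    with torch.no_grad():
        logits = m_est(ids).logits                                # (1, T, |V|)
    # Position i predicts token i+1; the first token has no prediction.
    log_probs = F.log_softmax(logits[0, :-1], dim=-1)             # (T-1, |V|)
    targets = ids[0, 1:]                                          # (T-1,)
    return -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # (T-1,)

entropy_seq = cross_entropy_sequence("The quick brown fox jumps over the lazy dog.")
```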
### Fast Fourier transform
We treat the estimated cross-entropy sequence \([c_{1},\dots,c_{T-1}]\) as a finite discrete signal in the time domain, where the sampling interval is approximated with the average duration of one token. With this simplified assumption, we find that the discrete Fourier transform (DFT) is the most suitable spectral analysis tool [38]3. The formula for DFT is as follows:
Footnote 3: [https://ccrma.stanford.edu/~jos/sasp/Fourier_Transforms_Continuous_Discrete_Time_Frequency.html](https://ccrma.stanford.edu/~jos/sasp/Fourier_Transforms_Continuous_Discrete_Time_Frequency.html)
\[X(\omega_{k})\triangleq\sum_{n=0}^{N-1}x(t_{n})e^{-j\omega_{k}t_{n}},\ k=0,1, \dots,N-1 \tag{2}\]
in which \(x(t_{n})\) is the signal at time \(t_{n}\), corresponding to the \(n\)-th cross-entropy value \(c_{n}\) (\(n=1\dots,T-1\) and \(N\triangleq T-1\)). \(X(\omega_{k})\) is a complex number that reflects the magnitude (strength) of the \(k\)-th frequency component \(\omega_{k}=2\pi k/N\). In practice, DFT is implemented with an efficient algorithm known as Fast Fourier Transform [5] that runs in \(O(n\log n)\) time.
We compared two methods, periodogram and vanilla FFT. The periodogram approach computes the Fourier transform after applying auto-correlation and time-averaging windows to the signal for de-noising purposes [42]. However, we think de-noising is inappropriate because our "signal" is a time series of cross-entropy, whose value reflects the sampling result at each time step from a large vocabulary. Auto-correlation or time averaging will remove the distinctiveness of rare tokens. Therefore, we use vanilla FFT and take the _real_ part of \(X(\omega_{k})\) to represent the magnitude spectrum for the frequency component \(\omega_{k}\), which is written as \(X(\omega_{k})\) for brevity.
For an input cross-entropy sequence \(\mathcal{E}=[c_{1},\dots,c_{T-1}]\) obtained from Section 2.1, the resulting frequency spectrum can be represented as a list of tuples of the same length, \(\mathcal{F}=[\langle\omega_{1},X(\omega_{1})\rangle,\dots,\langle\omega_{T-1},X(\omega_{T-1})\rangle]\), where \([\omega_{1},\dots,\omega_{T-1}]\) are the \(T-1\) sample frequencies, and \([X(\omega_{1}),\dots,X(\omega_{T-1})]\) are the corresponding magnitudes.
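A minimal sketch of this transform, assuming NumPy's FFT routines and the cross-entropy sequence from the previous sketch; keeping only the non-negative frequencies is our simplification.

```python
import numpy as np

def spectrum(entropy_seq: np.ndarray):
    """Vanilla FFT of a cross-entropy sequence; keep the real part, as in Eq. (2)."""
    n = len(entropy_seq)
    X = np.fft.fft(entropy_seq)        # complex DFT coefficients
    freqs = np.fft.fftfreq(n)          # normalized sample frequencies (cycles/token)
    keep = freqs >= 0                  # keep non-negative frequencies only
    return freqs[keep], X.real[keep]   # (omega_k, X(omega_k)) pairs

# e.g. reuse the sequence from the previous sketch:
freqs, mags = spectrum(entropy_seq.numpy())
```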
### Spectral similarity metrics
We develop four metrics to measure the similarity between spectra \(\mathcal{F}_{h}\) and \(\mathcal{F}_{m}\): Spectral Overlap (_SO_), Spectrum Angle Mapper (_SAM_) [6], Pearson's correlation (_CORR_), and Spearman's correlation (_SPEAR_), as summarized in Figure 2. Before computing the metrics, the two spectra \(\mathcal{F}_{h}\) and \(\mathcal{F}_{m}\), which may have different lengths \(N_{1}\) and \(N_{2}\), are first interpolated to the same length: \(\mathcal{F}_{h}\in\mathbb{R}^{N_{1}}\Rightarrow\mathcal{F}_{h}^{\prime}\in\mathbb{R}^{N_{C}}\), \(\mathcal{F}_{m}\in\mathbb{R}^{N_{2}}\Rightarrow\mathcal{F}_{m}^{\prime}\in\mathbb{R}^{N_{C}}\). Here, \(N_{C}\) is the maximum spectrum length in our data. The four metrics are then computed on \(\mathcal{F}_{h}^{\prime}\) and \(\mathcal{F}_{m}^{\prime}\).
**Spectral Overlap (_SO_)** is inspired by the power spectrum overlap proposed in [27], which is used in [46] for measuring the spectral similarity between dialogue participants. The frequency magnitudes
Figure 2: Definitions of four FACE metrics.
in \(\mathcal{F}^{\prime}_{h}\) and \(\mathcal{F}^{\prime}_{m}\) are converted to absolute values, i.e., \(X(\omega_{k})\Rightarrow|X(\omega_{k})|\), and the Area-Under-Curve (AUC) is then computed for the intersection \(\mathcal{F}^{\prime}_{h}\cap\mathcal{F}^{\prime}_{m}\) and the union \(\mathcal{F}^{\prime}_{h}\cup\mathcal{F}^{\prime}_{m}\), respectively. _SO_ is defined as the ratio of the two: \(\text{\it SO}=\text{AUC}(\mathcal{F}^{\prime}_{h}\cap\mathcal{F}^{\prime}_{m})/\text{AUC}(\mathcal{F}^{\prime}_{h}\cup\mathcal{F}^{\prime}_{m})\). The conversion to absolute values is indispensable, since negative values in \(X(\omega_{k})\) would result in negative AUCs. _SO_ has the range \([0,1]\), and a higher value indicates a stronger resemblance between the two spectra.
**Spectrum Angle Mapper (_SAM_)** calculates the angle between \(\mathcal{F}^{\prime}_{h}\) and \(\mathcal{F}^{\prime}_{m}\), treating them as two vectors in a space [21]. The angle is measured in radians and computed as \(\arccos\left(\mathcal{F}^{\prime}_{h}\cdot\mathcal{F}^{\prime}_{m}/(\|\mathcal{F}^{\prime}_{h}\|\,\|\mathcal{F}^{\prime}_{m}\|)\right)\), producing a value within \([0,\pi/4]\). We note that _SAM_ is equivalent to the cosine similarity score (up to the \(\arccos\) transform), which is more commonly used in NLP, but here we follow the conventions in [21; 2]. A smaller _SAM_ value indicates a greater similarity between \(\mathcal{F}^{\prime}_{h}\) and \(\mathcal{F}^{\prime}_{m}\).
**Pearson's correlation (_CORR_)** can also be leveraged to measure spectral similarities as discussed in [21]. \(\text{\it CORR}=cov(\mathcal{F}^{\prime}_{h},\mathcal{F}^{\prime}_{m})/\sigma (\mathcal{F}^{\prime}_{h})\sigma(\mathcal{F}^{\prime}_{m})\), with a \([-1,1]\) range. A positive _CORR_ value indicates high similarity (negative for dissimilarity), and 0 indicates weak correlation between \(\mathcal{F}^{\prime}_{h}\) and \(\mathcal{F}^{\prime}_{m}\).
**Spearman's correlation (_SPEAR_)**[40] is commonly used to assess the monotonic relationship between the comparison and reference groups and to capture the presence of non-linear associations between the two. It has not been used for spectral similarity to the best of our knowledge, but we test it in our experiments. _SPEAR_ also has the range \([-1,1]\) with meanings similar to _CORR_.
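The four metrics can be sketched as follows; the interpolation to a common grid, the common length `n_common`, and the SciPy calls are our illustrative choices.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def face_metrics(spec_h: np.ndarray, spec_m: np.ndarray, n_common: int = 512):
    """Sketch of SO, SAM, CORR, and SPEAR between two magnitude spectra."""
    # Interpolate both spectra to a common length N_C.
    grid = np.linspace(0.0, 1.0, n_common)
    f_h = np.interp(grid, np.linspace(0.0, 1.0, len(spec_h)), spec_h)
    f_m = np.interp(grid, np.linspace(0.0, 1.0, len(spec_m)), spec_m)

    # Spectral Overlap: AUC(intersection) / AUC(union) on absolute magnitudes.
    a_h, a_m = np.abs(f_h), np.abs(f_m)
    so = np.trapz(np.minimum(a_h, a_m), grid) / np.trapz(np.maximum(a_h, a_m), grid)

    # Spectrum Angle Mapper: angle between the two spectra viewed as vectors.
    cos = np.dot(f_h, f_m) / (np.linalg.norm(f_h) * np.linalg.norm(f_m))
    sam = np.arccos(np.clip(cos, -1.0, 1.0))

    corr = pearsonr(f_h, f_m)[0]     # Pearson's correlation
    spear = spearmanr(f_h, f_m)[0]   # Spearman's rank correlation
    return {"SO": so, "SAM": sam, "CORR": corr, "SPEAR": spear}
```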
## 3 Related Work
**Entropy as a metric in psycholinguistics.** The entropy of human language has long been a research interest in computational linguistics and psycholinguistics. The entropy of written text is estimated with the average per-word negative log-probability in sentences, and then used to validate the principle of entropy rate constancy in human language [13; 14]. Similar studies were conducted in dialogue [45; 31]. Entropy is also defined in probabilistic grammars to describe the capacity of a language [39], and is used to develop complexity metrics to measure the cognitive load of processing syntactic expressions [15; 23; 16]. In the line of work on language production, a different term _information density_ with the same mathematical formulation is used instead of entropy. It is found that speakers reduce syntactic complexity when the information density (or entropy) is high [24; 19]. In conclusion, entropy is commonly used as a metric for essential properties of human language.
**Periodical change of cross-entropy in language.** We draw inspiration from the following studies about the distribution of information in dialogue. Humans are sensitive to the _peaks_ and _troughs_ of entropy in speech, with evidence from human-system dialogues and crowd-sourced ratings from human judges [7]. The entropy of utterances from two speakers converge towards each other within the scope of topical segments in spontaneous dialogues [47]. They measure the entropy of utterances from two participants of a task-oriented dialogue, and have found that the frequency domain features - power spectrum overlap and phase delay - are useful predictors of task outcomes. Both works reviewed above suggest that the periodical up-and-downs of entropy are commonly observable in the human language. It naturally leads to the question of whether and to what extent model-generated language aligns with this empirical finding.
**Automatic measures for text generation.** Previous measures for discriminating human-written text and model-generated text can be subdivided into three branches: (1) statistics-based; (2) language modeling; (3) reference-based. Table 1 gives a brief summary of these three categories, as well as our proposed frequency-based FACE.
_Statistics-based measures_ compare the model-generated distribution \(M\) with respect to the human-written distribution \(H\) in terms of some statistic. The Zipf coefficient [29] can be used to describe the distribution of word frequencies in text. Self-BLEU [51] is derived by calculating the BLEU [28] score for each generated text utilizing all other generations as references. Repetition measures the sequence-level degree of repetition on the basis of the percentage of duplicated n-grams in the generated continuations \(\mathbf{x}_{\text{cont}}\sim M\)[43]. Meanwhile, we aggregate the 2-gram, 3-gram, and 4-gram repetition rates to evaluate the lexical diversity in an inverse manner.
_Language modeling metrics_ measure how (un)certain human text \(\mathbf{x}\sim H\) is under the model distribution \(M\), using the probability distribution \(M(\mathbf{x})\). In our work, the perplexity is calculated on the set of human texts to quantify how well the distribution \(M\) predicts a text continuation. Coherence is approximated by the cosine similarity between the sentence embeddings of the prompt \(\mathbf{x}_{\text{pre}}\sim H\) and the continuation \(\mathbf{x}_{\text{cont}}\sim M\) as proposed in [41], where the embedding \(\mathrm{EMB}(\cdot)\) is produced by the pre-trained SimCSE sentence embedding model [11]. Metrics under this category never observe model-generated text samples, and hence, they cannot assess how likely \(\mathbf{x}_{\text{cont}}\) is under the human distribution \(H\).
_Reference-based measures_ assess the generated text with respect to a small set of reference text, rather than calculating over the full sequence distributions. Some recent reference-based approaches encompass: (1) [4; 35; 37; 49] aim to capture distributional semantic information in high-dimensional space; (2) [50] concerns Euclidean distance between vector representations of \(n\)-grams and their document frequencies; (3) [30] straightforwardly computes the similarity of one learned distribution from a text generation and the other distribution of human-written text using information divergence frontiers [9; 22; 33]. Reference-based metrics are well-suited for targeted generation tasks (e.g., machine translation). Nevertheless, they become unfavorable in the open-ended generation scenario where multiple reasonable and diverse continuations are preferred.
**Non-automatic metrics.** Recent works [12; 30; 18; 25] regarding evaluation metrics and decoding strategies achieve high ratings and explainable correlations from human judgments, assuming that human annotations are the gold standard. Considering the expense of Human Unified with Statistical Evaluation (HUSE) [17], we adopt a pairwise evaluation protocol based on human preferences, to serve as a non-automatic complement of FACE metrics. We leverage the Bradley-Terry model [3] to predict the outcome of a head-to-head comparison given \(n\) players with scores \(\beta_{1},\cdots,\beta_{n}\).
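For illustration, a minimal sketch of the Bradley-Terry probability and a maximum-likelihood fit of the scores \(\beta_{1},\dots,\beta_{n}\) from pairwise win counts; the fitting routine and the `wins` matrix format are our assumptions, not the exact evaluation protocol.

```python
import numpy as np
from scipy.optimize import minimize

def bt_prob(beta_i: float, beta_j: float) -> float:
    """P(i beats j) under the Bradley-Terry model."""
    return 1.0 / (1.0 + np.exp(-(beta_i - beta_j)))

def fit_bt(wins: np.ndarray) -> np.ndarray:
    """Fit scores beta from a matrix where wins[i, j] counts wins of i over j."""
    n = wins.shape[0]

    def neg_log_lik(beta):
        ll = 0.0
        for i in range(n):
            for j in range(n):
                if i != j:
                    ll += wins[i, j] * np.log(bt_prob(beta[i], beta[j]))
        return -ll

    res = minimize(neg_log_lik, np.zeros(n), method="L-BFGS-B")
    return res.x - res.x.mean()   # center the scores for identifiability
```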
## 4 Experiments
**Task formulation.** Given an input text passage as prefix, the _open-ended_ generation aims to produce texts that form a fluent and coherent continuation. More formally, given a sequence of \(m\) tokens denoted \(\left[x_{1}\ldots x_{m}\right]\), as the **prompt**, the goal is to generate the next \(n\)**continuation** tokens to form a complete sequence \(\left[x_{1}\ldots x_{m+n}\right]\). The continuation probability at the decoding time by conditioning on the preceding context is defined as: \(P\left(x_{m+1}\ldots x_{m+n}\mid x_{1}\ldots x_{m}\right)=\prod_{i=m+1}^{m+n}P \left(x_{i}\mid x_{1}\ldots x_{i-1}\right),\) where \(P\left(x_{i}\mid x_{1}\ldots x_{i-1}\right)\) is the next-token distribution.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Type** & **Metric** & **Measure** \\ \hline \hline
Statistics & Zipf Coefficient & Unigram rank-frequency statistics \\
 & Self-BLEU & \(N\)-gram diversity \\
 & Repetition & Sequence-level percentage of repetition \\
 & Diversity & Inverse of \(n\)-gram repetition rates (\(n=2,3,4\)) \\ \hline
Language Modeling & Perplexity & Evaluation-set perplexity \\
 & Coherence & Cosine similarity between prompt and continuation embeddings \\ \hline
Divergence Curve & MAUVE & Quality \& diversity via the divergence frontiers [30] \\ \hline
Frequency Domain & FACE (this work) & Quality \& diversity via the spectral similarities (four metrics) \\ \hline
Human Judgment & Bradley-Terry Score & Human preference via pairwise evaluation, \(P(i\text{ beats }j)=e^{\beta_{i}}/(e^{\beta_{i}}+e^{\beta_{j}})\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Summary of the three categories of existing measures for text generation, the proposed frequency-domain FACE metrics, and human judgment via the Bradley-Terry score.
### Model sizes
We consider such a text completion task in three domains: Wiki text, News, and Stories. Intuitively, generated texts involving different domain knowledge may have different language usage and writing styles, which may be reflected in the metrics. We generate completions from large-scale language models (LMs). In particular, we adopt three representatives of state-of-the-art pre-trained auto-regressive LMs: Generative Pre-trained Transformer 2 (GPT2) [32], Open Pre-trained Transformer LMs (OPT) [48], and BigScience Large Open-science Open-access Multilingual LM (BLOOM) [34]. We explore two sizes for each model to illustrate that our FACE metrics generalize across multiple LM families and sizes. Details regarding our task and input data are summarized in Table 2. Different models may generate vastly different numbers of continuations in each length interval (see Supplementary Material). To ensure fairness when investigating the correlation between FACE and other widely-used metrics with respect to different models (and different sizes), we compute the weighted arithmetic mean of every metric across the five length intervals.
The evaluation metrics we are interested in are based on various motivations and principles. Specifically, MAUVE and FACE emphasize the parallels between human and machine-produced texts, as stated in Section 3. Therefore, we group MAUVE together with four FACE metrics. To further obtain intuitive results, we utilize the voting approach to explore the correlations between these metrics on large/small models across three task domains. The results are shown in Table 3.
In our investigations, the GPT2-xl model consistently outperforms its small counterpart on the statistics-based and language modeling metrics, as all relevant _"vs."_ columns indicate, apart from Coherence in the Stories domain. In the GPT2 experimental group, surprisingly, the small model always performs better according to the voting results in the MAUVE and FACE rows. Across the three task domains, the performances of the OPT and BLOOM models in two sizes differ. Large models have better overall performance, and small models only win four out of twelve comparisons by voting. Nonetheless, it is noteworthy that the four FACE metrics we propose maintain a relatively high level of consistency with MAUVE across all models. At least two FACE metrics yield the same results as MAUVE (in eight out of nine sets of human-model language comparisons). Concretely speaking, _SO_ and _SAM_ show a higher positive correlation with MAUVE than _CORR_ and _SPEAR_, given that seven out of nine voting results (marked with yellow in Table 3) are identical.
\begin{table}
\begin{tabular}{l l l l l l l l l l l l l} \hline \hline
**Domain** & **Metric** & **GPT2-om** & **GPT2-xl** & **vs.** & **Voting** & **OPT-12Sem** & **OPT-67.7s** & **vs.** & **Voting** & **BLOOM-50nm** & **BLOOM-7.7s** & **w.** & **Voting** \\ \hline \hline & Diversity (\(\uparrow\)) & 0.733 & 0.753 & L & 0.645 & 0.789 & L & 0.533 & 0.732 & L & & \\ & Coherence (\(\uparrow\)) & 0.955 & 0.652 & L & 0.641 & 0.634 & L & 0.925 & 0.819 & S & \\ & Zipf Coefficient (\(\downarrow\)) & 0.990 & 0.975 & L & 0.809 & 1.016 & S & L & 1.092 & 0.980 & L & E \\ & Self-BLU (\(\downarrow\)) & 0.499 & 0.424 & L & 0.423 & 0.739 & L & 0.280 & 0.422 & S & \\ \multirow{4}{*}{Wiki text} & MAUVE (\(\uparrow\)) & 0.677 & 0.186 & & 0.169 & 0.253 & L & 0.517 & 0.184 & S & \\ & \(\otimes\) (\(\uparrow\)) & 0.414 & 0.406 & S & 0.424 & 0.436 & L & 0.426 & 0.432 & L & \\ & _CORE_ (\(\uparrow\)) & 0.806 & 0.781 & S & 0.771 & 0.769 & S & L & 0.657 & 0.789 & L & L \\ & _SOM_ (\(\uparrow\)) & 0.199 & 0.213 & S & 0.216 & 0.217 & S & 0.258 & 0.208 & L & \\ & _SPEAR_ (\(\uparrow\)) & 0.022 & 0.023 & L & 0.025 & 0.029 & L & 0.2059 & 0.023 & S & \\ \hline & Diversity (\(\uparrow\)) & 0.890 & 0.897 & L & 0.853 & 0.876 & L & 0.740 & 0.870 & L & \\ & Coherence (\(\uparrow\)) & 0.613 & 0.640 & L & 0.663 & S & 0.897 & 0.785 & S & \\ & Zipf Coefficient (\(\downarrow\)) & 0.961 & 0.958 & L & 0.965 & 0.968 & L & 0.964 & S & \\ & Self-BLU (\(\downarrow\)) & 0.619 & 0.573 & L & 0.611 & 0.543 & L & 0.384 & 0.501 & S & \\ \multirow{4}{*}{News} & MAUVE (\(\uparrow\)) & 0.593 & 0.281 & S & 0.162 & 0.150 & S & 0.014 & 0.025 & L & \\ & _SOM_ (\(\uparrow\)) & 0.424 & 0.412 & S & 0.438 & 0.440 & L & 0.436 & 0.437 & L & \\ & _CORE_ (\(\uparrow\)) & 0.757 & 0.723 & S & S & 0.746 & 0.732 & S & S & 0.615 & 0.733 & S & L \\ & _Satur_ (\(\downarrow\)) & 0.224 & 0.420 & S & 0.299 & 0.236 & S & 0.281 & 0.234 & L & \\ & _SPEAR_ (\(\uparrow\)) & 0.021 & 0.019 & S & 0.017 & 0.021 & L & 0.048 & 0.019 & S & \\ \hline & Diversity (\(\uparrow\)) & 0.743 & 0.785 & L & 0.769 & 0.875 & L & 0.527 & 0.830 & L & \\ & Coherence (\(\uparrow\)) & 0.421 & 0.420 & S & L & 0.440 & 0.388 & S & 0.860 & 0.660 & S & S \\ & Zipf Coefficient (\(\downarrow\)) & 1.097 & 1.085 & L & 1.021 & 1.003 & L & 0.999 & 1.058 & S & \\ & Self-BLU (\(\downarrow\)) & 0.617 & 0.565 & L & 0.587 & 0.531 & L & 0.180 & 0.455 & S & \\ \multirow{4}{*}{Stories} & MAUVE (\(\uparrow\)) & 0.504 & 0.121 & S & 0.025 & 0.013 & S & 0.026 & 0.008 & L & \\ & _SOM_ (\(\uparrow\)) & 0.411 & 0.402 & S & 0.406 & 0.405 & S & 0.350 & 0.418 & L & \\ & _CORR_ (\(\uparrow\)) & 0.813 & 0.787 & S & 0.737 & 0.705 & S & S & 0.573 & 0.772 & L & L \\ & _SOM_ (\(\uparrow\)) & 0.195 & 0.209 & S & 0.231 & 0.245 & S & 0.300 & 0.214 & L & \\ & _SPEAR_ (\(\uparrow\)) & 0.023 & 0.022 & S & 0.036 & 0.041 & L & 0.050 & 0.027 & S & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Domain-specific generation quality with respect to different **models** (GPT2/OPT/BLOOM) and **model sizes** (large model on the left and small model on the right) using top-\(k\) (\(k\)=50) sampling under various existing metrics, as well as proposed FACE metrics. \(\uparrow\) indicates the larger the metric value, the better, whereas \(\downarrow\) indicates the opposite. The _vs._ column indicates the better-performing model in each comparison, where L/S denotes the large/small model wins and E represents a tie. We applied the majority voting to determine the winner. OPT and BLOOM are postfixed with their number of parameters.
To further evaluate model sizes, we apply FACE to the original GPT2 output data (webtext) 4 generated from GPT2-sm and GPT2-xl. GPT2-xl has a higher \(SO\) score than GPT2-sm, which is confirmed with the \(t\)-test, but non-significant effects are found on the other three metrics. Combining our generation task with the original GPT2 data, we illustrate the results for \(SO\) in Figure 3.
Footnote 4: [https://github.com/openai/gpt-2-output-dataset](https://github.com/openai/gpt-2-output-dataset)
To conclude, we identify three key points: (1) FACE is consistent with MAUVE in evaluating three different model types (two sizes for each); (2) metrics estimating the similarity between human-written and model-generated text (e.g., FACE, MAUVE) may produce results opposite to those of text-centered metrics (e.g., Diversity, Coherence); (3) the four metrics of FACE show relatively homogeneous results, and using them together helps to identify model-generated texts with a more comprehensive evaluation.
### Sampling methods
Recent work [18; 25] has indicated three clear trends in open-ended text generation using auto-regressive LMs: (1) maximization-based decoding algorithms (e.g., beam search, greedy decoding, etc.) lead to copious repetition, while sampling with temperature may result in incoherence; (2) truncation-based sampling methods like nucleus sampling produce text with higher quality; (3) contrastive decoding outperforms nucleus sampling in terms of both fluency and coherence. Accordingly, to demonstrate the effectiveness of our approach, FACE should follow the inequality maximization-based/temperature-based \(\prec\) nucleus \(\prec\) contrastive in terms of quality.
Figure 4 visualizes the correlation between FACE scores and various decoding algorithms. The contrastive decoding approach yields the best performance among the four FACE metrics. It can be clearly observed that the maximization-based sampling methods behave worse than other algorithms. Moreover, adding the temperature parameter to top-\(k\) sampling results in incoherent text generations, which explains the gap between the red curve (top-\(k\) w/o temperature) and the gray curve (top-\(k\) w/ temperature). We also plot the correlation graphs of unconditional generation (in the Supplementary
Figure 4: FACE scores (conditional generation) on original experimental data of [18] and [25]. Nine sampling methods are compared: greedy, beam search, stochastic beam search, pure sampling, temperature, top-\(k\), top-\(k\) with temperature, nucleus, and contrastive. Note that logarithmic normalization on parameter values as well as enlarged markers for greedy decoding, pure sampling, and contrastive decoding are adopted for better visualization effect.
Figure 3: FACE-\(SO\) scores on OPT, BLOOM and GPT2 original output data. Model sizes compared: small vs. large for OPT and BLOOM; -sm vs. -xl for GPT2. Error bars represent 95% confidence intervals from bootstrap. The significant levels are based on \(t\)-test between the two model-size groups.
Material) with fewer sampling methods involved. The trends and patterns in the visualization of unconditional generation are basically consistent with its conditional counterpart.
In Table 4, FACE scores on different decoding algorithms are summarized. The FACE metrics correctly match the expected quality relationship of the sampling methods examined by assigning the best \(SO\) (\(.44\)), _CORR_ (\(.75\)), _SAM_ (\(.23\)), and _SPEAR_ (\(.17\)) scores to contrastive decoding. Other evaluation metrics fail to capture the correct relationship; for example, perplexity rates nucleus-sampled text as better than contrastive-decoded text, which is irrational, as suggested by Li et al. [25].
### Human judgments
We also explore the correlation between FACE and human judgment scores, using the crowd-sourced dataset collected in [30], for which human evaluation is available. The dataset contains model-generated continuations (by GPT2-sm, -md, -lg, and -xl with ancestral and nucleus sampling), human-written continuations using the same prefix, and crowd-workers' answers on which completion is more human-like, interesting, and sensible. We follow the same experimental settings and protocol to verify whether the FACE scores of the completion texts correlate with the human quality judgments by computing Spearman's rank correlation coefficient. The results are presented in Table 5.
We observe a high and positive correlation between \(SO\) and human judgments, which outperforms five out of the six evaluation metrics reported in [30] and achieves comparable performance to MAUVE. The remaining three FACE metrics have insignificant correlations. However, we consider human judgments to be subjective and sometimes biased. Including more fine-grained questions in the human evaluation may lead to more accurate correlation statistics.
### Intrinsic experiments
**Choice of estimator model.** We examine how different choices of the estimator model \(m_{\text{est}}\) affect the resulting spectra, using GPT2-sm, -md, -lg and -xl as \(m_{\text{est}}\), respectively. The spectra of webtext and the original GPT2 output data are computed. It is found that the spectra obtained from \(m_{\text{est}}\) have different magnitudes, but their aggregated curves have the same shape (see Supplementary Material). Therefore, the choice of \(m_{\text{est}}\) will not affect FACE scores as long as the same \(m_{\text{est}}\) is used for all data.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Sampling Method** & **Perplexity** & **Self-BLEU** & **Zipf Coefficient** & **Repetition** & _SO_ (\(\uparrow\)) & _CORR_ (\(\uparrow\)) & _SAM_ (\(\downarrow\)) & _SPEAR_ (\(\uparrow\)) \\ \hline \hline Human & 12.38 & 0.31 & 0.93 & 0.28 & - & - & - & - \\ \hline Greedy & 1.50 & 0.50 & 1.00 & 73.66 & 0.20 & 0.56 & 0.31 & 0.04 \\ Beam (\(b\)=16) & 1.48 & 0.44 & 0.94 & 28.94 & 0.21 & 0.31 & 0.40 & 0.04 \\ Stochastic Beam (\(b\)=16) & 19.20 & 0.28 & 0.91 & 0.32 & 0.37 & 0.49 & 0.33 & 0.04 \\ \hline Pure Sampling & 22.73 & 0.28 & **0.93** & 0.22 & 0.41 & 0.63 & 0.28 & 0.03 \\ Sampling (\(t\)=0.9) & 10.25 & 0.35 & 0.96 & 0.66 & 0.42 & 0.61 & 0.29 & 0.03 \\ Top-\(k\) (\(k\)=40) & 6.88 & 0.39 & 0.96 & 0.78 & 0.40 & 0.64 & 0.28 & 0.03 \\ Top-\(k\) (\(k\)=640) & 13.82 & **0.32** & 0.96 & **0.28** & 0.42 & 0.63 & 0.28 & 0.03 \\ Top-\(k\) (\(k\)=40, \(t\)=0.7) & 3.48 & 0.44 & 1.00 & 8.86 & 0.34 & 0.61 & 0.29 & 0.03 \\ Nucleus (\(p\)=0.95) & **13.13** & **0.32** & 0.95 & 0.36 & 0.42 & 0.63 & 0.28 & 0.03 \\ Contrastive Decoding & 14.39 & 0.54 & 1.04 & 0.24 & **0.44** & **0.75** & **0.23** & **0.17** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results for comparing all sampling methods with selected parameters regarding the conditional generation. The values _closest to human scores_ are **bolded**, except for our proposed FACE scores, where the _highest (for SO, CORR, and SPEAR) or the lowest (for SAM)_ values are in **bold**.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Metric** & **Generation Perplexity** & **Zipf Coefficient** & **Repetition** & **Distinct-4** & **Self-BLEU** & _SO_ & **MAUVE** \\ \hline \hline Human-like/BT & 0.810 & 0.833 & \(-0.167\) & 0.738 & 0.595 & **0.881** & **0.952** \\ Interesting/BT & 0.643 & 0.524 & \(-0.143\) & 0.524 & 0.405 & 0.762 & **0.810** \\ Sensible/BT & 0.738 & 0.690 & \(-0.071\) & 0.595 & 0.524 & 0.786 & **0.857** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Spearman’s rank correlation coefficients of _SO_ and five other metrics with human judgments. Higher scores mean better correlation. All the numbers except the _SO_ column are sourced from [30]. “BT” denotes the Bradley-Terry score of the pairwise human evaluation, which is employed to compute the Spearman’s rank correlation with the scores of other metrics.
**Stationarity tests.** One of the assumptions of the Fourier transform is that the signal is _stationary_ [20], that is, the mean and variance do not change over time. We applied the Augmented Dickey-Fuller (ADF) test [8] to examine the stationarity of the cross-entropy sequences for all the human and model-generated data used in this study. The null hypothesis \(H_{0}\) of the ADF test is non-stationarity, and thus a \(p<.05\) testing result rejects \(H_{0}\) and accepts the alternative hypothesis of stationarity in the series. We calculate the proportions of cross-entropy sequences that pass the ADF test with \(p<.05\) for all model-generated and human data: 97.4% for GPT2, 92.1% for OPT, 74.5% for BLOOM, and 97.9% for human. Therefore, the vast majority meets the stationarity requirement for the Fourier transform.
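A minimal sketch of this check, assuming the `adfuller` implementation from statsmodels; the helper name and the pass-rate computation are ours.

```python
from statsmodels.tsa.stattools import adfuller

def fraction_stationary(entropy_sequences, alpha: float = 0.05) -> float:
    """Fraction of cross-entropy sequences whose ADF test rejects non-stationarity."""
    n_pass = 0
    for seq in entropy_sequences:
        p_value = adfuller(seq)[1]   # adfuller returns (statistic, p-value, ...)
        if p_value < alpha:
            n_pass += 1
    return n_pass / len(entropy_sequences)
```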
### Interpretation of spectra
As the frequency spectrum reflects the key characteristics of a signal, we attempt to interpret the spectra to see if they tell how the "signals" - entropy of human and machine languages - differ. Without aggregation, the raw spectra of single cross-entropy sequences look indistinguishable across GPT2-sm, GPT2-xl, and human (see the left plot in Figure 5). By aggregating 5,000 spectra from each group and smoothing the curves, it can be seen that GPT2-xl's curve is closer to the human curve than GPT2-sm's (readers can verify this by zooming in on the middle plot in Figure 5). Here, the smoothing is done with Generalized Additive Models (GAM) [44]. Results from other models are included in the Supplementary Material.
When plotted separately, the aggregated spectra from human and different models have similar shapes: First, the majority of components exist in the low-frequency range (\(\omega<0.05\)). In addition, the locations of peaks and troughs are almost the same between groups. For instance, \(\omega_{1}=0.06\) is the first trough, and \(\omega_{2}=0.12\) is the first peak (see the right plots in Figure 5). Thus, roughly speaking, the main difference between human and model spectra is not in the locations of peak and trough frequencies but in the relative magnitudes of those frequencies.
We propose a simple way to interpret the peaks in spectra: the reciprocal of a frequency component \(T_{k}=1/\omega_{k}\) denotes the corresponding cycle in the time domain. Because the time interval (i.e., sampling interval) of an entropy sequence is not measured in _seconds_ but fixed as one _token_, the measurement unit of \(T_{k}\) is also in number of tokens. For example, the first frequency peak in Figure 5 (right plot) implies \(\omega_{2}=0.12\Rightarrow T_{2}=1/0.12\approx 8.3\) (tokens), which approximately means that tokens of the same cross-entropy levels tend to _recur_ every 8.3 tokens. This pattern is consistent in both human and model data. However, the degree of this _recurrence_ can mark the difference between the human and model languages. We leave more detailed interpretations of spectra to future work.
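As a sketch of this reading of the spectra, peaks can be located and converted to token-level cycles as follows; the use of `scipy.signal.find_peaks` is our illustrative choice.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_cycles(freqs: np.ndarray, mags: np.ndarray):
    """Locate spectral peaks and convert each peak frequency to a cycle in tokens."""
    peaks, _ = find_peaks(mags)
    return [(freqs[k], 1.0 / freqs[k]) for k in peaks if freqs[k] > 0]

# e.g. a peak at omega = 0.12 corresponds to a cycle of roughly 1 / 0.12 ≈ 8.3 tokens
```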
Figure 5: Intuitive observations on the spectra from GPT2 and human data (webtext). **Left**: Spectra of three randomly sampled entropy sequences from GPT2-sm, GPT2-xl, and webtext. **Middle**: Smoothed plot of 5,000 aggregated spectra with absolute values, \(|X_{\omega_{k}}|\sim\omega_{k}\). **Right**: Typical smoothed plot of raw spectra \(X_{\omega_{k}}\sim\omega_{k}\), with peaks and troughs annotated.
## 5 Conclusion and Limitations
We propose FACE, a set of metrics based on the Fourier analysis of cross-entropy, which is able to distinguish human and model-generated language with satisfactory performance in the open-ended generation task. The metrics scale with model sizes; reflect the effect of various sampling methods; correlate well with other existing metrics and outperform most of them in alignment with human judgment scores. Among the four implementations of FACE examined, Spectral Overlap (_SO_) has the best overall performance.
FACE is computationally efficient with easy-to-interpret output. As a method inspired by psycholinguistic studies on the predictability (entropy/surprisal/information density) of human language, we believe FACE is a good example of incorporating knowledge from different fields for better human-centered AIs. We can generally conclude that better language models produce spectral representations of information that are more similar to those of humans.
Our current work has several limitations: Firstly, for open-ended generation experiments (Section 4.1), a broader set of sampling methods other than top-\(k\) can be used. Secondly, larger models (with more than 100 billion parameters) need to be included for more comprehensive comparisons. We will improve from these aspects in future work.
|
2306.09306
|
Propagating Knowledge Updates to LMs Through Distillation
|
Modern language models have the capacity to store and use immense amounts of
knowledge about real-world entities, but it remains unclear how to update such
knowledge stored in model parameters. While prior methods for updating
knowledge in LMs successfully inject atomic facts, updated LMs fail to make
inferences based on injected facts. In this work, we demonstrate that a context
distillation-based approach can both impart knowledge about entities and
propagate that knowledge to enable broader inferences. Our approach consists of
two stages: transfer set generation and distillation on the transfer set. We
first generate a transfer set by prompting a language model to generate
continuations from the entity definition. Then, we update the model parameters
so that the distribution of the LM (the student) matches the distribution of
the LM conditioned on the definition (the teacher) on the transfer set. Our
experiments demonstrate that this approach is more effective at propagating
knowledge updates than fine-tuning and other gradient-based knowledge-editing
methods. Moreover, it does not compromise performance in other contexts, even
when injecting the definitions of up to 150 entities at once.
|
Shankar Padmanabhan, Yasumasa Onoe, Michael J. Q. Zhang, Greg Durrett, Eunsol Choi
|
2023-06-15T17:39:50Z
|
http://arxiv.org/abs/2306.09306v2
|
# Propagating Knowledge Updates to LMs
###### Abstract
Modern language models have the capacity to store and use immense amounts of knowledge about real-world entities, but it remains unclear how to update their implicit "knowledge bases." While prior methods for updating knowledge in LMs successfully inject facts, updated LMs then fail to make inferences based on these injected facts. In this work, we demonstrate that a context distillation-based approach can both impart knowledge about entities _and_ propagate that knowledge to enable broader inferences. Our approach consists of two stages: transfer set generation and distillation on the transfer set. We first generate a transfer set by simply prompting a language model to generate a continuation from the entity definition. Then, we update the model parameters so that the distribution of the LM (the student) matches the distribution of the LM conditioned on the definition (the teacher) on the transfer set. Our experiments demonstrate that this approach is more effective in propagating knowledge updates compared to fine-tuning and other gradient-based knowledge-editing methods without compromising performance in other contexts, even when injecting the definitions of up to 150 entities at once.
## 1 Introduction
As large language models (LLMs) are used for a wider variety of applications, it is crucial to ensure that they contain up-to-date information about the world. Retraining models from scratch is expensive, so the ability to efficiently update the knowledge in models is essential. One potential solution is retrieval augmentation, which prepends retrieved texts to the language model's context [16; 24; 30; 29]. However, this raises inference costs and becomes impractical when updating large amounts of information. An alternative approach, and our goal in this work, is to internalize the new knowledge into the language model via parameter updates [31; 38; 6; 21; 17; 9].
Recent work on teaching models about emerging entities [27] demonstrates that knowledge editing techniques can teach specific facts and relations (_Rishi Sunak is the prime minister of the UK_), but struggle to teach the model how to _propagate_ this knowledge, or make inferences based on it _(what roles would Rishi Sunak play?)_. This contrasts with results from retrieval augmentation [16; 30] and chain-of-thought prompting [35], which show that such inferences are possible when the information is available in the context.
This work aims to bridge the gap between the two approaches in knowledge injection. Specifically, we use a form of knowledge distillation [10] called context distillation [1] that updates an LM to act like it is conditioned on a given context even when that context is not shown. Our approach consists of two steps: transfer set generation and distillation on the generated transfer set. The transfer set consists of continuations of the entity definition sentence generated by prompting a language model. We minimize the Kullback-Leibler (KL) divergence between the model's predictions on the transfer
set when it conditions on the definition (the "teacher" for distillation) and when it does not (the "student", or the language model itself).
As in prior work [27], we focus on teaching the language models _novel_ entities: entities that they would not have seen during their pretraining phase, and therefore would have minimal knowledge of. We evaluate our approach on how well it can learn to make inferences about such entities on two datasets: Entity Inferences [27] and Entity Cloze by Date [26]. Across three language models, our distillation approach outperforms fine-tuning as well as editing methods like MEND [21] and MEMIT [18]. We also present a rich analysis of design choices in transfer set construction. Encouragingly, a transfer set constructed by the language model itself is competitive with one generated by a much larger model (GPT-3.5), showing that the context distillation process can work across a range of model sizes and qualities of transfer sentences. Finally, we show that our approach scales: we can inject over 100 new entities into a language model with minimal degradation on a set of contexts about popular entities, suggesting that the distillation process performs relatively targeted editing even without special consideration for this in the objective as in past methods [17; 18].
Our core contributions are: (1) We show that a knowledge distillation technique can effectively impart and propagate knowledge from entity definitions into the parameters of a pre-trained language model. (2) We explore the advantages of distillation relative to other injection methods, including fine-tuning, and show that it does teach entity-specific knowledge and can scale to many entities at once. (3) We analyze what kind of continuations are needed in our transfer set and show that sentences generated from an LM itself can perform well. The code and data will be available at [https://github.com/shankarp8/knowledge_distillation](https://github.com/shankarp8/knowledge_distillation).
## 2 Background and Task Setup
### Motivating Example
Figure 1 shows a motivating example. An LM trained on text collected prior to November 2022 will not have specific knowledge about what ChatGPT is, as ChatGPT was introduced then. Past retrieval-augmented generation methods [16] have shown that conditioning on information about this entity can lead to lower perplexities when evaluating on sentences like _ChatGPT can respond to natural language questions_[30; 27]. For example, the model assigns a higher likelihood to tokens like _respond_ given the knowledge that ChatGPT is a chatbot.
Our approach relies on teaching a "student model" (the LM itself) to match the next-token distributions given by the model conditioned on the definition sentence _even when the definition sentence is not shown_. We do this via a distillation process on a set of _continuations_, or sampled sentences following the definition. We impose a KL penalty between the student and teacher distributions of a set of target tokens, namely all those occurring after _ChatGPT_ in the continuation. Because the distillation process does not make updates on tokens where the teacher and student have the same distribution (zero KL), only tokens that are in some way predictable from the definition drive parameter updates (see Section 7 for discussion).
Figure 1: Overview of our distillation approach. The goal is to inject the entity definition (\(\mathbf{d}_{e}\)) into the student model (\(M_{s}\)) and propagate it to make inferences based on the injected knowledge. This example uses _ChatGPT_ as a new entity. We first generate a set of continuations of the entity's definition using a generator model (Step 1), then use these to distill the information from the definition into the student model via a KL loss between the conditioned and unconditioned models (Step 2); see Section 3 for formulation.
### Task Setup
We refer to language models \(M\) as \(M(\mathbf{x})\rightarrow\mathcal{D}(\mathcal{V})\), mapping an input context \(\mathbf{x}=(x_{1},\dots,x_{n})\) to a next-word distribution \(\mathcal{D}(\mathcal{V})=p(\cdot\mid x_{1},\dots,x_{n})\) over a vocabulary \(\mathcal{V}\). We will abuse notation and also use \(M(\mathbf{x})\rightarrow\mathcal{D}(\mathcal{V})_{1,\dots,n}\) to represent the collection of distributions after each prefix of \(\mathbf{x}\), which is a standard operation used in language model training.
The input to our knowledge updating process is definitional information of an entity \(\mathbf{d}_{e}\), for example, the information shown in Figure 1 where the entity \(e\) is _ChatGPT_. We will use \(e\) both as an indicator and also a reference to the string description of the entity, which may be multiple tokens long. Our goal is to learn a new model \(M_{s}\) that "knows" \(\mathbf{d}_{e}\) by matching \(M_{s}(\mathbf{x})\) with \(M_{t}(\mathbf{x}\mid\mathbf{d}_{e})\) (the teacher model) as closely as possible with our distillation scheme.
We evaluate on two factors. First, **propagation success** measures how well the model now reflects \(\mathbf{d}_{e}\). Crucially, our evaluation here is not just a narrow notion of whether a specific fact is injected [38; 6; 21; 17, _inter alia_], but captures the model's ability to make inferences as well [26; 27]. Second, **specificity** evaluates whether the predictions of the LM on other contexts are altered as in prior work [6; 21; 17; 18].
### Prior work
**Knowledge distillation.** We are not aware of prior work that uses distillation for knowledge editing. Our use of context distillation is most similar to Askell et al.'s alignment work [1]; however, they use it in a phase roughly analogous to RLHF and use a generic transfer set sampled from the language model training corpus. In terms of goals, our work is related to prompt injection [5], which examines tasks like distilling a persona-conditioned language model. However, they rely on a task-specific input generator, whereas we simply prompt existing LMs. Furthermore, while they aim to have a model memorize a particular prompt, we focus on general knowledge updates and inferences based on those. Other work has used distillation techniques for "gisting" to make shorter prompts [23] or to distill reasoning processes [32]. Similar approaches as our continuation sampling have been used for example extrapolation [15] with a substantially different task setup.
**Efficient parameter updates.** Parameter-updating methods such as KnowledgeEditor [6] and MEND [21] make use of standard fine-tuning to attempt to localize edits. ROME [17] and MEMIT [18] treat factual knowledge as subject-relation-object tuples, and find that new facts can be inserted into particular early and middle layers within a GPT-style transformer using specialized update rules. KILM [37] finds success for encoder-decoder LMs by continuing pretraining with a modified pretraining objective, and [12] examines continually pretraining LMs.
**Related tasks.** Other work has targeted related yet distinct settings; for example, counterfactual editing [17] or updating of specific relations [21]. Because our goal is to test _propagation_ of knowledge, we do not compare to these methods. Past work has proposed other benchmarks to evaluate models on facts about current entities [14; 7; 13]. However, these are not structured as injecting a known definition, so they do not fit our task setting.
## 3 Method
Our method is illustrated in Figure 1 and described formally in Algorithm 1. It consists of two steps: transfer set generation and distillation on the generated transfer set.
**Transfer Set Generation.** First, we generate a transfer set corresponding to \(\mathbf{d}_{e}\), written as \(\mathbf{T}_{e}=\{\mathbf{c}_{1},\mathbf{c}_{2},\cdots,\mathbf{c}_{N}\}\). We do this by sampling \(N\) distinct continuations from our _generator_ model
with a prompt \(\mathbf{p}\) followed by the entity definition \(\mathbf{d_{e}}\); we will either use GPT-3.5 or the base LM \(M_{\mathrm{Base}}\) for this process.
Each continuation must contain an identifiable reference to the entity string \(e\). We describe how we ensure this in Section 5. We use \(\ell_{i}\) to refer to the fencepost index where this entity string ends in the continuation sentence \(c_{i}\); for example, in Figure 1, \(\ell_{i}=2\) with 1-based indexing to indicate the mention string _ChatGPT_ ends before the second token. Crucially, we only want to distill losses when predicting tokens located at position \(\ell\) or later. Tokens before do not condition on the entity name in the student and risk making broad updates to the model, which can impact specificity negatively.
**Distillation.** We initialize an LM \(M_{s}\) from its original pretrained checkpoint, as well as a copy of the LM, \(M_{t}\), to serve as the teacher model during the distillation process. Then, for each continuation \(\mathbf{c}_{i}\) in the transfer set, we compute the student model's distributions \(M_{s}(\mathbf{c}_{i})\) (a sequence of \(|\mathbf{c}_{i}|\) distributions, abusing notation) as well as the teacher model's distributions conditioned on the definition, \(M_{t}(\mathbf{c}_{i}\mid\mathbf{d}_{e})\). We compute the KL divergence summed over the tokens after \(\ell\) (line 8). Finally, we perform a gradient update on \(M_{s}\) based on this loss. This is done for \(K\) epochs.
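A minimal sketch of one such distillation update, assuming HuggingFace-style causal LMs and PyTorch; the function name, the index handling for the fencepost \(\ell\), and the optimizer plumbing are our own simplifications rather than the authors' exact implementation.

```python
# `student` and `teacher` are two copies of the same causal LM (e.g. loaded with
# AutoModelForCausalLM); `ent_end` is the 0-based index of the first continuation
# token after the entity mention (assumed >= 1).
import torch
import torch.nn.functional as F

def distill_step(student, teacher, tokenizer, definition, continuation, ent_end, optimizer):
    c_ids = tokenizer(continuation, return_tensors="pt").input_ids   # (1, T_c)
    d_ids = tokenizer(definition, return_tensors="pt").input_ids     # (1, T_d)

    # Teacher conditions on [definition; continuation]; keep only the logits
    # that predict continuation tokens.
    with torch.no_grad():
        t_logits = teacher(torch.cat([d_ids, c_ids], dim=1)).logits
        t_logits = t_logits[:, d_ids.size(1):, :]                    # (1, T_c, |V|)

    s_logits = student(c_ids).logits                                 # (1, T_c, |V|)

    # Logits at position i predict token i+1, so slicing [ent_end-1:-1]
    # covers the predictions of tokens at or after the entity mention.
    t_logp = F.log_softmax(t_logits[:, ent_end - 1:-1], dim=-1)
    s_logp = F.log_softmax(s_logits[:, ent_end - 1:-1], dim=-1)

    # KL(teacher || student), summed over the selected target tokens.
    loss = F.kl_div(s_logp, t_logp, reduction="sum", log_target=True)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```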
**Multi-entity editing.** We can generalize this algorithm to editing multiple entities at once. For simplicity, we do not formally define this in the algorithm block but evaluate this setting in Section 7, which presents promising results. We take the union of transfer sets belonging to different entities, shuffle them, and distill each transfer example as described in lines 4-9.
## 4 Evaluating Knowledge Propagation
**Data.** We use two datasets to evaluate our approach. First, Entity Inferences [27] is a synthetic dataset designed such that the target span in its probe sentences is easily inferable from the definition sentence. For example, given a definition sentence describing _Dracula is a drama horror television_, models are asked to complete the probe sentence _Dracula makes me_ from multiple-choice options (e.g., scared, athletic, etc.). We then report the accuracy in selecting the correct option.
Second, Entity Cloze By Date (ECBD) [26] consists of cloze-style sentences from Wikipedia that probe for knowledge of specific entities. Examples in ECBD are separated by each entity's origination date (e.g., when an event occurred). In contrast to [27], which uses the 2021 subset of ECBD, we use the 2022 subset and the January 2023 - March 2023 subset of ECBD to ensure that newer models (e.g. GPT-3.5) do not have knowledge of the probed entities beyond the definition they condition on; see Appendix F for more discussion of the temporal characteristics of our models and datasets. Each example consists of a cloze-style probe sentence prefix \(\mathbf{x}\) about an entity \(e\) followed by a target span \(\mathbf{y}\). The definition \(\mathbf{d}_{e}\) is taken from the first sentence of the entity's Wikipedia page. For further details and dataset statistics, see Table B in the appendix.
**Evaluation Metrics.** For Entity Inferences, we evaluate by measuring accuracy in predicting the correct gold label (in other words, how often the model assigns the highest probability to the gold label). We measure specificity by evaluating the model's accuracy at predicting gold spans on other, similar probe sentences.
We evaluate on ECBD by computing per-token perplexity of the continuation given the probe prefix, \(PPL(\mathbf{y}\mid\mathbf{x})\). Note that this metric is not directly comparable across base LMs which have different tokenizers and we do not attempt to make such comparisons in this work. To evaluate how much a particular editing method has taught the system about a particular entity, we report the **decrease** in perplexity from the edit, \(PPL(\mathbf{y}\mid\mathbf{x};M_{\mathrm{base}})\) vs. \(PPL(\mathbf{y}\mid\mathbf{x};M_{s})\). To evaluate an edit's specificity, we randomly sample 40 examples from the "popular" subset of ECBD, ensuring that all 40 probes are about unique entities. We then report the change in perplexity on these sampled examples before and after the edit, using the same metric as above for evaluating on the target sentence.
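As a rough illustration, per-token perplexity of a target span given a probe prefix can be computed as follows; the helper name and the separate tokenization of prefix and target are our simplifications.

```python
import torch
import torch.nn.functional as F

def target_perplexity(model, tokenizer, prefix: str, target: str) -> float:
    """Per-token perplexity PPL(y | x) of the target span given the probe prefix."""
    x_ids = tokenizer(prefix, return_tensors="pt").input_ids
    y_ids = tokenizer(target, return_tensors="pt").input_ids
    ids = torch.cat([x_ids, y_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    # Logits at position i predict token i+1; score only the target tokens.
    tgt_logits = logits[0, x_ids.size(1) - 1 : -1]
    nll = F.cross_entropy(tgt_logits, y_ids[0], reduction="mean")
    return torch.exp(nll).item()
```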
## 5 Experimental Setting
Base ModelsWe consider three autoregressive language models: GPT-Neo-1.3B [3], GPT2-XL [28] (1.5B), and LLaMA-7B [34]. The former two models have minimal knowledge of the entities in Entity Inferences and ECBD from their pretraining corpora (details in Appendix F). LLaMA-7B was trained on Wikipedia dumps from 2022, and therefore likely has knowledge of entities in Entity Inferences and ECBD 2022. Thus, we do not evaluate LLaMA results on Entity Inferences, but report the results on 2022 set of ECBD for comparison with the result on 2023 set.
Transfer Set GenerationWe experiment with two types of generator models to create the transfer sets: a state-of-the-art model learned from human feedback data (GPT-3.5, text-davinci-003), which can generate more fluent transfer sentences from the definition sentence, and the base model itself, which presents a more realistic scenario in which we do not assume access to a better language model or RLHF/instruction-tuning. For both models, we use a simple prompt to elicit a continuation of the definition sentence (prompt in Appendix E) and sample five transfer sentences for each entity. For generation, we use nucleus sampling [11] with \(p=0.9\) and a max length of 40 tokens.
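The generation step for a base-model generator can be sketched as follows; this is an illustrative use of the Hugging Face `generate` API with the stated sampling settings, and the prompt string and helper name are placeholders rather than the paper's exact prompt (see Appendix E). Generation with GPT-3.5 goes through the OpenAI API instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_transfer_set(model_name, definition, entity, n=5):
    """Sample n continuations of the definition to serve as a transfer set (sketch)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = definition + " "  # placeholder prompt; the paper uses its own template
    inputs = tok(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,                # nucleus sampling, as stated above
        max_new_tokens=40,
        num_return_sequences=n,
        pad_token_id=tok.eos_token_id,
    )
    conts = [tok.decode(o[inputs.input_ids.size(-1):], skip_special_tokens=True)
             for o in outputs]
    # Prepend the entity name when a continuation omits it.
    return [c if entity in c else f"{entity} {c}" for c in conts]
```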
Table 1 summarizes statistics on the transfer sets generated by the different generator models. Upon manual inspection, we find that the GPT-3.5 model hallucinates substantially less than the smaller base models, as reflected in the percentage of tokens in the continuations that also appear in the definition sentence. For continuations that did not contain the entity name, we simply prepended it. Examples of continuations can be found in Table 9 in the appendix.
### Comparison Systems
We compare our approach to methods for injecting knowledge by updating parameters as well as by prepending the knowledge at inference time. For prepending, we report two settings: (1) prepending the correct entity definition and (2) prepending the definition of a random entity, as reported in prior work [27]. We describe the knowledge updating methods below:
**Finetuning** is frequently used to adapt pre-trained LMs to new domains or tasks [8] and is a baseline for knowledge injection. We train \(M_{\mathrm{base}}\) on \(\mathbf{d}_{e}\) with standard negative log likelihood loss on the sequence (teacher forcing). We investigate fine-tuning the full model, as well as only the last layer.
We also explore **finetuning with the transfer set** on the ECBD dataset. For the Entity Inferences dataset, fine-tuning on the definition sentence should provide sufficient information by design. Because our approach uses a transfer set (\(T_{e}\)) that incorporates additional information, we compare to fine-tuning on this set. First, we fine-tune \(M_{s}\) on the definition. Then, for each sentence in our transfer set \(\mathbf{T}_{e}=(\mathbf{c}_{1},\dots\mathbf{c}_{N})\), we fine-tune on \(M_{s}(\mathbf{c}_{i}\mid\mathbf{d}_{e})\), conditioning on \(\mathbf{d}_{e}\) and only updating the model on the tokens after the entity occurrence \(\ell\). We do this to provide the most direct comparison between fine-tuning and distillation in terms of how many and which tokens are trained on.
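For comparison with the distillation sketch above, the token masking used when fine-tuning on the transfer set can be written as follows; `finetune_step`, `def_ids`, `cont_ids`, and `ell` are illustrative names, and we assume the Hugging Face convention that labels set to -100 are ignored by the loss.

```python
import torch

def finetune_step(model, optimizer, def_ids, cont_ids, ell):
    """One fine-tuning update on a transfer sentence, conditioning on the
    definition and scoring only the tokens after the entity occurrence ell (sketch)."""
    input_ids = torch.cat([def_ids, cont_ids], dim=-1).unsqueeze(0)
    labels = input_ids.clone()
    # Ignore the definition tokens and the continuation tokens up to ell.
    labels[:, : def_ids.size(-1) + ell] = -100

    loss = model(input_ids, labels=labels).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```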
**MEND**[21] is a hypernetwork that uses a set of smaller editing networks to make fast, local edits to a model's weights. MEND transforms the gradient obtained from traditional fine-tuning using
\begin{table}
\begin{tabular}{l c c c} \hline \hline \(M_{g}\) & \# Tokens & \begin{tabular}{c} \% Token \\ in \(E_{d}\) \\ \end{tabular} &
\begin{tabular}{c} \# Tokens \\ after \(l\) \\ \end{tabular} \\ \hline GPT-3.5 & 38.0 & 53.2 & 32.7 \\ GPT2-XL & 35.0 & 33.1 & 28.6 \\ GPT-Neo & 36.6 & 33.1 & 30.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics for transfer set sentences generated by each generator model.
a low-rank approximation. We train MEND editors for GPT-Neo on the WikiText-103 dataset, using generated text as the altered output, following the configuration used in the original paper.1
Footnote 1: MEND is extended by a method called SERAC [22]. However, SERAC uses an external edit table and a “scope classifier” network that decides whether a given query is “within scope” of any member of the edit table, which does not fit our goal, as we _deliberately_ aim to test queries out of scope of the definitions.
**MEMIT**[18] treats facts as (subject, relation, object) tuples and considers each MLP within an LM as a key-value store. These methods use a rank-one modification of the weights of MLPs within a pre-chosen layer (in the case of MEMIT, a set of consecutive pre-chosen layers) to edit the factual representations there. By spreading updates across multiple layers, MEMIT extends its predecessor ROME [17] to be able to edit up to 10,000 facts at a time without sacrificing edit performance.
We format our evaluation data for MEMIT as follows: For a given definition sentence \(\mathbf{d}_{e}\), the subject is the name of the entity \(e\), the relation is the part of the sentence before the masked span, and the object is the part of the sentence after the masked span, including the gold span.
Implementation DetailsWe experimented with a variety of learning rates (from 1e-8 to 1e-4) and numbers of steps (between 1 and 20) across all experiments. In particular, we tested learning rates of 1e-8, 5e-8, 1e-7, ..., 1e-4, and occasionally values in between when two endpoints both performed well (e.g., LRs of 1e-5 and 5e-6 both perform well, which may imply an optimum in between). We focus on balancing performance and specificity; neither is prioritized if it significantly harms the other.
## 6 Results
### Entity Inferences
We first conduct a smaller-scale study on the easier benchmark, the Entity Inferences dataset, where learning the definition should make guessing the target relatively easy. Table 2 reports the results. Our distillation approach shows promising performance for both base models. For GPT-Neo, distillation with GPT-3.5 as the generator even outperforms definition prepending, although GPT-3.5 potentially introduces information about the entity beyond the definition sentence. Fine-tuning on the definition and transfer set using GPT-Neo does outperform distillation, at the cost of specificity. For GPT2-XL, distillation only outperforms fine-tuning on the definition sentence when using GPT-3.5 as the generator model, but still shows a substantial gain using its own generated sentences (+24.3%). The drop in specificity (1.6-4.2%) is substantially less severe than for fine-tuning on the definition sentence. These results indicate that context distillation teaches models to make simple inferences based on injected knowledge without significantly harming the model's distribution on unrelated concepts.
### ECBD
Tables 3 and 4 display our main experimental results on the ECBD 2022 and ECBD 2023 datasets, respectively.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{GPT-Neo-1.3B} & \multicolumn{2}{c}{GPT2-XL} \\ \hline Pre-Edit Accuracy & 34.1 & 34.1 & 32.9 & 32.9 \\ & Target (\(\Delta\)) & Spec. (\(\Delta\)) & Target (\(\Delta\)) & Spec. (\(\Delta\)) \\ \hline \hline Finetuning on \(d_{e}\) (full) & 57.7 (+23.6) & 18.3 (-15.9) & 62.9 (+30.0) & 24.1 (-8.8) \\ Finetuning on \(d_{e}\) (last only) & 48.8 (+14.7) & 16.4 (-17.7) & 46.5 (+13.6) & 35.4 (+2.5) \\ Finetuning on \(T_{e}\) (full) & 66.5 (+31.4) & 28.8 (-5.3) & 59.4 (+25.5) & 33.8 (+0.9) \\ MEND & 41.8 (+7.7) & 34.4 (+0.3) & - & - \\
**Distillation (\(M_{g}\) = \(M_{s}\))** & 61.8 (**+27.7**) & 32.6 (-1.6) & 58.2 (**+24.3**) & 31.4 (-1.5) \\
**Distillation (\(M_{g}\) = GPT3.5)** & 65.9 (**+31.8**) & 32.5 (-1.6) & 65.3 (**+31.4**) & 28.7 (-4.2) \\ \hline \hline Prepend Def. & 60.0 (+25.9) & _34.1_ & 64.1 (+31.2) & _32.9_ \\ Prepend Random Def. & 27.7 (-6.4) & _34.1_ & 26.5 (-6.4) & _32.9_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results (accuracy) on Entity Inferences dataset. Non-bolded lines are taken from prior work [27]. Before the edit, accuracy was 34.1 for GPT-Neo and 32.9 for GPT2-XL.
We first discuss the results on the GPT2-XL and GPT-Neo-1.3B base models. Our context distillation method achieves high performance for both models. As established in [27], prepending the definition achieves strong performance, yet our approach attains more than 54% of the performance improvement from prepending the definition for both of these models. For most trials, using a transfer set generated by GPT-3.5 improves over using \(M_{s}\) as the generator, but the difference is much smaller than on Entity Inferences. Furthermore, using \(M_{g}=M_{s}\) for GPT-Neo-1.3B on ECBD 2023 actually slightly outperforms using \(M_{g}=\) GPT-3.5. These results suggest that our approach may benefit from, but does not require, access to a strong generator model. Across both of these models, fine-tuning the full model decreases perplexity somewhat (a 2.5-4.0 perplexity drop), while fine-tuning only the last layer provides little to no benefit. We found that MEND increases the perplexity, and MEMIT for a single edit decreases perplexity slightly.
Results on a bigger base model [34]With LLaMA, we find that distillation does not outperform fine-tuning on ECBD 2022 (Table 3). However, because LLaMA was trained using Wikipedia dumps from 2022, it may already have knowledge of the entities in this dataset. We therefore consider this out of scope of our approach, given our focus on injecting new information. Table 4 reports results from LLaMA-7B on entities from ECBD that originate in 2023. We find that distillation using both continuations generated by GPT-3.5 and those generated by LLaMA-7B itself do indeed surpass finetuning here.
The perplexities of LLaMA are significantly lower than those for GPT-Neo-1.3B and GPT2-XL, likely because LLaMA is a much stronger model. We perform a paired bootstrap test to test for the significance of the improvements in average post-perplexity of distillation (using GPT-3.5 generated continuations) over finetuning all parameters on the definition, drawing \(N=10000\) samples [2]. The gains of distillation over fine-tuning are significant with \(p<0.05\).
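A sketch of the paired bootstrap test described above follows; the function and argument names are illustrative, and the inputs are assumed to be per-example post-edit perplexities of the two methods on the same probe set.

```python
import numpy as np

def paired_bootstrap_pvalue(ppl_a, ppl_b, n_samples=10000, seed=0):
    """Paired bootstrap test for whether method A has lower mean perplexity than B."""
    rng = np.random.default_rng(seed)
    ppl_a, ppl_b = np.asarray(ppl_a), np.asarray(ppl_b)
    n = len(ppl_a)
    wins = 0
    for _ in range(n_samples):
        idx = rng.integers(0, n, size=n)        # resample examples with replacement
        if ppl_a[idx].mean() >= ppl_b[idx].mean():
            wins += 1                           # resamples where A fails to beat B
    return wins / n_samples                     # fraction of resamples where A >= B
```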
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & GPT-Neo-1.3B & GPT2-XL & \multicolumn{2}{c}{Llama-7B} \\ \hline Pre-Edit PPL & 36.4 & 26.1 & 44.7 & 25.4 & 7.82 & 8.1 \\ & Target (\(\Delta\)) & Spec. (\(\Delta\)) & Target (\(\Delta\)) & Spec. (\(\Delta\)) & Target (\(\Delta\)) & Spec. (\(\Delta\)) \\ \hline \hline Finetuning on \(d_{e}\) (full) & 34.2 (-2.2) & 26.1 (+0.0) & 40.1 (-4.6) & 25.5 (+0.1) & 7.64 (-0.18) & 8.0 (-0.1) \\ Finetuning on \(T_{e}\) (full) & 35.2 (-1.2) & 26.2 (+0.1) & 41.3(-3.4) & 25.5 (+0.1) & 7.71 (-0.11) & 8.1 (+0.0) \\
**Distillation (\(M_{g}\) = \(M_{s}\))** & 28.6 (-7.8) & 26.0(-0.1) & 34.0 (-10.7) & 25.1 (-0.3) & **7.53 (-0.31)** & 8.1 (+0.0) \\
**Distillation (\(M_{g}\) = GPT3.5)** & 28.9 (-7.5) & 25.5(-0.6) & **33.7 (-11.0)** & 25.1 (-0.3) & **7.53 (-0.31)** & 8.1 (+0.0) \\ \hline \hline Prepend Def. & 22.7 (-13.7) & _26.1_ & 30.2 (-14.5) & 25.4 & 6.81 (-1.01) & _8.1_ \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results (perplexity) on the ECBD 2023 dataset. We see that our distillation approach outperforms fine-tuning across for all models.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{GPT-Neo-1.3B} & \multicolumn{2}{c}{GPT2-XL} & \multicolumn{2}{c}{Llama-7B} \\ \hline Pre-Edit PPL & 31.0 & 26.1 & 32.9 & 25.4 & 10.45 & 8.1 \\ & Target (\(\Delta\)) & Spec. (\(\Delta\)) & Target (\(\Delta\)) & Spec. (\(\Delta\)) & Target (\(\Delta\)) & Spec. (\(\Delta\)) \\ \hline \hline Finetuning on \(d_{e}\) (full) & 28.5 (-2.5) & 26.0 (-0.1) & 30.0 (-2.9) & 25.4 (+0.0) & **9.76 (-0.69)** & 8.1 (+0.0) \\ Finetuning on \(d_{e}\) (last only) & 30.7 (-0.3) & 26.1 (+0.0) & 32.8 (-0.1) & 25.4 (+0.0) & 10.41 (-0.04) & 8.1 (+0.0) \\ Finetuning on \(T_{e}\) (full) & 28.9 (-2.1) & 26.1 (-0.0) & 30.6 (-2.3) & 25.5 (+0.1) & 10.01 (-0.40) & 8.1 (+0.0) \\ MEND & 35.2 (+4.2) & 26.4 (+0.3) & - & - & - & - \\ MEMIT & - & - & 32.6 (-0.2) & 25.4 (+0.0) & - & - \\
**Distillation (\(M_{g}\) = \(M_{s}\))** & 26.0 (-5.0) & 25.9 (-0.2) & 27.6 (-5.3) & 25.2 (-0.2) & 9.83 (-0.62) & 8.1 (+0.0) \\
**Distillation (\(M_{g}\) = GPT3.5)** & 25.3 (-5.7) & 25.6 (-0.5) & 26.8 (-6.1) & 25.1 (-0.3) & 9.86 (-0.60) & 8.1 (+0.0) \\ \hline Prepend Def. & 21.9 (-9.1) & _26.1_ & 24.0 (-8.9) & _25.4_ & 8.50 (-1.95) & 8.1 \\ \hline Prepend Random Def. & 42.9 (+11.9) & _26.1_ & 40.3 (+7.4) & _25.4_ & 11.40 (+0.95) & 8.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results (perplexity) on the ECBD 2022 dataset. Our distillation approach outperforms other approaches for GPT-Neo-1.3B and GPT2-XL on target perplexity without impacting specificity, achieving a substantial fraction of the gain from prepending the definition. Even though ECBD 2022 is out of the scope of our problem for LLaMA-7B, we include results for comparison.
Comparing to domain adaptation: How much does the entity-specific knowledge matter?One possible explanation for our gains on ECBD is that distillation is teaching the model something about the particular _domain_ of probe sentences rather than knowledge about particular entities. We discuss several pieces of evidence for why this can only explain partial gains.
Existing editing methods we test do not significantly affect specificity, while our method leads to a slight decrease in perplexity on unrelated sentences (an improvement on the specificity metric). This may indicate that our model is learning the domain of Wikipedia, but the small magnitude suggests that this alone does not explain the performance gain on target probe sentences. In particular, if the model were _merely_ learning the domain of Wikipedia through our distillation procedure, and the particular entity knowledge were unimportant, then its perplexity on unrelated entities should also be significantly lower, which is not what we observe.
Additionally, we compare our method to fine-tuning on the transfer set as well as the definition sentence; this can be viewed as a domain-adaptive setting [8]. We note that this actually _harms_ the model's perplexity on the evaluation setting relative to fine-tuning only on the definition sentence.
Ablation Study on DistillationWe further quantify the impact of knowledge about a specific entity via an ablation study in Table 5. We substitute either the entity definition or the transfer set with those belonging to a randomly sampled different entity. Similar to how prepending random definitions leads to a substantial increase in perplexity (bottom of Table 3, +11.9), distilling the definition of a randomly chosen entity, even when using the correct transfer set, leads to an increase in perplexity (+2.6). This indicates that using the right definition is crucial to induce the correct distribution we want the student to learn from. This also shows a potential benefit of parameter-update methods compared to prepending to the context, as prepending irrelevant information brings a more substantial drop in performance.
Next, we consider replacing the transfer set with a set of ten distinct elements from ten transfer sets of different entities (second row). We find that using the correct definition and a random transfer set _decreases_ perplexity, even outperforming fine-tuning. Although the success of this is surprising, there is precedent for this in distillation research in computer vision. For example, [25] finds that arbitrary transfer sets can be useful, and work such as [4] and [19] have found success in performing knowledge distillation using samples out of the distribution of the teacher model.
Furthermore, simply prepending the correct entity name (third row) in front of each element of the random transfer set decreases perplexity substantially. This further shows that distillation is able to inject the definition even in the presence of a noisy transfer set. Moreover, it contributes more evidence that distillation is not learning facts from the sentences in the transfer set.
Figure 2: Results on GPT-Neo with varying numbers of model updates for fine-tuning and distillation approach. Left: target perplexity; right: perplexity on the definition sentence. Only distillation continues to improve in both target and definition perplexity as the number of updates increase.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Definition & Transfer Set & Target (\(\Delta\)) & Specificity (\(\Delta\)) \\ \hline Random & Correct & 33.6 (+2.6) & 25.8 (-0.3) \\ Correct & Random & 28.9 (-2.1) & 26.6 (+0.5) \\ Correct & Random + Ent. str & 26.7 (-4.3) & 25.7 (-0.4) \\ Correct & Correct & **25.3** (**-5.7**) & **25.6** (**-0.5**) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Distillation ablation study on GPT-Neo base model. We report perplexity and delta from the base model.
## 7 Analysis
Does distillation inject the definition itself?If distillation teaches the model to make inferences based on the definition, how well does it teach the model the definition sentence itself? We measure the per-token normalized perplexity on the _definition_ sentence and report the results in Figure 2. Unsurprisingly, fine-tuning on the definition sentence drops its perplexity to near zero after 5-10 updates. While never trained to directly repeat the definition sentence, distillation also lowers the model's perplexity on the definition sentence significantly, potentially because of lexical overlap between the transfer set and the definition sentence (token overlap of 33.1-53.2%, as shown in Table 1).
Characterizing the supervision from the teacherContext distillation is more effective than fine-tuning on the transfer set; here we characterize the differences in these approaches. Figure 3 shows the negative log likelihood (NLL) for GPT-Neo of the continuations on ECBD 2022 generated by GPT-3.5 without conditioning on the definition (x-axis) vs. the reduction in NLL when conditioning on the definition (y-axis). Note that this is not the KL divergence and therefore not the actual training objective; however, by looking at how NLL values change, we can identify specific tokens whose probabilities are substantially modified, which would indicate a high KL value.
Tokens copied from the definition typically receive the largest decreases. Many tokens not in the definition are relatively unchanged in likelihood, and those in contexts that are not informed by the definition will have low KL divergence and drive small updates during learning. However, we show two examples of tokens not in the definition where conditioning _does_ reduce the NLL substantially. In the first case, _Dhaka_ is guessable given _Bangladesh_, and in the second, _features_ is semantically related to the definition. By contrast, _asset_ has similar NLL before and after conditioning.
Size of transfer setThroughout our experiments, we used five unique continuations in the transfer set, each of which is distilled over the five epochs. Is having diverse continuations necessary for successful distillation? We plot the distillation performance while varying the number of unique continuations in the transfer set from 1 to 10, while keeping the number of updates the same, in Figure 8 in the appendix and summarize the results here. Repeating one continuation 10 times yields a target perplexity of 25.3, while using 10 continuations once yields a target perplexity of 23.5. We see diminishing returns from introducing new continuations after 5 continuations, and most of the gains can be achieved with as few as two unique generated continuations. This is in line with prior work [5], which has shown that distilling on more examples improves target performance.
Scaling to multiple editsPrior editing techniques [17] showed limitations in updating multiple facts at once. To evaluate how our approach scales, we perform distillation for multiple entities at once. We aggregate (entity definition, transfer sentence) for each entity and shuffle them for the entire entity set such that they are not ordered by entity during training. Figure 4 reports model performance under this setting, varying the number of entities to be updated from 10 to 150. We find that our method is largely capable of large scale updates, compared to MEMIT which shows
Figure 3: Per-token NLL of tokens in continuations before conditioning on definitions and after (fractional reduction). Tokens not in the definition (green dots) are changed less but do see lower NLL when they are inferable from the definition.
increased perplexity when injecting more than 25 entities at once. For specificity, we find that MEMIT and distillation on GPT2-XL do not show degradation, but on GPT-Neo we observe degradation with our distillation method. Results on Entity Inferences (Table 2) also showed much more substantial degradation in specificity for GPT-Neo compared to GPT2-XL, so this could be dependent on the base LM. Overall, we observe promising results on editing multiple entities at once with distillation.
## 8 Conclusion and Limitations
We present a distillation-based method to impart entity knowledge within the parameters of a pretrained LM. We demonstrate that our method outperforms fine-tuning and other model editing techniques across a variety of settings, including when multiple entities are updated at once. Our results show that model editing with distillation is a promising direction for future methods.
LimitationsDue to computational constraints, our experiments use models that are <10B parameters. Whether these techniques generalize to the largest models (175B+) or models that have been instruction-tuned is unknown. However, given the continual reduction in the cost of the instruction-tuning stage [33], we believe that future work can apply our methods to base language models and then rerun the instruction-tuning stage periodically.
While we have shown that our method can generalize to the multiple-edit setting, the constraints of the datasets we use limit us to experimenting with hundreds of entities. Further work is needed to assess whether thousands or millions of new entities can be injected in this fashion (e.g., to teach a complete set of new entities in a domain). However, we believe that methods that can add small numbers of new entities are still valuable; these can keep a model synchronized with current events over a short period for an organization that continually retrains models on a weekly or monthly basis. Furthermore, the data setup we consider only covers limited domains of knowledge updates, represented by new entities with clear origination dates. We also only use a single definition sentence as the knowledge to inject, and only study English datasets. Finally, our specificity evaluation follows prior work in using examples from the same dataset, but broader testing of updated LMs' functionality would be helpful.
Finally, as discussed in Appendix F, some of our experiments have temporal overlap between the models and datasets we use. Our setting does not strongly depend on entities being completely unseen, as re-injection of seen information can reinforce it for a model, particularly for obscure entities.
## Acknowledgments
This work was partially supported by NSF CAREER Award IIS-2145280, a grant from Open Philanthropy, and support from the NSF Institute for Foundations of Machine Learning (IFML).
|
2306.16226
|
The second data release from the European Pulsar Timing Array V. Search
for continuous gravitational wave signals
|
We present the results of a search for continuous gravitational wave signals
(CGWs) in the second data release (DR2) of the European Pulsar Timing Array
(EPTA) collaboration. The most significant candidate event from this search has
a gravitational wave frequency of 4-5 nHz. Such a signal could be generated by
a supermassive black hole binary (SMBHB) in the local Universe. We present the
results of a follow-up analysis of this candidate using both Bayesian and
frequentist methods. The Bayesian analysis gives a Bayes factor of 4 in favor
of the presence of the CGW over a common uncorrelated noise process, while the
frequentist analysis estimates the p-value of the candidate to be 1%, also
assuming the presence of common uncorrelated red noise. However, comparing a
model that includes both a CGW and a gravitational wave background (GWB) to a
GWB only, the Bayes factor in favour of the CGW model is only 0.7. Therefore,
we cannot conclusively determine the origin of the observed feature, but we
cannot rule it out as a CGW source. We present results of simulations that
demonstrate that data containing a weak gravitational wave background can be
misinterpreted as data including a CGW and vice versa, providing two plausible
explanations of the EPTA DR2 data. Further investigations combining data from
all PTA collaborations will be needed to reveal the true origin of this
feature.
|
J. Antoniadis, P. Arumugam, S. Arumugam, S. Babak, M. Bagchi, A. S. Bak Nielsen, C. G. Bassa, A. Bathula, A. Berthereau, M. Bonetti, E. Bortolas, P. R. Brook, M. Burgay, R. N. Caballero, A. Chalumeau, D. J. Champion, S. Chanlaridis, S. Chen, I. Cognard, S. Dandapat, D. Deb, S. Desai, G. Desvignes, N. Dhanda-Batra, C. Dwivedi, M. Falxa, I. Ferranti, R. D. Ferdman, A. Franchini, J. R. Gair, B. Goncharov, A. Gopakumar, E. Graikou, J. M. Grießmeier, L. Guillemot, Y. J. Guo, Y. Gupta, S. Hisano, H. Hu, F. Iraci, D. Izquierdo-Villalba, J. Jang, J. Jawor, G. H. Janssen, A. Jessner, B. C. Joshi, F. Kareem, R. Karuppusamy, E. F. Keane, M. J. Keith, D. Kharbanda, T. Kikunaga, N. Kolhe, M. Kramer, M. A. Krishnakumar, K. Lackeos, K. J. Lee, K. Liu, Y. Liu, A. G. Lyne, J. W. McKee, Y. Maan, R. A. Main, S. Manzini, M. B. Mickaliger, I. C. Nitu, K. Nobleson, A. K. Paladi, A. Parthasarathy, B. B. P. Perera, D. Perrodin, A. Petiteau, N. K. Porayko, A. Possenti, T. Prabu, H. Quelquejay Leclere, P. Rana, A. Samajdar, S. A. Sanidas, A. Sesana, G. Shaifullah, J. Singha, L. Speri, R. Spiewak, A. Srivastava, B. W. Stappers, M. Surnis, S. C. Susarla, A. Susobhanan, K. Takahashi, P. Tarafdar, G. Theureau, C. Tiburzi, E. van der Wateren, A. Vecchio, V. Venkatraman Krishnan, J. P. W. Verbiest, J. Wang, L. Wang, Z. Wu
|
2023-06-28T13:51:14Z
|
http://arxiv.org/abs/2306.16226v3
|
# The second data release from the European Pulsar Timing Array: V. Search for continuous gravitational wave signals
###### Abstract
Context:We present the results of a search for continuous gravitational wave signals (CGWs) in the second data release (DR2) of the European Pulsar Timing Array (EPTA) collaboration. The most significant candidate event from this search has a gravitational wave frequency of 4-5 nHz. Such a signal could be generated by a supermassive black hole binary (SMBHB) in the local Universe. We present the results of a follow-up analysis of this candidate using both Bayesian and frequentist methods. The Bayesian analysis gives a Bayes factor of 4 in favor of the presence of the CGW over a common uncorrelated noise process, while the frequentist analysis estimates the p-value of the candidate to be \(<\) 1%, also assuming the presence of common uncorrelated red noise. However, comparing a model that includes both a CGW and a gravitational wave background (GWB) to a GWB only, the Bayes factor in favour of the CGW model is only 0.7. Therefore, we cannot conclusively determine the origin of the observed feature, but we cannot rule it out as a CGW source. We present results of simulations that demonstrate that data containing a weak gravitational wave background can be misinterpreted as data including a CGW and vice versa, providing two plausible explanations of the EPTA DR2 data. Further investigations combining data from all PTA collaborations will be needed to reveal the true origin of this feature.
## 1 Introduction
The population of SMBHBs in the relatively local Universe is the most promising astrophysical source of gravitational waves (GWs) at nanohertz frequencies, which are probed by pulsar timing array (PTA) observations. The signal is generated by binaries in wide orbits with periods of months to years. Each binary is far from merger and evolving slowly, so the emitted GWs are almost monochromatic. However, the incoherent superposition of GWs from many binaries creates a stochastic GW background (SGWB) signal with a characteristic broad red-noise type spectrum. A search of the second data release (DR2) of the European Pulsar Timing Array (EPTA) for an SGWB was reported in (the EPTA and InPTA Collaborations 2023b). This analysis reported increasing evidence for an SGWB, based on seeing a red noise process with a common spectral shape in all pulsars and seeing evidence that the correlation of the signal between pairs of pulsars was consistent with the forecasted Hellings-Downs (HD) correlation curve that is expected from an SGWB. The statistical significance reported in (the EPTA and InPTA Collaborations 2023b) is not yet high enough to claim a detection, but the data is starting to show some evidence for an SGWB.
The EPTA DR2 includes 25 pulsars selected to optimize for detection of the HD correlations, based on the methods described in Speri et al. (2023). The analyzed data were collected with six EPTA telescopes: the Effelsberg Radio Telescope in Germany,
the Lovell Telescope in the UK, the Nancay Radio Telescope in France, the Westerbork Synthesis Radio Telescope in the Netherlands, the Sardinia Telescope in Italy and the Large European Array for Pulsars. In this paper, we have used the DR2new subset of the full data set (the EPTA and InPTA Collaborations 2023b). It includes only the last 10.3 years of data, which was collected with new-generation wide-band backends.
As discussed in detail in (Allen, 2023), the characteristic HD correlations can be generated either by fixing the positions of a pair of pulsars and then considering the effect of averaging the response to a single GW source over all sky locations for that source, or by fixing the location of a single source and then averaging over the possible positions on the celestial sphere of the pulsar pair with a fixed angular separation. For this reason, it is possible that some of the evidence for the HD correlations is coming from one or a small number of bright continuous gravitational wave (CGW) sources in the data. In this paper we report the results of a search of EPTA DR2 for individual CGW sources. We use DR2new, because a hint of the presence of a CGW was reported in that data in the EPTA and InPTA Collaborations (2023b).
This paper uses frequentist and Bayesian approaches to search for a CGW source. We adopt the model of a single binary system in a circular orbit. We analyze the data using both Earth-term only and a full signal (Earth + pulsar terms) model. For each pulsar, we assume the custom-made noise model reported in (the EPTA and InPTA Collaborations 2023a). We also allow for the presence of a common red noise (CRN) component in the data. Evidence for a CRN was reported in the analysis of the reduced EPTA DR2 dataset comprising the 6 pulsars with the best timing accuracy Chen et al. (2021). The 6-pulsar dataset was not informative on the nature of this CRN signal, but the more recent 25 pulsar analysis reported in (the EPTA and InPTA Collaborations 2023b) favours an SGWB origin for this noise. In this analysis we consider models that include a deterministic CGW signal and one of three different noise models: individual pulsar noises only (PSRN), PSRN plus a common _uncorrelated_ red noise (CURN) process, or PSRN plus an SGWB with Hellings-Downs (GWB) correlations between the pulsars. All common noises will be represented by simple power-law power spectral densities.
We have conducted a Bayesian search for CGWs across a wide frequency band by splitting the dataset into sub-bands of width \(\Delta\log_{10}f_{\mathrm{gw}}=0.05\). We follow up the most significant candidate from this search with a detailed analysis. In addition, we have performed an analysis on simulated datasets generated with noise properties consistent with the posterior distribution inferred from the actual data. Based on these results, we cannot reliably confirm the presence of a CGW signal in the data, but we can also not rule it out. Moreover, some tests suggest that what we observe as a CURN could be explained by a CGW in the data. While the evaluated Bayes factors for an SGWB model versus a CGW are close to unity, the CGW model is represented by a larger set of parameters with associated dimensionality penalties.
The paper is organised as follows. In Section 2, we describe the model used to describe the data, and the frequentist and Bayesian methods that we employ in our analysis. In Section 3 we present the results of the analysis of the EPTA DR2 dataset. In Section 4 we describe and present the results from the simulation study that we undertook to understand the results of the EPTA DR2 analysis. Finally, in Section 5 we summarise our results and current conclusions.
## 2 Methods
### Noise model
We adopt the model for the noise in a single pulsar described in (the EPTA and InPTA Collaborations 2023a), in which timing residuals are written as
\[\delta t=\underbrace{\mathrm{M}\epsilon+n_{\mathrm{WN}}+n_{\mathrm{RN}}+n_{ \mathrm{DM}}+n_{\mathrm{Sy}}}_{\mathrm{PSRN}}+\underbrace{n_{\mathrm{CRN}}}_{ \mathrm{Common\,Red\,Noise}}+\underbrace{s}_{\mathrm{CGW}}. \tag{1}\]
The timing model error, \(\mathrm{M}\epsilon\), is represented by a linear model based on the design matrix \(\mathrm{M}\) and an offset from the nominal timing model parameters, \(\epsilon\). The white noise component \(n_{\mathrm{WN}}\) is described by two parameters for each backend, which apply multiplicative (EFAC) and additive (EQUAD) corrections to the estimated timing uncertainty. The pulsar red noise, \(n_{\mathrm{RN}}\), dispersion measure variations, \(n_{\mathrm{DM}}\), and scattering variations, \(n_{\mathrm{Sy}}\), are each represented by an incomplete Fourier basis defined at \(i/T_{\mathrm{obs}}\) frequency bins (\(i\) is integer). The amplitudes are assumed to be generated by a stationary Gaussian process (van Haasteren and Vallisneri, 2014), with PSD described by a power-law, characterized by spectral index, \(\gamma\), and the amplitude, \(A\), at reference frequency \(f_{\mathrm{ref}}\)=1/year. The noise models, including the number of frequencies included in the Fourier basis, are customised for each pulsar, as described in the EPTA and InPTA Collaborations (2023a). We call the model that includes all of the aforementioned noise components the PulSaR Noise (PSRN) model.
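As an illustration, the power-law PSD underlying these Gaussian-process components can be evaluated at the Fourier frequencies \(i/T_{\mathrm{obs}}\) as in the sketch below. The normalization \(A^{2}/(12\pi^{2})(f/f_{\mathrm{ref}})^{-\gamma}\,\mathrm{yr}^{3}\) is the usual PTA convention and is assumed here, since the text only specifies a power law; the function name and example parameter values are ours.

```python
import numpy as np

YEAR_SEC = 365.25 * 24 * 3600.0

def powerlaw_psd(f, log10_A, gamma, f_ref=1.0 / YEAR_SEC):
    """Power-law PSD S(f) = A^2/(12 pi^2) (f/f_ref)^(-gamma) yr^3 (assumed convention)."""
    A = 10.0 ** log10_A
    return (A**2 / (12.0 * np.pi**2)) * (f / f_ref) ** (-gamma) * YEAR_SEC**3

# Evaluate at the incomplete Fourier basis frequencies i/T_obs, i = 1..9
T_obs = 10.3 * YEAR_SEC
freqs = np.arange(1, 10) / T_obs             # e.g. 9 bins, as used for the CRN models
psd = powerlaw_psd(freqs, log10_A=-14.0, gamma=13.0 / 3.0)   # illustrative values
```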
We also allow for the presence of a common red noise (CRN), \(n_{\mathrm{CRN}}\), affecting all the pulsars, that can take the form of an uncorrelated noise among pulsars (CURN) or a gravitational wave background (GWB) with a correlation described by the HD curve. We model the properties of the CRN in a similar way to the individual pulsar red noises, using an incomplete Fourier basis, with amplitudes described by a Gaussian process with a power-law PSD. In the Bayesian analysis below we have used either three or nine Fourier bins for describing the CRN and we have adopted the same priors for the pulsar noise components as presented in the EPTA and InPTA Collaborations (2023b). We refer the reader to the EPTA and InPTA Collaborations (2023a); Chalumeau et al. (2022) for a more complete description of the noise models.
The final component of the model for the timing residuals is the presence a continuous gravitational wave (CGW) signal, \(s\). This will be described in the next section.
### Continuous gravitational wave model
A supermassive black hole binary (SMBHB) system in a circular orbit produces monochromatic and quasi-non-evolving GWs (Arzoumanian et al., 2023; Falxa et al., 2023). Such signals induce pulsar timing residuals \(s_{a}(t,\hat{\Omega})\) of the form :
\[s_{a}(t,\hat{\Omega})=\sum_{A=+,\times}F^{A}(\hat{\Omega})[s_{A}(t)-s_{A}(t- \tau_{a})], \tag{2}\]
where \(s_{A}(t)\) and \(s_{A}(t-\tau_{a})\) are referred to as the _Earth term_ (ET) and the _Pulsar term_ (PT), \(F^{A}(\hat{\Omega})\) are the antenna pattern functions that characterise how each GW polarisation, \(+,\times\), affects the residuals as a function of the sky location of the source, \(\hat{\Omega}\) is the direction of propagation of the GW and \(\tau_{a}\) is a delay time between the source and pulsar \(a\). The full expressions for \(s_{A}(t)\)
are:
\[s_{+}(t)=\frac{\mathcal{M}^{5/3}}{d_{L}\,\omega(t)^{1/3}}\left\{-\sin\left[2\Phi(t)\right](1+\cos^{2}\iota)\cos 2\psi-2\cos\left[2\Phi(t)\right]\cos\iota\sin 2\psi\right\}, \tag{3}\]
\[s_{\times}(t)=\frac{\mathcal{M}^{5/3}}{d_{L}\,\omega(t)^{1/3}}\left\{-\sin\left[2\Phi(t)\right](1+\cos^{2}\iota)\sin 2\psi+2\cos\left[2\Phi(t)\right]\cos\iota\cos 2\psi\right\}, \tag{4}\]
with \(\mathcal{M}\) the chirp mass, \(d_{L}\) the luminosity distance to the source, \(\omega(t)=\pi f_{\rm gw}(t)\) the time-dependent frequency of the GW, \(\iota\) the inclination, \(\psi\) the polarisation angle and \(\Phi(t)\) the time-dependent phase of the GW. The amplitude of the GW is given by:
\[h=2\frac{\mathcal{M}^{5/3}}{d_{L}}(\pi f_{\rm gw})^{2/3}\,. \tag{5}\]
For a slowly evolving binary, \(\omega(t)\) is considered constant (\(\omega(t)=\omega_{0}\)) over the duration of PTA observations of \(\sim\)10 years, giving for the Earth and Pulsar phases:
\[\Phi(t) = \Phi_{0}+\omega_{0}t, \tag{6}\] \[\Phi(t-\tau_{a}) = \Phi_{0}+\Phi_{a}+\omega(t-\tau_{a})t. \tag{7}\]
Nonetheless, the difference in frequency between Earth and Pulsar terms can be significant. The frequency of the pulsar term can be computed using the leading order radiation reaction evolution:
\[\omega(t)=\omega_{0}\Big{[}1-\frac{256}{5}\mathcal{M}^{5/3}\omega_{0}^{8/3}(t -t_{0})\Big{]}^{-3/8}. \tag{8}\]
This difference in \(\omega(t)\) is determined by the time delay \(\tau_{a}\) given by:
\[\tau_{a}=L_{a}(1+\hat{\Omega}\cdot\hat{p}_{a}), \tag{9}\]
where \(L_{a}\) is the distance between the Earth and pulsar \(a\) and \(\hat{p}_{a}\) is a unit vector pointing to pulsar \(a\). If the SMBHB has significantly evolved during the time \(\tau_{a}\), the Earth term will have a higher frequency than the Pulsar term. This will usually be the case for frequencies above \(\sim\)10 nHz. For binaries at lower frequencies, binary evolution is typically negligible and both terms will have the same frequency (within the resolution of the PTA), but different phases. The characterisation of the pulsar term can be difficult because the distance \(L_{a}\) is known with poor accuracy. As a consequence, the pulsar distance \(L_{a}\) and phase \(\Phi_{a}\) must be treated as free parameters that are fitted while searching for the signal (Corbin & Cornish 2010). In our analysis, we use a Gaussian prior on the distances \(L_{a}\) with the measured mean, \(\mu_{a}\), and uncertainty, \(\sigma_{a}\), from Verbiest et al. (2012)1. For the pulsars not included in that paper, we use a mean of 1 kpc and an error of 20%.
Footnote 1: [https://www.atnf.csiro.au/research/pulsar/psrcat/](https://www.atnf.csiro.au/research/pulsar/psrcat/)
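As a numerical illustration of Equations 5 and 8, the sketch below evaluates the strain amplitude and the pulsar-term frequency. The geometric-unit convention (chirp mass and distance expressed in seconds) and the constants and function names are our assumptions; the text leaves the units implicit.

```python
import numpy as np

# Geometric-unit constants (assumed convention: masses and distances in seconds).
GMSUN_S = 4.925490947e-6          # G * M_sun / c^3  [s]
MPC_S = 1.02927125e14             # one megaparsec / c [s]

def strain_amplitude(mchirp_msun, dl_mpc, f_gw):
    """GW amplitude h = 2 M^(5/3) (pi f_gw)^(2/3) / d_L  (Eq. 5)."""
    mc = mchirp_msun * GMSUN_S
    dl = dl_mpc * MPC_S
    return 2.0 * mc ** (5.0 / 3.0) * (np.pi * f_gw) ** (2.0 / 3.0) / dl

def pulsar_term_frequency(f_gw_earth, mchirp_msun, tau):
    """Leading-order radiation-reaction evolution (Eq. 8): the lower GW frequency
    emitted a time tau earlier, i.e. the pulsar-term frequency."""
    mc = mchirp_msun * GMSUN_S
    w0 = np.pi * f_gw_earth
    w_psr = w0 * (1.0 + (256.0 / 5.0) * mc ** (5.0 / 3.0)
                  * w0 ** (8.0 / 3.0) * tau) ** (-3.0 / 8.0)
    return w_psr / np.pi

# e.g. a 1e9 M_sun chirp mass at 20 Mpc emitting at 5 nHz, pulsar 1 kpc away (~3e10 s delay)
print(strain_amplitude(1e9, 20.0, 5e-9), pulsar_term_frequency(5e-9, 1e9, 3e10))
```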
### Frequentist analysis
We analyse the data using a frequentist approach based on the Earth term-only \(\mathcal{F}_{e}\)-statistic (Babak & Sesana (2012); Ellis et al. (2012)). The \(\mathcal{F}_{e}\) detection statistic is the log-likelihood maximized over the "extrinsic" CGW parameters (\(h\), \(\iota\), \(\psi\), \(\phi_{0}\)) for a fixed set of intrinsic parameters (\(\theta\), \(\phi\), \(f_{\rm sw}\)). If the residuals are Gaussian, the null distribution is expected to be a \(\chi^{2}\) distribution with 4 degrees of freedom. In the presence of a signal, \(\mathcal{F}_{e}\) is distributed as a non-central \(\chi^{2}\)-distribution with non-centrality parameter related to the square of the signal-to-noise-ratio (\(s(t)|s(t)\))2(see Ellis et al. (2012) for further details). However, to calculate \(\mathcal{F}_{e}\) we need to make assumptions about the noise properties. We take two different approaches: (\(i\)) we use the posterior distributions obtained from fitting the noise parameters to obtain a distribution of \(\mathcal{F}_{e}\) for each set of intrinsic parameters; (\(ii\)) we fix the noise parameters to their maximum likelihood estimates, as is often done for the EFAC and EQUAD parameters. The second approach is standard within frequentist analysis, but we also use (i) for the red noise components to emphasize that the inferred parameters have rather large uncertainties. Varying the noise parameters generates a distribution of the optimal statistic for each choice of intrinsic parameters and thus brings an element of Bayesian approaches into this frequentist analysis.
Footnote 2: \((s|s)\) denotes the noise-weighted inner product, \((s|s)=sC^{-1}s^{T}\), with \(C\) the covariance matrix of the noise model.
We want to evaluate the significance of the highest \(\mathcal{F}_{e}\) measured on the observed data by computing the p-value, which is a statement about how improbable it would be to draw the observed data if no signal was present. To compute the true p-value requires the true distribution of \(\mathcal{F}_{e}\) in the absence of signal (the null distribution), which we do not have access to. There are two ways of obtaining an approximate null distribution: (i) using the theoretical null distribution of \(2\mathcal{F}_{e}\) which behaves as a \(\chi^{2}\) with 4 degrees of freedom when the noise is Gaussian; or (ii) by artificially shuffling (scrambling) the sky positions of the pulsars to destroy the spatial correlation patterns that are the signature of a GW signal. The second approach has the advantage that it makes no distributional assumptions about the noise properties of the pulsars, but as the scrambled signal is still present in the dataset it still cannot provide the true null distribution. The procedure to obtain the scrambled distribution is as follows :
* i) We produce 3000 scrambles with a match statistic \(M<0.2\) according to the definition of match statistic given in Taylor et al. (2017). This set of distributions of pulsars will have a match \(<0.2\) with the unscrambled distribution and with each other and thus represent a (pseudo-)orthogonal3 set. Footnote 3: The match \(M<0.2\) defines the (pseudo)-orthogonality condition.
* ii) For each of the 3000 scrambles, we evaluate \(\mathcal{F}_{e}\) for 1000 noise parameters drawn from the posterior distributions obtained in (the EPTA and InPTA Collaborations 2023a) and take the median value.
* iii) We produce a histogram of the 3000 median \(\mathcal{F}_{e}\) statistic values, representing the null distribution.
\begin{table}
\begin{tabular}{c|c|c} CGW parameter & Prior & Range \\ \hline \(\log_{10}h\) & Uniform & [-18, -11] \\ \(\log_{10}f_{\rm gw}\) & Uniform & [-9, -7.85] \\ \(\log_{10}\mathcal{M}\) & Uniform & [7, 11] \\ \(\Phi_{0}\) & Uniform & [0, \(\pi\)] \\ \(\cos\iota\) & Uniform & [-1, 1] \\ \(\psi\) & Uniform & [0, \(\pi\)] \\ \(\cos\theta\) & Uniform & [-1, 1] \\ \(\phi\) & Uniform & [0, \(2\pi\)] \\ \(\Phi_{a}\) & Uniform & [0, \(\pi\)] \\ \(L_{a}\) & Normal, \(\mathcal{N}(\mu_{a},\sigma_{a})\) & [-\(\infty\), \(\infty\)] \\ \end{tabular}
\end{table}
Table 1: List of parameters of the continuous gravitational wave model with their respective priors and ranges.
* iv) We repeat (i-iii) 20 times obtaining a slightly different histogram each time due to differences in the 3000 scrambles and in the noise realizations. This allows us to estimate the uncertainty in the null distribution and, therefore, in the computed p-value.
We will apply this procedure to the CGW candidate in Section 3.
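Given the observed \(\mathcal{F}_{e}\) and a set of scrambled null values, the two p-value estimates used below can be computed as in the following sketch; the function and argument names are illustrative, and SciPy's \(\chi^{2}\) survival function provides the theoretical null.

```python
import numpy as np
from scipy import stats

def p_values(F_obs, F_null_scrambles):
    """Theoretical chi^2_4 p-value for 2*F_e and the empirical sky-scramble p-value."""
    p_theory = stats.chi2.sf(2.0 * F_obs, df=4)
    F_null = np.asarray(F_null_scrambles)        # e.g. 3000 median F_e values
    p_scramble = np.mean(F_null >= F_obs)
    return p_theory, p_scramble
```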
### Bayesian analysis
We also carried out a Bayesian analysis to obtain posterior probability distributions for the noise and signal parameters in the model described in Section 2. We make use of the MCMC samplers PTMCMC (Ellis & van Haasteren 2017), QuickCW (Becsy et al. 2022) and Eryn (Karnesis et al. 2023) to explore the parameter space. The Bayesian analysis allows us to perform parameter inference and model selection. The latter is quantified through the evaluation of the Bayes factor: the ratio of the marginal posterior distributions (or evidences) for two different models. The marginal posterior is a difficult quantity to compute and can be estimated numerically using parallel tempering or nested sampling (Skilling 2004).
In this paper, we use ENTERPRISE (Ellis et al., 2020; Taylor et al., 2021) to evaluate the posterior probability for a given model. We compute the Bayes factors using the product-space method (Hee et al. 2016) implemented in ENTERPRISE and through Reversible Jump Markov Chain Monte Carlo (RJMCMC), as implemented in Eryn. In both approaches, at each step of the Markov Chain either the parameters within the current model can be updated, or a switch to a different model can be proposed. The acceptance rule for the model switch is defined in order to ensure that detailed balance is maintained, thus ensuring that the stationary distribution of the Markov Chain is the desired posterior distribution over models. The sampler will spend more time exploring the model with the highest marginal posterior probability. The Bayes factor \(\mathcal{B}^{A/B}\) between models \(A\) and \(B\) can then be calculated as the ratio between the final number of chain samples corresponding to each model.
In the product-space method, the chain samples in a hyper-model space, which is a union of all the parameters of all the models being considered. An additional parameter determines which model is active within each sample, while inactive parameters undergo a random walk during the within-model steps. The effect is that the product space method retains some memory of where it had been exploring the other models, which can increase the probability that a proposed switch back to the other model is accepted. In RJMCMC, the chain typically only samples in the parameters of the currently active model and does not retain any memory. This can lead to lower model-switch acceptance rates, but guarantees a more complete exploration of the parameter spaces of the different models.
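In either case, once the trans-dimensional chain has been run, the Bayes factor estimate itself is simple. A minimal sketch follows, with illustrative names and assuming equal prior odds on the two models.

```python
import numpy as np

def bayes_factor_from_chain(model_index_samples, model_a=1, model_b=0):
    """Estimate B^{A/B} as the ratio of samples the chain spends in model A vs. B."""
    idx = np.asarray(model_index_samples)
    n_a = np.sum(idx == model_a)
    n_b = np.sum(idx == model_b)
    if n_b == 0:
        raise ValueError("Chain never visited model B; Bayes factor unbounded.")
    return n_a / n_b
```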
## 3 Results of data analysis
### Frequentist analysis
Within the frequentist approach we want to maximize the detection statistic (\(\mathcal{F}_{e}\) in our case) over all intrinsic parameters of the model. We perform the search using the noise models described in Section 2 for 100 logarithmically spaced frequencies from 1 to 100 nHz, dividing the sky into 3072 pixels using healpix (Zonca et al. 2019; Gorski et al. 2005).
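For concreteness, the corresponding search grid can be set up as in the sketch below; `healpy` with nside = 16 gives exactly 3072 pixels, and the variable names are ours.

```python
import healpy as hp
import numpy as np

nside = 16                                         # 12 * nside^2 = 3072 pixels
npix = hp.nside2npix(nside)
theta, phi = hp.pix2ang(nside, np.arange(npix))    # colatitude, longitude of pixel centres

# 100 logarithmically spaced trial frequencies between 1 and 100 nHz
freqs = np.logspace(-9.0, -7.0, 100)
# F_e is then evaluated on the (pixel, frequency) grid.
```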
To account for the fact that the noise model has broad posteriors, we use the posterior samples of the noise parameters obtained in (the EPTA and InPTA Collaborations 2023a) to calculate \(\mathcal{F}_{e}\) for fixed CGW parameters and average \(\mathcal{F}_{e}\) over 1000 randomly drawn samples of the PSRN model. The maximum of \(\mathcal{F}_{e}\) is found at 4.64 nHz consistent with the results of the Bayesian analyses described in subsection 3.2.
The sky distribution of \(\mathcal{F}_{e}\) at this frequency is given in Figure 1. The region of high statistic value (bright yellow) is quite sparse and inconclusive with regards to the localisation of the CGW candidate. The maximum \(\mathcal{F}_{e}\) is depicted by a black star and corresponds to a region of the sky where we are lacking pulsars and hence where the array is expected to be less sensitive.
The analysis was repeated by including the CURN component in the noise model and results are presented in Figure 2. We show two distributions of \(\mathcal{F}_{e}\). These are both evaluated at the optimal sky position and at the GW frequency 4.64 nHz and are obtained by varying the noise parameters (random draw) with (orange histogram) and without (blue histogram) the CURN component. Inclusion of the CURN slightly reduces the significance of the CGW candidate.
To evaluate the p-value, we compute the null distribution according to the steps outlined in subsection 2.3 at the CGW candidate parameter values (maximising the noise averaged \(\mathcal{F}_{e}\)). These results are indicated by the grey shaded distribution in Figure 2. The theoretical \(\chi^{2}\) null distribution is shown as a black curve. The scrambled distribution of \(2\mathcal{F}_{e}\) is close to the theoretical \(\chi^{2}_{4}\) distribution, but not completely overlapping. This could be due to (i) non-gaussian noise present in the array; (ii) the choice of the orthogonality condition (\(M<0.2\)) allowing the signal to leak into the distribution; or (iii) the definition of the match function which doesn't take into account different sensitivity of pulsars in the array (Marco et al., 2023).
We compute p-values using the obtained null distributions and the measured median values of the orange and blue \(\mathcal{F}_{e}\) distributions. The results are summarized in Table 2, the top row is for the theoretical distribution and the second row is for the scrambled null distribution (with uncertainty). The obtained p-value for \(\mathcal{F}_{e}\) corresponds to about \(3\sigma\) while \(\mathcal{F}_{e,CURN}\) corresponds to about \(2.5\sigma\).
Figure 1: \(\mathcal{F}_{e}\)-statistic of the candidate source at \(f_{\rm gw}=4.64\) nHz averaged over the noise uncertainties for the custom PSRN model. The black star shows the position of highest \(\mathcal{F}_{e}\), whereas the red stars show the positions of the pulsars. The Fornax and Virgo clusters are shown as black dots.
### Bayesian analysis
We perform a Bayesian CGW search by splitting the frequency range \(10^{-9}-10^{-6.5}\) Hz into 50 logarithmically spaced subsegments and assuming Earth term only. We have computed the Bayes factor in each sub-band using PTMCMC and the product-space method. The results are presented in Figure 3 for the noise model described in Section 2. We find a Bayes factor above 100 around 4-5 nHz and we perform a detailed analysis around the small frequency range using the priors shown in Table 1.
We have compared several models describing the data using Bayes factors as the decision maker. We use Eryn (Karnesis et al. 2023) as our fiducial sampler and cross-check the results using the PTMCMC sampler (Ellis & van Haasteren 2017) and QuickCW (Becsy et al. 2022). The computation of the Bayes factors is performed using RJMCMC and confirmed with the product-space method. Our findings are summarized in Table 3 and here we give a detailed description of each row. For all models with a CGW described below and quoted in the table, we have assumed a circular binary as described in subsection 2.2, with Earth and pulsar terms, using the priors given in Table 1 unless otherwise stated.
The simplest considered data model includes only the custom pulsar noise (PSRN), therefore, no CRN is included. The PSRN model is used as a null hypothesis, and alternative is given by the pulsar noise plus CGW (PSRN+CGW) considering Earth and pulsar terms. The Bayes factor for the model comparison PSRN+CGW vs PSRN is 4000. This indicates strong evidence for the inclusion of the CGW.
Next, we add a CRN to the custom noise PSRN: either a CURN, or a GWB correlated according to the HD pattern. These become the new null hypotheses (CURN+PSRN) and (GWB+PSRN). We also consider two descriptions for the CRN: one using the three lowest Fourier harmonics (3 bins), and one using the nine lowest Fourier harmonics (9 bins) as done in (the EPTA and InPTA Collaborations 2023b). Since the CGW candidate is located close to the second Fourier bin, showing the results for 3 bins can help single out the red noise components of the spectrum that might be potentially affected by the other high-frequency noises. The presence of a CGW in this model is not very prominent but definitely non-negligible. The Bayes factors of PSRN+CURN+CGW vs PSRN+CURN are 4 and 12, for 9 and 3 bins respectively. The choice of the number of bins affects the spectral properties of the CRN and, consequently, also the Bayes factors. In fact, the slope of the CURN model becomes steeper when using 3 bins, allowing the CGW, whose frequency is close to the 2nd bin, to emerge more easily than when using 9 bins. However, when including the HD correlations, the Bayes factors of PSRN+GWB+CGW vs PSRN+GWB drop to 0.7 and 1, for 9 and 3 bins, respectively.
As was already pointed out in (the EPTA and InPTA Collaborations 2023b), the HD component of the noise absorbs most of the CGW signal, and this can be clearly seen in the drop of the Bayes factor and in the posterior distributions shown in Figure 4. When the CRN is a CURN, the CGW model absorbs the power of the background around \(\log_{10}f_{\rm gw}\in[-8.5,-8.2]\) and yields an amplitude \(\log_{10}A\) posterior distribution with tails extending up to the lowest end of the prior range (see the correlation in Figure 4 for the parameters \(\log_{10}f_{\rm gw},\log_{10}A\)).
For the model CURN+PSRN+CGW with 9 bins, the frequency \(f_{\rm gw}\) is measured to be \(4.61^{+1.11}_{-2.95}\) nHz and the log-amplitude \(\log_{10}h\) is measured to be \(-14.0^{+4.05}_{-2.65}\) (median and symmetric 90% credible interval). The chirp-mass posterior is uninformative and the sky localization posterior is shown in Figure 5, where we also show the Virgo and Fornax clusters, which are 16.5 Mpc and 19.3 Mpc from the Earth (Jordan et al. 2007), respectively. If we use the median values of the amplitude and frequency to estimate the luminosity distance, we obtain \(d_{L}\approx 16.6\,(\mathcal{M}/10^{9}\mathrm{M_{\odot}})^{5/3}\) Mpc, using Equation 5.
We compute the sky-marginalized 95% upper limit on the strain amplitude, \(h_{95}(f_{\rm gw})\), across the studied frequency range for the
\begin{table}
\begin{tabular}{c||c|c} \hline & \(p(\mathcal{F}_{e})\) & \(p(\mathcal{F}_{e,\mathrm{CURN}})\) \\ \hline \hline \(\chi_{4}^{2}\) & \(5\times 10^{-4}\) & \(1\times 10^{-3}\) \\ Sky scrambles & \((7\pm 4)\times 10^{-4}\) & \((6\pm 1)\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 2: Statistical significance of the candidate source at 4.64 nHz. The p-values for the \(\chi_{4}^{2}\) are obtained using the maximum likelihood noise parameters and the sky scrambled p-value from the median of the \(2\mathcal{F}_{e}\) distribution. We show the p-values for the custom pulsar noise PSRN, \(p(\mathcal{F}_{e})\), or also including a common uncorrelated red noise CURN+PSRN, \(p(\mathcal{F}_{e,\mathrm{CURN}})\).
Figure 3: Bayes factor for the model comparison PSRN+CURN+CGW (Earth term) vs PSRN+CURN for 50 logarithmically spaced frequency sub-bands in the region \(f_{\rm gw}\in[1.5,320]\) nHz.
Figure 2: Distribution of \(\mathcal{F}_{e}\)-statistic over the noise uncertainties without CURN (blue) and with CURN (orange) at 4.64 nHz. The null distributions of the \(\mathcal{F}_{e}\) are obtained from the analysis of the EPTA DR2new data with scrambled sky positions (grey shaded region) and from the theoretical formula of a \(\chi_{4}^{2}\)-distribution (black solid line).
model PSRN+CURN+CGW with 9 bins. For this analysis, we used a uniform prior on \(h\) in the range \([10^{-18},10^{-11}]\) instead of the uniform prior on \(\log_{10}h\) used for the search (see Arzoumanian et al. (2023); Falxa et al. (2023)). The strain upper limit was converted into a horizon distance, \(D_{H}\), (i.e., the distance up to which SMBHB systems should produce detectable CGW signals) using Equation 5:
\[D_{H}=2\frac{\mathcal{M}^{5/3}}{h_{95}}(\pi f_{gw})^{2/3}. \tag{10}\]
We plot \(D_{H}\) as a function of \(f_{gw}\) in Figure 6 for three values of chirp mass \(\mathcal{M}=[10^{8}M_{\odot},10^{9}M_{\odot},10^{10}M_{\odot}]\). The highest \(D_{H}\) is recovered around 20 nHz. The closest galaxy-cluster candidates (Fornax and Virgo) that could host a SMBHB lie at distances larger than 10 Mpc, meaning that we need binary systems with chirp masses larger than \(10^{9}M_{\odot}\) in order for them to be detectable.
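A short numerical sketch of Equation 10 follows, under the same assumed geometric-unit convention as in the earlier amplitude sketch; the constants, function name, and example inputs are ours.

```python
import numpy as np

GMSUN_S = 4.925490947e-6    # G M_sun / c^3 [s]
MPC_S = 1.02927125e14       # 1 Mpc / c [s]

def horizon_distance_mpc(mchirp_msun, f_gw, h95):
    """Horizon distance D_H = 2 M^(5/3) (pi f_gw)^(2/3) / h_95 (Eq. 10), in Mpc."""
    mc = mchirp_msun * GMSUN_S
    d_h = 2.0 * mc ** (5.0 / 3.0) * (np.pi * f_gw) ** (2.0 / 3.0) / h95
    return d_h / MPC_S

# e.g. a 1e9 M_sun chirp mass at 20 nHz with an illustrative h_95 of 1e-14
print(horizon_distance_mpc(1e9, 2e-8, 1e-14))
```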
We have also used this model (CURN+PSRN+CGW) to investigate the effect of the pulsar term and to cross-check the
Figure 4: Posterior distributions of the CGW search in the second data release of the EPTA DR2new. The posteriors are obtained using a CGW model with Earth and pulsar term, the custom PSRN model and a CRN with either CURN or HD (GWB) correlations represented by 9 frequency bins. We show the posterior distribution for the gravitational wave frequency and amplitude \(f_{gw}\), \(h\) of the CGW, and the common noise spectral index and amplitude \(\gamma\) and \(A\). The contours indicate the 1,2,3-\(\sigma\) Gaussian contours.
samplers. The narrow green posterior contours in Figure 7 correspond to using \(\mathtt{Eryn}\) to sample the model with the Earth-term only and the broad blue posterior is inferred with the model including the pulsar term. We have overplotted in orange similar results (including the pulsar term) obtained with the QuickCW sampler. To check if the circular CGW model is appropriate, we carried out a separate analysis including orbital eccentricity in the model following Taylor et al. (2016), but using only Earth term in the analysis. This analysis inferred a low eccentricity (\(e<0.2\)) for the CGW candidate, indicating that the analysis performed with the circular binary model is adequate.
For the model GWB+PSRN+CGW with 9 bins, we cannot constrain the CGW parameters, so we set a 95% upper limit of \(\log_{10}h_{95\%}=-13.75\). We constrain the spectral properties of the GWB to be \(2.66^{+1.43}_{-1.02}\) and \(-13.95^{+0.25}_{-0.62}\) for \(\gamma\) and \(\log_{10}A\), respectively (median and symmetric 90% credible interval). The picture changes if we use a different number of frequency bins. We show in Figure 8 the posteriors for the same model GWB+PSRN+CGW, but this time with 9 and 3 bins. It is clear that the spectral properties (\(\gamma\) and \(A\)) of the background are affected by the number of bins. The median log-amplitude decreases from -13.95 to -14.34 while the median slope increases from 2.66 to 3.83, following the typical \(\gamma-\log_{10}A\) correlation. The steeper slope allows the CGW to emerge from the noise, and its posteriors show two clear peaks, one at 4.64 nHz and one at 12.6 nHz.
Figure 5: Posterior distribution of the sky localization obtained by searching for a CGW in the second data release of the EPTA DR2new. The posteriors are obtained using a CGW model with Earth and pulsar term, the inclusion of the custom pulsar noise (PSRN) and a common uncorrelated red noise (CURN) represented by 9 frequency bins. For reference, we show the position of the analyzed pulsars and the Virgo and Fornax clusters.
\begin{table}
\begin{tabular}{c|c} \hline \hline Model comparison & Bayes factor \\ \hline CGW+PSRN vs PSRN & 4000 \\ CGW+PSRN+CURN vs PSRN+CURN, 3 bins & 12 \\ CGW+PSRN+CURN vs PSRN+CURN, 9 bins & 4 \\ CGW+PSRN+GWB vs PSRN+GWB, 3 bins & 1 \\ CGW+PSRN+GWB vs PSRN+GWB, 9 bins & 0.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Bayes factors obtained for different model comparisons indicate how much the data favor the inclusion of a continuous gravitational wave (CGW) with Earth and pulsar term. The first comparison considers adding a continuous gravitational wave to the pulsar custom noise (PSRN). The second and third model comparisons include an additional common uncorrelated red noise CURN modeled with a power-law with 3 or 9 frequency bins. The fourth and fifth model comparisons include a gravitational wave background GWB modeled with a power-law with 3 or 9 frequency bins.
### Optimal Statistics
The Bayesian analysis presented in the previous subsection indicates that the results are inconclusive about the nature of the observed signal. This subsection begins the investigation of the question "what is it that we see?" and provides a transition to the next section, which describes the analysis of simulated data.
We compute the signal-to-noise ratio (SNR) of the CRN using the optimal statistic approach (Chamberlin et al., 2015; Vigeland et al., 2018) implemented within the enterprise_extensions software package (Taylor et al., 2021). We estimate the SNR assuming quadrupolar (HD) correlations. Following the procedure outlined in the EPTA and InPTA Collaborations (2023b) and Vigeland et al. (2018), we vary the noise parameters (using the results of the EPTA and InPTA Collaborations 2023a) and obtain a distribution of SNR values. The solid orange line in Figure 9 reproduces the findings reported in the EPTA and InPTA Collaborations (2023b).
Next, we include the CGW in the model: we use the posterior samples obtained in the Bayesian analysis, which preserve the correlation between the noise parameters and the CGW (instead of only the noise parameters), and re-evaluate the optimal statistic. The resulting SNR distribution of HD correlations is given in Figure 9 as the dashed orange line. This result implies that the data (minus the CGW) do not show any sign of quadrupolar (GWB) correlation; in other words, a CGW alone can explain the HD feature observed in the data.
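For reference, the quadrupolar (HD) correlation pattern that the optimal statistic searches for is the Hellings-Downs curve. A minimal sketch of its standard closed form as a function of the angular separation between two pulsars is given below; this is textbook material included for illustration, not code from the analysis pipeline used in this work.

```python
# Hellings-Downs correlation between two distinct pulsars separated by angle zeta (radians).
import numpy as np

def hellings_downs(zeta):
    x = (1.0 - np.cos(zeta)) / 2.0
    # Expected correlation of GWB-induced timing residuals between pulsar pairs.
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5

angles = np.linspace(0.01, np.pi, 5)   # avoid zeta = 0 exactly (log singularity)
print(hellings_downs(angles))
```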
We corroborate our results obtained on the EPTA DR2new by repeating the same analysis on a simulated dataset. We produce a fake PTA based on the real (EPTA DR2new) pulsars and the noise estimation, in which we inject only one CGW and no GWB (see Section 4 for a detailed description of the simulation). The blue solid line in Figure 9 indeed resembles the result obtained on DR2new (orange line). As we will discuss in detail in the next section, a single CGW signal could be interpreted as a GWB (see Allen (2023)). The subtraction of the CGW from the timing residuals, as expected, removes the quadrupolar correlation (blue dashed line) and reproduces well the previously obtained results on the EPTA data.
## 4 Simulation
We perform a simulation campaign to try to reproduce the features observed in the analysis of DR2new. We generate a fake array with the same times of arrival (TOAs) and pulsar positions as in the real dataset. We inject noise using the maximum a posteriori values of the noise parameter posteriors obtained in the EPTA and InPTA Collaborations (2023a). We use a Gaussian process to simulate the noise components and consider different realizations in order to reproduce the observed results (see Appendix A). Using the simulated array as a basis, we propose two cases to study:
* PSRN+CGW: A simulated analogue of DR2new with only one circular CGW injected at 4.8 nHz with sky location at (3h38, -35\({}^{\circ}\)27) as if it was in the Fornax cluster with a chirp mass of \(10^{9.2}M_{\odot}\) and amplitude \(h=10^{-13.6}\), without any CRN.
* PSRN+GWB: A simulated analogue of DR2new with a gaussian and isotropic GWB as CRN with a powerlaw spectrum corresponding to \(A=10^{-14.5}\) and spectral index \(\gamma=13/3\) (without any CGW).
Each simulation is analysed with the custom PSRN model and either with CGW (using the Earth term only) or GWB with a powerlaw spectrum.
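For the injected and recovered spectra quoted above, the common-noise power law follows the usual PTA convention in which the timing-residual power spectral density is \(P(f)=\frac{A^{2}}{12\pi^{2}}\left(\frac{f}{f_{\rm yr}}\right)^{-\gamma}\,{\rm yr}^{3}\). The small sketch below evaluates this spectrum for the injected values \(A=10^{-14.5}\) and \(\gamma=13/3\); it is illustrative only and is not the injection code used for the simulations.

```python
# Illustrative timing-residual PSD for a power-law common process,
# P(f) = A^2 / (12 pi^2) * (f / f_yr)^(-gamma) * yr^3.
import numpy as np

YEAR = 365.25 * 24 * 3600.0          # seconds
F_YR = 1.0 / YEAR                    # reference frequency in Hz

def residual_psd(f_hz, log10_A=-14.5, gamma=13.0 / 3.0):
    A = 10.0 ** log10_A
    return A**2 / (12.0 * np.pi**2) * (f_hz / F_YR) ** (-gamma) * YEAR**3

freqs = np.array([2e-9, 5e-9, 1e-8, 2e-8])   # a few nHz-band frequencies
print(residual_psd(freqs))                    # PSD in s^3
```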
We have considered the PSRN+GWB simulated data and analysed it with a single CGW source (no GWB). In Figure 10 we show that a CGW is recovered even though only an isotropic GWB was injected. We have repeated the analysis on 10 simulated datasets; in most cases the recovered "CGW" was centered at the lowest Fourier bin (\(1/T_{obs}\sim 3\) nHz) and located in the close vicinity of individual pulsars, in many cases around J1713+0747.
However, in 2 GWB injections out of 10, we recover a CGW frequency around 4-5 nHz (similar to what we observe in DR2new) with a Bayes factor PSRN+CGW over PSRN only of
Figure 6: Horizon luminosity distance, \(D_{H}\), obtained from the sky averaged 95% upper limit on strain amplitude \(h\) using the PSRN+CURN+CGW (Earth term + pulsar term) model. The horizon distance is calculated with Equation 10 for three chirp masses: \(10^{8}M_{\odot}\), \(10^{9}M_{\odot}\) and \(10^{10}M_{\odot}\).
Figure 7: Inference of the amplitude \(h\) and frequency \(f_{gw}\) of CGW using the PSRN+CURN+CGW model. The results obtained with Eryn are shown as green and blue histograms, for a Earth term only, and full CGW model, respectively. The blue contours are to be compared with the orange posterior obtained with QuickCW. The shown contours are the 90% normal credible regions.
about 5000. We present the frequency posterior of that particular case as an orange histogram in Figure 10. We have also analysed these data using the GWB model, and the inferred posterior is given in orange in Figure 11.
Next we consider the PSRN+CGW simulated data and analyse it with the model of an isotropic GWB with a power-law spectrum. As expected, this model gives very constrained posteriors and a high Bayes factor of 2600. Indeed, the excess of power at low frequency gives support to a power-law spectrum (though not the best description), and HD correlations are reproduced by averaging over the pulsar pairs (see Allen (2023); Cornish & Sesana (2013) for details). The analysis of this dataset with a CGW model is shown in Figure 10 as a blue histogram. Analysis of the same data with the GWB model is given in Figure 11 as a blue posterior. One can see that this posterior has a lower amplitude and a shallower slope.
It is important to note that the recovered GWB parameters on Figure 11 are different from their injected values for the PSRN+GWB simulation. This is due to strong correlations between the signal and the individual pulsar red noise models. We made a simulation where only white noise is injected (contrary to the realistic simulation where we inject the full PSRN noise model) with a GWB and no CGW, revealing that in that case we correctly recover the injected values of signal parameters (see Figure 12).
The analysis of the simulated data confirms that it is hard to reliably identify the nature of the CRN: whether it is a CGW or a GWB. A point source also produces HD correlations, as previously shown in Cornish & Sesana (2013), Becsy et al. (2022), and Allen (2023). Moreover, the anisotropic configuration of the current PTA (pulsars not uniformly distributed in the sky and having very different noise properties) produces an uneven response across the sky and the studied frequency range. The resulting discrepancies between the injected and recovered parameter values, due to the interactions between the PSRN and signal models, are still not fully understood and need to be investigated more thoroughly in future analyses. We need to devise further consistency checks (for example, based on anisotropy) or wait for longer datasets.
Figure 8: Posterior distributions of the CGW search in the second data release of the EPTA DR2new. The posteriors are obtained using a CGW model with Earth and pulsar term, the custom PSRN model and an HD correlated background (GWB) represented by 9 and 3 frequency bins. We show the posterior distribution for the gravitational wave frequency and amplitude \(f_{\rm{gev}},h\) of the CGW, and the common noise spectral index and amplitude \(\gamma\) and \(A\). The contours indicate the 1,2,3-\(\sigma\) Gaussian contours.
Hopefully the inclusion of pulsars from the southern hemisphere (PPTA) will help to break this degeneracy.
## 5 Summary
This paper presents an analysis of the EPTA DR2new dataset searching for continuous GW signals from super-massive black hole binaries in quasi-circular orbits. We perform a frequentist (based on \(\mathcal{F}_{e}\)-statistic) and Bayesian (using Bayes factor) analysis of the data, and, in both cases, find a significant CGW candidate at 4-5.6 nHz. The frequentist analysis gives a p-value of (\(5\times 10^{-4}-6\times 10^{-3}\)), equivalent to a 2.5-3\(\sigma\) significance level, depending on the evaluation procedure and whether or not a CURN is included in the noise model. Within the Bayesian analysis of the CGW candidate, we computed the Bayes factor between
Figure 11: Posterior distribution of amplitude \(\log_{10}A\) and spectral index \(\gamma\) of the GWB for two simulated PTAs : 1 realistic PTA with only 1 CGW injected (PSRN+CGW) and 1 realistic PTA with a GWB injected (PSRN+GWB). The posterior distribution is obtained with a MCMC analysis for a PSRN+GWB model. The recovered GWB parameters are different from the injected ones for the PSRN+GWB simulation due to strong correlations between the injected PSRN and GWB models.
Figure 10: Posterior distribution of the gravitational wave frequency \(f_{\mathrm{gw}}\) of a CGW fitted to two simulated PTAs: one with a single injected CGW (PSRN+CGW blue histogram), and one with an injected GWB (PSRN+GWB orange histogram). The posterior distribution is obtained with a MCMC analysis for a PSRN+CGW (Earth term) model. The dashed lines are the medians of the distributions.
Figure 9: Distributions of the signal-to-noise-ratio (SNR) for the HD correlations for a common red process with \(\gamma=13/3\), with (dashed) or without (solid) adding CGW to the data model. The orange lines correspond to the results obtained on DR2new and blue are on the simulated data with only CGW (no CRN).
models containing a CGW and noise only. We see strong evidence (Bayes factor \(\sim 4000\)) for a CGW if we consider only PSRN (individual pulsar noise) in the alternative hypothesis, weak evidence (Bayes factors \(\sim 4-12\)) if we include a CURN process in the alternative hypothesis, and completely inconclusive evidence if the CRN is assumed to have the correlation of a GWB (Bayes factors \(\sim 0.7-1\)). In other words, the data are equally well described by a model including both a GWB and a CGW and a model including a GWB only. We note that the CGW model depends on 58 parameters and therefore comes with a large dimensionality penalty; despite this, the Bayes factor is close to unity. In addition, we have shown that removing the CGW candidate from the data destroys the HD correlations, as seen from the computation of the optimal statistic.
In an attempt to understand if the observed signal is due to a GWB or a CGW, we perform a simulation campaign. We simulate data based on the noise parameters inferred in the EPTA and InPTA Collaborations (2023a) and inject a GW signal. Our main finding is that simulated data with only an isotropic GWB injected can be fitted with a CGW model, and vice versa; a GWB model can explain simulated data containing only a single injected CGW. Therefore, we cannot conclusively distinguish between the presence of a single continuous gravitational wave or a gravitational wave background. In the EPTA and InPTA Collaborations (2023c), considering models that produce a GW signal consistent with the one present in DR2new, the probability of detecting a single source with SNR larger than 3 is estimated to be 50%.
We hope that an analysis of the combined IPTA data (Data Release 3) will help to confirm the presence or not of a CGW signal and shed light on its nature.
###### Acknowledgements.
The European Pulsar Timing Array (EPTA) is a collaboration between European and partner institutes, namely ASTRON (NL), INAF/Osservatorio di Cagliari (IT), Max-Planck-Institut fur Radioastronomie (GER), Nancay/Paris Observatory (FRA), the University of Manchester (UK), the University of Birmingham (UK), the University of East Anglia (UK), the University of Shieldfeld (GFR), the University of Paris (FRA), the University of Milan-Bicocca (IT), the Foundation for Research and Technology, Hellas (GR), and Peking University (CHN), with the aim to provide high-precision pulsar timing to work towards the direct detection of low-frequency gravitational waves. An Advanced Grant of the European Research Council allowed to implement the Large European Array for Pulsars (LEAP) under Grant Agreement Number 227947 (PI M. Kramer). The EPTA is part of the International Pulsar Timing Array (JPTA); we thank our IPTA colleagues for their support and help with this paper and the external Detection Committee members for their work on the Detection Checklist. Part of this work is based on observations with the 100-m telescope of the Max-Planck-Institut fur Radioastronomie (MPIfR) at Effelsberg in Germany. Pulsar research at the Jodrell Bank Centre for Astrophysics and the observations using the Lovell Telescope are supported by a Consolidated Grant (ST/T000414/1) from the UK's Science and Technology Facilities Council (STFC). ICN is also supported by the STFC doctoral training grant ST/15/062911. The Nancay radio Observatory is operated by the Paris Observatory, associated with the French Centre National de la Recherche Scientifique (CNRS), and partially supported by the Region Centre in France. We acknowledge financial support from "Programme National de Cosmologie and Galaxies" (PNCG), and "Programme National Haines Energies (PNFE) funded by CNRS/INSU-IN2P3-INP, CEA and CNES, France. We acknowledge financial support from Agence Nationale de la Recherche (ANR-18-CE31-0015). France. The Westerbork Synthesis Radio Telescope is operated by the Netherlands Institute for Radio Astronomy (ASTRON) with support from the Netherlands Foundation for Scientific Research (NWO). The Sardinia Radio Telescope (SRT) is funded by the Department of University and Research (MIUR), the Italian Space Agency (ASI), and the Autonomous Region of Sardinia (RAS) and is operated as a National Facility by the National Institute for Astrophysics (INAF). The work is supported by the National SKA programme of China (2020SKA0120100), Max-Planck Partner Group, NSFC 11690024, CAS Cultivation Project for FAST Scientific. This work is also supported as part of the "LEAGX" MPG-CAS collaboration on low-frequency gravitational wave astronomy. JA acknowledges support from the European Commission (Grant Agreement number: 11094354). JA and Scha were partially supported by the Stavros Niarchos Foundation (SNF) and the Hellenic Foundation for Research and Innovation (H.F.R.L) under the 2nd Call of the "Science and Society - Action Always strive for excellence - Theodos Papazoglou" (Project Number-01431). AC acknowledges support from the Paris Ile-de-France Region, AC, AF, Ase, ASa, EB, DI, GMS, MBo acknowledge financial support provided under the European Union's H2020 ERC Consolidator Grant "Binary Massive Black Hole Astrophysics" (B Massive, Grant Agreement: 818691), GD, KLi, RK and MK acknowledge support from European Research Council (ERC) Synergy Grant "BlackHoleCam", Grant Agreement Number 610058. 
This work is supported by the ERC Advanced Grant "LEAP", Grant Agreement Number 227947 (PI M. Kramer). AV and PRB are supported by the UK's Science and Technology Facilities Council (STFC; grant ST/N000946/1). AV also acknowledges support of the Royal Society and Wolfson Foundation. JPWV acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) through their Heisenberg programme (Project No. 343075039) and by the NSF through AccelNet award #2114721. NXP is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer PO 2758/1-1, through the Walter-Benjamin programme. Asa thanks the Alexander von Humboldt foundation in Germany for a Humboldt fellowship for postdoctoral researchers. APo, DP and MBu acknowledge support from the research grant "Pieska" (PI. Andrea Possenti) funded under the INAF national call Prin-SKA/CTA approved with the Presidentialence 2070/16 (Italy). RNC acknowledges financial support from the Special Account for Research Funds of the Hellenic Open University (ELE-HU0) under the research programme "GRAVPUL" (grant agreement 319/10-10-2022). Ewd, CGB and GHJ acknowledge support from the Dutch National Science Agenda, NWA Startimuys - 400-17.6/08. BG is supported by the Italian Ministry of Education, University of Research within the PR12017 Research Program Framework, 2017/SYRCTN. LS acknowledges the use of the HPC system Cobra at the Max Planck Computing and Data Facility. The Indian Pulsar Timing Array (InPTA) is an Indo-Japanese collaboration that routinely employs JITF's upgraded Giant Metrewave Radio Telescope for monitoring a set of IPTA pulsars. BCJ, YG, YM, SD, AG and PR acknowledge the support of the Department of Atomic Energy, Atomic Energy, Government of India, under project no. 12-R&D-TRF-8.02-00700 while SD, AG and PR acknowledge support of the Department of Atomic Energy, Government of India, under project no. 12-R&D-TPF-8.02-0200. KT is partially supported by JSPS KAKENHI Grant Numbers 20H00180, 21H01130, and 21H04467, Bilateral Joint Research Projects of JSPS, and the ISM Cooperative Research Program (2021-ISMCRF-2017). AS is supported by the NANOGrav NSF
Figure 12: Posterior distribution of amplitude \(\log_{10}A\) and spectral index \(\gamma\) of the GWB for a simulated PTA with only white noise (WN) and an injected GWB with a powerlaw spectrum corresponding to \(A=10^{-14.5}\) and spectral index \(\gamma=13/3\). We see that the recovered values correctly match the injected ones when only white noise is added to the array.
Physics Frontiers Center (awards #1430284 and 2020265). AKP is supported by CSIR fellowship Grant number 09/0079(15784)2022-EMR-I. SH is supported by JSPS KAKENHI Grant Number 2020509. KN is supported by the Birla Institute of Technology & Science Institute fellowship. AmS is supported by CSIR fellowship Grant number 09/1001(12656)/2021-EMR-I and T-641 (DST-ICPS). TK is partially supported by the JSPS Overseas Challenge Program for Young Researchers. We acknowledge the National Supercomputing Mission (NSM) for providing computing resources of 'PARAM Ganga' at the Indian Institute of Technology Roorice as well as 'PARAM Seva' at IIT Hyderabad, which is implemented by C-DAC and supported by the Ministry of Electronics and Information Technology (Meit'r) and Department of Science and Technology (DST), Government of India. DD acknowledges the support from the Department of Atomic Energy, Government of India through Apex Project - Advance Research and Education in Mathematical Sciences at MlSc. The work presented here is a culmination of many years of data analysis as well as software and instrument development. In particular, we thank Drs. N. D'Amico, P. C. C. Freire, R. van Haesteren, C. Jordan, K. Lazaridis, P. Lazarus, L. Lentati, O. Lohmer and R. Smits for their past contributions. We also thank Dr. N. Wex for supporting the calculations of the galactic acceleration as well as the related discussions. The EPTA is also grateful to staff at its observatories and telescopes who have made the continued observations possible.
_Author contributions._ The EPTA is a multi-decade effort and all authors have contributed through conceptualisation, funding acquisition, data curation, methodology, software and hardware developments as well as (aspects of) the continued running of the observational campaigns, which includes writing and proofreading observing proposals, evaluating observations and observing systems, mentoring students, and developing science cases. All authors also helped in (aspects of) verification of the data, analysis and results as well as in finishing the paper draft. Specific contributions from individual EPTA members are listed in the CRediT1 format below. InPTA members contributed to uGMRT observations and data reduction to create the InPTA data set which is employed while assembling the DR2full+ and DR2new+ data sets.
Footnote 1: [https://credit.niso.org/](https://credit.niso.org/)
|
2305.00114
|
Improving CFD simulations by local machine-learned correction
|
High-fidelity computational fluid dynamics (CFD) simulations for design space
explorations can be exceedingly expensive due to the cost associated with
resolving the finer scales. This computational cost/accuracy trade-off is a
major challenge for modern CFD simulations. In the present study, we propose a
method that uses a trained machine learning model that has learned to predict
the discretization error as a function of largescale flow features to inversely
estimate the degree of lost information due to mesh coarsening. This
information is then added back to the low-resolution solution during runtime,
thereby enhancing the quality of the under-resolved coarse mesh simulation. The
use of a coarser mesh produces a non-linear benefit in speed while the cost of
inferring and correcting for the lost information has a linear cost. We
demonstrate the numerical stability of a problem of engineering interest, a 3D
turbulent channel flow. In addition to this demonstration, we further show the
potential for speedup without sacrificing solution accuracy using this method,
thereby making the cost/accuracy trade-off of CFD more favorable.
|
Peetak Mitra, Majid Haghshenas, Niccolo Dal Santo, Conor Daly, David P. Schmidt
|
2023-04-28T22:20:42Z
|
http://arxiv.org/abs/2305.00114v1
|
# Improving CFD Simulations by Local Machine-Learned Corrections
###### Abstract
High-fidelity computational fluid dynamics (CFD) simulations for design space explorations can be exceedingly expensive due to the cost associated with resolving the finer scales. This computational cost/accuracy trade-off is a major challenge for modern CFD simulations. In the present study, we propose a method that uses a trained machine learning model that has learned to predict the discretization error as a function of large-scale flow features to inversely estimate the degree of lost information due to mesh coarsening. This information is then added back to the low-resolution solution during runtime, thereby enhancing the quality of the under-resolved coarse mesh simulation. The use of a coarser mesh produces a non-linear benefit in speed while the cost of inferring and correcting for the lost information has a linear cost. We demonstrate the numerical stability of a problem of engineering interest, a 3D turbulent channel flow. In addition to this demonstration, we further show the potential for speedup without sacrificing solution accuracy using this method, thereby making the cost/accuracy trade-off of CFD more favorable.
numerical error, machine learning, CFD acceleration
## 1 Introduction
Computational fluid dynamics (CFD) has become a cornerstone of modern engineering. However, accurately predicting the large-scale features that usually drive the design process typically requires resolving small-scale features that are not as germane to the design process. The necessary spatial and temporal resolution required to accurately model the physics and correctly predict the entire range of scales is often out of reach for many computational problems. While turbulence often garners much of the academic interest, the discretization error inherent in CFD is also of critical importance. Turbulence can be modeled using Reynolds averaging (RANS) or Large Eddy Simulation (LES), but using coarse meshes for faster evaluations leads to the accumulation of discretization errors and therefore under-resolution of key features. This _compute-accuracy_ trade-off is a major driver of the cost of modern-day CFD.
In recent years, improving or enhancing solution quality by using machine learning (ML), akin to image super-resolution, has become a major area of interest. The approaches range from using physics-constrained generative networks [1] for full physics emulation, to building auto-differentiable frameworks that closely align the inductive biases of the ML algorithms to the physics [2, 3], thereby aiding model interpretability and explainability. However, these cheap-to-investigate full physics surrogate methods suffer from a limited ability to generalize under unseen conditions as they lack explicit knowledge of the underlying governing equations. Kochkov et al. [4] proposed using machine learning inside traditional fluid simulations, and suggested it can improve both
the model accuracy and compute speed by an order of magnitude, and demonstrated the performance on canonical 2D examples. An alternate approach is to enhance the solution quality of under-resolved simulations by estimating the localized error. Coarsening the grid induces errors primarily from under-resolution, as indicated by the modified partial differential equation [5]. More recently, error surrogate models based on machine learning techniques have received much attention [6; 7; 8], largely because of their non-intrusive nature and fast on-line evaluations. A review of several promising strategies by which machine learning can enhance CFD was published by Vinuesa and Brunton [9].
The principal contribution of this study is to make the cost-accuracy trade-off more favorable and demonstrate performance on an engineering-relevant 3D simulation. It is in the same vein that Kochkov et al. [4] demonstrated acceleration of LES simulations using ML-based enhancement for the missing information in coarser meshes. Previous work in this area [4; 10] showed that ML models have the ability to effectively super-resolve the missing information for applications ranging from 2D turbulence [4] to tracers in climate models [10; 11]. Several contributions have been made in error modeling for parameterized reduced-order models (ROM) [8; 12], and the ideas have been extended to estimate discretization-induced errors [6]. Apart from some key differences in the implementation philosophy, a critical improvement over the previous work is the extension of this approach to engineering-relevant problems and to full 3D simulations. Our goal is to produce solutions to the Navier-Stokes equations with diminished sensitivity to mesh resolution. In particular, we focus on the velocity field since, for the constant-density Navier-Stokes equations, the velocity field and its derivatives sufficiently determine the pressure field. Therefore, for a zero Mach number flow such as the one considered here, the pressure field is neglected as part of the feature selection.
## 2 Methods
The high-level functional premise of the _local enhancement_ method is shown in Figure 1. The proposed idea is to use corrections of local cell-level discretization error to nudge the lower fidelity (coarse grid) simulation towards the higher-fidelity solution. For a physical model system governed by a set of non-linear equations, the relationship between the high fidelity solution, \(\Phi_{\text{f}}\), from a fine mesh simulation and the coarse mesh predictions can be expressed as \(\Phi_{\text{f}}=\Phi_{\text{c}}(\delta)+\varepsilon\), where \(\Phi_{\text{c}}\) represents the solution field output of the low fidelity simulation from the coarse mesh with resolution \(\delta\), and \(\Phi_{f}\) represents the model variables - in our case fluid velocity, and \(\varepsilon\) the simulation error (lost information) due to numerical error.
As explained above, for zero Mach number flow, the general field \(\Phi\) is for this study specifically represented by \(\mathbf{u}\), the fluid velocity. Functionally, the error can be represented as \(\varepsilon=\mathbf{u}_{f\to c}-\mathbf{u}_{c}\), where subscripts \(f\) and \(c\) denote fine and coarse, respectively. The term \(\mathbf{u}_{f\to c}\) is the fine-to-coarse mapped velocity, and \(\mathbf{u}_{c}\) is the coarse mesh velocity. The additional step of mapping is an interpolation necessitated by the different node locations between a fine and a coarse mesh. Thus, to compute the local grid-induced error, it is necessary to map the fine-grid data \(\Phi_{f}\) with resolution \(\delta_{f}\) onto the coarse grid with resolution \(\delta_{c}\). In other words, \(\Phi_{f}\) is replaced by \(\Phi_{f\to c}\), which is the fine-grid field of \(\Phi\) mapped onto a grid whose cell length is \(\delta_{c}\). This mapping, or interpolation, constitutes a source of error, as some details of the flow field profile are lost due to interpolation. Using higher-order interpolation techniques, we minimize this source of additional error to \(\mathcal{O}(10^{-5})\). This is achieved by using OpenFOAM's [13] in-built _mapFields_ functionality. The locally enhanced velocity within each cell then has the functional form \(\mathbf{u}_{e}=\mathbf{u}_{c}+\text{LC}(\mathbf{u}_{c})\), where \(\mathbf{u}_{e}\) is the enhanced velocity, \(\mathbf{u}_{c}\) is the coarse grid velocity, and LC is the learned correction provided by the machine learning algorithm at inference time. The basic assumption for the application of the coarse-grained approach is that the coarse mesh simulation is able to capture/resolve the basic flow features. It would be inconceivable to use ultra-coarse representations of the physics such that important details are not resolved by the coarse mesh, as this would require the machine learning algorithm to extrapolate beyond its mapping abilities.
### Machine learning algorithm
The inductive bias of the data is a point-to-point correlation: the error \(\varepsilon\) is based on the cell-level information lost between the mapped and the low-resolution solution. Since there are no spatial or temporal correlations to be learned, and in alignment with the inductive biases of the problem itself, we use a deep feed-forward neural network as our machine learning algorithm. Functionally, the machine learning model \(f\) learns the relationship between the coarse-mesh solution (input) and the error (target),
\[\varepsilon=f(\Phi_{\text{c}}) \tag{1}\]
The model training procedure involves the following steps:
* **Run a fine-mesh simulation**: This simulation typically consists of a very large number of cells and therefore is very accurate.
* **Run many coarse mesh simulations**: Run simulations with different coarse mesh configurations. This step explores the multi-dimensional space in which error is created and provides input data for training.
* **Mapping fine mesh solution to coarse mesh stencil**: Use OpenFOAM's [13] in-built mapping functionalities to map the fine data generated in Step 1 onto the coarser stencil from Step 2. This is the ground truth that the model aspires to reproduce; in machine learning jargon, it is called the target data.
Figure 1: A schematic of the solution correction technique employed by the _locally enhanced_ approach, nudging the low-fidelity solution towards the more accurate solution [10].
Similar to tuning constants in a physics model, hyperparameters such as network width, depth, and learning rate in deep neural networks represent the largest source of uncertainty in model outputs. This study involved conducting a Bayesian optimization based shallow neural architecture search [14] to identify the strongest candidates for the key hyperparameters. In the end, the deep network was trained with 8 layers of 48 neurons each. The initial learning rate was set to 0.0002 with a cosine learning rate decay [15]. The optimizer used in this study was Adam [16].
Many engineering-relevant CFD simulations are inherently transient, and oftentimes this leads to the presence of outliers (tails in a distribution) in the input data. This is especially true for scenarios involving moving geometries as well as complex and intermittent physics such as combustion. Common machine learning practice is to discard such outlier data, as it tends to degrade training and the generalization ability of the model. For a surrogate model, however, discarding outliers might remove important transient physics and thereby degrade the performance of the model itself. One method to alleviate this problem is to use a customized loss function. Compared to the mean-squared-error (L2) loss, which amplifies the outliers, a mean-absolute-error (L1) loss tends to fit the mean better. Our proposed loss makes use of both in a weighted fashion, with the weighting based on the distribution of outliers in the data. The training loss used in this study can be represented functionally as
\[\textit{loss}=w*L2+(1-w)*L1 \tag{2}\]
where \(w\) is set to 0.7 for the current study.
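As a hedged illustration of the setup described above (8 layers of 48 neurons, the weighted L1/L2 loss with \(w=0.7\), and the Adam optimizer with an initial learning rate of 0.0002), the PyTorch-style sketch below builds an equivalent network and loss. The activation function (ReLU), the input and output dimensions, and the omission of the cosine learning-rate schedule are assumptions for illustration, since these details are not fully specified here.

```python
# Minimal sketch (not the authors' code) of the feed-forward network and weighted loss.
import torch
import torch.nn as nn

def build_mlp(n_inputs, n_outputs, depth=8, width=48):
    layers = [nn.Linear(n_inputs, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, n_outputs))
    return nn.Sequential(*layers)

def blended_loss(pred, target, w=0.7):
    """Weighted combination of mean-squared (L2) and mean-absolute (L1) error."""
    return w * nn.functional.mse_loss(pred, target) + (1 - w) * nn.functional.l1_loss(pred, target)

# Illustrative feature/output sizes only (the exact dimensions depend on the chosen features).
model = build_mlp(n_inputs=5, n_outputs=3)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
```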
Once trained, for run-time inference, we freeze the deep network graph and convert it into a C++ equivalent that is compatible with the OpenFOAM library. The full details of integrating a trained neural network into OpenFOAM's C++ library have been discussed in previous studies [17, 18].
### Modified Governing Equations
The machine-learned correction, or "nudge," is integrated into the Navier-Stokes governing equations by adding a source term, \(\mathbf{S}\), represented by \(\mathbf{u}_{f\to c}-\mathbf{u}_{c}\). The modified momentum equation is shown below.
\[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}=-\frac{\nabla p}{\rho}+\nu\nabla^{2}\mathbf{u}+\mathbf{S} \tag{3}\]
The input features to the network are the velocity gradient \(\nabla\mathbf{u}\), the shear-rate tensor \(S\), the rotation-rate tensor \(\Omega\), the cell Reynolds number \(Re\), and the wall distance \(\mathbf{Y}\). Here \(Re\equiv\frac{\rho u\delta}{\mu}\), where \(\delta\) is the cube root of the cell volume. The non-dimensionalization factors for each term are given in Table 1.
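To make the role of the source term concrete, the toy sketch below nudges a 1D periodic diffusion solve toward a target field with a relaxation term \((\mathbf{u}_{\rm target}-\mathbf{u})/\tau\). This is an illustration of the nudging mechanism only, not the OpenFOAM implementation, and the specific relaxation form \(\mathbf{S}=(\mathbf{u}_{f\to c}-\mathbf{u}_{c})/\tau\) is an assumption based on the artificial time-scale \(\tau\) mentioned in the conclusions.

```python
# Toy 1D example of nudging toward a target field via a relaxation source term.
import numpy as np

def step(u, u_target, nu, dx, dt, tau):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2   # periodic Laplacian
    return u + dt * (nu * lap + (u_target - u) / tau)          # diffusion + nudge

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = np.zeros_like(x)
u_target = np.sin(2 * np.pi * x)
for _ in range(2000):
    u = step(u, u_target, nu=1e-3, dx=x[1] - x[0], dt=1e-4, tau=0.05)
```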
### Quantitative metrics
In addition to qualitative metrics, we define a quantitative criterion to measure success for the locally enhanced approach. It is the _cell volume weighted L2 norm_, defined as \(\text{L2}=\sum\delta\,(\mathbf{u}_{f\to c}-\mathbf{u})^{2}\), where \(\delta\) is the cube root of the cell volume, \(\mathbf{u}_{f\to c}\) is the ground truth velocity mapped to the coarser CFD mesh, and \(\mathbf{u}\) is the velocity predicted by CFD (locally enhanced or coarse-mesh simulation). We choose to focus on the velocity error as it is the metric we use to locally enhance the coarse mesh simulation. A lower L2 norm of error establishes the improvement in accuracy of the enhanced result compared to the coarse mesh simulation.
The use of coarse-graining reduces the cell count in the mesh. This reduction is quantified by a mesh Reduction Factor (RF), defined as the number of cells in the fine-grained mesh used to produce that ground truth data set divided by the number of cells in the coarse mesh. Because the cell size connects to the cost per iteration of the linear solvers, the number of iterations required per time step, and the time step size, the relationship between the reduction factor and overall computational cost is expected to be non-linear. With a larger RF, the opportunity for the learned correction to accelerate the computation and reduce the error is greater.
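A minimal sketch of the two metrics just defined is given below, assuming flat arrays of cell volumes and velocity magnitudes on the coarse mesh; this is an illustration, not the authors' post-processing scripts.

```python
import numpy as np

def weighted_l2(cell_volumes, u_mapped, u_pred):
    """Cell-volume-weighted L2 norm: sum of delta * (u_f->c - u)^2, with delta = V^(1/3)."""
    delta = np.cbrt(cell_volumes)
    return np.sum(delta * (u_mapped - u_pred) ** 2)

def reduction_factor(n_fine_cells, n_coarse_cells):
    """Mesh reduction factor: fine-mesh cell count divided by coarse-mesh cell count."""
    return n_fine_cells / n_coarse_cells
```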
## 3 Results
The training data are sampled across three different turbulent Reynolds numbers of 290, 395 and 500. Further, the simulations at each Reynolds number consist of fourteen different coarse mesh configurations. The reduction factor, defined as the ratio of fine mesh cells to coarse mesh cells, ranged from 1.12 to 4.5 for a total fine mesh cell count of 60,000. The training dataset comprised about 8 million points. The _a priori_ performance on test data yields an \(R^{2}\) of 0.8460, which indicates a reasonable fit. The wide range to be learned (near-wall behavior and mean-flow characteristics across different discretizations and large-scale flow configurations) is one of the challenges to achieving a precise regression.
The trained network is coupled to the OpenFOAM [13] solver _pimpleFoam_. The qualitative performance for the velocity magnitude is indicated by examination of a mid-clip plane in Figure 2. This snapshot is taken at time t = 1000 s, for a mesh reduction factor of 2. The simulation was run on a fine mesh, considered to be the ground truth, which is then interpolated to the coarse mesh (left panel) for comparison to coarse-mesh CFD. The results of CFD run on the coarse mesh (middle panel) fail to accurately resolve the near-wall effects seen in the left panel. On the other hand, the error-corrected results from the coarse-mesh simulation (rightmost panel) recover a large degree of lost information near the walls.
Figure 3 presents the velocity magnitude difference between the mapped (ground truth) and the CFD simulations. The left panel shows the difference between the mapped and coarse mesh simulation, and the near-wall behavior is consistent with the earlier result. The right panel is the difference (shown on the same scale) between the mapped and the locally enhanced coarse mesh simulation. It is clearly evident that the locally enhanced simulation is able to recover lost information, especially close to the wall boundary. The time-averaged x-direction velocity performance is reported in Figure 5. The vertical line probes are placed at the center of the channel, 2 m from the channel entrance (the total length of the channel is 4 m). Figure 5 shows the time-averaged x-component of velocity at the 2 m location, and it is evident that the locally enhanced simulations improve the solution performance and recover lost information, especially in the near-wall region and in the mean flow. The middle and right panels in Figure 5 show instantaneous snapshots of the turbulent kinetic energy and the Reynolds stress tensor (in the near-wall region), showing the degree of improvement in the prediction for the learned correction model. For the turbulent channel flow, the near-wall region is the most challenging to resolve and very important from the perspective of viscous dissipation and energy generation.
In addition to the qualitative diagnostics, the L2-norm of the error is calculated for the entire range of reduction factors. The error data are shown in Table 2. In comparing the coarse and the locally enhanced simulation performance for each reduction factor, it is observed that the additional source term improves the simulation fidelity significantly, up to an order of magnitude. The largest gain is obtained for the higher degree of coarseness. This is understandable since for a very coarse mesh, the loss of details is proportionately higher and therefore the learned correction model has a larger impact in the accuracy recovery.
The left panel in Figure 4 represents the compute cost versus accuracy trade-off for the turbulent channel flow study. The radius of each of the circles represents the reduction factor. The larger the reduction factor (or coarser the mesh), the larger the radius of the circle. Comparing circles of similar sizes gives a measure of the performance gains with the local enhancement approach. The general trend is that by using the local enhancement approach, there is a potential for massive gains in reducing errors (and therefore improving solution accuracy) for a moderate increase in solution cost (for example, ML-enhanced solutions add about 10% on average to the time to solution). One other way to look at this is to compare the coarse mesh circles (blue) with the locally enhanced circles (orange) along the Y-axis. To obtain an
\begin{table}
\begin{tabular}{c c} \hline \hline Input & Normalization Factor \\ \hline \(\nabla\mathbf{u}\) & \(\frac{\sqrt{k}}{\delta_{V}}\) \\ \hline \(S\) & \(\frac{\sqrt{k}}{\delta_{V}}\) \\ \hline \(\Omega\) & \(\frac{\sqrt{k}}{\delta_{V}}\) \\ \hline Re & – \\ \hline \(\mathbf{Y}\) & \(\delta_{V}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Input features and their normalization factors
error norm of 0.35, the coarse mesh simulations took about 1000s (wall time), whereas similar levels of accuracy were obtained at a fraction of the compute cost in approximately 300s, thereby indicating a compute speed-up of over 3x for a similar fidelity solution. The speedup can be further improved by studying larger problems, which are expected to be more expensive to compute. This increased cost would allow greater information retrieval at a fraction of the fine-mesh cost, making the cost-accuracy trade-off even more favorable. Whereas the computational cost of CFD increases non-linearly with the cell count, the cost of the learned correction is linear. Kochkov et al. [4] used a 2D DNS dataset for their ground truth and reported 40-80x speedups. The cost to perform DNS on this channel flow is orders of magnitude higher than the fine mesh LES employed here, and therefore there are performance gains yet to be realized using this locally enhanced approach.
\begin{table}
\begin{tabular}{c c} \hline \hline Mesh reduction Factor & \% Error reduction \\ \hline
4.57 & 76.11 \\ \hline
3.33 & 70.25 \\ \hline
2.50 & 68.51 \\ \hline
2.00 & 67.31 \\ \hline
1.52 & 48.14 \\ \hline
1.14 & 7.67 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance improvements from the learned correction. Reduced resolution, quantified as a reduction factor, is listed versus the percent reduction in error.
Figure 3: The velocity magnitude difference between the mapped (ground truth) and the CFD simulations. The left panel shows the uncorrected coarse-mesh discrepancy. The locally enhanced simulation (right panel) is able to recover lost information in the near-wall region, thereby improving solution accuracy. The differences in velocity magnitude further confirm the earlier observation that the network-enhanced simulation (right panel) recovers missing information and therefore has lower velocity magnitude differences.
Figure 2: The midplane clips suggest the locally enhanced simulation (right panel) is able to recover lost near-wall information, thereby lowering errors and improving time to solution. The panel at the left is from the fine-to-coarse mesh mapping, the middle panel is from the coarse mesh simulation, and the right panel is from the network-enhanced simulation. Each panel shows the mid-clip plane colored by the velocity magnitude (scaled similarly). The network-enhanced simulation (right panel) recovers missing information (ground truth in the left panel) compared to the coarse mesh (middle panel) simulation.
Figure 4: Both panels indicate the local enhancement is able to provide better solutions at a lower cost, even for unseen run conditions. The timing plot on the left shows the relative improvement in cost versus accuracy as a result of the local enhancement; the size of the circles indicates the reduction factor of the cell count. The right panel shows the norm of the error at a range of Reynolds numbers, indicating the ability of the scheme to work at other Reynolds numbers.
Figure 5: Each panel shows three curves: one for the mapped field, one from the coarse mesh simulation, and one from the coarse mesh enhanced simulation. The leftmost panel shows the time-averaged behavior and indicates information recovery for the coarse mesh enhanced simulation and consistent tracking of the mapped field (ground truth data). The middle and right panels show instantaneous turbulent kinetic energy and Reynolds stress behavior in the near-wall region. It is evident that the enhanced simulation recovers near-wall behavior better compared to the low-resolution coarse mesh simulation.
## 4 Conclusions
A machine learning mesh error correction algorithm has been developed and implemented within the open-source 3D CFD code OpenFOAM. This error correction allows a CFD simulation to achieve higher fidelity with lower resolution. The numerical stability of this method is demonstrated on a full 3D CFD simulation, relevant to many engineering applications. The approach achieved 3-5x speedups with minimal reduction in observed accuracy. An advantage of the locally enhanced method is its mesh invariance. For example, some of the current approaches for solution enhancement are limited to Cartesian meshes, whereas there are no such requirements for this locally enhanced approach. The artificial time-scale term (\(\tau\)) is a hyperparameter and is chosen empirically for this study; a more scientifically rigorous method of choosing it is desirable and a subject of future work. Further performance benefits could be realized by attacking larger problems, where the linear cost of the algorithm would generate additional speedup.
|
2306.16540
|
BLEND: Efficient and blended IoT data storage and communication with
application layer security
|
Many IoT use cases demand both secure storage and secure communication.
Resource-constrained devices cannot afford having one set of crypto protocols
for storage and another for communication. Lightweight application layer
security standards are being developed for IoT communication. Extending these
protocols for secure storage can significantly reduce communication latency and
local processing.
We present BLEND, combining secure storage and communication by storing IoT
data as pre-computed encrypted network packets. Unlike local methods, BLEND not
only eliminates separate crypto for secure storage needs, but also eliminates a
need for real-time crypto operations, reducing the communication latency
significantly. Our evaluation shows that compared with a local solution, BLEND
reduces send latency from 630 microseconds to 110 microseconds per packet.
BLEND enables PKI based key management while being sufficiently lightweight for
IoT. BLEND doesn't need modifications to communication standards used when
extended for secure storage, and can therefore preserve underlying protocols'
security guarantees.
|
Joel Höglund, Shahid Raza
|
2023-06-28T20:28:34Z
|
http://arxiv.org/abs/2306.16540v1
|
# BLEND: Efficient and blended IoT data storage and communication with application layer security
###### Abstract
Many IoT use cases demand both secure storage and secure communication. Resource-constrained devices cannot afford having one set of crypto protocols for storage and another for communication. Lightweight application layer security standards are being developed for IoT communication. Extending these protocols for secure storage can significantly reduce communication latency and local processing.
We present BLEND, combining secure storage and communication by storing IoT data as pre-computed encrypted network packets. Unlike local methods, BLEND not only eliminates separate crypto for secure storage needs, but also eliminates a need for real-time crypto operations, reducing the communication latency significantly. Our evaluation shows that compared with a local solution, BLEND reduces send latency from 630 \(\mu s\) to 110 \(\mu s\) per packet. BLEND enables PKI based key management while being sufficiently lightweight for IoT. BLEND doesn't need modifications to communication standards used when extended for secure storage, and can therefore preserve underlying protocols' security guarantees.
Secure storage, communication security, application layer security, OSCORE, EDHOC, IoT
## I Introduction
IoT is being deployed in extremely heterogeneous and wild scenarios such as agriculture monitoring, battlefields, remote surveillance, power-line monitoring, flood monitoring, and telemedicine. Most of these deployments require data confidentiality and/or integrity at rest as well as in transit. While traditional Datagram TLS (DTLS) [1] has been extended to IoT, it is still too heavy for many IoT scenarios and lacks full end-to-end security across different transport layer technologies. New application layer protocols, namely OSCORE [2] and EDHOC [3], are specifically designed for resource-constrained IoT and offer full end-to-end security.
In contrast to the active standardization work on enabling secure communication in IoT, secure storage solutions for IoT have attracted less attention. While a custom-made local secure storage protocol can be developed, it would require new proposals on, for example, key management, the choice of encryption and secure hash functions, initialization vectors, etc. Less well tested new solutions are likely to be less secure and will require additional implementation effort, ultimately requiring more processing and storage resources. Most importantly, a separate secure storage solution will require additional crypto operations when IoT data is sent to a remote host, which will increase the real-time latency. As shown in Figure 1 (left), before sending securely stored data, separate secure storage and secure communication solutions require that the data first be _decrypted_ using one set of security protocols and _encrypted_ again with another. Such a solution has significant performance overhead and is infeasible for resource-constrained IoT devices.
In this paper, we propose BLEND, which exploits novel application layer security protocols to provide combined secure storage and communication without compromising end-to-end security. BLEND does not require separate protection for storage and for communication, and the securely stored data can be shared with a remote host without any crypto operations during the transmission phase, ultimately reducing the real-time latency significantly; this is depicted in Figure 1 (right). BLEND is particularly advantageous in use cases having hard latency requirements; for example, when a drone cost-effectively collects IoT data from vast smart agriculture deployments or from remote power lines.
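As a rough illustration of the idea (not BLEND's actual packet layout or OSCORE's COSE encoding), the sketch below pre-encrypts a sensor reading with an AES-CCM AEAD of the kind OSCORE uses at storage time, so that sending later reduces to reading the stored ciphertext. The key handling, nonce construction, and sequence-number management are simplified placeholders for illustration only.

```python
# Simplified illustration: encrypt at storage time, send the stored bytes later.
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
aead = AESCCM(key, tag_length=8)                 # AES-CCM with an 8-byte tag
storage = []                                     # stands in for flash storage

def store_reading(seq, reading: bytes):
    nonce = seq.to_bytes(13, "big")              # placeholder nonce derivation
    storage.append((seq, aead.encrypt(nonce, reading, None)))

def send_stored(transmit):
    for seq, ciphertext in storage:              # no crypto needed at send time
        transmit(seq, ciphertext)

store_reading(0, b"temp=21.5")
store_reading(1, b"temp=21.7")
send_stored(lambda seq, pkt: print(seq, pkt.hex()))
```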
The main challenge in providing combined secure storage and communication is to enable a solution that incurs minimal overhead for IoT devices, keeps well-tested security properties intact, and does not compromise standards compliance and interoperability. This can be achieved by extending the use of the newly standardised OSCORE and EDHOC protocols to secure data storage. The core contributions of the paper are as follows: we _(i)_ extend standards-based application layer security mechanisms and enable combined secure storage and secure communication; _(ii)_ provide an implementation of BLEND for resource constrained devices using Contiki NG; and _(iii)_ evaluate BLEND to show its suitability for IoT.
Fig. 1: Retrieving encrypted IoT data and securely sharing it with a remote host, with (right) and without (left) BLEND
The rest of this paper is organized as follows: related work and relevant background are presented in Sections II and III, respectively; we present a threat model in Section IV; elaborate our design in Section V; provide implementation details in Section VI and evaluation in Section VII; highlight security considerations in Section VIII; and conclude the paper in Section IX.
## II Related Work
### _Secure communication_
The area of secure communication for resource-constrained devices has seen rapid development over the last decade, with the introduction of protocols targeting IoT. Early standards such as IPsec have largely been replaced with DTLS, Datagram Transport Layer Security. Still, the protocol overhead is relatively large, especially for low-power radio networks where radio packets are as small as 127 bytes and fragmentation can cause delays and security vulnerabilities. This limits the usable payload for application layer sensor data to a maximum of 51 bytes per packet, unless network-specific optimizations such as 6LoWPAN header compression are used, which limits the general applicability [4, 5, 6]. Recently, new application layer protocols for secure communication have been devised which can reduce the per-packet overhead while supporting crypto algorithms suitable for constrained devices. OSCORE together with EDHOC for key establishment has the potential to be used for PKI solutions with sufficiently low overhead for IoT. While DTLS has been shown to be feasible for PKI solutions for IoT, the cost of key establishment when using standard X.509 certificates is high [7, 8].
### _Secure storage_
The area of secure storage has seen much less standardisation effort. Instead, several overlapping fields contribute to the area. Blockchain-based research efforts, including [9, 10], design solutions for custom deployments, but mainly address computationally capable end devices such as cellphones or routers and rely on custom server infrastructure.
Another related area is research on Trusted Execution Environments, TEE, such as ARM's TrustZone. TrustZone functionality has been used as a building block to construct secure storage for Android based devices [11]. An important area for TEE is to enable the creation of secure key storage [12, 13, 14]. With respect to the more constrained IoT devices the TEE related efforts are complementary to our work on secure storage.
Besides the problem of secure key storage, many of the relatively lightweight cryptographic solutions used in communication protocols can be applied to any data to create a secure sensor data storage. As long as the secure storage only serves local encryption purposes, the need for standardisation is less emphasized.
There are two previous suggestions on how to combine secure communication with secure storage, FUSION and FDTLS [15, 16]. The proposed designs are based on IPsec and DTLS, and show promising results in terms of reduced overhead when packets are sent. An important finding is the need to optimize the storage operations with respect to the memory hardware constraints, for example by writing full memory pages to reduce flash handling overhead.
The main shortcomings of these lower layer security approaches are the following. The solutions rely on pre-shared keys (PSK), an outdated mode of key management with no support for automated key management, including enrollment or revocation. Both IPsec and the DTLS version 1.2 used for the evaluations have large headers, greatly reducing the space available for sensor data when used over low-power radio networks. To partly alleviate this, both designs rely on 6LoWPAN header compression, which ties the usage completely to networks where this is available. To allow new connections, the protocol is sidestepped by removing the randomness used when generating session keys, without analyzing the security implications of this procedure, along with other minor protocol-breaking tweaks. Additionally, by relying directly on IPsec or DTLS, none of the conveniences offered by CoAP are available to any of the involved parties.
The conclusion is that while several works address some of the issues of secure storage of data for IoT, the existing proposals for coalesced storage and communication have serious shortcomings. We address these shortcomings with a design making use of application layer security.
## III Necessary Background
Object Security for Constrained RESTful Environments (OSCORE) is an application-layer protocol specifically designed for IoT security [2]. It protects CoAP messages and builds upon COSE [17] and CBOR functionality for encryption and encoding [18]. The protocol offers replay protection using sequence numbers tied to the security context. Since UDP packets might arrive out of order, the protocol uses a replay window, such that the receiver keeps a range of currently accepted numbers.
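To make the replay-window mechanism concrete, the following is a minimal sketch of a sliding replay window keyed on sequence numbers. It is written in Python purely for illustration (the actual protocol implementations are in C), and the window size and method names are our own choices.

```python
class ReplayWindow:
    """Sliding replay window for sequence numbers arriving out of order.

    Each sequence number is accepted at most once; numbers more than
    `size - 1` positions behind the highest accepted number are rejected.
    """

    def __init__(self, size=32):
        self.size = size
        self.highest = -1   # highest sequence number accepted so far
        self.bitmap = 0     # bit i set => (highest - i) already accepted

    def accept(self, seq):
        if seq > self.highest:
            shift = seq - self.highest
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.highest = seq
            return True
        offset = self.highest - seq
        if offset >= self.size:
            return False    # too old: outside the window
        if self.bitmap & (1 << offset):
            return False    # replay: already accepted
        self.bitmap |= 1 << offset
        return True
```

A receiver would drop any packet for which `accept` returns false before attempting decryption.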
Ephemeral Diffie-Hellman Over COSE (EDHOC) is a proposed key exchange protocol primarily designed for OSCORE [3], and shares the usage of COSE and CBOR encoding with OSCORE. It can be used with standard X.509 certificates, or with more compact certificate formats. The security functionality of EDHOC is based on the SIGMA scheme, from which it follows that as long as the included components keep their security guarantees, the resulting protocol will provide the desired security services [19].
A successful EDHOC security context establishment will result in the parties agreeing on a Master Secret, a Master Salt, client and recipient IDs, and the crypto algorithms to use. With this information in place, Sender Key, Recipient Key and Common IV can be derived and saved. Once a security context is established, an endpoint is free to act both as server and client, using the same security context for both purposes [2].
## IV Threat model and assumptions
We consider scenarios where an attacker can, with some probability, get physical access to the node and probe the device permanent flash memory. We discuss both scenarios
where we assume that the non-permanent memory is sufficient for key storage and scenarios where a (small) tamper protected memory area exists, which can be used for key storage. For communication, the Dolev-Yao threat model is applicable. An attacker can eavesdrop any communication between the involved entities, and also modify and re-send any message. As a consequence protection for replay attacks are needed, together with authentication and confidentiality services to prevent unauthorized access to any secret content. To generate new keys and perform secure key exchange the devices must have access to a sufficiently strong random number generator. We assume that the standards we use as building blocks are not compromised, but can offer the claimed security guarantees when used together with the recommended crypto algorithm suits.
## V BLEND: design
### _Requirements_
The main requirement is to offer secure storage with low latency for data sending, while keeping the overall overhead low. To preserve the security guarantees offered by OSCORE, the protocol should be used with as few deviations as possible. Preferably, the receiving end of the communication should not need to take any steps outside of the regular protocol to receive and decrypt previously stored sensor data. In order to preserve the protocol guarantees, the initial key establishment needs to happen before packets can be precomputed and stored.
### _System building blocks_
An EDHOC implementation is needed for key establishment, but requires only standard functionality in terms of key export interfaces to create and retrieve the shared secrets used for the security context.
The OSCORE implementation needs to be augmented with handlers to enable BLEND to precompute packets and send them unmodified at a later point in time. Practically this means allowing retrieval of the byte buffer representing the serialized OSCORE packet and ensuring there are interfaces to control the sequence numbers.
A flash storage abstraction is useful to hide hardware specific details and offer a higher level API. We propose a simple file system like API which allows reading, writing and appending data to files, which are being written out to flash.
### _SecureStorage lifecycle and message flow_
Figure 2 illustrates the main events relevant to the secure storage operations. After the key establishment, both parties have established a security context, which allows them to act as both clients and servers.
The sensor can thereafter be deployed, and start sensing. Depending on the data generation rate and storage policy, a number of sensor readings might be compiled as the payload for one CoAP packet. The packet is encrypted as a ready to send OSCORE packet and stored onto flash.
When the communication link is ready, for instance in the form of a data mule, a trigger command is sent. The trigger message is a CoAP request, protected with the same OSCORE security context as has previously been established through EDHOC. Hence the correct decoding of the trigger message serves both to authenticate the data mule and to authorize it to access the sensor data. The command will cause the sensor device to start sending the stored packets. In sending the stored packets, the sensor device acts as a client. This allows the device to control the sequence numbers included in the packets, reflecting the sequence numbers of the stored packets. To prevent an attacker from stopping a data transfer simply by blocking the trigger message, we require the device to reply with a short no-data message in case there is no sensor data to send.
### _Storage overhead trade-offs_
The amount of data that the sensor device needs to store locally depends on both the sensor data generation rate and the frequency of data collection from the outside. For sensor devices deployed in low power radio networks, the least amount of overhead is achieved if payloads corresponding to full 802.15.4 frame sized packets are precomputed and stored. If the sensor data generation rate is sufficiently small, the node would need to temporarily store unencrypted sensor data until the amount corresponding to a full payload is gathered. If no temporary plaintext storage is considered acceptable, the device must create encrypted packets for each individual sensor reading.
### _Relation between the layers_
A detailed illustration of the relation between the layers is shown in Figure 3. In the following we explain the data that needs to be included and processed.
### _CoAP packet creation_
While the CoAP protocol is versatile, with a range of packet options, we are here interested in a meaningful subset needed to precompute sensor data packets. Table I shows the minimal plaintext data needed to create a CoAP packet, ready to be encrypted for secure storage. In italics are the fields that will be moved to the OSCORE packet. In bold are the fields that will be protected through encryption. An observation is that the length of the destination URI directly adds to the packet overhead, but unless otherwise required the empty root path can be used as a valid destination URI.

Fig. 2: BLEND overview. An initial key establishment is done before deployment and can be redone later given EDHOC support. Sensor data is encapsulated into precomputed packets and securely stored until a connection with a data mule, or any other secure endpoint, is available
### _OSCORE packet creation_
Given an existing security context and the CoAP packet information, BLEND can encrypt the CoAP payload together with the sensitive header fields, and calculate the correct OSCORE headers. The missing dynamic information is the sender sequence number. The sequence number is used as the basis for the partial initialization vector, or Partial IV in COSE terms, and the sender ID is used as the key ID ('PIV' and 'KID' in Figure 3). These two items, together with static COSE information on the algorithm used, form the additional authenticated data, AAD, used in encryption. The two items are also used for calculating the nonce used in encryption, and finally they are included in plaintext in the OSCORE packet header.
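To illustrate how the Partial IV and key ID feed into the encryption, the sketch below shows a nonce derivation of the kind used by OSCORE, where the zero-padded sender ID and Partial IV are combined with the Common IV. It follows our reading of RFC 8613 and assumes the 13-byte nonce of AES-CCM-16-64-128; treat it as an illustration rather than a substitute for the actual implementation.

```python
def oscore_nonce(common_iv: bytes, sender_id: bytes, partial_iv: int) -> bytes:
    """Derive the AEAD nonce from the security context and sequence number.

    Layout: [len(sender_id)] || sender_id left-padded to (nonce_len - 6) bytes ||
    Partial IV left-padded to 5 bytes, XORed byte-wise with the Common IV.
    """
    nonce_len = len(common_iv)                      # 13 for AES-CCM-16-64-128
    piv = partial_iv.to_bytes(5, "big")             # left-padded Partial IV
    sid = sender_id.rjust(nonce_len - 6, b"\x00")   # left-padded sender ID
    block = bytes([len(sender_id)]) + sid + piv
    return bytes(a ^ b for a, b in zip(block, common_iv))

# The same (Partial IV, key ID) pair also goes into the AAD and is carried in
# plaintext in the OSCORE header, so the receiver can re-derive the nonce.
```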
Table II shows the data present in the resulting OSCORE header. Starting from the original sensor data, the minimal total overhead is 21 bytes. For sequence numbers in the range 255-65535 an extra byte is needed, and so on. This flexible sizing is in contrast to the older DTLS standard (shown in Table III), where a fixed field of 6 bytes is allocated regardless of the currently needed size.
TABLE I: Plaintext data needed to prepare the CoAP packets used in BLEND

| Type | Size (bytes) | Example | Note |
| --- | --- | --- | --- |
| Version & type | 1 | '40' | ver. 1, confirmable |
| **Code** | 1 | '02' | POST |
| _Message ID_ | 2 | '4A 84' | <any id> |
| _Token_ | 1 | '84' | |
| **URI path & len** | 1 + path len | 'b0' | |
| **Payload marker** | 1 | 'FF' | |
| **Payload** | 6-56 | <binary data> | |

TABLE II: Data contained in the OSCORE packets

| Type | Size (bytes) | Example | Note |
| --- | --- | --- | --- |
| Version & type | 1 | '40' | ver. 1, confirmable |
| Code | 1 | '02' | POST |
| Message ID | 2 | '4A 84' | from CoAP |
| Token | 1 | '84' | from CoAP |
| OSCORE flag | 2 | '93 09' | |
| Partial IV | 1- | '13' | = sequence no |
| Key ID | 1 | '42' | = sender id |
| Payload marker | 1 | 'FF' | |
| Encrypted payload | 9-59 | <encrypted CoAP> | |
| MIC | 8 | | |

Packet length \(\geq\) 21 + original sensor data payload length.

TABLE III: Previous state of the art, data contained in a DTLS record packet

| Type | Size (bytes) | Example |
| --- | --- | --- |
| Content type | 1 | '17' = application data |
| Version | 2 | 'FEFD' = DTLS 1.2 |
| Epoch | 2 | '0001' |
| Sequence number | 6 | '0000 0000 0001' |
| Length | 2 | |
| Initialization vector | 8 | |
| Encrypted payload | 6-51 | <encrypted raw sensor data> |
| MIC | 8 | |

Packet length \(\geq\) 29 + original sensor data payload length.
Fig. 3: The relation between the layers when using BLEND
### _Storage of precomputed packets_
For systems with fast flash memory operations, or where energy is of less concern, the prepared OSCORE packet can be saved directly. Where flash operations are slow or energy efficiency is paramount, the OSCORE packet header information can be stored once for a whole series of sensor data packets. Since all the dynamic fields; the message ID, token and sequence number can be assigned in a predictable increasing manner, storing and later retrieving the starting points for the first packet header is sufficient to recalculate the following packet headers. It is this optimized procedure which is shown in figure 3.
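The optimized layout can be sketched as follows: the dynamic starting values are written once per file, and the header for the i-th stored packet is recalculated by offsetting them. The record format and field names here are illustrative choices, not the on-flash layout used by our implementation.

```python
import struct

# Starting values stored once per file: message ID, token, sequence number.
HEADER_BASE = struct.Struct(">HBI")

def store_series(first_msg_id, first_token, first_seq, encrypted_parts, flash_file):
    """Write the header starting points once, then only the encrypted parts."""
    flash_file.write(HEADER_BASE.pack(first_msg_id, first_token, first_seq))
    for part in encrypted_parts:
        flash_file.write(struct.pack(">B", len(part)) + part)

def recalc_header(base_bytes, index):
    """Recompute the dynamic header fields for the index-th stored packet."""
    msg_id, token, seq = HEADER_BASE.unpack(base_bytes)
    return ((msg_id + index) & 0xFFFF, (token + index) & 0xFF, seq + index)
```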
### _UDP alternative_

The UDP headers could also be precomputed, and the entire UDP data buffer could be stored for minimal processing at the time of sending. Precomputing UDP packets requires the source and destination ports to be known beforehand. A more important drawback is the increased storage overhead, since the UDP headers add another 8 bytes to each precomputed packet, which needs extra time for storage and retrieval.
### _Key management_
BLEND relies on devices being able to establish new security contexts upon need. To create a new context with the same endpoint, OSCORE allows existing master secret data to be reused, making the context derivation computationally cheap. This can be used to keep the context sequence number bounded by a fixed length. For key establishment we propose EDHOC to be used. EDHOC offers relatively low overhead while supporting PKI solutions. Low overhead is achieved through using certificate reference based key establishment. This requires relevant certificates to have been securely distributed at an earlier point in time. Certificate distribution is out of scope for this work, but in contrast to solutions based on shared secrets, certificates are meant to be openly shared and could be distributed from any trusted endpoint.
#### V-H1 Planned secure context updates
If the data collection endpoint is replaced, the sensor device needs to establish a new security context with the new endpoint. For a planned update, a notice can be communicated ahead of time. Depending on the deployment scenario, this might simply be a message to initiate a full new key establishment, which allows the sensor to immediately start using the new context for sensor data storage. For extreme deployments with very limited connectivity and data mules, it might be a notice sent during the last data collection round in which the old data mule is active. Unless the key establishment can be relayed at that time, the sensor device has to temporarily resort to local storage encryption until a new security context is in place.
#### V-H2 Unplanned secure context losses
For cases when the data connection endpoint loses the security context, or is lost altogether, the following round of data collection with a not previously used endpoint requires re-keying. Any data that has been stored locally using the old security context needs to be decrypted by the IoT device and encrypted again.
IoT devices with access to tamper-resistant non-volatile memory that can be used for key storage can store the shared secret data established through the key exchange, such that they can recover the security context in case of temporary power losses or restarts.
An IoT device without a secure permanent key storage needs to keep the storage of security-sensitive data to a minimum. Hence there is a risk of losing vital parts of the security context in case of power losses and unplanned restarts. In the case of context losses, the previously stored precomputed sensor data packets become opaque to the device. The stored data can still be sent to the endpoint, which has access to the security context and is able to decipher the encrypted packets. Depending on the deployment, the device might report its situation and request a new authentication through a new key exchange before sending the packets from the old security context. In this case the receiving endpoint must keep both contexts in parallel. Alternatively, the setup can allow the IoT device to interpret an incoming message it cannot decipher as the expected trigger message, if it is recovering from a security context loss.
#### V-H3 Proximity to endpoint
Since EDHOC offers true end-to-end protection it can be used to establish a security context with any reachable remote endpoint, even behind proxies.
### _Re-sending and multiple receivers_
The usage of precomputed sensor data packets does not affect resending that happens on lower layers. Lower-layer resending will depend on the deployment scenario and radio configuration. As long as a packet has not been received by the other end, the receive window used for replay detection by the recipient remains unchanged. If on the other hand the data has been received, the same packet can no longer be resent, as the encryption is affected by the sequence number. For scenarios where either the same receiver wants the same data item more than once, or where multiple receivers are interested in the same data item, the keying scheme must be extended. To handle multiple receivers there are proposals for OSCORE group communication, which could be part of an extended secure storage solution [20].
## VI Implementation
We have implemented BLEND in C as a module for the Contiki NG embedded OS [21], that can be adapted for other available operating systems such as Zephyr [22]. The BLEND implementation contains the needed OSCORE libraries, including COSE and CBOR encoding and decoding.
For the basic crypto operations we have reused functionality from the crypto libraries available in Contiki NG, which offers partial crypto operation hardware acceleration for selected target platforms.
_Secure communication._ The secure communication part of BLEND is built using the OSCORE libraries available in an experimental version of Contiki NG, plus our EDHOC implementation for key establishment. To allow reusability of the available code for confirmable CoAP transactions we include a CoAP token in the packets.
_Secure storage._ The storage part is built on top of the Coffee file system for Contiki, offering a file abstraction for interacting with the underlying flash memory. When the optimized packet storage method is used, the dynamic header information needed to recalculate the full headers is recorded at the start of new files, followed by the encrypted part of the packets. The specifics of flash memory block sizes and the optimal amount of data to write per file depend on the target hardware.
_Key management and crypto algorithms._ We have implemented EDHOC, which is used to establish shared secrets and, based on them, derive security contexts. Both the EDHOC and the OSCORE standards are flexible in terms of supporting multiple cipher suites. Our implementation is focused on the mandatory SHA-256 for HKDF, the HMAC-based Extract-and-Expand Key Derivation Function, and the most commonly used symmetric crypto, AES-CCM-16-64-128. While AES-CCM is a block cipher mode, it does not require padding of the resulting ciphertext. As a result the length of the ciphertext is always the length of the plaintext plus an 8-byte MIC, message integrity code.
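As a quick illustration of the fixed ciphertext expansion, the snippet below uses the AES-CCM primitive from the Python `cryptography` package (not the Contiki NG crypto library used by BLEND) with an 8-byte tag; the ciphertext length is exactly the plaintext length plus the 8-byte MIC, with no padding.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
aead = AESCCM(key, tag_length=8)      # AES-CCM with an 8-byte MIC
nonce = os.urandom(13)                # 13-byte nonce, as in AES-CCM-16-64-128
plaintext = b"\x01" * 48              # e.g. a 48-byte sensor payload
ciphertext = aead.encrypt(nonce, plaintext, b"illustrative-aad")
assert len(ciphertext) == len(plaintext) + 8
```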
## VII Evaluation
We use a quantitative experimental research methodology where we evaluate the impact of one particular variable while keeping other parts of the system setup static, to correctly attribute the performance variations. In the following we present the relevant micro benchmarks illustrating the system performance and overhead.
### _Experimental setup_
For the hardware experiments we use Zolertia Firefly nodes, a platform using TI CC2538 ARM Cortex M3 microcontrollers [23]. The nodes are equipped with 32 KB RAM, 512 KB flash, a 2.4GHz 802.15.4 radio for communication and support for hardware acceleration of crypto operations.
### _Storage overhead_
The packet storage overhead for different sensor data payloads is shown in figure 4. Storing a ready-to-send OSCORE packet induces an overhead of 21 bytes, using the configuration presented in Tables I and II. If instead a complete UDP packet is stored, the per-packet overhead is 29 bytes. If only the starting points for the dynamic header data counters are stored once, the overhead quickly shrinks to close to 3 extra CoAP bytes, plus the 8-byte MIC. For our system tests we use file append functionality for storing packets, such that the storage cost of the 6 bytes needed for header recalculations is amortized over 25 precomputed packets.
Depending on the initial sensor data size the resulting storage overhead ranges from 20% for the 56 byte sensor data packets using optimized storage, all the way up to close to 600% for 6 byte sensor data while storing full UDP packets.
In the following experiments, the optimized version where the needed header data is stored once is used.
### _Latency to get data ready for sending_
When the device gets a request to report recorded sensor readings, if local security is used, it needs to read the data from flash, decrypt it with the local key and prepare it for sending. If BLEND is used, the operations needed are reading from flash and, optionally, packet transaction allocation. The two cases are illustrated in figure 1. The total time needed is shown in figure 5(a) for when hardware acceleration is available, and in figure 5(b) without hardware acceleration.
With hardware acceleration, BLEND performs around 0.5 \(ms\) faster compared with the local security solution. The resulting remaining latency when retrieving a stored packet, recalculating header information and allocating a CoAP transaction is only between 65 \(\mu s\) and 110 \(\mu s\).
Without hardware acceleration, with all cryptographic operations done in software, the latency savings are between 0.75 \(ms\) and 1.36 \(ms\) per packet compared with the local security solution.
### _Total energy usage_
Using the fine grained timer system in Contiki NG we measure the time spent for relevant system operations. This makes it possible to calculate the consumption based on current and voltage levels from the CC2538 hardware datasheets [23]. We use the specified maximum peak current for writing, which means we present the absolute upper bound of energy usage for the flash operations.
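The calculation itself is a simple product of time, current and voltage; a minimal sketch is shown below. The numeric current and voltage values are placeholders for illustration, not the actual CC2538 datasheet figures used in our evaluation.

```python
def energy_mj(duration_ms, current_ma, voltage_v=3.0):
    """Energy in millijoules: E = U * I * t (placeholder values, not datasheet figures)."""
    return voltage_v * current_ma * duration_ms / 1000.0

# Hypothetical per-packet breakdown with placeholder numbers:
flash_write = energy_mj(duration_ms=1.2, current_ma=20.0)   # upper bound, peak write current
crypto_ops = energy_mj(duration_ms=0.4, current_ma=13.0)
total_per_packet = flash_write + crypto_ops
```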
The total energy usage is shown in figures 6(a) and 6(b), with and without the usage of crypto hardware acceleration. Using the crypto hardware acceleration, the differences are small. Due to the relatively slow storage write operations, even a small increase in storage needs can offset the crypto savings. Without crypto hardware acceleration, BLEND saves energy for all sensor data sizes.
The conclusion is that BLEND performs at least on par with the local storage solution in terms of total energy usage, with a clear advantage for all cases where the crypto operations constitute a larger proportion of the total work done.
### _Key establishment and comparison with DTLS_
#### VII-E1 Key establishment
Using the reference-based key establishment option in EDHOC we are able to perform a key establishment using only 284 bytes of application-layer data. With DTLS 1.2 and the ECDHE-ECDSA cipher suite, the corresponding operations use 1.65 kB. This is largely due to lengthy ASN.1 encodings and the need to send full certificates in the handshake. The numbers are based on IoT-profiled certificates of only 315 bytes for both parties in the exchange.

Fig. 4: Storage size relative to the original sensor data size in bytes, for different sensor payloads and storage options
#### VII-E2 Packet encryption and overhead
The AES crypto used for OSCORE corresponds to what is commonly used also for DTLS 1.2 in IoT devices. This means the overhead from the crypto operations is directly comparable. An obvious benefit of switching to an OSCORE-based solution is the reduced packet overhead. Using the DTLS AES128-CCM8 cipher produces the packet overhead figures given in Table III, to be compared with the numbers for OSCORE in Table II. An OSCORE solution using CoAP saves eight bytes even compared with a DTLS session without CoAP, used to transport raw UDP data. If instead DTLS is also used to provide a CoAPs session, encryption will be performed on the whole CoAP-layer packet, which reduces the maximum usable sensor data payload by six more bytes, down to 45 bytes.
### _Memory requirements_
The BLEND implementation requires 1.5 kB of ROM and a little less than 0.5 kB of RAM. Figure 7 shows the comparison with the related components in the configuration used. Compared with the size of the total Contiki NG firmware used for the evaluation, around 60 kB of ROM and 13 kB of pre-allocated RAM, BLEND contributes only 2.5% of the ROM and 3.8% of the RAM.
The numbers shown are when there is memory allocated for two parallel security contexts. Each additional security context adds 143 bytes of RAM.
## VIII Security Considerations
When a protocol designed for securing communication is reused to also protect data at rest, it is important to validate that the protocol assumptions are still applicable. This includes the amount of data that can be protected. The OSCORE protocol is designed to allow theoretical maximum sequence numbers up to \(2^{40}\)-1, but for actual implementations the number will be lower. The implementation used in our evaluation allows sequence numbers up to 4.3 billion. Using 48-byte sensor data packets, to ensure no fragmentation, this corresponds to covering more than 200 GB of data without resetting the sequence counter. This is not a limiting factor for the resource-constrained IoT scenarios considered.

Fig. 5: Latency for packet preparation, with local security and while using BLEND

Fig. 6: Total energy needed for sensor data operations, with local security and while using BLEND

Fig. 7: Memory usage by BLEND and related components
The EDHOC protocol relies heavily on the availability of a secure random number generator. For devices with less strong random generators there are proposals on how to incorporate more random material to improve generator quality [24].
If the same master secret data is used to generate multiple secure sessions, forward secrecy is no longer guaranteed [2]. If the long-term secret is leaked, data from previous sessions risks being exposed. This means the multiple-session feature should only be used when the risk that previous communication has been eavesdropped is negligible, or when the old data is no longer considered secret.
Our only proposed deviation from existing protocol compliance is the suggestion that an IoT device that has lost its secure session could be allowed to send its stored encrypted data without performing mutual authentication and establishing a new secure session. This could enable an attacker to trick the device into sending data, which the attacker will, however, not be able to decipher as long as the protocol is not otherwise compromised. To prevent the data from getting lost, the device should keep the data until it has been properly acknowledged through a new security context.
The concerns regarding secure key storage are applicable to any local secure storage solution as well. The local storage needs either a long-term key stored in persistent memory, or a secure key management protocol of its own.
## IX Conclusion
We have shown that the new application-layer security standard, OSCORE, can be integrated with an IoT storage system, which makes it possible to provide a secure data storage service without compromising any communication security properties or standard compliance. Our solution, BLEND, drastically reduces the latency for sending stored IoT data compared with a local secure storage solution. When combined with EDHOC for performing secure key exchange and establishing the needed security context, BLEND enables a resource-efficient way to achieve a complete secure storage and communication solution for IoT.
## Acknowledgment
This research is partially funded by the Swedish SSF Institute PhD grant and partly by the EU H2020 ARCADIAN-IoT (Grant ID. 101020259), the ITEA3 Smart, Attack-resistant IoT Networks (Project ID: P123800021) and the H2020 CONCORDIA (Grant ID: 830927) projects.
|
2308.00856
|
Differential Privacy for Adaptive Weight Aggregation in Federated Tumor
Segmentation
|
Federated Learning (FL) is a distributed machine learning approach that
safeguards privacy by creating an impartial global model while respecting the
privacy of individual client data. However, the conventional FL method can
introduce security risks when dealing with diverse client data, potentially
compromising privacy and data integrity. To address these challenges, we
present a differential privacy (DP) federated deep learning framework in
medical image segmentation. In this paper, we extend our similarity weight
aggregation (SimAgg) method to DP-SimAgg algorithm, a differentially private
similarity-weighted aggregation algorithm for brain tumor segmentation in
multi-modal magnetic resonance imaging (MRI). Our DP-SimAgg method not only
enhances model segmentation capabilities but also provides an additional layer
of privacy preservation. Extensive benchmarking and evaluation of our
framework, with computational performance as a key consideration, demonstrate
that DP-SimAgg enables accurate and robust brain tumor segmentation while
minimizing communication costs during model training. This advancement is
crucial for preserving the privacy of medical image data and safeguarding
sensitive information. In conclusion, adding a differential privacy layer in
the global weight aggregation phase of the federated brain tumor segmentation
provides a promising solution to privacy concerns without compromising
segmentation model efficacy. By leveraging DP, we ensure the protection of
client data against adversarial attacks and malicious participants.
|
Muhammad Irfan Khan, Esa Alhoniemi, Elina Kontio, Suleiman A. Khan, Mojtaba Jafaritadi
|
2023-08-01T21:59:22Z
|
http://arxiv.org/abs/2308.00856v1
|
# Differential Privacy for Adaptive Weight Aggregation in Federated Tumor Segmentation
###### Abstract
Federated Learning (FL) is a distributed machine learning approach that safeguards privacy by creating an impartial global model while respecting the privacy of individual client data. However, the conventional FL method can introduce security risks when dealing with diverse client data, potentially compromising privacy and data integrity. To address these challenges, we present a differential privacy (DP) federated deep learning framework in medical image segmentation. In this paper, we extend our similarity weight aggregation (SimAgg) method to DP-SimAgg algorithm, a differentially private similarity-weighted aggregation algorithm for brain tumor segmentation in multi-modal magnetic resonance imaging (MRI). Our DP-SimAgg method not only enhances model segmentation capabilities but also provides an additional layer of privacy preservation. Extensive benchmarking and evaluation of our framework, with computational performance as a key consideration, demonstrate that DP-SimAgg enables accurate and robust brain tumor segmentation while minimizing communication costs during model training. This advancement is crucial for preserving the privacy of medical image data and safeguarding sensitive information. In conclusion, adding a differential privacy layer in the global weight aggregation phase of the federated brain tumor segmentation provides a promising solution to privacy concerns without compromising segmentation model efficacy. By leveraging DP, we ensure the protection of client data against adversarial attacks and malicious participants.
_Clinical relevance--_ This approach fosters the development of secure and robust AI technologies, driving advancements in clinical research and healthcare while respecting the sensitivity of data. Our framework incorporates cutting-edge central differential privacy techniques, enhancing privacy guarantees while maintaining segmentation model performance.
## I Introduction
Medical information is highly sensitive from a data governance point of view, and hence medical data sharing is strongly limited or even prohibited [1]. On the other hand, in many medical applications, machine learning (ML) models could significantly support and speed up the diagnosis process of medical data. A specific application area that can especially benefit from ML models is the segmentation of medical images, which typically contain a lot of information and many details that need to be carefully considered [2]. Today, medical images are routinely used to support various clinical diagnoses but are typically analyzed manually or in a computer-assisted manner by radiologists via a labor-intensive process. The brain tumor segmentation challenge [3] highlights an example where AI/ML-assisted detection and precise identification of brain tumors like glioblastoma in the early stage could enhance the process and affect patients' care tremendously by even improving the prognosis.
An absolute requirement for training an ML segmentation model is a large amount of high quality data, which in this context means a large set of images annotated by an expert. Unfortunately, often the number of such images within one organization is limited, and for rare diseases extremely limited to construct an accurate model [4].
In this situation, Federated Learning (FL) comes into play; FL has become momentous and gained traction as a collaborative learning paradigm to largely facilitate model training in a distributed construct without actual data transfer from local data storages to a central data trove [5]. FL enables institutional cross-cooperation for robust and generalizable machine learning models for a swarm of application domains. Basically, a server and a set of cooperating computing resources, with local data, constitute an FL configuration [6]. Intuitively, in a pristine FL process, task planning is done at the server and bulk processing takes place at the heterogeneous collaborators. Local model training is performed at the collaborators and the learned model parameters are dashed to the central server. The server aggregates the learned parameters from the individual collaborators via mutexes or semaphores [7] and shares the collectively learned aggregated parameters back to the collaborators via synchronization primitives [8]. The strategic advantage of the FL architecture is a collegiate ML model training with cybersecurity: data confidentiality does not breach in comparison to submitting raw data over the internet. Moreover, FL allows for coalescing compute power which leads to collective resource optimization. However, vanilla FL implementation is insufficient as a privacy preservation concept [9] because side information exploitation via parameter piracy and parameter bootlegging by adversarial linkage attacks on parameters acquired from collaborators in FL settings can be decompiled and reverse engineered to decode and re-identify the patients [10].
The concept of differential privacy (DP) offers a solid foundation for safeguarding data privacy by mathematical and logical reckoning [11]. With noise injection, the exact contributions of individual collaborator models and patients therein become impossible to quantify. Injected noise can be randomly drawn from Gaussian or Laplace distribution [12] and it can be added to original data records (local DP), or
at the server to the learned parameters (global DP) [13, 14]. Generally speaking, DP is the process of retaining the data set's overall statistical properties while removing personally identifiable information recognized as a digital fingerprint [15]. Hence, using advanced compositional bounds, DP coupled with FL provides a robust shield and an impregnable defense against model parameter stealing, data feature reconstruction, model inversion hacks, membership inference attacks, and back-door attacks [16, 17]. However, there are various challenges regarding DP implementation. The precise amount of added noise has the potential to contaminate the data and poison model training, leading to a decline in prediction quality. The magnitude of the noise disruption required in the DP process is parametrized by the sensitivity, and the estimation of a proper sensitivity for a particular data set is subject to experimentation. Most critically, it is uncertain how exactly to incorporate DP with imaging data [13, 18].
This paper presents the DP-SimAgg algorithm, a differentially private similarity-weighted aggregation algorithm for brain tumor segmentation in multi-modal MRI imaging of Glioblastoma. Our investigation involves a comprehensive audit of various DP privacy budget options, aiming to evaluate their effectiveness and implications. By exploring and analyzing these options, we shed light on the intricate interplay between privacy and the algorithmic performance of DP-SimAgg.
## II Methods
### _Dataset_
The training data used in the study was based on partitioning 2 (with 33 collaborators) of the Federated Tumor Segmentation (FeTS) 2022 challenge and it included altogether 1251 scanned subjects with gliomas. All the mpMRI scans, provided as NIFTI files (.nii.gz), had four 240x240x155 structural MRI images including native (T1), post-contrast T1-weighted (T1Gd), T2-weighted (T2), and T2 FLuid Attenuated Inversion Recovery (FLAIR) volumes.
Annotations comprise the pathologically confirmed segmentation labels with a similar volume size of 240x240x155 including the GD-enhancing tumor (ET - label 4), the peritumoral edematous/invaded tissue (ED - label 2), and the necrotic tumor core (NCR - label 1). All the provided MRI scans were collected from multiple institutions and certain preprocessing steps such as rigid registration, brain extraction, alignment, 1x1x1 mm resolution re-sampling, and skull stripping were applied as described in [19, 20, 21].
We deployed Intel's Federated Learning (OpenFL) [22] framework for training brain tumor segmentation model -- an encoder-decoder U-shape type of convolutional neural network (see e.g. [23]) provided by FeTS2022 challenge -- using the data-private collaborative learning paradigm of FL. OpenFL considers two main components: 1) the collaborator which uses a local data set to train the global model and 2) the aggregator which receives model updates from each collaborator and fuses them to form the global model. At each FL round, we used 20 percent of the total available collaborators hosting the data, but the combination of collaborators at each FL round was unique. Moreover, the experiments were performed on a cluster workstation with NVIDIA TITAN V100 GPU and 350 GB memory.
### _Differential Privacy and Federated Learning_
Let us first revisit SimAgg algorithm [24], where at FL round \(r\), the parameters \(\rho C^{r}\) of the participating collaborators \(C^{r}\) are collected at the server. The average of these parameters is calculated as:
\[\hat{\rho}=\frac{1}{|C^{r}|}\Sigma_{i\in C^{r}}\rho_{i}. \tag{1}\]
Then we calculate the similarity of each collaborator \(c\in C^{r}\) with the average parameter values from all collaborators using
\[sim_{c}=\frac{\Sigma_{i\in C^{r}}|\rho_{i}-\hat{\rho}|}{|\rho_{c}-\hat{\rho}| +\epsilon}, \tag{2}\]
where \(\epsilon=1e-5\) (small positive constant) and normalize to obtain similarity weights as follows:
\[u_{c}=\frac{sim_{c}}{\Sigma_{i\in C^{r}}sim_{i}}. \tag{3}\]
We then compute a second weighting factor that favors collaborators with larger samples sizes:
\[\nu_{c}=\frac{N_{c}}{\Sigma_{i\in C^{r}}N_{i}} \tag{4}\]
where \(N_{c}\) is the number of examples in collaborator \(c\).
Using the weights obtained from Eqs. 3 and 4, the final aggregation weights are computed as:

\[w_{c}=\frac{u_{c}+\nu_{c}}{\Sigma_{i\in C^{r}}(u_{i}+\nu_{i})}, \tag{5}\]

and the similarity-weighted parameters \(\rho^{m}\) are finally aggregated as follows:

\[\rho^{m}=\Sigma_{i\in C^{r}}(w_{i}\cdot\rho_{i}). \tag{6}\]
The normalized aggregated parameters \(\rho^{m}\) are then dispatched to the next set of collaborators in the successive federation rounds.
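For concreteness, a minimal NumPy sketch of the aggregation in Eqs. 1-6 is given below, with flattened parameter vectors and one row per participating collaborator. The absolute difference in Eq. 2 is interpreted here as a summed absolute deviation over the parameter vector; variable names are ours, and the communication and model-update machinery of OpenFL is omitted.

```python
import numpy as np

def simagg(params, n_samples, eps=1e-5):
    """Similarity-weighted aggregation of collaborator parameters.

    params: array (n_collaborators, n_parameters), one row per collaborator.
    n_samples: number of training examples held by each collaborator.
    """
    params = np.asarray(params, dtype=float)
    n_samples = np.asarray(n_samples, dtype=float)

    p_hat = params.mean(axis=0)                      # Eq. 1: average parameters
    dist = np.abs(params - p_hat).sum(axis=1)        # deviation from the average
    sim = dist.sum() / (dist + eps)                  # Eq. 2: inverse-distance similarity
    u = sim / sim.sum()                              # Eq. 3: similarity weights
    v = n_samples / n_samples.sum()                  # Eq. 4: sample-size weights
    w = (u + v) / (u + v).sum()                      # Eq. 5: combined weights
    return w @ params                                # Eq. 6: weighted aggregation
```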
In the proposed setting, artificial noise is added to the collaborators' model parameters at the server side during aggregation. At the server, the model aggregates are perturbed by adding Gaussian noise to each participating collaborator's model parameters before the actual aggregation is performed. In this global differential privacy setup, the privacy guarantees are adaptable and robust, as the privacy budget and sensitivity can be tuned to the data, which is practical for privacy-preserving brain MRI lesion segmentation. It is also notable that the collaborators remain simple, since they do not need to implement any privacy preservation mechanism of their own; on the other hand, accurate and possibly sensitive model information cannot leak from one collaborator to another.
Our DP-SimAgg algorithm is formally presented in Algorithm 1.
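A minimal sketch of the server-side perturbation is given below: each collaborator's parameters are clipped to bound the sensitivity and then perturbed with Gaussian noise calibrated by the standard (ε, δ) Gaussian mechanism. The clipping bound used as the sensitivity here is an illustrative placeholder, not the value used in our experiments.

```python
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    """Noise scale of the classical Gaussian mechanism for (epsilon, delta)-DP."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def perturb_collaborator_params(params, epsilon=1.0, delta=1e-5, clip_norm=1.0, rng=None):
    """Clip each collaborator's parameter vector and add calibrated Gaussian noise.

    params: array (n_collaborators, n_parameters); the noisy copies are then
    passed to the similarity-weighted aggregation (Eqs. 1-6).
    """
    rng = rng or np.random.default_rng()
    params = np.asarray(params, dtype=float)
    norms = np.linalg.norm(params, axis=1, keepdims=True)
    clipped = params * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sigma = gaussian_sigma(epsilon, delta, clip_norm)
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)
```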
### _Evaluation of the privacy preservation vs. accuracy_
Epsilon (\(\mathbf{\epsilon}\)), the privacy budget parameter, can be fine-tuned to control the noise-accuracy trade-off. Typically, strict privacy requirements call for an epsilon value of less than one. However, it is not unusual to find epsilon values up to 10 being employed in some applications.
We varied epsilon from a loose (10) to a strict (0.1) privacy budget. The value of epsilon is inversely proportional to the degree of privacy preservation and directly proportional to accuracy. Hence, with a small epsilon, the degree of privacy preservation increases while data accuracy decreases, and vice versa. The delta parameter is the probability of information accidentally being leaked, and it was chosen to be small (1e-05).
## III Results
We assess the performance of an image segmentation 3D U-Net model trained on FeTS 2022 data in federated settings using our methodology, DP-SimAgg. For model comparison, training was performed for 20 federation rounds for SimAgg and for DP-SimAgg with different values of the epsilon privacy parameter. The total simulation training time is roughly 7 hours per FL round for both the DP-inclusive and DP-exclusive SimAgg methods. Adding DP is thus computationally inexpensive and does not affect the computational load in practice.
In Fig. 1 the convergence of the training process using different metrics is shown. The convergence of the algorithm appears to be rapid since the final performance level is roughly achieved already after 10 rounds.
The performance of the SimAgg and DP-SimAgg models (with different epsilon values) on external and previously unseen validation data with 219 scans is summarized in Table I and visualized in Fig. 2. The model performance does not appear to be significantly affected by the value of \(\mathbf{\epsilon}\) in DP-SimAgg.
## IV Discussion
This paper introduces a novel and generic framework for distributed differentially private (DP) learning within the context of federated learning. Unlike existing frameworks, our method is exclusively server-based, providing a unique DP solution. Through empirical evaluation, we demonstrate that our approach achieves comparable results to non-private models, highlighting its effectiveness in maintaining privacy while preserving model performance.
In addition to privacy preservation, our framework leverages the OpenFL platform, which ensures secure communication and trust among participating entities in the federated learning system. By utilizing certification-based hand-shake protocols between the aggregator and collaborators, our approach aligns with the mainstream practice of authorized membership, enhancing the overall system's security.
Furthermore, our server-based methodology is platform agnostic, allowing for its adaptation and deployment in various federated learning architectures. This versatility and effectiveness make our approach promising not only in the medical field but also in other domains where privacy preservation is of utmost importance.

Fig. 1: Training convergence of the DP-SimAgg algorithm for different values of \(\mathbf{\epsilon}\) (and also without DP, denoted by "No DP"), using different metrics and for different tumor regions (labels 0, 1, 2, 4).
## V Conclusions
Our framework provides a robust solution for distributed differentially private learning in federated settings. By addressing privacy concerns while maintaining model performance, our approach contributes to overcoming obstacles in the wider adoption of privacy preservation methods. It holds significant potential to empower researchers and practitioners in leveraging privacy-enhancing techniques for improved data privacy and security in various domains.
|
2305.00672
|
Perpendicular magnetic anisotropy of an ultrathin Fe layer grown on
NiO(001)
|
The magnetic anisotropy and magnetic interactions at the interface between Fe
and NiO(001) were investigated. Depending on the growth conditions of the
NiO(001) layers and the post-annealing temperature, the preferential
magnetization direction of the ultrathin Fe layer grown on a NiO(001) layer
changed from in-plane to a direction perpendicular to the film plane. The
lattice constant of the NiO(001) layers parallel to the growth direction
increased with O$_2$ flow rate, while that parallel to the in-plane were locked
onto the MgO(001) substrate regardless of the growth conditions of the NiO
layers. Moreover, perpendicular magnetization was observed only when the NiO
layer was grown with O$_2$ flow rates higher than 2.0 sccm corresponding to
oxygen-rich NiO. X-ray magnetic circular dichroism measurements revealed an
enhancement in anisotropic orbital magnetic moments similar to the origin of
perpendicular magnetic anisotropy at the Fe/MgO(001) interface. The interfacial
magnetic anisotropy energies were 0.93 and 1.02 mJ/m$^2$ at room temperature
and at 100 K, respectively, indicating less temperature dependence. In
contrast, the coercivity $H_c$ exhibited a significant temperature dependence.
Although no signature of exchange bias or unidirectional loop shift was
observed, $H_c$ was strongly dependent on the NiO layer thickness, indicating
that the exchange interaction at the interface between the ferromagnetic and
antiferromagnetic layers was not negligible, despite the NiO(001) being a
spin-compensated surface.
|
Soki Kobayashi, Hiroki Koizumi, Hideto Yanagihara, Jun Okabayashi, Takahiro Kondo, Takahide Kubota, Koki Takanashi, Yoshiaki Sonobe
|
2023-05-01T06:06:30Z
|
http://arxiv.org/abs/2305.00672v1
|
# Perpendicular magnetic anisotropy of an ultrathin Fe layer grown on NiO(001)
###### Abstract
The magnetic anisotropy and magnetic interactions at the interface between Fe and NiO(001) were investigated. Depending on the growth conditions of the NiO(001) layers and the post-annealing temperature, the preferential magnetization direction of the ultrathin Fe layer grown on a NiO(001) layer changed from in-plane to a direction perpendicular to the film plane. The lattice constant of the NiO(001) layers parallel to the growth direction increased with \(O_{2}\) flow rate, while that parallel to the in-plane were locked onto the MgO(001) substrate regardless of the growth conditions of the NiO layers. Moreover, perpendicular magnetization was observed only when the NiO layer was grown with \(O_{2}\) flow rates higher than 2.0 sccm corresponding to oxygen-rich NiO. X-ray magnetic circular dichroism measurements revealed an enhancement in anisotropic orbital magnetic moments similar to the origin of perpendicular magnetic anisotropy at the Fe/MgO(001) interface. The interfacial magnetic anisotropy energies were 0.93 and 1.02 mJ/m\({}^{2}\) at room temperature and at 100 K, respectively, indicating less temperature dependence. In contrast, the coercivity \(H_{c}\) exhibited a significant temperature dependence. Although no signature of exchange bias or unidirectional loop shift was observed, \(H_{c}\) was strongly dependent on the NiO layer thickness, indicating that the exchange interaction at the interface between the ferromagnetic and antiferromagnetic layers was not negligible, despite the NiO(001) being a spin-compensated surface.
Present address:Research Center for Magnetic and Spintronic Materials, National Institute for Materials Science (NIMS), Tsukuba 305-0047, Japan
## I Introduction
Magnetic thin films with perpendicular magnetic anisotropy (PMA) are key components of spintronic devices to achieve high thermal stability and low switching current for magnetization reversal, which are crucial for realizing the high density and the low switching power magneto-resistive random-access memory[1; 2]. Perpendicular magnetization films have been realized in various systems, such as magnetic compounds/alloys with relatively high uniaxial magneto-crystalline anisotropy[3; 4; 5; 6; 7] in the form of thin films and magnetic multilayers[8; 9; 10]. The origin of PMA can be divided into two mechanisms: a bulk effect, such as magneto-crystalline anisotropy[11; 12], and an interfacial effect[13]. Both magneto-crystalline anisotropy and interfacial magnetic anisotropy originate from spin-orbit interaction (SOI) associated with a crystal symmetry lowering. The interfacial magnetic anisotropy is particularly useful for controlling the preferential directions of the magnetic layers in spintronic devices owing to the stacking structures of various thin films.
Since the discovery of interfacial PMA in the bilayer system of CoFeB/MgO[14], followed by the excellent demonstration of magnetic tunneling junctions (MTJs)[15], extensive research has been conducted on the enhancement of the PMA and voltage-controlled
magnetic anisotropy (VCMA) in Fe/MgO[16; 17] and related materials[18; 19; 20; 21]. For a Fe/MgO system, _ab initio_ calculations have shown that the interfacial PMA originates from the hybridization between Fe-3\(d_{z^{2}}\) and O-2\(p_{z}\) because of SOI [22; 10; 23]. Through X-ray magnetic circular dichroism (XMCD) measurements, Okabayashi _et al._ showed that the origin of PMA at the interface can be attributed to the enhancement in anisotropic orbital angular momentum (OAM) of Fe induced by SOI[24; 25]. Therefore, if a similar orbital hybridization is realized, other PMA systems may be observed at the interface between Fe and certain oxides, particularly isostructural oxides, such as MgO.
NiO is a typical antiferromagnetic material with a simple rock-salt structure and a Neel temperature of 523 K, which is sufficiently higher than room temperature. The lattice constant is close to those of nonmagnetic materials, such as Ag and MgO; therefore, NiO has been a typical antiferromagnetic compound for studying antiferromagnetic spintronics[26; 27; 28; 29]. Koziol-Rachwal _et al._ recently found that the preferred magnetization direction of the Fe layer of Fe/NiO/MgO(001) is in-plane and that the Neel vector of NiO changes from out-of-plane to in-plane if a Cr layer is inserted between the NiO layers and MgO(001) substrates, owing to the change in the in-plane lattice constant[30]. Thus, Fe/NiO is a bilayer composed of a conventional ferromagnet and antiferromagnet and also comprises a fascinating interface in terms of the interplay and cooperation between the two magnetic layers with different magnetisms.
In this study, we demonstrated that ultrathin Fe(001) becomes a perpendicular magnetization film owing to the interfacial PMA at the interface between Fe(001) and off-stoichiometric NiO(001). The angular dependence of XMCD measurement revealed that the origin of PMA is OAM of Fe, which is similar to those of previously reported Fe/MgO systems.
The growth of atomically flat Fe(001) layers with a few monolayers is believed to be crucial for enhancing the interfacial PMA in a Fe/MgO(001) system. Because of the low wettability of the Fe film on MgO, most of the previously reported perpendicular magnetization films were realized at the interface between the top MgO(001) and bottom Fe(001) layers. Thus, it is challenging to achieve interfacial PMA with the reverse-stacking structure[31].
## II Experiment
All the samples were grown on single crystal MgO(001) substrates using a radio-frequency magnetron sputtering technique. The stacking structure reported in this study was Cr(001)/Fe(001) /NiO(001) /MgO(001)(substrate). Nickel oxide layers were grown at 500 \({}^{\circ}\)C using a metal Ni target in a mixture of Ar and O\({}_{2}\). Successively, Fe and Cr layers were grown at room temperature using the same sputtering system. Following deposition, the samples were annealed for 1 h in vacuum as a post annealing process. The flow rate of Ar was fixed at 10 sccm.
We prepared four types of samples for optimizing the PMA, quantitatively analyzing the PMA, and examining the magnetic interaction between ferromagnetic Fe and NiO(001) layers. The first type of sample was a series of multilayers of Cr(2 nm)/Fe(1 nm)/NiO(20 nm)/MgO(001), where the NiO(001) layers were grown under O\({}_{2}\) flow rates in the range of 0.5-6.0 sccm. Hereinafter, the length in parentheses of the stacking structure expresses the thickness of each layer. The samples were subsequently heated at 350\({}^{\circ}\)C in vacuum for 1 h. The O\({}_{2}\) flow rate was used as a growth parameter. The second sample was a series of multilayers of Cr(2 nm)/Fe(1 nm)/NiO(20 nm) /MgO(001) subjected to different post-annealing temperatures to optimize the PMA. The O\({}_{2}\) flow rate was fixed at 2.0 sccm. The third and final samples were multilayers of Cr(2 nm)/Fe/NiO(20 nm)/MgO(001) with a wedge-shaped Fe layer (0.5-4.0 nm) (Fig. 1(a)) and Cr(2 nm)/Fe (0.6 nm) /NiO/MgO(001) with a wedge-shaped NiO layer (0 - 30 nm) (Fig. 1(b)), respectively, for anomalous Hall effect (AHE) measurements. Both the samples were grown under O\({}_{2}\) flow rate of 2.0 sccm and post-annealing temperature of 350\({}^{\circ}\)C. Wedge-shaped layers were prepared using a linear moving mask. In addition, to evaluate the dependence of the NiO(001) film structures and valence states of Ni on the reactive sputtering process, NiO/MgO(001) films were prepared at various O\({}_{2}\) flow rates and process temperatures without Fe or Cr layers.
The samples were characterized using reflection high-energy electron diffraction (RHEED), X-ray reflectivity (XRR), X-ray diffraction (XRD), reciprocal space mapping (RSM), and X-ray photoelectron spectroscopy (XPS). XRR, XRD, and RSM experiments were performed using a Rigaku SmartLab with an X-ray source of Co-\(K\alpha_{1}\). The magnetization of the multilayer films was measured using a vibrating sample magnetometer (VSM) at room temperature. In addition, XMCD and X-ray absorption spectroscopy (XAS) measurements were conducted at BL-7A, Photon Factory, high-energy accelerator organization KEK-PF. A magnetic field of \(\mu_{0}H=\pm 1.2\) T was applied along the incident polarized beam by switching the magnetic field directions. The total electron yield mode was adopted. The geometry between the sample surface normal and incident beam directions was varied by changing the sample position from normal incidence (NI) to a grazing incidence of 60\({}^{\circ}\) (GI). All the XMCD measurements were performed at room temperature. Subsequently, to evaluate the magnetization process dependence on both the Fe- and NiO-layer thicknesses, we performed AHE measurements on wedged-shaped films. The samples for AHE measurements were patterned into a Hall bar with many voltage probes on films of different thicknesses, as shown in Fig. 1 (c), using photolithography and Ar ion milling. Cr (10 nm) and Au (100 nm) were sputtered on the electrical contact pads. The current path was parallel to the direction of the gradient of the film with a wedge-shaped thickness
distribution, and the Hall voltages were measured at positions with different Fe or NiO thicknesses. The typical applied current was 0.1 mA. All the measurements were performed at room temperature, unless otherwise stated.
## III Results and discussion
### Epitaxial growth of NiO(001) films
First, we investigated the valence states and composition, as well as the lattice parameters of NiO, depending on the O\({}_{2}\) flow rate during the growth processes. As shown in Fig. 2(a), clear Laue fringes are observed around the 002 diffraction of the XRD patterns, indicating that the NiO films are epitaxially grown with a (001) orientation and had a sufficiently smooth surface without incoherent lattice distortion at any O\({}_{2}\) flow rate. In addition, the RHEED images of all samples exhibited typical streak patterns (Fig. 2(b)), implying that the film surfaces are atomically flat and barely distorted, which is consistent with the observation of the Laue fringes. Figure 2(c) summarizes the lattice constants normal to the film plane of \(c_{\rm NiO}\) as a function of the O\({}_{2}\) flow rate, as determined by 002 reflection positions of the XRD patterns. \(c_{\rm NiO}\) is largely the same as that of the bulk value of NiO (4.176 A)[32] for O\({}_{2}\) flow rates \(\leq\) 1.0 sccm, and \(c_{\rm NiO}\) becomes greater than that of the bulk value of MgO (4.216 A) for O\({}_{2}\) flow \(\geq\) 2.0 sccm, as shown in Fig. 2(c).
As shown in Fig. 2(d), the RSM measurements indicate that the lattice constants along the in-plane direction of NiO(001) mostly matched that of MgO of the substrate at any O\({}_{2}\) flow rate, implying \(a_{\rm NiO}\approx\)4.22 A. Moreover, clear fringe patterns were observed around NiO 113 in both the films, suggesting that all the NiO films were coherently distorted owing to epitaxial stress. Moreover, the critical thickness of the misfit relaxation was greater than 20 nm for all the films. Because the in-plane lattice constants are locked to the MgO(001) substrate, the volume of the unit cell varies with the O\({}_{2}\) flow.
Similar results have been reported in previous studies [33; 34; 35]. With increasing oxygen in the film growth process, Ni\({}^{2+}\) ions are replaced by Ni\({}^{3+}\) ions and vacancies at the Ni-sites, resulting in the growth of non-stoichiometric NiO(001) films. We also performed XPS measurements of the NiO(20 nm)/MgO(001) films without Fe or Cr layers grown at various O\({}_{2}\) flow rates. The relative composition of oxygen and a trace of Ni\({}^{3+}\) in NiO increased with the increase in the O\({}_{2}\) flow rate, consistent with previous XPS measurements[36] and the structural analysis mentioned.
Figure 2: (a) 2\(\theta\)/\(\omega\)-XRD patterns of NiO(001) films grown at various O\({}_{2}\) flow rates. The asterisks indicate the 002 peaks of NiO. (b) Typical RHEED pattern of NiO(001). (c) O\({}_{2}\) flow rate dependence of the lattice constant along the growth direction (\(c_{\rm NiO}\)) determined by 2\(\theta\)/\(\omega\)-XRD measurements. (d) RSMs around the 113 diffraction of NiO/MgO(001) films grown at O\({}_{2}\) flow rates of 0.5 sccm (left) and 6 sccm (right). The numbers shown on the right side in (a) and (d) indicate the oxygen flow rates during deposition.
Figure 1: (a) and (b) Stacking structures of Cr(2 nm)/Fe/NiO/MgO(001) substrates with wedge-shaped thickness gradients. (c) Hall pattern and wiring arrangement. The current path is parallel to the gradient direction of the wedge.
### Interface magnetic anisotropy of Fe/NiO(001)
To study the magnetic anisotropy at the Fe/NiO(001) interface, we fabricated Fe/NiO multilayers on MgO(001) substrates with different O\({}_{2}\) flow rates for the NiO(001) layer growth. Figure 3(a) shows the out-of-plane magnetization processes of 0.63 nm-thick Fe films grown on 20 nm-thick NiO(001) layers. Clearly, the magnetization processes are sensitive to the growth conditions of NiO(001), and the Fe layer becomes perpendicularly magnetized, i.e., PMA is dominant, when the O\({}_{2}\) flow rate is \(\geq\) 2.0 sccm. The in-plane lattice constant of NiO(001) is largely the same as that of the MgO(001) substrate, irrespective of the growth condition of the NiO layer, as mentioned in Sec. III.1. Therefore, the observed dependence of the PMA on the growth condition of the NiO(001) layer originates from an interfacial effect rather than from epitaxial strain. Next, we fabricated Cr/Fe/NiO (O\({}_{2}\) flow rate = 2.0 sccm) structures and applied a post-annealing process at different temperatures (\(T_{\rm anneal}\)) after the growth of the Cr capping layer. Figure 3(b) shows the out-of-plane magnetization processes of the films annealed at different \(T_{\rm anneal}\). The samples annealed at \(T_{\rm anneal}\) = 350 and 450\({}^{\circ}\)C were perpendicularly magnetized. In the case of \(T_{\rm anneal}\) = 550\({}^{\circ}\)C, a clear peak shift of NiO 002 was observed in the XRD patterns (not shown), suggesting considerable atomic mixing at the interface.
We performed AHE measurements on multilayer Cr-cap (2 nm)/Fe (0.5-4.0 nm)/NiO (20 nm)/MgO(001) (substrate) samples to quantitatively separate the observed magnetic anisotropy into the interfacial and volume contributions. The sample with an Fe layer thickness of \(t_{\rm Fe}\) = 0.5 nm is perpendicularly magnetized. In contrast, the saturation field increased with increasing \(t_{\rm Fe}\), indicating that the shape anisotropy became dominant and that the preferential direction of the magnetization changed from normal to the film plane toward the in-plane direction. These results indicate that the origin of the PMA is an interfacial effect rather than a bulk effect.
The areal magnetic anisotropy (\(K_{u}t_{\rm Fe}\)) as a function of \(t_{\rm Fe}\), \(K_{u}t_{\rm Fe}=K_{v}t_{\rm Fe}+K_{i}\)[37], is plotted in Fig. 4. Here, \(K_{v}\) is the volume contribution to the magnetic anisotropy, for example, magneto-crystalline, strain-induced, and shape anisotropies, while \(K_{i}\) is the interface contribution. \(K_{i}\) at room temperature was determined to be \(0.93\pm 0.03\) mJ/m\({}^{2}\). This is of the same order as the previously reported \(K_{i}\) for the Fe/MgO interface [38; 24]. In contrast, \(K_{v}\) was estimated to be \(-1.46\pm 0.01\) MJ/m\({}^{3}\), which is reasonably close to the demagnetization (shape anisotropy) energy of a film-shaped sample, \(-\frac{1}{2}\mu_{0}M_{s}^{2}=-1.23\) MJ/m\({}^{3}\). Here, \(M_{s}\) denotes the saturation magnetization of Fe, which is 1400 and 1700 kA/m at room temperature and 100 K, respectively, as determined by VSM. We also performed the same analysis at 100 K and obtained \(K_{i}\) and \(K_{v}\) of \(+1.02\pm 0.07\) mJ/m\({}^{2}\) and \(-1.79\pm 0.03\) MJ/m\({}^{3}\), respectively.
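As an illustration of how \(K_{i}\) and \(K_{v}\) follow from such a plot, a minimal sketch of the linear fit is given below; the data points are made-up placeholders of the right order of magnitude, not the measured values of this work.

```python
import numpy as np

# Sketch: extracting the volume (K_v, slope) and interface (K_i, intercept)
# anisotropy from the linear relation K_u*t_Fe = K_v*t_Fe + K_i.
# The data points are hypothetical placeholders, not the measured values.

t_Fe = np.array([1.0, 1.5, 2.0, 3.0, 4.0]) * 1e-9            # Fe thickness (m)
Ku_t = np.array([-0.55, -1.25, -2.0, -3.45, -4.9]) * 1e-3     # K_u*t_Fe (J/m^2)

K_v, K_i = np.polyfit(t_Fe, Ku_t, 1)   # slope = K_v, intercept = K_i
print(f"K_v = {K_v/1e6:.2f} MJ/m^3, K_i = {K_i*1e3:.2f} mJ/m^2")
# -> K_v ~ -1.45 MJ/m^3 and K_i ~ 0.9 mJ/m^2 for these placeholder points
```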
We emphasize that the temperature dependence of the interfacial PMA of Fe/NiO appears to be weaker than that of CoFeB/MgO[39]. As the PMA is dominated by the magnetization at the interface of the Fe layer[40], the weaker temperature dependence of the PMA in Fe/NiO compared with CoFeB/MgO indicates that the magnetic exchange coupling between Fe and NiO at the interface may suppress thermal fluctuations of the Fe spins. Therefore, this system could be useful for applications owing to its greater robustness against thermal fluctuations.
Figure 3: Out-of-plane \(MH\) loops of Cr/Fe/NiO(001) multilayers at room temperature; (a) O\({}_{2}\) flow-rate dependence (0.5 to 6.0 sccm) with \(T_{\rm anneal}\) = 500\({}^{\circ}\)C and (b) \(T_{\rm anneal}\) dependence (250 to 550\({}^{\circ}\)C) with an O\({}_{2}\) flow rate of 2.0 sccm.
Figure 4: \(K_{u}t_{\rm Fe}\)-\(t_{\rm Fe}\) plot at 100 K (blue circles) and room temperature (red circles). \(K_{v}\) and \(K_{i}\) are determined from the slope and intercept, respectively. The two thinnest data points, shown as open circles, are excluded from the fits because these samples are perpendicularly magnetized.
### XAS and XMCD analyses
Figures 5 and 6 show the Fe and Ni \(L\)-edge X-ray absorption spectra as well as the XMCD, with its angular dependence between the normal incidence (NI) and grazing incidence (60\({}^{\circ}\)) (GI) geometries, of Cr(2 nm)/Fe(0.6 nm)/NiO(20 nm)/MgO(001). The NiO(001) layers were grown at O\({}_{2}\) flow rates of 2.5 and 1.5 sccm for the perpendicular magnetization (Fig. 5) and in-plane magnetization (Fig. 6) samples, respectively. For both samples, the clearly observed metallic line shapes of Fe suggest a lack of interfacial oxidation. In the case of the perpendicular magnetization sample, the XMCD intensity at the \(L_{3}\)-edge varied with the measurement geometry. The enhanced \(L_{3}\)-peak in the NI geometry corresponds to enhanced orbital magnetic moments (\(m_{\rm orb}\)). By applying the sum-rule analysis, we obtained the spin magnetic moment and \(m_{\rm orb}\) to be 1.32 and 0.09 \(\mu_{B}\), respectively, for the NI case. Here, a 3d hole number of 3.39 was assumed, the same as in the case of Fe/MgO [25]. If we express the \(m_{\rm orb}\) components normal and parallel to the film plane as \(m_{\rm orb}^{\perp}\) and \(m_{\rm orb}^{\parallel}\), respectively, the difference of \(m_{\rm orb}\), defined as \(\Delta m_{\rm orb}=m_{\rm orb}^{\perp}-m_{\rm orb}^{\parallel}\simeq 0.05\mu_{B}\), provides a reasonable origin of the observed PMA[10]. Here, we assumed that the orbital moment anisotropy originates from the second-order perturbation of the spin-orbit interaction[10]. A magnetic dipole moment of less than 0.01 \(\mu_{B}\) was also estimated, which is sufficiently smaller than \(m_{\rm orb}\). This suggests that the orbital magnetic moment anisotropy is dominant at the Fe/NiO interface. In contrast, in the case of the in-plane magnetization sample, the \(L_{3}\)-edge intensity remained unchanged between the NI and GI setups, which suggests an isotropic \(m_{\rm orb}\) of 0.05 \(\mu_{B}\) because the shape anisotropy becomes dominant. The hysteresis loops at the Fe \(L_{3}\)-edge photon energy shown in Figs. 5(d) and 6(d) roughly reproduce the characteristic magnetization processes of the perpendicular and in-plane magnetization films, respectively. The angular asymmetry of the XMCD spectra reveals that large orbital moments are induced along the perpendicular direction. These features are quite similar to the case of Fe/MgO[24]. However, the effect of Fe-O bonding on the anisotropic charge distribution is not active in the in-plane magnetization sample, even at a thickness of 0.6 nm. As shown in Figs. 5 and 6, there are no XMCD signals at the Ni \(L\)-edge for either the NI or the GI case because the antiparallel spins at the Ni sites are completely compensated, although small differential signals appear, originating from the application of the magnetic field along the perpendicular direction during the measurements.
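For completeness, a minimal sketch of the sum-rule step is given below; it uses the standard orbital and effective-spin sum rules with the magnetic dipole term neglected, and the integrated intensities are hypothetical example numbers, not the measured spectra (background subtraction and normalization are omitted).

```python
# Sketch: XMCD sum rules (magnetic dipole term neglected).
# p: XMCD integrated over L3 only; q: XMCD integrated over L3 + L2;
# r: isotropic XAS (white line) integrated over L3 + L2; n_h: number of 3d holes.
# All integrals below are hypothetical example values.

def sum_rules(p, q, r, n_h=3.39):
    m_orb = -4.0 * q * n_h / (3.0 * r)            # orbital moment (mu_B)
    m_spin_eff = -(6.0 * p - 4.0 * q) * n_h / r   # spin moment including -7<T_z>
    return m_spin_eff, m_orb

m_s, m_o = sum_rules(p=-0.042, q=-0.010, r=0.55)
print(m_s, m_o)  # -> roughly 1.3 and 0.08 mu_B for these example numbers
```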
Interestingly, there were no signs of Ni\({}^{3+}\) in the XAS, as shown in Figs. 5(c) and 6(c). This result appears to be inconsistent with the XPS and XRD results, which indicate that the oxygen-rich NiO contains Ni\({}^{3+}\) and Ni-site vacancies in addition to Ni\({}^{2+}\) (Sec. III.1). The fact that no traces of iron oxide were detected (Fig. 5(a)) implies that no significant oxidation occurred between the excess oxygen in NiO and the metallic Fe layer. Koo _et al._ pointed out that a variation in the interface composition, caused by oxygen atoms floating up from the Cr buffer layer and reaching the MgO interface, is key to enhancing the PMA in MgO/Fe/Cr/MgO(001)[38]. Similarly, the variation or diffusion of excess oxygen in the NiO layer upon annealing may play an important role in enhancing
Figure 5: XAS and XMCD of the perpendicular magnetization film of Fe/NiO(001). (a) XAS in Fe \(L\)-edge along the different magnetic field directions with \(\mu_{+}\) and \(\mu_{-}\), in the NI setup. (b) XMCD (\(\mu_{+}-\mu_{-}\)) for the NI and GI geometries. The inset shows an expanded view around the \(L_{3}\)-edge. (c) XAS and XMCD spectra at the Ni \(L\)-edge. (d) XMCD hysteresis curves in NI and GI geometries.
Figure 6: XAS and XMCD of the in-plane magnetization film of Fe/NiO(001). (a) XAS in Fe \(L\)-edge along the different magnetic field directions with \(\mu_{+}\) and \(\mu_{-}\), in the NI setup. (b) XMCD (\(\mu_{+}-\mu_{-}\)) for the NI and GI geometries. The inset shows an expanded view around the \(L_{3}\)-edge. (c) XAS and XMCD spectra at the Ni \(L\)-edge. (d) XMCD hysteresis curves in NI and GI geometries.
the PMA in the Fe/NiO system. Based on this, the absence of Ni\({}^{3+}\) signatures in the XAS can be consistently explained by the migration of excess oxygen from the NiO layer to the Cr layer, which acts as an oxygen absorber, whereby the Fe/NiO interface structure becomes optimized to exhibit PMA.
### Effects of coexistence of PMA and magnetic exchange at the interface of Fe and NiO
To investigate the dependence of the magnetization process on the NiO layer thickness, we conducted AHE measurements on the stack Cr/Fe(0.6 nm)/NiO(\(t_{\rm NiO}\))/MgO(001) with a wedge-shaped NiO layer (\(t_{\rm NiO}\) = 0 - 30 nm) (Fig. 1(b)). As shown in Fig. 7, the films with thicker NiO layers have a higher coercive force \(H_{c}\) at all temperatures and a stronger temperature dependence of \(H_{c}\). Generally, magnetic anisotropy is among the dominant parameters determining \(H_{c}\); however, as mentioned in Sec. III.2, the PMA exhibits little temperature change. The fact that the temperature dependence of \(H_{c}\) is more significant than that of the PMA means that \(H_{c}\) is dominated by the bulk properties of the NiO layer rather than by the interfacial effect. The difference between the two samples with different NiO thicknesses can be explained by considering the blocking temperature (\(T_{B}\)) of antiferromagnets[41; 42; 43]. According to the previous study of \(T_{B}\) in antiferromagnets by Devasahayam _et al._[43], the thickness dependence of \(T_{B}\) can be expressed by a power law owing to the finite-size effect. The estimated correlation length was \(\approx 2\) nm for NiO films; therefore, \(T_{B}\) approaches or falls below room temperature if \(t_{\rm NiO}\lesssim 2\) nm. Thus, our sample with \(t_{\rm NiO}=15\) nm has \(T_{B}\) above 300 K, while for the sample with \(t_{\rm NiO}=0.8\) nm, \(T_{B}\) is well below room temperature. As \(t_{\rm NiO}\) of the latter sample is much smaller than the correlation length of NiO, the role of the exchange coupling between NiO and Fe at the interface, i.e., the magnetic effect of the antiferromagnetism of the NiO layer, must be negligibly small. In contrast, the magnetization process of the Fe layer in the sample with \(t_{\rm NiO}=15\) nm is strongly affected by the antiferromagnetic nature of NiO through the interfacial exchange coupling.
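The thickness argument can be made semi-quantitative with a finite-size scaling form for the blocking temperature; the sketch below assumes the commonly used power-law shift \(T_{B}(t)=T_{B}^{\rm bulk}[1-(\xi_{0}/t)^{\lambda}]\), and the bulk blocking temperature, correlation length, and shift exponent are illustrative assumptions rather than fitted values.

```python
import numpy as np

# Sketch: finite-size scaling of the blocking temperature of a thin
# antiferromagnetic layer, T_B(t) = T_B_bulk * (1 - (xi0/t)**lam).
# T_B_bulk, xi0 (~2 nm, as quoted in the text), and lam are assumptions
# chosen only to illustrate the trend, not fitted values.

def blocking_temperature(t_nm, T_B_bulk=450.0, xi0=2.0, lam=1.0):
    t = np.asarray(t_nm, dtype=float)
    return T_B_bulk * (1.0 - (xi0 / t) ** lam)

for t in (0.8, 2.0, 5.0, 15.0):
    print(f"t_NiO = {t:4.1f} nm  ->  T_B ~ {blocking_temperature(t):6.1f} K")
# A 15 nm film stays blocked well above room temperature, whereas a film much
# thinner than xi0 gives T_B far below it (negative values here simply signal
# that the scaling form has broken down for t < xi0).
```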
We also examined whether an exchange bias emerges at the interface between the ferromagnetic Fe(001) layer and the antiferromagnetic NiO(001) layer for samples subjected to an additional field-annealing process. The details of the annealing process are as follows. Three films of Cr/Fe(0.6 nm)/NiO(20 nm) were simultaneously grown on MgO(001) substrates with dimensions of 10 mm \(\times\) 10 mm \(\times\) 0.5 mm. Two of the three films were annealed at 250\({}^{\circ}\)C in vacuum in a magnetic field of \(\mu_{0}H=\)1 T for 1 h; they were placed parallel and perpendicular to the magnetic field, respectively. Subsequently, the samples were allowed to cool naturally to room temperature in the magnetic field. The third sample was annealed in zero magnetic field as a control sample.
Hysteresis loop shifts, which are characteristic of the exchange bias effect, have often been reported in Fe/NiO(001) exhibiting in-plane preferential magnetization[44; 45; 30; 46; 47] rather than perpendicular magnetization. Even at the interface between a ferromagnet and a compensated antiferromagnetic layer, a finite exchange bias could be realized if the spins of both the ferromagnet and the antiferromagnet lie in-plane, because of the frustration of the first antiferromagnetic layer[48]. In our case, no clear shift of the magnetization curve and no clear change in \(H_{c}\) were observed in any of the three samples, regardless of the presence of an applied field during the annealing process or of the direction of the applied field. This
Figure 7: \(\rho_{\rm AHE}\) of Cr/Fe(0.6 nm)/NiO(\(t_{\rm NiO}\))/MgO(001) with \(t_{\rm NiO}=15\) nm (top) and 0.8 nm (bottom). AHE measurements were performed at 10, 50, 90, 130, and 300 K. The thick-NiO sample of \(t_{\rm NiO}=15\) nm exhibits larger \(H_{c}\) at each temperature than the thin-NiO sample of \(t_{\rm NiO}=0.8\) nm.
means that the antiferromagnetic spin configuration near the interface cannot be controlled by an external field, although the origin of \(H_{c}\) could be dominated by interfacial spin frustration[49; 50]. We emphasize that whether the Fe layer is perpendicularly or in-plane magnetized appears to govern the absence or presence of an exchange bias in the Fe/NiO(001) system. Moreover, the relationship between the Neel vector of NiO(001) and the preferential magnetization direction of the ferromagnetic layer is important for understanding the exchange bias mechanism in such bilayer systems.
## IV Conclusions
In summary, we investigated the interfacial magnetic anisotropy at the interface between Fe and Ni oxide. We found that PMA emerges at the Fe/NiO interface for high O\({}_{2}\) flow rates during NiO layer growth. By measuring the Fe-thickness dependence of \(K_{u}\), we obtained \(K_{i}\) of 0.93 and 1.02 mJ/m\({}^{2}\) at room temperature and 100 K, respectively. Further, XMCD measurements revealed an enhancement of the anisotropic magnetic moment of Fe, which was also observed at the Fe/MgO interface. In contrast to the weak temperature dependence of \(K_{i}\), \(H_{c}\) was strongly enhanced with decreasing temperature. However, no hysteresis loop shift, which is characteristic of the exchange bias effect, was observed. The VCMA effect of Fe/NiO(001) should be examined in the near future. In addition to the well-known bilayer Fe/MgO(001) system, which shows PMA, we found that Fe/NiO(001) also exhibits relatively strong PMA. Because the rock-salt structure is a common crystal structure for divalent oxides, the combination of Fe and rock-salt-type oxides is a promising system for achieving greater PMA.
###### Acknowledgements.
This work was partly supported by JSPS KAKENHI (19KK0104, 21H01750, and 22H04966), TIA Kakehashi (TK22-023), and the Cooperative Research Project Program of the Research Institute of Electrical Communication (RIEC), Tohoku University. The XMCD experiments were performed under the approval of the "Photon Factory Program Advisory Committee" (proposal No. 2021G069). Part of this work was supported by the Advanced Research Infrastructure for Materials and Nanotechnology in Japan (ARIM). We thank Seiji Mitani, Hiroaki Sukegawa, and Yoshio Miura for providing us with useful suggestions.
|
2301.12781
|
Emergent magnetism as a cooperative effect of interactions and reservoir
|
Closed shell molecular structures are under normal conditions time-reversal
invariant. Experimental evidences point, however, towards that this invariance
may be locally violated when the structure is in contact with a particle
reservoir. The mechanisms behind such local symmetry breaking are not clear by
any means. By considering a minimal model for a closed shell structure, here we
propose that the symmetry breaking may result from a combination of internal
and/or external interactions. It is shown that a magnetic moment of a localized
electron level can be generated and maintained under the influence of such
combination. The theoretical results should cast new light on the mechanisms
that may form magnetic properties in molecular compounds.
|
M. Shiranzaei, S. Kalhöfer, J. Fransson
|
2023-01-30T10:56:17Z
|
http://arxiv.org/abs/2301.12781v1
|
# Emergent magnetism as a cooperative effect of interactions and reservoir
###### Abstract
Closed shell molecular structures are under normal conditions time-reversal invariant. Experimental evidence, however, points towards this invariance being locally violated when the structure is in contact with a particle reservoir. The mechanisms behind such local symmetry breaking are not clear by any means. By considering a minimal model for a closed shell structure, here we propose that the symmetry breaking may result from a combination of internal and/or external interactions. It is shown that a magnetic moment of a localized electron level can be generated and maintained under the influence of such a combination. The theoretical results should cast new light on the mechanisms that may form magnetic properties in molecular compounds.
Molecules, as well as single atoms, which are in closed shell configurations when isolated can acquire a magnetic state when, e.g., immersed in solution [1; 2], attached to a surface [3; 4; 5; 6; 7; 8; 9], or embedded in clusters comprising several components [10; 11; 12; 13; 14; 15; 16; 17]. Such properties can be exploited in, for instance, anomalous Hall devices [18; 19; 20], electron spin resonance [21; 22], the exploration of superconductivity in the presence of spin impurities [23] giving rise to Yu-Shiba-Rusinov states [24; 25], and structures with properties, such as coercivity [13; 14; 15; 16; 17] and spin-filtering [26; 27; 28; 16], that strengthen with temperature.
The origin of the magnetic state in a closed shell configuration can be effectively summarized as an interplay between the Pauli exclusion principle and Hund's rules. Although these rules can, with some success, also be employed in a more general context, questions inevitably arise about the emergence of magnetic states in molecules that are normally regarded as non-magnetic. For instance, chiral molecules provide striking examples of closed shell structures which, nevertheless, display magnetic properties when in contact with otherwise non-magnetic metals, see, e.g., Refs. [29; 28; 18; 22; 19; 8].
In this article we address the issue of the emergence of a magnetic state in, or in proximity to, a local electronic structure when it is exposed to an external environment. We begin by demonstrating that a spin-degenerate molecular level may become spin-polarized if two conditions are met. First, there should exist internal molecular interactions which have the potential to break time-reversal symmetry and, second, the molecular level must be in contact with an external reservoir. We show that the nature of the reservoir, whether it is Fermionic or Bosonic, is secondary. This observation, hence, implies that molecules in a purely thermal environment may also be spontaneously polarized.
As a corollary result of these conditions, we also show that the spin-degeneracy of a localized electron may be broken by a spin-dependent coupling to a purely Bosonic reservoir. Breaking of the spin-degeneracy requires, however, the presence of both spin-conserving _and_ spin-nonconserving coupling. In this model we, furthermore, demonstrate the emergence of a non-vanishing magnetic moment and an associated cross-over temperature at which this moment undergoes a sign change.
We attribute our findings to confluent interactions, since these results cannot be obtained in a system with a single type of interaction. For simplicity, assume that there are two sources of interactions which can be formulated through the quantities \(V_{0}\sigma^{0}\) and \(\mathbf{V}_{1}\cdot\mathbf{\sigma}\), where \(\sigma^{0}\) and \(\mathbf{\sigma}\) are the \(2\times 2\) unit matrix and the vector of Pauli spin matrices. When these two interactions coexist, the effective interaction changes the spectrum as \((V_{0}\sigma^{0}+\mathbf{V}_{1}\cdot\mathbf{\sigma})^{2}=(V_{0}^{2}+|\mathbf{V}_{1}|^{2})\sigma^{0}+2V_{0}\mathbf{V}_{1}\cdot\mathbf{\sigma}\), which opens the possibility of breaking the spin-degeneracy whenever both \(V_{0}\) and \(\mathbf{V}_{1}\) contribute.
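The operator identity underlying this argument is easy to verify numerically; the following sketch checks it for arbitrary example values of \(V_{0}\) and \(\mathbf{V}_{1}\).

```python
import numpy as np

# Sketch: numerical check of (V0*sigma0 + V1.sigma)^2
#         = (V0^2 + |V1|^2)*sigma0 + 2*V0*(V1.sigma)
# for arbitrary real V0 and V1. The test values below are arbitrary.

sigma0 = np.eye(2, dtype=complex)
sigma = np.array([[[0, 1], [1, 0]],        # sigma_x
                  [[0, -1j], [1j, 0]],     # sigma_y
                  [[1, 0], [0, -1]]])      # sigma_z

V0 = 0.7
V1 = np.array([0.3, -0.5, 1.1])

V1_dot_sigma = np.einsum('i,ijk->jk', V1, sigma)
lhs = (V0 * sigma0 + V1_dot_sigma) @ (V0 * sigma0 + V1_dot_sigma)
rhs = (V0**2 + V1 @ V1) * sigma0 + 2 * V0 * V1_dot_sigma

print(np.allclose(lhs, rhs))  # True: the cross term 2*V0*V1.sigma splits the spins
```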
As a philosophical remark, our results are important since they challenge the widespread view that we can interpret measurements in terms of subsystems in which the environment has a negligible effect, and we present a concrete example where this is not the case. Although we are taught in our scientific training that a measurement inevitably influences the properties of the sample, both interpretations of experimental results and theoretical descriptions are often based on a complete neglect of the reservoir to which the sample is connected.
The purpose here is to evaluate the magnetic moment \(\langle\mathbf{m}\rangle\) of a localized electron represented by the spectrum \(\varepsilon=\varepsilon_{0}\sigma^{0}+\mathbf{\epsilon}_{1}\cdot\mathbf{\sigma}\), where \(\varepsilon_{0}\) and \(\mathbf{\epsilon}_{1}\) denote the energies corresponding to the spin-independent and spin-dependent degrees of freedom. Here, the latter is a three-component vector, \(\mathbf{\epsilon}_{1}=\varepsilon_{\alpha}\hat{\mathbf{\epsilon}}_{\alpha}\), in some normalized orthogonal basis \(\{\hat{\mathbf{\epsilon}}_{\alpha}\}\), which accounts for, e.g., spin-orbit interactions and local spin-anisotropy. The model corresponding to this spectrum can be written \(\mathcal{H}_{0}=\psi^{\dagger}\varepsilon\psi\), where \(\psi=(\psi_{\uparrow},\,\psi_{\downarrow})^{\prime}\) denotes the spinor for the localized state.
In order to enable a general treatment of the local properties, we calculate the expectation value of the magnetic moment \(\langle\mathbf{m}\rangle\) in terms of the Green function \(\mathbf{G}_{\mathrm{LS}}\) for the local electron through the relation \(\langle\mathbf{m}\rangle=(-i)\mathrm{sp}\,\mathbf{\sigma}\int\mathbf{G}_{\mathrm{LS}}^{<}(\omega)d\omega/4\pi\), where \(\mathbf{G}_{\mathrm{LS}}^{<}\) denotes the lesser form of the Green function, whereas sp is the trace over spin 1/2 space. The equation of motion for \(\mathbf{G}_{\mathrm{LS}}\) can be cast in the Dyson-like form
\[\mathbf{G}_{\mathrm{LS}}=\mathbf{g}_{\mathrm{LS}}+\mathbf{g}_{\mathrm{LS}} \Sigma\mathbf{G}_{\mathrm{LS}}, \tag{1}\]
where \(\mathbf{g}_{\mathrm{LS}}=\mathbf{g}_{\mathrm{LS}}(z)=(z-\varepsilon)^{-1}\), \(z\in\mathbb{C}\), is the bare Green function defined by \(\mathcal{H}_{0}\), whereas \(\Sigma\) denotes the self-energy caused by the interactions the local electron is subject to. In this context, one can notice that the self-energy has (i) an energy dependence, \(\Sigma=\Sigma(z)\), and (ii) can be written on the
form \(\mathbf{\Sigma}=\Sigma_{0}\sigma^{0}+\mathbf{\Sigma}_{1}\cdot\mathbf{\sigma}\), which are natural conditions for spin 1/2 particles. Physically, this partitioning represents the charge- (\(\Sigma_{0}\)) and spin-dependent (\(\mathbf{\Sigma}_{1}\)) components of the interactions. In addition, however, we shall make the replacement \(\Sigma_{0}\to V+\Sigma_{0}\). In this construction, \(V\) may define a contribution caused by hybridization between the localized state and an external reservoir, whereas the self-energy \(\mathbf{\Sigma}\) may be attributed to internal interactions associated with the localized electron. There is, nevertheless, nothing that prevents the opposite association of \(V\) and \(\mathbf{\Sigma}\), that is, that the former belongs to the molecule and the latter represents the interactions with the environment, as we shall see in the concrete example below.
Summarizing these facts, it is straightforward to write the retarded/advanced Green function as
\[\mathbf{G}_{\text{LS}}^{r/a}(\omega)= \frac{\left(\omega-\varepsilon_{0}-V^{r/a}-\Sigma_{0}^{r/a} \right)\sigma^{0}+\left(\varepsilon_{1}+\mathbf{\Sigma}_{1}^{r/a}\right)\cdot\mathbf{ \sigma}}{(\omega-E_{+}^{r/a})(\omega-E_{-}^{r/a})}, \tag{2}\]
with the poles
\[E_{\pm}^{r/a}= \varepsilon_{0}+V^{r/a}+\Sigma_{0}^{r/a}\pm\sqrt{\left( \varepsilon_{1}+\mathbf{\Sigma}_{1}^{r/a}\right)\cdot\left(\varepsilon_{1}+\mathbf{ \Sigma}_{1}^{r/a}\right)}. \tag{3}\]
Under equilibrium conditions, the fluctuation-dissipation theorem implies that the lesser Green function \(\mathbf{G}_{\text{LS}}^{<}\) can be expressed in terms of its retarded counterpart, \(\mathbf{G}_{\text{LS}}^{r}\), using the identity \(\mathbf{G}_{\text{LS}}^{<}(\omega)=\text{i}f(\omega)[-2\text{Im}\mathbf{G}_{ \text{LS}}^{r}(\omega)]\), where \(f(\omega)\) is the Fermi-Dirac distribution function which relates to the chemical potential \(\mu\) of the system. Of particular interest here is the component comprising the Pauli matrices, since only this term can contribute under the trace \(\text{sp}\mathbf{\sigma}\mathbf{G}_{\text{LS}}^{<}\). Indeed, using the notation \(\mathbf{G}_{\text{LS}}=G_{0}\sigma^{0}+\mathbf{G}_{1}\cdot\mathbf{\sigma}\), it can be seen that \(\langle\mathbf{m}\rangle=(-\text{i})\int\mathbf{G}_{1}^{<}(\omega)d\omega/2\pi\). Here,
\[\mathbf{G}_{1}^{<}(\omega)= -2\text{i}f(\omega)\text{Im}\frac{(\omega-E_{+}^{a})(\omega-E_{- }^{a})}{|\omega-E_{+}^{r}|^{2}|\omega-E_{-}^{r}|^{2}}\Big{(}\varepsilon_{1}+ \mathbf{\Sigma}_{1}^{r}\Big{)}. \tag{4}\]
In order to sort out the origin of the induced magnetic moment, we set
\[\lambda= \text{Re}V^{r}, \gamma=-\text{Im}V^{r}, \tag{5a}\] \[\Lambda_{0}= \text{Re}(\varepsilon_{0}+\Sigma_{0}^{r}), \Gamma_{0}=-\text{Im}(\varepsilon_{0}+\Sigma_{0}^{r}),\] (5b) \[\mathbf{\Lambda}_{1}= \text{Re}(\varepsilon_{1}+\mathbf{\Sigma}_{1}^{r}), \Gamma_{1}=-\text{Im}(\varepsilon_{1}+\mathbf{\Sigma}_{1}^{r}), \tag{5c}\]
and keep in mind that \(\Lambda_{1}=|\text{Re}(\varepsilon_{1}+\mathbf{\Sigma}_{1}^{r})|\) and \(\Gamma_{1}=|\text{Im}(\varepsilon_{1}+\mathbf{\Sigma}_{1}^{r})|\). The lesser Green function can then be written
\[\mathbf{G}_{1}^{<}(\omega)= 2if(\omega)\left\{\frac{(\omega-\omega_{+})\Gamma_{-}+(\omega-\omega_{-})\Gamma_{+}}{|\omega-\omega_{+}+i\Gamma_{+}|^{2}|\omega-\omega_{-}+i\Gamma_{-}|^{2}}\mathbf{\Lambda}_{1}\right.\] \[\left.+\frac{(\omega-\omega_{+})(\omega-\omega_{-})-\Gamma_{+}\Gamma_{-}}{|\omega-\omega_{+}+i\Gamma_{+}|^{2}|\omega-\omega_{-}+i\Gamma_{-}|^{2}}\mathbf{\Gamma}_{1}\right\}, \tag{6}\]
where \(\omega_{\pm}=\lambda+\Lambda_{0}\pm\Lambda_{1}\) and \(\Gamma_{\pm}=\gamma+\Gamma_{0}\pm\Gamma_{1}\).
As we wish to determine the origin of the magnetic moment, assume, for the sake of argument, that \(\mathbf{G}_{1}^{<}\) is strongly peaked at the resonance energies \(\omega_{\pm}\), while it nearly vanishes off resonance. This assumption is justified whenever the broadening \(\Gamma_{\pm}\) is small in a neighborhood around \(\omega_{\pm}\). Then, the magnetic moment can be approximately estimated as
\[\langle\mathbf{m}\rangle\approx \frac{1}{2\pi}\sum_{s=\pm 1}sf(\omega_{s})\frac{\Gamma_{2}^{2}}{ \Gamma_{s}^{2}}\frac{\Lambda_{1}\mathbf{\Lambda}_{1}+(\Gamma_{s}/2)\mathbf{\Gamma}_{1 }}{\Lambda_{1}^{2}+(\Gamma_{s}/2)^{2}}\bigg{|}_{\omega_{s}}. \tag{7}\]
Assuming, furthermore, that the self-energy strongly peaks at the energy \(\varepsilon_{0}+\omega_{0}\), which does not coincide with either of \(\omega_{\pm}\), then, one can notice that \(\Gamma_{0}(\omega_{\pm})\approx 0\) and \(\mathbf{\Gamma}_{1}(\omega_{\pm})\approx 0\), such that the magnetic moment reduces to
\[\langle\mathbf{m}\rangle\approx \frac{1}{2\pi}\bigg{(}\frac{\Lambda_{1}\mathbf{\Lambda}_{1}f(\omega_{+})}{\gamma(\Lambda_{1}^{2}+(\gamma/2)^{2})}\bigg{|}_{\omega_{+}}-\frac{\Lambda_{1}\mathbf{\Lambda}_{1}f(\omega_{-})}{\gamma(\Lambda_{1}^{2}+(\gamma/2)^{2})}\bigg{|}_{\omega_{-}}\bigg{)}. \tag{8}\]
It should be mentioned that the energy \(\omega_{0}\) is associated with the energy of the internal interactions captured in \(\mathbf{\Sigma}\).
Here, we stress that the parameters \(\mathbf{\Lambda}_{1}=\mathbf{\Lambda}_{1}(\omega)\), \(\mathbf{\Gamma}_{1}=\mathbf{\Gamma}_{1}(\omega)\), etc., acquire different values at the resonances \(\omega=\omega_{\pm}\). Hence, in the limit \(\varepsilon_{1}=0\), the calculation leading to Eq. (8) demonstrates that, despite its simplicity, the result comprises a fundamentally important feature of the composite system discussed here. Namely, while the internal interactions, which lead to the self-energy \(\mathbf{\Sigma}\), provide an energy-dependent shift of the electron resonances and of their corresponding life times, as well as an induced finite spin-splitting, and while the coupling between the electrons in the localized level and the reservoir contributes to the level broadening of the local resonances, it is only when those two mechanisms are present simultaneously that a finite magnetic moment can be induced and maintained in the localized level.
The implications of this result should have bearing on the interpretation of experimental results, as well as on how a theoretical account of a phenomenon can be rendered irrelevant by excluding effects of the environment. In magnetism, for instance, many types of interactions which, at first sight, may appear unrelated may actually play a non-trivial role in the stabilization of the ordered state [16; 17; 30]. The magnetic signatures observed after adsorbing non-magnetic molecules onto metallic surfaces [18; 20; 25] stem from mechanisms that are unlikely to be captured within the conventional theory of magnetism.
It is by now established that time-reversal symmetry may be broken by inelastic scattering [31; 32]. Therefore, we consider a simplified example that may be used to illustrate a possible experimental outcome for single molecules in contact with a thermal reservoir. Such a system can be modeled by the Hamiltonian \(\mathcal{H}=\mathcal{H}_{\text{mol}}+\mathcal{H}_{\text{ph}}+\mathcal{H}_{\text{e-ph}}\), where \(\mathcal{H}_{\text{mol}}=\psi^{\dagger}\varepsilon\psi\) denotes the valence state in the molecule, whereas \(\mathcal{H}_{\text{ph}}=\sum_{\mathbf{q}}\omega_{\mathbf{q}}b_{\mathbf{q}}^{\dagger}b_{\mathbf{q}}\) represents the thermal reservoir in which \(b_{\mathbf{q}}^{\dagger}\) (\(b_{\mathbf{q}}\)) creates (annihilates) a phonon at the energy \(\omega_{\mathbf{q}}\). The electron-phonon coupling is provided through the term
\[\mathcal{H}_{\text{e-ph}}=\sum_{\mathbf{q}}\psi^{\dagger}\mathbf{U}_{\mathbf{q}}\psi(b_{\mathbf{q}}+b_{\bar{\mathbf{q}}}^{\dagger}), \tag{9}\]
where the coupling parameter \(\mathbf{U}_{\mathbf{q}}=u_{0\mathbf{q}}\sigma_{0}+\mathbf{u}_{1\mathbf{q}}\cdot\mathbf{\sigma}\), whereas \(\bar{\mathbf{q}}=-\mathbf{q}\). In addition to \(u_{0\mathbf{q}}\), which defines a generic coupling between the charge and the vibrational modes, \(\mathbf{u}_{1\mathbf{q}}\) denotes a vibrationally induced spin-orbit coupling [27; 33]. Here, \(\sigma_{0}\) and \(\mathbf{\sigma}\)
denote the \(2\times 2\) identity and vector of Pauli matrices, respectively.
The processes associated with the terms \(u_{0\mathbf{q}}\psi^{\dagger}\psi(b_{\mathbf{q}}+b_{\mathbf{q}}^{\dagger})\) and \(\psi^{\dagger}\mathbf{u}_{1\mathbf{q}}\cdot\mathbf{\sigma}\psi(b_{\mathbf{q}}+b_{\mathbf{q}}^{\dagger})\) are illustrated in Fig. 1 (a) and (b), respectively. In processes of the former kind, the electrons emit or absorb phonons such that the total charge undergoes a transition to the emission or absorption state. By contrast, in processes of the latter kind, in which both the charge and the spin are coupled to the phonons, the emission and absorption processes are accompanied by electronic spin-flips and spin-dependent rates.
The magnetic moment \(\langle\mathbf{M}_{\text{mol}}\rangle\) is related to the lesser single electron Green function \(\mathbf{G}_{\text{mol}}^{<}\), which is given by the Dyson-like equation in Eq. (1). The self-energy \(\mathbf{\Sigma}=\sum_{\mathbf{q}}\mathbf{U}_{\mathbf{q}}\tilde{\mathbf{\Sigma}}_{ \mathbf{q}}\mathbf{U}_{\mathbf{q}}\) is in the second order approximation given by the electron-phonon exchange loop [33],
\[\tilde{\mathbf{\Sigma}}(z)= \frac{1}{\beta}\sum_{\nu}\mathbf{G}_{\text{mol}}(z-z_{\nu})D_{ \mathbf{q}}(z_{\nu}), \tag{10}\]
since the Hartree contribution vanishes in this approximation. Here, \(\beta=1/k_{B}T\) defines the thermal energy in terms of the Boltzmann constant \(k_{B}\) and temperature \(T\).
While the equation for the Green function should be solved self-consistently, for the present purposes it is sufficient to replace the propagators in the self-energy with their corresponding bare ones, \(\mathbf{g}_{\text{mol}}(z)=(z-\varepsilon)^{-1}\) and \(D_{\mathbf{q}}(z)=2\omega_{\mathbf{q}}/(z^{2}-\omega_{\mathbf{q}}^{2})\). We, then, write the self-energy as \(\tilde{\mathbf{\Sigma}}_{\mathbf{q}}=\tilde{\mathbf{\Sigma}}_{\mathbf{0}\mathbf{q}} \sigma_{0}+\tilde{\mathbf{\Sigma}}_{\mathbf{1}\mathbf{q}}\cdot\mathbf{\sigma}\) where
\[\tilde{\mathbf{\Sigma}}_{\mathbf{0}\mathbf{q}}(z)= \frac{1}{2}\sum_{s=\pm 1}\biggl{(}\frac{1-f(\varepsilon_{s})+n_{B} (\omega_{\mathbf{q}})}{z-\varepsilon_{s}-\omega_{\mathbf{q}}}+\frac{f( \varepsilon_{s})+n_{B}(\omega_{\mathbf{q}})}{z-\varepsilon_{s}+\omega_{ \mathbf{q}}}\biggr{)} \tag{11a}\] \[\tilde{\mathbf{\Sigma}}_{\mathbf{1}\mathbf{q}}(z)= \frac{\tilde{\mathbf{\varepsilon}}_{1}}{2}\sum_{s=\pm 1}s\biggl{(}\frac{1-f( \varepsilon_{s})+n_{B}(\omega_{\mathbf{q}})}{z-\varepsilon_{s}-\omega_{ \mathbf{q}}}+\frac{f(\varepsilon_{s})+n_{B}(\omega_{\mathbf{q}})}{z- \varepsilon_{s}+\omega_{\mathbf{q}}}\biggr{)} \tag{11b}\]
where \(\varepsilon_{1}=|\mathbf{\epsilon}_{1}|\), \(\varepsilon_{s}=\varepsilon_{0}+s\varepsilon_{1}\), and \(\tilde{\mathbf{\epsilon}}_{1}=\mathbf{\epsilon}_{1}/\varepsilon_{1}\), whereas \(n_{B}(\omega)\) denotes the Bose-Einstein distribution function.
In the following, our aim is to emphasize how the thermal reservoir influences the temperature dependence of the induced magnetic moment. Therefore, we investigate a molecule with an unpolarized level, that is, we set \(\varepsilon_{1}=0\).
In this limit, the unperturbed Green function simplifies to \(\mathbf{g}(z)=\sigma_{0}/(z-\varepsilon_{0})\) and the electron energy \(\varepsilon_{s}\to\varepsilon_{0}\) in the self-energy, as well as \(\tilde{\mathbf{\Sigma}}_{1}\to 0\). Nevertheless, because of the form of the electron-phonon coupling it can be seen that \(\Sigma_{0}=\sum_{\mathbf{q}}(u_{0\mathbf{q}}u_{0\mathbf{q}}+\mathbf{u}_{1 \mathbf{q}}\cdot\mathbf{u}_{1\mathbf{q}})\tilde{\mathbf{\Sigma}}_{\mathbf{0} \mathbf{q}}\) while \(\mathbf{\Sigma}_{1}=\sum_{\mathbf{q}}(u_{0\mathbf{q}}\mathbf{u}_{1\mathbf{q}}+ \mathbf{u}_{1\mathbf{q}}u_{0\mathbf{q}}+i\mathbf{u}_{1\mathbf{q}}\times \mathbf{u}_{1\mathbf{q}})\tilde{\mathbf{\Sigma}}_{\mathbf{0}\mathbf{q}}\). Then, for \(\mathbf{G}_{\text{mol}}^{r}=G_{0}^{r}\sigma_{0}+\mathbf{G}_{1}^{r}\cdot\mathbf{\sigma}\), we have
\[G_{0}^{r}(\omega)= \frac{\omega-\varepsilon_{0}-\Sigma_{0}^{r}}{(\omega-\varepsilon_{0}-\Sigma_{0}^{r})^{2}-\mathbf{\Sigma}_{1}^{r}\cdot\mathbf{\Sigma}_{1}^{r}}, \tag{12a}\] \[\mathbf{G}_{1}^{r}(\omega)= \frac{\mathbf{\Sigma}_{1}^{r}}{(\omega-\varepsilon_{0}-\Sigma_{0}^{r})^{2}-\mathbf{\Sigma}_{1}^{r}\cdot\mathbf{\Sigma}_{1}^{r}}. \tag{12b}\]
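To illustrate where the phonon side bands in the spectral function come from, a minimal numerical sketch of the bare bubble of Eq. (11a) in the unpolarized limit is given below; the level position, phonon energy, temperature, and broadening are arbitrary example values and are not those used in the figures.

```python
import numpy as np

# Sketch: retarded bare self-energy of Eq. (11a) for a single phonon mode in the
# unpolarized limit (eps_1 = 0, so both s-terms are equal), evaluated at
# z = omega + i*eta on a real-frequency grid. Parameter values are arbitrary
# illustrations, not those of the figures.

def fermi(e, kT, mu=0.0):
    return 0.5 * (1.0 - np.tanh((e - mu) / (2.0 * kT)))

def bose(w, kT):
    return 1.0 / np.expm1(w / kT)

def sigma0_tilde(omega, eps0=0.1, wq=2.0, kT=5.0, eta=0.05):
    z = omega + 1j * eta
    f, nB = fermi(eps0, kT), bose(wq, kT)
    return (1.0 - f + nB) / (z - eps0 - wq) + (f + nB) / (z - eps0 + wq)

w = np.linspace(-10.0, 10.0, 2001)
S = sigma0_tilde(w)
# -Im(Sigma) peaks at eps0 - wq and eps0 + wq: these phonon side bands are what
# split the bare level once the coupling u_0 is switched on, cf. Eq. (12).
print(w[np.argmax(-S.imag)])
```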
First, we notice in this limit that a configuration with \(u_{0\mathbf{q}}=0\) and \(\mathbf{u}_{1\mathbf{q}}\neq 0\) may lead to a modification of the electronic state. The requirement is that \(\mathbf{u}_{1\mathbf{q}}\times\mathbf{u}_{1\mathbf{q}}\neq 0\). The momentum dependence of the coupling rate \(\mathbf{u}_{1\mathbf{q}}\) is related to the phononic polarization vector \(\mathbf{\epsilon}_{\mathbf{q}}\) which, in turn, depends on the lattice symmetries. For instance, inversion symmetry implies that \(\mathbf{\epsilon}_{\mathbf{q}}^{*}=\mathbf{\epsilon}_{\bar{\mathbf{q}}}=\mathbf{\epsilon}_{\mathbf{q}}\), under which conditions the self-energy \(\mathbf{\Sigma}_{1}=0\) and, hence, also \(\mathbf{G}_{1}=0\). On this note, it is relevant to mention that chiral phonons, for which there is no inversion symmetry, would open the possibility of generating an electronic spin-polarization, something that was considered in Ref. [34].
From the expressions in Eq. (12), we calculate the local density of electron states \(\langle n_{\text{mol}}\rangle=(-i)\text{sp}\int\mathbf{G}_{\text{mol}}^{<}( \omega)d\omega/2\pi=(-i)\int G_{0}^{<}(\omega)d\omega/\pi\), which, for \(u_{0\mathbf{q}}=u_{0}=0.01\) and \(\mathbf{u}_{1\mathbf{q}}=0\), is plotted in Fig. 2 (a), (b), as a function of the energy for temperatures corresponding to thermal energies between 1 meV
Figure 1: Illustration of the electron-phonon processes involving (a) only charge and (b) both charge and spin. By emission or absorption of phonons, the total charge undergoes transitions to the states at the energies \(\varepsilon_{0}-\omega\) and \(\varepsilon_{0}+\omega\), respectively. (a) For a coupling solely between the charge and phonons, there is no spin related process. (b) For a coupling that involves both charge and spin, the transitions may be accompanied by spin-flip and spin-dependent rates.
and 20 meV. The unperturbed (bare) density of states has a single peak at the energy \(\varepsilon_{0}=0.1\) (red). When the electron-phonon interaction is turned on, this central peak splits into two peaks located symmetrically around \(\varepsilon_{0}\). This is expected considering the poles given in Eq. (3). The plots in Fig. 2 (a), (b), illustrate the thermal evolution of the density of states for two different phonon velocities, (a) \(c=0.01\) and (b) \(c=0.001\). The width of the spectrum is expected to increase inversely with the velocity, since more phonon modes contribute to the interactions with the electron the lower the velocity.
Despite the splitting of the density of electron states, the spin degeneracy remains preserved. This is clear since the electron-phonon coupling only contains the spin-conserving component. This trivially leads to \(\mathbf{\Sigma}_{1}=0\); hence, the spin-dependent component \(\mathbf{G}_{1}\) of the Green function also vanishes. For completeness, the spin-resolved densities of electron states are plotted in Fig. 2 (c), (d), illustrating the degeneracy of the spin projections.
The combination of charge- and spin-coupling interactions with the phonons, on the other hand, results in the emergence of two resonance peaks alongside the initial elastic peak in the density of states. This is illustrated in Fig. 3 for \(\mathbf{u}_{1\mathbf{q}}=u_{0}\mathbf{\hat{x}}\) and otherwise the same conditions as for the plots in Fig. 2. The side peaks shift to higher energies with increasing temperature, while the central peak acquires a lowered amplitude. Also here, the lower velocity tends to induce a stronger shift of the side peaks with increasing temperature, as expected from the previous case.
In order to draw any conclusions about the spin properties under these conditions, however, we investigate the spin-resolved densities of states captured in the matrix \(\mathbf{\rho}(\omega)=-\text{Im}\mathbf{G}_{\text{mol}}^{r}(\omega)/\pi\). The spin-resolved densities of states are plotted in Fig. 3 (c), (d). As expected, the spin-dependent coupling \(\mathbf{u}_{1\mathbf{q}}=u_{0}\mathbf{\hat{x}}\) breaks the degeneracy of the electronic structure. Quite unexpected at first glance, on the other hand, is that the spin projections are separated into two mutually exclusive branches. Here, however, this is not surprising since the self-energies \(\Sigma_{0}^{r}\) and \(\mathbf{\Sigma}_{1}^{r}\) are both proportional to \(\tilde{\Sigma}_{0}^{r}\), and \(\mathbf{u}_{1\mathbf{q}}=u_{0}\mathbf{\hat{x}}\), which implies that \(\mathbf{G}_{\text{mol}}^{r}\) can be partitioned as
\[\mathbf{G}_{\text{mol}}^{r}=\frac{1}{2}\frac{\sigma^{0}+\sigma^{x}}{\omega-\varepsilon_{0}+i\delta}+\frac{1}{2}\frac{\sigma^{0}-\sigma^{x}}{\omega-\varepsilon_{0}-4u_{0}^{2}\tilde{\Sigma}_{0}^{r}}, \tag{13}\]
where \(\delta>0\) is infinitesimal.
This partitioning makes it clear that one central resonance is located at the elastic energy \(\omega=\varepsilon_{0}\), whereas the other resonances are found from the condition \(\omega-\varepsilon_{0}-4u_{0}^{2}\tilde{\Sigma}_{0}^{r}=0\), an equation which in the current approximation has two solutions. In Fig. 3 (c), (d), the resonances corresponding to the first and second contributions are signified by \(\pm\sigma^{x}\).
The associated molecular magnetic moment \(\langle\mathbf{M}_{\text{mol}}\rangle=\mathcal{M}\mathbf{\hat{x}}\) resulting from these conditions is given by
\[\mathcal{M}= \frac{1}{2}f(\varepsilon_{0})+\text{Im}\int\frac{f(\omega)}{\omega-\varepsilon_{0}-4u_{0}^{2}\tilde{\Sigma}_{0}^{r}}\frac{d\omega}{4\pi}. \tag{14}\]
While this moment is, in general, non-vanishing, it undergoes a sign change at a finite temperature \(T_{\text{xo}}\). This is understood from the opposite signs of the two contributions constituting \(\mathcal{M}\) in Eq. (14); recall that \(\text{Im}(\omega-\varepsilon_{0}-4u_{0}^{2}\tilde{\Sigma}_{0}^{r})^{-1}<0\). Here, the first contribution, which is positive, dominates the magnetic moment at low temperature, see Fig. 4. Put simply, in the figure it can be seen that whereas the central resonance at
Figure 3: (a), (b) Local DOS and (c), (d) local spin-DOS as a function of the energy \(\omega\), for the set-up \(\varepsilon_{0}=0.1\), \(u_{0\mathbf{q}}=0.01\), \(\mathbf{u}_{1\mathbf{q}}=0.01\mathbf{\hat{x}}\), and phonon velocity (a), (c), \(c=0.01\) and (b), (d), \(c=0.001\), for temperatures corresponding to the energies \(1/\beta\in\{1,\ 5,\ 10,\ 15,\ 20\}\) [units: meV]. The unperturbed (bare) DOS is shown for reference (red).
Figure 4: Influence of the temperature on the magnetic moment, both with respect to the shifts of the inelastic resonances and the thermal occupation factor (Fermi-Dirac distribution function). At low temperatures (blue), the inelastic resonances are strongly asymmetrically occupied, while the occupation becomes symmetrized with increasing temperature (red). The very-low-temperature limit is shown for reference (black). The spin-resolved densities are given for the conditions in Fig. 3 (d). The inset illustrates the phonon dispersions for three different velocities, (cyan) low, (purple) moderate, and (green) high, and the expected thermally occupied states (gray area) for each dispersion relation.
\(\omega=\varepsilon_{0}\) is nearly fully occupied, the side resonances are only partially occupied. Since the former and latter resonances add positively and negatively, respectively, to the total moment, the moment is positive at sufficiently low temperature. This property is corroborated by our computations of the magnetic moment, see Fig. 5, which displays \(\mathcal{M}\) as a function of the temperature for different phonon velocities \(c\).
With increasing temperature, the occupations of all resonances increase; however, while the occupation of the central resonance is only marginally increased, the side resonances approach full occupation such that the two branches cancel each other out. Nevertheless, since the overall spin-density has a slight overweight at the side resonances, this contribution eventually becomes larger than that of the central resonance, such that the total moment changes sign at a cross-over temperature \(T_{\text{xo}}\) and becomes negative. The sign change and negative moment are clearly illustrated in Fig. 5, which also shows that the amplitude, both positive and negative, of the moment increases with decreasing velocity \(c\).
The latter observation is understood in terms of the thermally accessible energies for a given temperature, see the inset of Fig. 4, illustrating the phonon dispersion relations for three velocities, (cyan) low, (purple) moderate, and (green) high, and the expected thermally occupied states (gray area) for each dispersion relation. For phonons with a low velocity, a lower temperature is required to thermally access the energies of a larger portion of the phononic \(\mathbf{q}\)-vectors in reciprocal space, compared with phonons with a higher velocity. Therefore, it is not surprising that slow phonons contribute more to the limiting magnetic moments than fast phonons.
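A minimal numerical sketch of how \(\mathcal{M}(T)\) and the cross-over can be evaluated from Eq. (14) is given below; the phonon dispersion, coupling strength, chemical potential, grids, and numerical broadening are all assumed illustrative choices, so the numbers it prints are not those of Fig. 5.

```python
import numpy as np

# Sketch: numerical evaluation of Eq. (14),
#   M(T) = f(eps0)/2 + Im int f(w) / (w - eps0 - 4*u0^2*Sigma0_tilde^r(w)) dw / (4*pi),
# with Sigma0_tilde^r built from Eq. (11a) for phonon modes with a linear
# dispersion w_q = c*|q|. Mode couplings are taken q-independent and the mode
# sum is normalized; grids, broadenings, and all parameter values (including
# the chemical potential mu) are assumptions for illustration only.

eps0, u0, eta, mu = 0.1, 0.01, 1e-3, 0.0
q = np.linspace(0.05, np.pi, 60)                      # crude 1D phonon grid

def fermi(e, kT):
    return 0.5 * (1.0 - np.tanh((e - mu) / (2.0 * kT)))

def bose(w, kT):
    return 1.0 / np.expm1(w / kT)

def sigma0_tilde(w, c, kT):
    z = w[:, None] + 1j * eta
    wq = c * q[None, :]
    f, nB = fermi(eps0, kT), bose(wq, kT)
    terms = (1.0 - f + nB) / (z - eps0 - wq) + (f + nB) / (z - eps0 + wq)
    return terms.mean(axis=1)

def moment(c, kT):
    w = np.linspace(-0.5, 0.7, 6001)
    eta_num = 5e-3                                     # numerical broadening (assumption)
    G = 1.0 / (w - eps0 - 4.0 * u0**2 * sigma0_tilde(w, c, kT) + 1j * eta_num)
    return 0.5 * fermi(eps0, kT) + np.trapz(fermi(w, kT) * G.imag, w) / (4.0 * np.pi)

for kT in (0.001, 0.005, 0.02, 0.05):
    print(f"kT = {kT:.3f}  ->  M = {moment(c=0.001, kT=kT):+.4f}")
# Scanning kT and locating the zero crossing of M (when the chosen parameters
# produce one) gives the cross-over temperature T_xo discussed in the text.
```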
Finally, we consider the configuration with \(\mathbf{u}_{1\mathbf{q}}=u_{0}(1,0,1)\). For these conditions, the molecular Green function can be written
\[\mathbf{G}^{r}_{\text{mol}}= \frac{1}{4}\sum_{s=\pm 1}\frac{2\sigma^{0}+s\sqrt{2}(\sigma^{x}+\sigma^{z})}{\omega-\varepsilon_{0}-(3+s2\sqrt{2})u_{0}^{2}\tilde{\Sigma}^{r}_{0}}. \tag{15}\]
In this set-up, there is no clear separation of the central and side resonances; instead, the two branches mix. In Fig. 6, we display plots of the spin-resolved density of electron states for the same conditions as in Fig. 3, however, with a finite \(z\)-component of \(\mathbf{u}_{1\mathbf{q}}\). First, one may notice that the resonances are mixtures of both spin projections. Second, it is clear that one spin branch is more heavily weighted on the side resonances, Fig. 6 (a), (c), whereas the other branch has an overweight on the central resonance, Fig. 6 (b), (d), albeit the central resonance cannot be clearly resolved.
In fact, the central resonance cannot be identified as a single resonance under the given conditions, since the electronic density comprises four distinct peaks. The four resonances can be found as the solutions to the real parts of the two equations \(\omega-\varepsilon_{0}-(3\pm 2\sqrt{2})u_{0}^{2}\tilde{\Sigma}^{r}_{0}=0\), of which the \(+\) (\(-\)) equation provides the resonances that are more heavily weighted on the side (central) resonances. In this sense, each equation corresponds to one of the two spin branches and, despite the mixing between these, one can identify a slight discrimination between them.
In this configuration, the induced molecular magnetic moment can be written \(\langle\mathbf{M}_{\text{mol}}\rangle=\mathcal{M}(\mathbf{\hat{x}}+\mathbf{ \hat{z}})\), where the factor \(\mathcal{M}\) is provided by the integral
\[\mathcal{M}=\frac{\sqrt{2}}{4}\sum_{s=\pm 1}\int\frac{sf(\omega)}{\omega-\varepsilon_{0}-(3+s2\sqrt{2})u_{0}^{2}\tilde{\Sigma}^{r}_{0}}\frac{d\omega}{2\pi}. \tag{16}\]
Again, we can identify a cross-over temperature \(T_{\text{xo}}\) at which the total moment changes sign from positive to negative, as can be seen in Fig. 7, in which the factor \(\mathcal{M}\) is plotted as a function of the temperature for different phonon velocities. The mechanism for this sign change is the same as in the previous configuration. Whereas one spin-projection becomes more or less fully occupied already at low tempera
Figure 5: Induced magnetic moment as a function of the thermal energy \(1/\beta\), for the set-up \(\varepsilon_{0}=0.1\), \(u_{0\mathbf{q}}=0.01\), \(\mathbf{u_{1\mathbf{q}}}=0.01\hat{\mathbf{x}}\), and phonon velocity (blue) \(c=0.001\), (red) \(c=0.005\), (black) \(c=0.01\), and (green) \(c=0.05\) [units: meV].
Figure 6: Local spin-DOS with (a), (b), spin \(\uparrow\)-projections and (c), (d), spin \(\downarrow\)-projection, as a function of the energy \(\omega\), for the set-up \(\varepsilon_{0}=0.1\), \(u_{0\mathbf{q}}=0.01\), \(\mathbf{u_{1\mathbf{q}}}=0.01(\mathbf{\hat{x}}+\mathbf{\hat{z}})\), and phonon velocity (a), (c) \(c=0.01\) and (b), (d) \(c=0.001\), for temperatures corresponding to the energies \(1/\beta\in\{1,\,5,\,10,\,30\}\) [units: meV].
ture and the other is only partially occupied, the latter tends to become increasingly occupied with increasing temperature and eventually dominates the overall magnetic moment. It can also be observed that the total magnetic moment increases when the \(z\)-component is added to the already existing \(x\)-component of the interaction parameter \(\mathbf{u_{1q}}\). This observation is, however, trivial owing to the increased number of scattering channels that are opened.
A more important, and also more interesting, observation is that the temperature of the sign change of the magnetic moment appears to be universal and independent of the phonon velocity, see Figs. 5, 7. This property is not surprising when considering that the sign change is a result of the competition between the two contributions, cf. Eqs. (14), (16). The two contributions have equal temperature dependencies irrespective of the phonon velocity, which means that the specific phonon distribution does not impact the temperature at which the two contributions cancel. Should the two contributions, on the other hand, have unequal dependencies on the phonon distribution, then the cross-over temperature may vary with, e.g., the phonon velocity. Currently, we are not aware of which type of electron-phonon interaction would cause such inhomogeneous temperature dependencies; however, it is possible that structures in which the phonon modes are strongly anisotropic would open up for such properties.
In summary, we have theoretically investigated the influence of combined interactions on a localized level and demonstrated that an electronic state may become spin-polarized when coupled to a reservoir. We show that a system which is non-magnetic whenever isolated from a surrounding environment may spin-polarize when a connection to such an environment is made. The system may spin-polarize if there are at least two types of interactions, of which at least one has an intrinsic spin-dependence associated with it. Formally, interactions that can be expressed as \(V_{0}\sigma^{0}\) and \(\mathbf{V}_{1}\cdot\mathbf{\sigma}\) change the electronic spectrum by \((V_{0}\sigma^{0}+\mathbf{V}_{1}\cdot\mathbf{\sigma})^{2}=(V_{0}^{2}+|\mathbf{V}_{1}|^{2})\sigma^{0}+2V_{0}\mathbf{V}_{1}\cdot\mathbf{\sigma}\). Hence, the electronic spectrum becomes spin-dependent if and only if both \(V_{0}\) and \(\mathbf{V}_{1}\) are non-zero. Under those conditions, there is a potential for the system to acquire a non-vanishing magnetic moment.
As a corollary, we develop a theory for temperature-dependent magnetization in a molecule. We show that spin-dependent inelastic scattering, e.g., off phonons, which may arise due to spin-orbit coupling [33], leads to breaking of the time-reversal symmetry. For this, we employ an unconventional treatment of electron scattering off phonons by taking into account both the charge-phonon and spin-phonon couplings. While neither of these couplings individually breaks the electronic spin degeneracy, our findings show that the combination of the two leads to a splitting of the spin channels. The effect we consider, which results in energy-non-conserving collisions, originates from the interplay between spin-orbit coupling and vibrational modes. We, furthermore, demonstrate that the inelastic scattering does induce a non-zero magnetic moment in the initially unpolarized molecule, a moment whose magnitude increases with temperature but which changes sign at a cross-over temperature. The sign change of the magnetic moment can be explained in terms of competing influences from the relevant interactions.
Although we are currently aware of experimental results which comply with our theoretical discussion [8; 18; 19; 22; 28; 29], it would be intriguing to consider the effects under more extreme conditions, for instance, measurements of magnetically asymmetric thermopower or the use of magnetic force microscopy to measure asymmetric forces of chiral molecules attached to a surface.
|
2306.06753
|
3rd Place Solution for PVUW Challenge 2023: Video Panoptic Segmentation
|
In order to deal with the task of video panoptic segmentation in the wild, we
propose a robust integrated video panoptic segmentation solution. In our
solution, we regard the video panoptic segmentation task as a segmentation
target querying task, represent both semantic and instance targets as a set of
queries, and then combine these queries with video features extracted by neural
networks to predict segmentation masks. In order to improve the learning
accuracy and convergence speed of the solution, we add additional tasks of
video semantic segmentation and video instance segmentation for joint training.
In addition, we also add an additional image semantic segmentation model to
further improve the performance of semantic classes. In addition, we also add
some additional operations to improve the robustness of the model. Extensive
experiments on the VIPSeg dataset show that the proposed solution achieves
state-of-the-art performance with 50.04\% VPQ on the VIPSeg test set, which is
3rd place on the video panoptic segmentation track of the PVUW Challenge 2023.
|
Jinming Su, Wangwang Yang, Junfeng Luo, Xiaolin Wei
|
2023-06-11T19:44:40Z
|
http://arxiv.org/abs/2306.06753v1
|
# 3rd Place Solution for PVUW Challenge 2023: Video Panoptic Segmentation
###### Abstract
In order to deal with the task of video panoptic segmentation in the wild, we propose a robust integrated video panoptic segmentation solution. In our solution, we regard the video panoptic segmentation task as a segmentation-target querying task, represent both semantic and instance targets as a set of queries, and then combine these queries with video features extracted by neural networks to predict segmentation masks. In order to improve the learning accuracy and convergence speed of the solution, we add additional tasks of video semantic segmentation and video instance segmentation for joint training. In addition, we add an extra image semantic segmentation model to further improve the performance on semantic classes, as well as some additional operations to improve the robustness of the model. Extensive experiments on the VIPSeg dataset show that the proposed solution achieves state-of-the-art performance with 50.04% VPQ on the VIPSeg test set, which is 3rd place on the video panoptic segmentation track of the PVUW Challenge 2023.
## 1 Introduction
Video panoptic segmentation (VPS) [3] aims at simultaneously predicting object classes, bounding boxes, masks, instance id associations, and semantic segmentation while assigning unique answers to each pixel in a video.
In recent years, many VPS datasets have emerged, including Cityscapes-VPS [3], KITTI-STEP [10], and VIPSeg [7]. For example, Cityscapes-VPS has 400 training videos and 100 validation videos. Each video consists of 30 consecutive frames, with every 5th frame paired with ground-truth annotations. For each video, all 30 frames are predicted, and only the 6 frames with ground truth are evaluated. KITTI-STEP consists of 21 training sequences and 29 test sequences. In addition, VIPSeg provides 3,536 videos and 84,750 frames with pixel-level panoptic annotations, covering a wide range of real-world scenarios and categories, and is the first attempt to tackle the challenging video panoptic segmentation task in the wild by considering diverse scenarios. In this paper, we focus on the VIPSeg dataset, which covers a wide range of real-world scenarios and categories. Some visual examples are shown in Fig. 1.
However, there still exist several challenges that hinder the development of VPS. First of all, real application scenarios of VPS contain many similar objects, which makes accurate cross-frame tracking confusing and leads to different objects being wrongly matched as the same one. Secondly, stuff classes often occupy a large area, but it is difficult to maintain consistency over such a large area, resulting in a lot of noise. In addition, many scenarios are very different from one another, containing different objects and behaviors, which means that many scenes are not included in the training dataset, thus posing great challenges to the generalization of the algorithm.
Figure 1: Visual examples of the VIPSeg dataset [7].
These are prominent problems, and there are many other difficulties to be solved in the task of VPS, which together make VPS still a challenging task.
To deal with the task of VPS, lots of learning-based methods have been proposed in recent years, achieving impressive performance. For example, Video K-Net [6] is built upon K-Net, a method that unifies image segmentation via a group of learnable kernels. In the method, learnable kernels from K-Net encode object appearances and contexts, and can naturally associate identical instances across video frames. Thus, Video K-Net learns to simultaneously segment and track "things" and "stuff" in a video with simple kernel-based appearance modeling and cross-temporal kernel interaction. Tarvis [1] proposes a novel, unified network architecture that can be applied to any task that requires segmenting a set of arbitrarily defined 'targets' in the video. This approach is flexible with respect to how tasks define these targets, since it models the latter as abstract 'queries' which are then used to predict pixel-precise target masks. A single TarViS model can be trained jointly on a collection of datasets spanning different tasks and can hot-swap between tasks during inference without any task-specific retraining. Video-kMaX [9] proposes a unified approach for online and near-online VPS. The meta architecture of Video-kMaX consists of two components: a within-clip segmenter (for clip-level segmentation) and a cross-clip associater (for association beyond clips). In addition, clip-kMaX (clip k-means mask transformer) and HiLA-MB (Hierarchical Location-Aware Memory Buffer) are used to instantiate the segmenter and the associater, respectively. Tube-link [5], a versatile framework that addresses multiple core tasks of video segmentation with a unified architecture, is a near-online approach that takes a short subclip as input and outputs the corresponding spatial-temporal tube masks. To enhance the modeling of cross-tube relationships, Tube-link introduces a way to perform tube-level linking via attention along the queries, and introduces temporal contrastive learning to learn instance-wise discriminative features for tube-level association.
Inspired by these existing methods, we propose a robust integrated video panoptic segmentation solution. For the first challenge of VPS, we first introduce Tarvis to represent both semantic and instance targets as a set of queries, and then combine these queries with video features extracted by neural networks to predict segmentation masks, which ensures that the instance target (thing classes) is tracked accurately. In addition, we add additional tasks of video semantic segmentation and video instance segmentation for joint training to improve the learning accuracy and convergence speed of the solution. For the second challenge, we add an additional image semantic segmentation model ViT-Adapter [2] trained on VSPW [8] to further improve the performance of semantic classes (stuff classes). For the third challenge, we also add some additional operations to improve the robustness of the model, including exponential moving average (EMA), model ensemble, and so on. Extensive experiments on the VIPSeg dataset show that the proposed solution achieves state-of-the-art performance with 50.04% VPQ on the VIPSeg test set, which is 3rd place on the video panoptic segmentation track of the PVUW Challenge 2023.
The main contributions of this paper include: 1) we enhance naive Tarvis with additional tasks of video semantic segmentation and video instance segmentation for joint training on the same data set VIPSeg, to improve the accuracy of thing classes. 2) we introduce ViT-Adapter trained on VSPW to further improve the performance of stuff classes. 3) we add extra operations to improve the robustness on the test set. 4) The proposed solution achieves state-of-the-art performance with 50.04% VPQ on the VIPSeg test set, which is the 3rd place on the video panoptic segmentation track of the PVUW Challenge 2023.
Figure 2: Architecture of Tarvis [1].
## 2 Our solution
To solve the problems of VPS, we propose a robust integrated video panoptic segmentation solution. In this solution, we first introduce Tarvis as the baseline of video panoptic segmentation, with additional tasks of video semantic segmentation (VSS) and video instance segmentation (VIS) for joint training to improve the learning accuracy and convergence speed of the solution. Next, we use ViT-Adapter [2] with a multi-scale feature extractor to further improve the performance of stuff classes. In addition, we also conduct some extra operations to improve the robustness. Details of the proposed solution are described as follows.
### Video Instance Segmentation
Because of the similarity and movement of instance targets, it is easy to fail to track them. To improve the performance, we introduce Tarvis as the baseline, as shown in Fig. 2. In Tarvis, segmentation targets for different tasks are represented by a set of abstract target queries. The core network (in green) is agnostic to the task definitions. The inner product between the output queries and the video features yields segmentation masks as required by the task. In this way, a large amount of video data from different tasks can be used for joint training to ensure better performance on the video instance tracking and segmentation task.
To improve the performance and convergence speed of Tarvis, we add additional losses of video semantic segmentation and video instance segmentation for joint training. Specifically, we convert the annotations of video panoptic segmentation into video semantic segmentation and video instance segmentation annotations, respectively. In detail, video semantic annotation converts all thing targets into their semantic categories to obtain the labels for video semantic segmentation, while video instance annotation keeps only thing targets (removing all stuff targets) to obtain the labels for video instance segmentation. Then, on the same dataset VIPSeg, joint training of video panoptic segmentation, video semantic segmentation, and video instance segmentation is carried out, so that the model can be fully trained on this dataset.
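To make the conversion concrete, a minimal sketch is given below; the per-pixel label format, the ignore value, and the thing-category id range are illustrative assumptions rather than the VIPSeg specification.

```python
import numpy as np

IGNORE = 255                        # assumed ignore/void label
THING_IDS = set(range(58, 124))     # assumed ids of "thing" categories (illustrative)

def to_semantic_label(category_map: np.ndarray) -> np.ndarray:
    """Video semantic segmentation label: keep the category id of every pixel."""
    return category_map.copy()

def to_instance_label(category_map: np.ndarray, instance_map: np.ndarray):
    """Video instance segmentation label: keep thing pixels only, drop stuff pixels."""
    sem = category_map.copy()
    inst = instance_map.copy()
    stuff = ~np.isin(category_map, list(THING_IDS))
    sem[stuff] = IGNORE
    inst[stuff] = 0                 # 0 = "no instance"
    return sem, inst
```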
For the final result, Tarvis is trained in four stages. The first and second stages are the same as in the paper [1]: they are trained on multiple image datasets and multiple video task datasets, respectively. In the third stage, Tarvis is jointly trained on three tasks (_i.e_., VPS, VSS, VIS) on VIPSeg. In the fourth stage, Tarvis is further trained on VPS only on VIPSeg.
### Video Semantic Segmentation
In order to ensure the consistency of stuff targets, we introduce ViT-Adapter [2] trained on the image dataset VSPW for semantic segmentation (SS). Note that VSPW and VIPSeg have the same data source and categories, and VSPW has a higher annotation frame rate, which makes it more suitable for training semantic segmentation. The architecture of ViT-Adapter is shown in Fig. 3.
### Extra Operations
**Exponential moving average.** We use the exponential moving average to make the model more robust on the test data.
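The paper does not detail the EMA update itself; the following is a minimal PyTorch-style sketch of parameter averaging, with the decay value and the per-step update placement taken as assumptions.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(ema_model: nn.Module, model: nn.Module, decay: float = 0.999) -> None:
    """ema <- decay * ema + (1 - decay) * current, applied parameter-wise."""
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

# Typical usage: keep a frozen copy of the training model, update it after every
# optimizer step, and run test-set inference with the averaged copy.
model = nn.Linear(8, 4)             # stand-in for the segmentation network
ema_model = copy.deepcopy(model)
ema_update(ema_model, model)
```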
**Model Ensemble.** We integrate the logits of the stuff classes of VPS (from Tarvis), VSS (from Tarvis), and SS (from ViT-Adapter), and take the average followed by softmax as the final semantic segmentation results.
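A minimal sketch of this logit averaging is given below; the tensor shapes and the assumption that the three predictions are already aligned to the same resolution are illustrative.

```python
import torch
import torch.nn.functional as F

def ensemble_stuff_probs(vps_logits, vss_logits, ss_logits):
    """Average the per-pixel stuff-class logits of the three sources, then softmax.

    Each tensor is assumed to have shape (num_stuff_classes, H, W) for one frame
    and to be aligned to the same resolution.
    """
    avg_logits = (vps_logits + vss_logits + ss_logits) / 3.0
    return F.softmax(avg_logits, dim=0)

# toy usage with random logits for 16 (illustrative) stuff classes on a 4x4 frame
probs = ensemble_stuff_probs(torch.randn(16, 4, 4),
                             torch.randn(16, 4, 4),
                             torch.randn(16, 4, 4))
```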
**Others.** We also try to use Segment Anything (SAM) [4] to get the segmentation masks of some categories, but the effect is not ideal, so it is not used in the final result.
Figure 3: Architecture of ViT-Adapter [2].
### Implementation Details
Tarvis with the backbone of Swin-L is trained in four stages. The first and second stages are the same as in the paper [1]: they are trained on multiple image datasets and multiple video task datasets, respectively. In the third stage, Tarvis is jointly trained on three tasks (_i.e_., VPS, VSS, VIS, with sampling weights of 1:1:1) on VIPSeg for 90k iterations. In the fourth stage, Tarvis is further trained on VPS only on VIPSeg for 10k iterations.
ViT-Adapter with the backbone of ViT-Adapter-L and Mask2Former head, is trained on VSPW for 40k iterations.
All experiments were carried out on 8 Nvidia A100 GPUs with 80G memory.
## 3 Experiments and Results
### Experimental Setup
**Datasets.** VIPSeg provides 3,536 videos and 84,750 frames with pixel-level panoptic annotations, covering a wide range of real-world scenarios and categories, which is the first attempt to tackle the challenging video panoptic segmentation task in the wild by considering diverse scenarios. The train set, validation set, and test set of VIPSeg contain 2,806/343/387 videos with 66,767/8,255/9,728 frames, respectively. In addition, all the frames in VIPSeg are resized into 720P (the size of the short side is resized to 720) for training and testing.
**Evaluation Metrics.** Video Panoptic Quality (VPQ) and Segmentation and Tracking Quality (STQ) are used as evaluation metrics for video panoptic segmentation, as in [7].
\begin{table}
\begin{tabular}{c|c|c c c c c c} \hline Rank & Name & VPQ & VPQ1 & VPQ2 & VPQ4 & VPQ6 & STQ \\ \hline
1 & zhangtao-whu & 53.7380 (1) & 54.7484 (1) & 54.0604 (1) & 53.2963 (1) & 52.8467 (1) & 0.5095 (6) \\
2 & yknykn & 52.8930 (2) & 54.3543 (2) & 53.1442 (2) & 52.2927 (2) & 51.7809 (2) & 0.5173 (3) \\
3 & yyyds & 50.0394 (3) & 51.6104 (5) & 50.5923 (3) & 49.4210 (3) & 48.5340 (3) & 0.5171 (4) \\
4 & SUtech & 49.8604 (4) & 51.6154 (4) & 50.5523 (4) & 49.1890 (4) & 48.0851 (4) & 0.5214 (1) \\
5 & korpusose & 48.5721 (5) & 52.7642 (3) & 49.7589 (5) & 46.9454 (5) & 44.8198 (6) & 0.4806 (8) \\ \hline \end{tabular}
\end{table}
Table 1: Ranking results (Top 5) on the VIPSeg test set of the PVUW Challenge 2023 VPS track. We mark our results in blue.
Figure 4: Qualitative results of the proposed solution on the VIPSeg test set.
### Results
The proposed solution obtains 3rd place on the video panoptic segmentation track of the PVUW Challenge 2023, as listed in Table 1. In addition, we also show some of our qualitative results in Fig. 4. It can be seen that the proposed solution can accurately segment stuff and thing targets in difficult scenarios with severe changes in object appearance and confusion among multiple similar objects and small objects.
## 4 Conclusion
In this paper, we propose a robust solution for the task of video panoptic segmentation and make nontrivial improvements and attempts in many stages such as model, training, and ensemble. In the end, we achieve the 3rd place on the video panoptic segmentation track of the PVUW Challenge 2023 with 50.04% VPQ.
|
2304.10748
|
Optimized control for high-fidelity state transmission in open systems
|
Quantum state transfer (QST) through spin chains has been extensively
investigated. Two schemes, the coupling set for perfect state transfer (PST) or
adding a leakage elimination operator (LEO) Hamiltonian have been proposed to
boost the transmission fidelity. However, these ideal schemes are only suitable
for closed systems and will lose their effectiveness in open ones. In this
work, we invoke a well explored optimization algorithm, Adam, to expand the
applicable range of PST couplings and LEO to the open systems. Our results show
that although the transmission fidelity decreases with increasing system-bath
coupling strength, Markovianity and temperature for both ideal and optimized
cases, the fidelities obtained by the optimized schemes always outweigh the
ideal cases. The enhancement becomes more pronounced for a stronger bath,
indicating that a stronger bath provides more space for Adam to optimize. This
method will be useful for the realization of high-fidelity information transfer
in the presence of environment.
|
Yang-Yang Xie, Feng-Hua Ren, Arapat Ablimit, Xiang-Han Liang, Zhao-Ming Wang
|
2023-04-21T05:34:24Z
|
http://arxiv.org/abs/2304.10748v1
|
# Optimized control for high-fidelity state transmission in open systems
###### Abstract
Quantum state transfer (QST) through spin chains has been extensively investigated. Two schemes, the coupling set for perfect state transfer (PST) or adding a leakage elimination operator (LEO) Hamiltonian have been proposed to boost the transmission fidelity. However, these ideal schemes are only suitable for closed systems and will lose their effectiveness in open ones. In this work, we invoke a well explored optimization algorithm, Adam, to expand the applicable range of PST couplings and LEO to the open systems. Our results show that although the transmission fidelity decreases with increasing system-bath coupling strength, Markovianity and temperature for both ideal and optimized cases, the fidelities obtained by the optimized schemes always outweigh the ideal cases. The enhancement becomes more pronounced for a stronger bath, indicating that a stronger bath provides more space for Adam to optimize. This method will be useful for the realization of high-fidelity information transfer in the presence of environment.
## I Introduction
High-fidelity information transfer between qubits lays a firm foundation for the realization of large-scale fault-tolerant quantum computers [1]. Spin qubits interact through nearest-neighbor Heisenberg exchange coupling and constitute a one-dimensional spin chain. Bose has proposed to use a spin chain as the channel for short-distance communication [2]. Nonetheless, the transmission fidelity decreases with increasing number of spins [2; 3]. Lots of strategies for the fidelity improvement have been proposed, such as arranging the special couplings between nearest-neighbor sites for PST [4; 5], adding well-designed external fields [6; 7; 8; 9]. QST has also been experimentally investigated in varieties of platforms, including superconducting qubit chains [10], trapped ions [11], ultracold atoms [12], semiconductor quantum dots [13; 14], etc.
Along with the above alluded ones, the proposed schemes are mainly based on ideally closed systems. When considering the environments [15; 16], normally the information processing which can be performed well in closed systems will be destroyed by the system-bath interaction. The detrimental effects of a Markovian [3] or non-Markovian [17; 18; 16] bath on the QST through spin chains have been investigated recently. The transmission fidelity is found to decrease with the increasing system-environment coupling strength, environmental characteristic frequency and temperature [18; 16]. A lot of schemes have been proposed to reduce these adverse effects, like modulating the couplings between the spins [3; 4; 5] or invoking an LEO [18; 16]. In our recent work, we investigate the almost exact state transmission in a spin chain by adding an LEO Hamiltonian [16; 9]. The LEO Hamiltonian can be realized by a sequence of control pulses. The pulse conditions have been obtained in a closed system for almost exact QST [16]. When applying these conditions to an open system, the fidelity decreases due to the existence of the environments [19; 16].
Gradient descent is the most basic optimization algorithm [20], moving relevant parameters towards the direction minimizing a predefined cost, or loss, function but without guaranteeing a fast and stable convergence. The Momentum algorithm makes progress on this problem by updating parameters according to the gradients of current and previous iterations [21]. Besides, one of the algorithms with adaptive learning rates, RMSprop [22], can modulate the learning rate on the basis of different parameters and training phases. The Adaptive Moment Estimation (Adam) algorithm builds on and hence inherits the above two ones, realizing more efficient convergence behaviors, and has become the most popular optimizer even in the noisy intermediate-scale quantum (NISQ) device era [23; 24; 25]. Recently, we used the stochastic gradient descent or Adam algorithm to find the optimized pulses for adiabatic speedup [26] or non-adiabatic QST [27] in a noisy environment. The control pulses are designed via optimization algorithms by considering both the system and environment. As stated above, the ideal pulses are not effective for open systems. In this paper, we use the Adam algorithm to design the optimized couplings or pulses for high-fidelity QST through a spin chain in a non-Markovian environment. By defining an effective loss function which is relevant to the system and environmental parameters, the real unknown parameters can be revealed and the optimized solution is obtained along the gradient descent direction. We adopt a newly developed non-Markovian quantum master equation approach to solve the corresponding dynamics of the system [28]. For the optimized couplings, we find that the achievable maximum fidelity can be enhanced and the corresponding arrival time can be shortened as well.
For the optimized control pulses, our results show that they can achieve better QST qualities than the ideal closed-system pulses do. In both scenarios, the effects of the system-bath coupling strength \(\Gamma\), environmental non-Markovianity \(\gamma\) and temperature \(T\) on the fidelity are analyzed. The fidelity decreases with an increase in any of the above parameters, as expected, but the fidelity can be improved by our optimized schemes, especially in a strong environment.
## II Model and Method
### The model and the Hamiltonian
When a quantum system is exposed to its environment, the total Hamiltonian \(H_{tot}\) consists of three parts
\[H_{tot}=H_{s}+H_{b}+H_{int}. \tag{1}\]
Here \(H_{s}\) and \(H_{b}=\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}\) are the system and bath Hamiltonian, respectively. \(H_{int}=\sum\limits_{k}(g_{k}^{*}L^{\dagger}b_{k}+g_{k}Lb_{k}^{\dagger})\) accounts for the interaction between them. \(\omega_{k}\) indicates the \(k\)th mode frequency of bath and \(b_{k}^{\dagger}\) (\(b_{k}\)) represents the bosonic creation (annihilation) operator. The system is linearly coupled to a bosonic bath through the Lindblad operator \(L\) with coupling constant \(g_{k}\).
According to the QSD approach [28; 29; 30; 31], the dynamical evolution of an open system in a non-Markovian finite-temperature heat bath is governed by
\[\frac{\partial}{\partial t}\rho_{s} = -i[H_{s},\rho_{s}]+[L,\rho_{s}\overline{O}_{z}^{\dagger}(t)]-[L^{ \dagger},\overline{O}_{z}(t)\rho_{s}] \tag{2}\] \[+[L^{\dagger},\rho_{s}\overline{O}_{w}^{\dagger}(t)]-[L, \overline{O}_{w}(t)\rho_{s}].\]
The operators \(O_{z(w)}\) are defined by an ansatz, and enter this evolution equation through the memory kernels \(\overline{O}_{z(w)}(t)=\int_{0}^{t}ds\alpha_{z(w)}(t-s)O_{z(w)}(t,s)\). To simplify, we adopt the weak system-bath coupling and low frequency (or high temperature) approximations. Moreover, the chosen spectral density, Ohmic type with a Lorentz-Drude cutoff function [32; 33; 34], reads \(J(\omega)=\frac{\Gamma}{\pi}\frac{\omega}{1+(\frac{\omega}{\gamma})^{2}}\). Subsequently, the two bath correlation functions \(\alpha_{z(w)}(t-s)\) in \(\overline{O}_{z(w)}(t)\) satisfy the following condition
\[\frac{\partial\alpha_{z(w)}(t-s)}{\partial t}=-\gamma\alpha_{z(w)}(t-s). \tag{3}\]
Then the operators \(\overline{O}_{z,(w)}(t)\) obey the closed equations [28]
\[\frac{\partial\overline{O}_{z}}{\partial t}=(\frac{\Gamma T\gamma}{2}-\frac{ i\Gamma\gamma^{2}}{2})L-\gamma\overline{O}_{z}+[-iH_{s}-(L^{\dagger}\overline{O}_{z}+L \overline{O}_{w}),\overline{O}_{z}], \tag{4}\]
\[\frac{\partial\overline{O}_{w}}{\partial t}=\frac{\Gamma T\gamma}{2}L^{ \dagger}-\gamma\overline{O}_{w}+[-iH_{s}-(L^{\dagger}\overline{O}_{z}+L \overline{O}_{w}),\overline{O}_{w}]. \tag{5}\]
As a result, we are allowed to numerically solve the dynamical evolution equation in Eq. (2), with the help of Eqs. (4) and (5). In the above derivation, \(\Gamma\) and \(\gamma\) stand for the system-bath coupling strength and the characteristic frequency of bath. The Ornstein-Uhlenbeck correlation function \(\Lambda(t,s)=\frac{\gamma}{2}e^{-\gamma|t-s|}\) contained in \(\alpha_{z(w)}(t-s)\), decays exponentially with the environmental memory time \(1/\gamma\) characterizing the memory capacity of the bath. Therefore for small \(\gamma\), non-Markovian properties can be observed. The large \(\gamma\) corresponds to a Markovian bath due to the shrinking environmental memory time.
When \(\gamma\) approaches \(\infty\), the bath becomes completely Markovian and has no memory capacity anymore. Consequently, \(\overline{O}_{z}=\frac{\Gamma T}{2}L\) and \(\overline{O}_{w}=\frac{\Gamma T}{2}L^{\dagger}\). The master equation in Eq. (2) therefore reduces to the Lindblad form [28]
\[\frac{\partial}{\partial t}\rho_{s} = -i[H_{s},\rho_{s}]+\frac{\Gamma T}{2}[(2L\rho_{s}L^{\dagger}-L^{ \dagger}L\rho_{s}-\rho_{s}L^{\dagger}L) \tag{6}\] \[+(2L^{\dagger}\rho_{s}L-LL^{\dagger}\rho_{s}-\rho_{s}LL^{\dagger})].\]
In this paper, we consider a one-dimensional \(XY\) spin chain as the system
\[H_{s}=\sum\limits_{i=1}^{N-1}J_{i,i+1}\left(\sigma_{i}^{x}\sigma_{i+1}^{x}+ \sigma_{i}^{y}\sigma_{i+1}^{y}\right). \tag{7}\]
Here \(\sigma_{i}^{\alpha}\) (\(\alpha=x,y\)) stands for the Pauli operator acting on the \(i\)th spin. \(J_{i,i+1}\) indicates the relevant coupling strength between the nearest-neighbor sites \(i\), \(i+1\) and we set the PST coupling layout \(J_{i,i+1}=-\sqrt{i\left(N-i\right)}\) throughout.
Initially, all the spins are prepared in the down state except the first one, which is in the up state, i.e., \(\left|\Psi_{s}\left(0\right)\right\rangle=\left|\mathbf{1}\right\rangle=\left|100\ldots 0\right\rangle\). Our task is to transfer the state \(\left|1\right\rangle\) from the first to the last spin of the chain, and the target state will be \(\left|\mathbf{N}\right\rangle=\left|000\ldots 1\right\rangle\). During this process, the transmission fidelity \(F(t)=\sqrt{\left\langle\mathbf{N}\right|\rho_{s}(t)\left|\mathbf{N}\right\rangle}\) is monitored to evaluate the transfer quality. Here \(\rho_{s}(t)\) is the reduced density matrix of our system.
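A minimal numpy sketch of the Hamiltonian in Eq. (7) with the PST couplings and of the initial state \(\left|\mathbf{1}\right\rangle\) is given below; the basis ordering (first spin as the leading qubit, with the up state mapped to \(\left|1\right\rangle\)) is an assumption of the sketch.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, site, N):
    """Embed a single-spin operator at a given site (0-based) of an N-spin chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == site else I2)
    return out

def xy_chain_pst(N):
    """H_s of Eq. (7) with the PST couplings J_{i,i+1} = -sqrt(i*(N-i))."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(1, N):
        J = -np.sqrt(i * (N - i))
        H += J * (site_op(sx, i - 1, N) @ site_op(sx, i, N)
                  + site_op(sy, i - 1, N) @ site_op(sy, i, N))
    return H

N = 6
H_s = xy_chain_pst(N)
psi0 = np.zeros(2**N, dtype=complex)
psi0[1 << (N - 1)] = 1.0   # |100...0>: first spin up, assuming |up> = |1> on the leading qubit
```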
Combining the advantages of two algorithms with fast and steady convergence, Momentum and RMSprop, Adam has already become the most valuable optimizer in the NISQ era. In this work, we use Adam to construct an iterative process to optimize the parameters for high-fidelity state transfer in noisy environments.
First we need to define a loss function \(Loss\) and our goal of high-fidelity QST is encoded as to minimize the \(Loss\). The specific optimization procedure of Adam algorithm is as follows.
_Step 1:_ Compute the gradient vector \(\mathbf{g}\) of loss function \(Loss\) with respect to selected variables \(\mathbf{A}\) in the \(k\)th iteration
\[\mathbf{g}^{k}=\triangledown_{\mathbf{A}^{k}}Loss(\mathbf{A}^{k}). \tag{8}\]
_Step 2:_ Compute the new exponential moving averages
\[\mathbf{m}^{k}=\beta_{1}\mathbf{m}^{k-1}+(1-\beta_{1})\mathbf{g}^{k}, \tag{9}\]
\[\mathbf{v}^{k}=\beta_{2}\mathbf{v}^{k-1}+(1-\beta_{2})(\mathbf{g}^{k})^{2}. \tag{10}\]
_Step 3:_ Compute the new bias-corrected moment vectors
\[\hat{\mathbf{m}^{k}}=\mathbf{m}^{k}/[1-(\beta_{1})^{k}], \tag{11}\]
\[\hat{\mathbf{v}^{k}}=\mathbf{v}^{k}/[1-(\beta_{2})^{k}]. \tag{12}\]
_Step 4:_ Update the variables \(\mathbf{A}\) according to
\[\mathbf{A}^{k+1}=\mathbf{A}^{k}-\alpha\hat{\mathbf{m}^{k}}/(\sqrt{\hat{\mathbf{v}^{k}}}+ \varepsilon). \tag{13}\]
_Step 5:_ Repeat steps \(1-4\) till \(Loss<\xi\) or the number of iterations \(k>k_{max}\). \(\xi\) (set \(\xi=0.001\)) and \(k_{max}\) are the prescribed loss ceiling and maximal iteration number, respectively.
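A minimal numpy sketch of Steps 1-5 is given below; since the text does not specify how the gradient of the loss is computed, a central finite-difference estimate is used here as one simple (assumed) choice.

```python
import numpy as np

def adam_minimize(loss_fn, A0, alpha=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, xi=1e-3, k_max=1000, h=1e-4):
    A = np.asarray(A0, dtype=float).copy()
    m = np.zeros_like(A)
    v = np.zeros_like(A)
    for k in range(1, k_max + 1):
        # Step 1: gradient of the loss w.r.t. the variables A (finite differences)
        g = np.array([(loss_fn(A + h * e) - loss_fn(A - h * e)) / (2 * h)
                      for e in np.eye(A.size)])
        # Step 2: exponential moving averages of the gradient and its square
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        # Step 3: bias-corrected moment vectors
        m_hat = m / (1 - beta1**k)
        v_hat = v / (1 - beta2**k)
        # Step 4: parameter update
        A = A - alpha * m_hat / (np.sqrt(v_hat) + eps)
        # Step 5: stop once the loss is below the prescribed ceiling xi
        if loss_fn(A) < xi:
            break
    return A

# toy usage: a quadratic loss standing in for 1 - F(J) + lambda * J_max
A_opt = adam_minimize(lambda A: float(np.sum((A + 2.0) ** 2)), A0=np.zeros(5))
```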
## III Results and Discussions
In this work, we consider two scenarios and apply Adam optimizer to explore high-fidelity QST through a spin chain in open systems. For the first one, we choose to modulate the coupling strength sequence \(\mathbf{J}=[J_{1,2},J_{2,3},\cdots,J_{N-1,N}]\). For the second one, we optimize the pulse amplitude sequence \(\mathbf{I}=[I_{0},I_{1},\cdots,I_{M-1}]\) to realize a more effective LEO. Without loss of generality, we consider the Lindblad operator \(L=\sum_{i=1}^{N}\sigma_{i}^{-}\) that describes the dissipation. Here \(\sigma_{i}^{-}=\left(\sigma_{i}^{x}-i\sigma_{i}^{y}\right)/2\) denotes the lowering operator on the \(i\)th spin.
### Optimized couplings via Adam
In this section, we perform the coupling optimization. Recall that our goal is to minimize a commonly defined loss function
\[Loss(\mathbf{J})=1-F(\mathbf{J})+\lambda J_{max}, \tag{14}\]
where the fidelity \(F(\mathbf{J})\) is obtained with the help of the optimized coupling sequence \(\mathbf{J}\) and \(1-F(\mathbf{J})\) is the corresponding infidelity. \(J_{max}\) stands for the maximal absolute value of the couplings \(J_{i,i+1}\) among the optimized couplings. The relaxation parameter \(\lambda\) is introduced here to modulate the proportion of \(J_{max}\) in \(Loss\) and to restrain \(J_{max}\) from becoming too large.
As an example, the number of spins is taken as \(N=6\). Here we take the PST couplings \(J_{i,i+1}=-\sqrt{i\left(N-i\right)}\) as an initial guess and set the maximal iteration number \(k_{max}=1000\). In addition, it is necessary to mention that in closed systems, PST can be observed at \(t=n\pi/4\) (\(n\) is an odd integer) for the PST couplings. Accordingly, the total evolution time is taken as \(T_{tot}=\pi/4\) throughout.
In Fig. 1 we plot the time evolution of the fidelity \(F(t/T_{tot})\) with PST and optimized couplings for different environmental parameters. The parameters are taken as \(\gamma=2\), \(T=10\) (Fig. 1(a)), \(\Gamma=0.01\), \(T=10\) (Fig. 1(b)), and \(\Gamma=0.01\), \(\gamma=2\) (Fig. 1(c)), respectively. For a fair comparison between PST and optimized couplings, the optimized couplings are limited to \([-3,-2]\), the same range as the PST couplings. At first, without optimization, exposure to the environment always decreases the fidelity. A larger \(\Gamma\), \(\gamma\) or \(T\) corresponds to a lower fidelity \(F\), i.e., a stronger system-bath interaction, a more Markovian or a higher temperature bath will destroy the system more severely, which is in accordance with Refs. [16; 18]. This result still holds for the optimized cases. For example, in Fig. 1(b), the maximum fidelity \(F_{max}=0.73\) is obtained for \(\gamma=2\) and as \(\gamma\) grows, \(F_{max}\) decreases. Secondly, comparing \(F_{max}\) for the PST and optimized couplings with the same environmental parameters, we find that the optimized \(F_{max}\) are always higher than those in the ideal cases. In other words, using the optimized couplings, \(F_{max}\) can always be enhanced in the presence of the environment. This is the key observation of our paper. It is not so obvious but worth mentioning that the maximal fidelity improvement increases with increasing \(\Gamma\) and \(T\). That is to say, the more severely the bath destroys the system, the more significant the improvement. Namely, a stronger bath provides more space for Adam to optimize. Clearly, without the environment (\(\Gamma=0\) in Fig. 1(a)), the evolution is the same for PST and optimized couplings. Thirdly, defining the arrival time \(T_{a}\) as the time when \(F_{max}\) is achieved, with PST couplings \(T_{a}\) occurs at \(t=\pi/4\) for different \(\Gamma\) and \(T\), and nearly at \(t=\pi/4\) for different \(\gamma\). The bath slightly affects the arrival time \(T_{a}\) under ideal pulses. However, after optimization, \(T_{a}\) is evidently shorter than \(\pi/4\), which bears the advantage that \(F_{max}\) arrives earlier and thus the accumulative detrimental effects of the environment can be partially avoided. In Fig. 1, \(T_{a}\) is shorter for larger \(\Gamma\), \(\gamma\) and \(T\). At last, even for the Markovian case (Fig. 1(b)), \(F_{max}\) can still be enhanced by the coupling optimization. In sum, our optimized couplings via the Adam algorithm can simultaneously enhance the transmission fidelity and shorten the arrival time.
Fig. 2 plots the corresponding PST and optimized couplings used in Fig. 1. The optimized coupling configuration is similar to the PST one: bigger in the middle and smaller at both ends. But for a stronger bath (bigger \(\Gamma\), \(\gamma\) and \(T\)), Adam finds a flatter configuration. The minimum and maximum get closer. Also, the symmetry of the couplings with respect to the middle of the chain is broken due to the existence of the environments, which can be clearly seen for a strong bath (\(\Gamma=0.1\) in Fig. 2(a), \(\gamma=5\) in Fig. 2(b), or \(T=15\) in Fig. 2(c)).
Figure 2: (Color online) The corresponding PST and optimized couplings in Fig. 1.
### Optimized pulses via Adam
#### iii.2.1 QST under ideal pulse control
Environmental noise normally destroys the transmission fidelity, and Refs. [18; 28] have introduced an LEO approach to address this problem. The main idea of this LEO approach is to add an additional Hamiltonian \(H_{LEO}\) to the system Hamiltonian \(H_{s}\), ensuring that the quantum system evolves along a predefined passage. For example, if we use \(H_{PST}\) to denote the Hamiltonian in Eq. (7) with PST couplings, we can set \(\left|\Psi\left(t\right)\right\rangle=\exp\left(-iH_{PST}t\right)\left|\mathbf{1}\right\rangle\) as the evolution passage. The LEO Hamiltonian in the adiabatic frame [35] can be constructed as
\[H_{LEO}=c\left(t\right)\left|\Psi\left(t\right)\right\rangle\langle\Psi\left( t\right)|, \tag{15}\]
where \(c(t)\) is the control function. The total Hamiltonian becomes
\[H_{tot}=H_{s}+H_{LEO}. \tag{16}\]
The LEO Hamiltonian can be achieved by a series of control pulses that can be divided into perturbative and nonperturbative versions. In this paper we consider the latter one whose pulse intensity and duration are finite. The pulse conditions for effective control have been theoretically deduced by P-Q partitioning technique in closed systems [19; 36; 37]. For sine pulses \(c(t)=I\sin(\omega t)\), the corresponding pulse condition is
\[J_{0}(\frac{I\tau}{\pi})=0. \tag{17}\]
Here \(I\) and \(\tau\) represent pulse intensity and half period, and \(J_{0}(x)\) denotes the zero-order Bessel function of the first kind. Note that the integral of such pulses over a period is zero (i.e., zero-area condition of pulses) [19; 35]. The control pulses such as rectangular and triangular ones have also been investigated [37].
#### iii.2.2 High-fidelity QST
Although the above ideal pulse conditions are derived theoretically from closed systems, they can be applied to open ones with no guarantee of their effectiveness. In this section, we aim to design optimized pulses for certain environmental parameters with the help of Adam, and then compare their performances with ideal counterparts. In order to make fair comparisons, the optimized pulses also satisfy the zero-area condition [19; 35]. First we design the optimized sine pulses (single pulses)
\[c(t)=I(t)\sin(\omega t). \tag{18}\]
Here \(I(t)\) is a \(P\)-segment piece-wise constant function, whose \(P\) values are drawn in order from the pulse amplitude sequence \(\mathbf{I}=[I_{0},I_{1},\cdots,I_{P-1}]\) and take equal time intervals \(\Delta t=T_{tot}/P\) (\(\omega=2\pi/\Delta t\) and we set \(P=5\)). Notice that the zero-area condition [19; 35] is followed in the iterative procedure, as in the theoretical derivation. We consider the corresponding ideal values \(\mathbf{I}=[96.200,96.200,\cdots,96.200]\), derived from the pulse condition in Eq. (17), as our initial guess. The maximal iteration number is still \(k_{max}=1000\) and the number of spins is \(N=4\). Furthermore, the maximal intensity of the optimized pulses is not supposed to exceed that of their ideal counterparts. Similar to Eq. (14), the loss function is accordingly defined as
\[Loss(\mathbf{I})=1-F(\mathbf{I})+\lambda c_{max}. \tag{19}\]
Here \(c_{max}\) is the maximum of the control function \(c(t)\). In Eq. (19), there is also a competition between the infidelity \(1-F(\mathbf{I})\) and the maximal control intensity \(c_{max}\) in \(Loss\), with a relaxation parameter \(\lambda\) to restrict \(c_{max}\) [38].
In Fig. 3, we plot the fidelity \(F\) as a function of the rescaled time \(t/T_{tot}\) for different environmental parameters with ideal and optimized pulses. In Fig. 3(a), \(\gamma=10\) and \(T=10\). When \(\Gamma=0.1\), the maximal fidelity \(F_{max}\) increases dramatically, from \(0.585\) without control to \(0.958\) with ideal pulses and \(0.959\) with single pulses. Note that the single pulses ultimately reach fidelities similar to those of the ideal pulses. We then propose the combinatorial sine pulses (combinatorial pulses) to obtain a higher fidelity,
\[c(t)=\sum_{i=0}^{Q-1}I_{i}\sin\left[\left(i+1\right)\omega t\right], \tag{20}\]
where we turn to set the control function \(c(t)\) as a combination of Fourier sine components. Here \(Q\) denotes the number of Fourier components and we consider \(Q=10\). Notice that the zero-area condition [19; 35] is still satisfied. Obviously, when \(\Gamma=0.1\) and \(0.2\), the combinatorial pulses outperform the ideal and single counterparts, and there are minor but evident increases in the QST fidelities. From now on, we choose to optimize combinatorial pulses alone.
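A minimal sketch of the two pulse shapes, the piece-wise constant single pulse of Eq. (18) and the combinatorial pulse of Eq. (20), is given below; the base frequency chosen for the combinatorial pulse is an assumption made so that every harmonic completes full periods over \(T_{tot}\) and the zero-area condition holds.

```python
import numpy as np

def single_pulse(t, I_seq, T_tot):
    """Eq. (18): piece-wise constant amplitude I(t) times sin(omega*t), omega = 2*pi/dt."""
    P = len(I_seq)
    dt = T_tot / P
    omega = 2.0 * np.pi / dt
    seg = min(int(t / dt), P - 1)
    return I_seq[seg] * np.sin(omega * t)

def combinatorial_pulse(t, I_seq, T_tot):
    """Eq. (20): a sum of Q Fourier sine components, each with zero area over [0, T_tot]."""
    omega = 2.0 * np.pi / T_tot        # assumed base frequency
    return sum(I_i * np.sin((i + 1) * omega * t) for i, I_i in enumerate(I_seq))

T_tot = np.pi / 4                      # total evolution time, as in the couplings section
amps = np.random.default_rng(0).normal(size=10)
ts = np.linspace(0.0, T_tot, 200)
c_single = [single_pulse(t, [96.2] * 5, T_tot) for t in ts]
c_comb = [combinatorial_pulse(t, amps, T_tot) for t in ts]
```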
In Fig. 3(b) and (c), we plot the influences of the parameter \(\gamma\) and the temperature \(T\) on the fidelity. In Fig. 3(b), \(\Gamma=0.1\) and \(T=10\), while in Fig. 3(c), \(\Gamma=0.1\) and \(\gamma=10\). For all the situations, without exception, the combinatorial pulses outshine the ideal ones. Furthermore, an increasing \(\Gamma\), \(\gamma\) or \(T\) corresponds to a decreasing fidelity \(F\). But still, in a stronger bath, the optimized pulses can make larger corrections for this fidelity deterioration. Fig. 4 gives the profiles of the corresponding ideal and optimized (single and combinatorial) pulses in Fig. 3. Fig. 4(a) shows that the single pulses are almost indistinguishable from the ideal ones. As for the combinatorial pulses, they are only similar to the ideal pulses in terms of magnitude. From the above analysis, we conclude that the scheme of optimized control pulses can play a more helpful role than the ideal ones, especially in stronger baths.
Figure 4: (Color online) The corresponding ideal and optimized (single and combinatorial) pulses in Fig. 3.
At last we consider different types of Lindblad operator \(L\). We will compare the effects of \(L=\sum_{i=1}^{N}\sigma_{i}^{-}\) and \(L=\sum_{i=1}^{N}\sigma_{i}^{x}\), and the latter corresponds to the spin-boson interaction. We do not consider the dephasing (\(L=\sum_{i=1}^{N}\sigma_{i}^{z}\)) because \(\left[L,\rho_{s}\overline{O}^{\dagger}\right]=\left[L^{\dagger},\overline{O} \rho_{s}\right]=0\). Therefore the bath only randomly changes the global phase of system [9]. In Fig. 5 we plot the cases with different Lindblad operators. Fig. 5(a) shows that the fidelity obtained by the optimized couplings exceeds the PST ones whatever the Lindblad operator \(L\) is. The parameters are taken as \(N=6\), \(\Gamma=0.05\), \(\gamma=2\) and \(T=10\). Fig. 5(b) demonstrates the implications of \(L\) on performances of optimized pulses. Again optimized pulses show their advantage over the ideal counterparts on reducing the effects of environmental noise. We take \(N=4\), \(\Gamma=0.1\), \(\gamma=10\) and \(T=10\) in the simulation.
## IV Conclusions
QST is one of the basic tasks in quantum computation. PST and almost exact QST through a spin chain can be realized with PST couplings and LEO control, respectively. However, these conditions are derived theoretically for ideally closed systems and thus their effectiveness is lost when they are applied to an open system, i.e., coupling to a heat bath results in dissipative dynamics. In this paper, we take a one-dimensional \(XY\) spin chain with nearest-neighbor couplings as an example and introduce a well-developed optimization algorithm, Adam, to seek the optimized couplings and control pulses in the presence of an environment. By minimizing a predefined loss function, high-fidelity transmission is obtained for both schemes. In addition, we discuss the effects of the system-bath coupling strength \(\Gamma\), the environmental non-Markovianity parameter \(\gamma\) and the temperature \(T\) on our schemes. Although the fidelity \(F\) decreases with any one of these parameters increasing, our optimized schemes perform better, especially for a stronger bath. Our work shows that the Adam algorithm is a powerful tool for searching for optimized parameters in open quantum systems, which is important for performing quantum information processing tasks.
## Acknowledgment
This paper is based upon work supported by the Natural Science Foundation of Shandong Province (Grant No. ZR2021LLZ004).
|
2303.14589
|
SASS: Data and Methods for Subject Aware Sentence Simplification
|
Sentence simplification tends to focus on the generic simplification of
sentences by making them more readable and easier to understand. This paper
provides a dataset aimed at training models that perform subject aware sentence
simplifications rather than simplifying sentences as a whole. We also test
models on that dataset which are inspired by model architecture used in
abstractive summarization. We hand generated portions of the data and augment
the dataset by further manipulating those hand written simplifications. Our
results show that data-augmentation, data-masking, and model architecture
choices used in summarization provide a solid baseline for comparison on
subject aware simplification.
|
Brad Windsor, Luke Martin, Anand Tyagi
|
2023-03-26T00:02:25Z
|
http://arxiv.org/abs/2303.14589v1
|
# SASS: Data and Methods for Subject Aware Sentence Simplification
###### Abstract
Sentence simplification tends to focus on the generic simplification of sentences by making them more readable and easier to understand. This paper provides a dataset aimed at training models that perform subject aware sentence simplifications rather than simplifying sentences as a whole. We also test models on that dataset which are inspired by model architecture used in abstractive summarization. We hand generated portions of the data and augment the dataset by further manipulating those hand written simplifications. Our results show that data-augmentation, data-masking, and model architecture choices used in summarization provide a solid baseline for comparison on subject aware simplification.
## 1 Introduction
Sentence simplification is a problem which aims to transform long, dense sentences into more accessible ones that are easier to understand. Current work in sentence simplification focuses on simplifying for the purpose of making sentences easier to understand. As a result, the output sentences tend to include as much of the information included in the original sentence as possible, altering the parts of the sentences which contain challenging vocabulary or excess words.
An alternative type of simplification which is relevant to explore is to simplify by topic. While making sentences easier to read is beneficial for helping people more easily understand the contents of various documents, simplifying by topic allows for different people to extract information directly useful to them at the sentence level.
Simplification by topic has currently been explored in the area of summarization, where models exist that allow for the creation of summaries tailored to a specific topic, as shown in Wang et al. (2009) and Hennig (2009). However, in order to use the models currently available for topic specific summarization, the input documents must be typically greater than one sentence long; topic specific summarization is usually associated with multi-document summarization.
Current datasets used for evaluating sentence simplification models focus on the aforementioned goal of simplifying for ease of understanding. In this paper, we present the SASS (Subject Aware Sentence Simplification) dataset, which consists of sentences from a YELP dataset (yel, 2015) simplified by one or more specified topics. This dataset can be used to test models specifically aiming to simplify sentences by topic, rather than just focusing on ease of readability.
To come up with good candidates for the SASS dataset, we used the Spacy (Honnibal et al., 2020) NER tagger to identify sentences which discussed our subjects of interest, and hand-wrote simplifications of those sentences. We augmented this dataset with additional entities mined from the corpus.
The original and augmented datasets are the first multi-topic sentence simplification datasets. After creating them, we used these datasets to study how well techniques for multi-topic summarization generalize to simplification. Our tests include encoder-decoder models following Liu and Lapata (2019) and the use of artificial tokens following Scarton and Specia (2018).
## 2 Related Work
Our work draws on earlier approaches to simplification, including the control of the degree of simplification, and on Subject-Aware problems in abstractive summarization.
### Simplification process
Simplification is often a multi-step process with more than one model involved (Zhu et al., 2010).
Some of the common steps in sentence simplification:
* Splitting long sentences into shorter ones
* Dropping irrelevant information
* Replacing words or phrases
Sentence simplification models tend to either process information in several separate pipeline stages Xu et al. (2015), or solve the full problem in an end-to-end neural model Zhang and Lapata (2017).
### Controllable Sentence Simplification
Controllable sentence simplification is a new paradigm which aims to better control the degree of information omitted. Some examples:
* Separating the Newsela corpus by grade level, and using an initial token in a seq2seq model to signify the target grade level Scarton and Specia (2018)
* Training to produce a given compression ratio, degree of paraphrasing, or lexical complexity Martin et al. (2019)
### Subject-Aware Summarization
Subject-Aware summarization aims to condense a document but tailor it for a specific purpose. One such problem is Topic-Aware Abstractive Text summarisation Zheng et al. (2020), which attempts to leverage the underlying semantic structure of documents represented by their latent topics. In Fan et al. (2018), the authors propose methods that would allow a user to restrict the length of the summary, ask for entity-specific or source-specific summaries, and only summarise specific portions of the text. Wang et al. (2020) mines topic-specific words from using topic-modelling on a large corpus and uses these words as an input to an attention mechanism for used in summary generation. Finally, there have been other attempts to incorporate more information into the summarizing to tailor the information to specific requests Baumel et al. (2018).
## 3 Methods1
Footnote 1: Code and data are available at [https://github.com/bwindsor22/sentence-simp-target](https://github.com/bwindsor22/sentence-simp-target)
### Setup
#### 3.1.1 Data Preparation
For ease of understanding, we choose the Yelp Reviews Dataset as our base (yel, 2015). We use Spacy Honnibal et al. (2020) to pre-identify 1500 relevant sentences which had entities marked Organization (ORG), Nationality (NORP), and Location (LOC). From this, we denote a simplification which includes ORG/NORP as a "Culinary" simplification and ORG/LOC as a "Location" simplification.
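A minimal sketch of this pre-identification step with Spacy is given below; the pipeline name and the two-entity filtering rule are assumptions for illustration.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # any English spaCy pipeline with an NER component
TARGET_LABELS = {"ORG", "NORP", "LOC"}

def candidate_sentences(reviews):
    """Yield review sentences that mention at least two entities of interest."""
    for doc in nlp.pipe(reviews):
        for sent in doc.sents:
            ents = [(e.label_, e.text) for e in sent.ents if e.label_ in TARGET_LABELS]
            if len(ents) >= 2:
                yield sent.text, ents

for text, ents in candidate_sentences(["Bombay Grill on Green Street serves Indian food."]):
    print(text, ents)
```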
From this, we hand-annotated 599 example sentences, which were simplified into multiple sentences based on their entity tags. Using the tags in each sentence, we identified sentences that included two or more individual entities. We then simplified each given sentence into two or more sentences, with each sentence containing exactly one or more of the entities relevant to the topic.
Because sentences where both simplifications are possible are rare, our corpus includes sentences where only one of the two simplifications is possible. See Table 3 for a summary of the annotation process.
#### Augmenting data
To increase the volume of the data, we augmented the data by mining entity names from the remaining Yelp dataset and substituting into the annotated sentences. First a Spacy tagger was run over the dataset, extracting out all the entities and their corresponding tags. Next, for each row in the dataset, where a row consisted of the source sentence and its summaries, the entities in those sentences were replaced with ones sampled from elsewhere in the dataset. This created the same sentence but with different entities.
Using the example presented in Table 2, an ORG, NORP, LOC tuple mined from one sentence is inserted into another. This approach is inspired by Wang et al. (2020)'s keyword mining. We used this strategy to vary our data increase from 2 times to 6 times the amount of the original hand annotated data.
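A minimal sketch of this substitution step is given below; the example dictionary layout and the plain string replacement are assumptions, and the released code may handle entity offsets and duplicates differently.

```python
import random

def substitute_entities(example, entity_pool, rng=None):
    """Create a new example by swapping each entity for another of the same type.

    `example` is assumed to look like {"source": str, "summaries": [str, ...],
    "entities": [(label, text), ...]}; `entity_pool` maps a label to the list of
    surface forms mined from the rest of the corpus.
    """
    rng = rng or random.Random(0)
    new_example = dict(example)
    new_example["summaries"] = list(example["summaries"])
    for label, old_text in example["entities"]:
        new_text = rng.choice(entity_pool[label])
        new_example["source"] = new_example["source"].replace(old_text, new_text)
        new_example["summaries"] = [s.replace(old_text, new_text)
                                    for s in new_example["summaries"]]
    return new_example
```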
#### Masking data
We use the Spacy model to replace specific entities ("Islamic") with tags ("NORP0"). The Spacy model is an example of a task-specific NER system for our dataset; we selected organizations, nationalities, and locations in part because Spacy understands these well.
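A minimal sketch of this masking step is given below; the tag scheme (entity label plus a running index, as in NORP0) follows the example above, while the exact implementation is an assumption.

```python
def mask_entities(sentence, ents):
    """Replace each tagged entity with an indexed generic tag, e.g. 'Islamic' -> 'NORP0'."""
    counters, mapping = {}, {}
    for label, text in ents:
        idx = counters.get(label, 0)
        counters[label] = idx + 1
        tag = f"{label}{idx}"
        mapping[tag] = text
        sentence = sentence.replace(text, tag)
    return sentence, mapping   # the mapping lets tags be filled back in after generation

masked, mapping = mask_entities("The Islamic bakery on Green Street is great.",
                                [("NORP", "Islamic"), ("LOC", "Green Street")])
```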
#### 3.1.2 Model
For training, we use an encoder-decoder architecture for sequence generation, inspired by Liu and Lapata (2019)'s work in summarization. We use Roberta Liu et al. (2019) as our base model and the HuggingFace Transformers Wolf et al. (2020) library for the encoder-decoder implementation. Our hyperparameters are: batch size: 8, train epochs: 200, learning rate: 0.001.
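A minimal sketch of such a Roberta-to-Roberta encoder-decoder with the Transformers library is given below; the checkpoint names, generation settings, and the illustrative task token are assumptions rather than the exact training setup.

```python
from transformers import EncoderDecoderModel, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# hypothetical task token prepended to the source sentence (cf. the artificial-token
# architecture discussed below); the token itself is illustrative
inputs = tokenizer(["<culinary> Given the growing popularity of Indian cuisine ..."],
                   return_tensors="pt", truncation=True)
summary_ids = model.generate(inputs.input_ids, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```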
As simplification is often a multi-model process Zhu et al. (2010), one of our data preparation techniques included using a task-specific model to mask, as in Fig. 1.
For training, we explore two model architectures per Fig. 2 and 3. In the first, two models are trained for two different tasks, with zero knowledge share between the two. In the second, we explore the use of task-specific tokens to specify the simplification style, following Scarton and Specia (2018).
\begin{table}
\begin{tabular}{|p{113.8pt}|p{227.6pt}|} \hline Source Sentence & Given the growing popularity of Indian cuisine, I am surprised that the Bombay Grill conglomerate (Green Street location, First Street location, Bombay Bazaar) have such a monopoly on Indian food in this town. \\ \hline Cuisine simplified & Bombay Grill and Bombay Bazaar are Indian restaurants. \\ \hline Location simplified & Bombay Grill is on Green Street and Bombay Bazaar is on First Street. \\ \hline \end{tabular}
\end{table}
Table 1: Subject-aware sentence simplification
\begin{table}
\begin{tabular}{|p{113.8pt}|p{227.6pt}|} \hline Total Annotated & 599 \\ \hline Culinary \& Location Simplifications & 261 \\ \hline Culinary Simplifications Only & 249 \\ \hline Location Simplifications Only & 49 \\ \hline \end{tabular}
\end{table}
Table 2: Data augmentation and masking
Figure 1: Pipeline for masking. A model replaces key terms with generic tags.
Figure 3: One model using an artificial keyword to specify the style of output
Figure 2: Two models, trained separately, each task-specific
#### 3.1.3 Evaluation
Results are taken as average BLEU scores when compared to the target sentence Papineni et al. (2002), using the NLTK version of BLEU.
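A minimal sketch of this evaluation with the NLTK BLEU implementation is given below; whitespace tokenization and the smoothing choice are assumptions.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def avg_bleu(predictions, references):
    """Average sentence-level BLEU of predicted simplifications against the targets."""
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu([ref.split()], pred.split(), smoothing_function=smooth)
              for pred, ref in zip(predictions, references)]
    return sum(scores) / len(scores)

print(avg_bleu(["Bombay Grill is an Indian restaurant ."],
               ["Bombay Grill and Bombay Bazaar are Indian restaurants ."]))
```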
### Results
Results are seen in Tables 4 and 5.
### Analysis
We note the following observations from the training:
* Models which have subject-specific expertise can significantly improve performance on the topic specific simplification task. This is seen in the data masking performance, where simplified sentences frequently took forms such as "ORG0 is a NORP0 restaurant". By specifying the information the model should be interested in as ORG0/ORG1/etc. tags, we were able to allow the model to focus on the work of restructuring the sentence, rather than just finding the correct entities.
* Data masking works better for culinary simplifications, but we feel this is due to disagreement between annotator and the Spacy model: "North America", "2nd Floor Atrium" are examples of what annotators label as locations and Spacy does not.
* There are similar BLEU scores for both the Culinary and Location datasets. The degree of effectiveness is determined by the technique, rather than the subject. This raises hope that our methods would generalize well to other subjects.
* Mining subject-specific entities from a larger corpus can also be used to improve the sentence simplification task. Substituting various restaurant names and cuisines during data augmentation proved useful for helping the model learn.
* Data augmentation is a less useful preparation than data masking. The model does not as easily learn that "Iranian" and "Greek" play the same roles as it does in a dataset where both are labelled "ORG0". Data masking helps the model generalize better across examples; however, we note that sufficient data augmentation approaches the performance of masking.
* The paradigm of task-specific tokens, successful in tailoring simplifications to a reading level Scarton and Specia (2018), works well with subject-specific simplification also. One model learning both tasks on the original dataset outperforms two separate models. This is unsurprising in that our data is very scarce (\(<\)1K examples in each category). Allowing the model to see unrelated examples is similar to pre-training on the domain-specific text.
## 4 Conclusion
In this paper, we introduced the SASS dataset which allows for models focused on topic specific sentence simplification to be evaluated. The evaluation of our baseline encoder-decoder model showed that even with a simple model, we are able to generate a system which can perform simplification based on a specified topic.
We presented a new dataset for this challenge, found proof that augmentation techniques help in this domain, and proved that existing strategies in summarization also apply in this domain.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Dataset & Culinary & Location \\ \hline Original & 0.28 & 0.26 \\ \hline Masked & 0.48 & 0.43 \\ \hline Data Augmentation - 2X (100 epochs) & 0.29 & 0.29 \\ \hline Data Augmentation - 6X (59 epochs) & 0.30 & 0.30 \\ \hline \end{tabular}
\end{table}
Table 4: Model BLEU scores on datasets. Two scores are reported, one for each task (as in Fig 2.)
\begin{table}
\begin{tabular}{|l|l|} \hline Model & Avg BLEU \\ \hline Two models, trained for separate tasks, original dataset (Fig 2) & 0.28 \\ \hline One model, with artificial token to specify task (Fig 3) & 0.43 \\ \hline \end{tabular}
\end{table}
Table 5: Results of model architecture analysis. BLEU score is weighted average of both tasks.
Our results help clarify a new approach to simplification, and shed light on how well techniques from other problems generalize.
Further work can be done to expand on the dataset we introduced, both manually tagging more sentences with a wider range of tags, and developing methods to automatically augment the dataset. We expect that paraphrasers such as Quillbot (qui2020multitask) could further augment sentences, or that entity lists such as YAGO Suchanek et al. (2007) could be used to further fill out sentences.
Performing subject aware sentence simplification allows for texts to be simplified not only in a manner that can be more understood, but simplified in a way that is relevant to the individual. We hope that the SASS dataset can be used to evaluate new models created specifically for this purpose and further improve the overall area of sentence simplification.
|